Permission denied when trying to capture Intel RAID1 image



  • I’ve been following the thread over here https://forums.fogproject.org/topic/7851/intel-raid0-image-capture in order to figure out how to image some hardware that I have that uses the “fake” Intel RAID. We specifically use RAID 1 on these machines.

    Well, I found what I needed: I set the image to Single Disk Resizable, the Host Kernel Arguments to mdraid=true, and the Host Primary Disk to /dev/md126. I kick off the capture task, but then I get the error below.

    *Mounting partition (/dev/md126p1)...Failed
    *Could not mount /dev/md126p1 (removePageFile)
    Args Passed: /dev/md126p1
    Reason: ntfs-3g-mount: mount failed: Permission denied
    

    Weird that I’m getting permission denied, because if I get rid of mdraid=true and /dev/md126 and just capture the image like a regular non-RAID machine, it captures with no issue. Why would these arguments produce a permissions error?
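Permission denied from ntfs-3g usually means the NTFS filesystem itself refused the mount rather than anything RAID-specific; a Windows volume that was hibernated or not shut down cleanly is the classic cause. A few generic checks that could be run from a FOS debug console (a sketch only; these are standard ntfs-3g and kernel tools, not FOG-specific, and the device name is taken from the error above):

```shell
# Report NTFS inconsistencies without writing anything to the disk
ntfsfix --no-action /dev/md126p1

# Try a read-only mount by hand; ntfs-3g often prints the real reason
# (e.g. a hibernated Windows) on the console when it fails
mkdir -p /mnt/test
mount -t ntfs-3g -o ro /dev/md126p1 /mnt/test

# Kernel messages sometimes add detail the mount error leaves out
dmesg | tail -n 20
```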



  • YOU GUYS ARE AMAZING!

    Both the capture and deploy tasks completed successfully! Thank you all so much for your help! Now I can have our staging team start using Fog for our equipment with RAID.

    Thanks again!



    I’m not sure what caused that error the first time around, but I was able to capture the image successfully on my second pass. Now I’m deploying the captured image. So far, so good. No errors yet. Fingers crossed…



    On my first attempt to capture the image after downloading the new init files, I got an error.

    Partclone fail, please check /var/log/partclone.log !
    
    Failed to complete capture (savePartition)
       Args Passed: /dev/md126p1 1 /images/00270e3b9d1e all
        Exit code: 1
        Maybe check the fog server to ensure disk space is good to go?
    

    After the host rebooted it started a CHKDSK and I’m waiting for it to finish now.
    I may need to restore the original image again just to get a fresh start. I’ve also checked the image storage and there’s plenty of space available. I’ll update once I’ve gotten a few good tests. Thanks!


  • Moderator

    @george1421 I’ve seen that on the latest kernels on normal installs too; it doesn’t seem to hurt anything, AFAIK.


  • Moderator

    @Sebastian-Roth Deployment was successful.

    I did notice an error message I have not seen before, but I don’t think it’s related to Intel RAID.

    db_root: can not open: /etc/target
    

    Everything works OK, so maybe it’s just a warning we can ignore.

    Also, FWIW, I’m running the latest FOS kernels (4.18.11).


  • Moderator

    @Sebastian-Roth Looks like you have it worked out.

    The only thing I did was install the updated inits and then pxe boot into a debug console. It worked right out of the box. Well done!

    I’m going to start a normal mode deploy to ensure it can push the image out to the disk, but I’m sure it will work as it did with the raid0 setup.

    0_1538653432542_mdm_initz.png


  • Developer

    @Zerpie @george1421 Build is done. Please find new test binaries here 64 bit and 32 bit.

    Download them and put them in /var/www/fog/service/ipxe on your FOG server. Rename them to match the original names. Don’t overwrite the originals; rename them aside first, I would say, just in case.

    Please test and let us know. Capture, deploy… working?
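The backup-and-rename step could be scripted along these lines; a minimal sketch, assuming the stock init names init.xz and init_32.xz (the swap_inits helper and the downloaded file names are hypothetical; match them to your install):

```shell
#!/bin/sh
# swap_inits: set the stock inits aside and move the test binaries into place.
#   $1 = ipxe directory, $2 = downloaded 64-bit init, $3 = downloaded 32-bit init
# The init.xz / init_32.xz names are an assumption; check your ipxe directory.
swap_inits() {
    dir="$1"; new64="$2"; new32="$3"
    mv "$dir/init.xz"    "$dir/init.xz.orig"      # keep the originals...
    mv "$dir/init_32.xz" "$dir/init_32.xz.orig"   # ...just in case
    mv "$new64" "$dir/init.xz"
    mv "$new32" "$dir/init_32.xz"
}

# e.g.: swap_inits /var/www/fog/service/ipxe ./init_test.xz ./init_test_32.xz
```

Rolling back is then just a matter of moving the .orig files back into place.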


  • Moderator

    @Zerpie Sebastian was working through why the program was missing from the inits. I think he was planning on patching the inits for now and then going back to the Buildroot project with the error, to find out why it’s not including the file when it’s selected. FOG uses Buildroot to create the customized Linux engine that runs on the target computers.
    It should be just a day or two before we have a workable solution for you. It’s not something you did or didn’t do; it’s related to mdadm and the missing monitoring application for RAID 1 arrays.



  • @george1421 That’s good to know. Does that mean I’ll have to wait for a future update for this to work with my equipment?

    Thanks for all the work you guys have put into this. I really appreciate how much this community is willing to help.


  • Moderator

    @george1421 OK I have a track on how to make it work. I just need to see how we can get the fix into the official FOS release.

    0_1538587716521_mdraid-syncing.png

    We can patch it, but it needs to be fixed right.



  • This post is deleted!

  • Moderator

    With the raid array configured for raid-1 (mirror) I am getting the same results as the OP. I can’t seem to switch it from read-only to read-write to start the resync process. I’m still working on the issue as time permits today, but there IS something up here; I feel it’s hardware/array related. It should work, because all of the bits are lined up right, as soon as I can get it out of read-only mode.

    0_1538573176989_raid1-proc.png

    0_1538573310811_mdadm-detail.png
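For reference, switching an md array out of read-only is normally a one-liner; a sketch of the debug-console steps, assuming /dev/md126 as in the screenshots (md arrays are often started read-only and stay that way until explicitly marked writable):

```shell
# Show the array and its (ro)/(auto-read-only) state
cat /proc/mdstat

# Mark the array writable; this is what normally lets the resync begin
mdadm --readwrite /dev/md126

# Resync progress should now appear here
cat /proc/mdstat
```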




  • Developer

    @Zerpie Try mdadm --detail /dev/md126

    See the chat bubble in the upper right corner…



  • @george1421 It doesn’t recognize the command mdstat.


  • Moderator

    @Zerpie I can see that my test is a bit off point, since I “thought” you were configuring a striped array (raid-0), but looking at your picture below you are using a mirrored array (raid-1). So my screenshots are not really valid, other than proving that FOS can support Intel raid arrays.

    What does the output of this command look like?
    mdstat --detail /dev/md126

    Hint: reboot first, since you disassembled the /dev/md126 device with the stop command.
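If a reboot isn’t convenient, a stopped array can usually be reassembled in place; a minimal sketch using standard mdadm options, assuming the device names shown earlier in the thread:

```shell
# Re-scan the disks and reassemble any arrays mdadm can identify
mdadm --assemble --scan

# Verify md126 is back and check its state
cat /proc/mdstat
mdadm --detail /dev/md126
```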



  • @Sebastian-Roth Here’s what I got. This was the same debug session. Not sure if I need to start a new session first.

    0_1538510645422_pic3.jpg


  • Moderator

    @george1421 Well, this is what I see when I configured the 780 with two 250GB disks set up in raid 0 mode (striped).

    0_1538509662011_mdstat.png

    and for lsblk
    0_1538509672956_lsblks.png

    I can say that I have a global kernel parameter of mdraid=true always set. When I watched FOS boot, I saw a message about the mdadm container being assembled and then the container starting. I haven’t tried to deploy to this system just yet, because I need to edit the host configuration to include /dev/md126, but I’m pretty sure it will image like this. The raid array appears to be in good health.

    [Edit] Yes, I can affirm that it imaged correctly; Windows OOBE is currently running on that system.


  • Moderator

    @Zerpie I’m thinking that md127 being set to inactive is also a clue. I just grabbed an old Optiplex 780 and I’ll spin up a test workstation with the Intel raid configured, to see if I can duplicate the results here.


 
