Intel Raid0 Image Capture
-
@jpmartin I think he was just typing. Ultimately, remove the “Here” as I highly doubt you all have a “/dev/md126Here” labeled device. :smiley_face_here:
As we don’t have smilies, I suppose it would be better to add a note that I’m just playing around.
-
@Tom-Elliott That’s what I was thinking too, but I was just checking to make sure I didn’t miss anything.
Have the changes been pushed to the svn trunk so I can update and test?
-
@jpmartin said in Intel Raid0 Image Capture:
@george1421 where does “Here” come from in “/dev/md126Here” in your post just below?
Sorry, trying to do my job and play at the same time; victim of copy/paste. Tom is right, it’s just /dev/md126.
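For anyone following along, a quick sanity check to confirm what the Intel RAID volume actually shows up as (a sketch; the md device numbering can vary between boots):

```bash
# List active md devices; with Intel (IMSM) firmware RAID the volume usually
# appears as md126, with md127 acting as the metadata container
cat /proc/mdstat

# Show details of the assembled array: RAID level, state, member disks
mdadm --detail /dev/md126
```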
-
Yes, changes have been pushed and inits are updated. Reinstalling will work as well, but I still say to just update.
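For anyone updating the same way, assuming a standard svn trunk checkout (the path below is just an illustration, adjust to wherever your checkout lives), the update is roughly:

```bash
# Pull the latest trunk (checkout location is an assumption)
cd /opt/fog-trunk
svn update

# Re-run the installer so the updated inits (and kernels) get put in place
cd bin
sudo ./installfog.sh
```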
-
@Tom-Elliott Excellent. I’ll update and report back.
-
Bingo!! My test bench system is deploying!!
-
Running a Resizable Capture now.
Fog is resizing the file system currently.
Will test deploy after this image is captured.
-
Resizable capture worked perfectly.
Running deploy now, should be finished in ~6 minutes, all is looking good so far.
-
@jpmartin That’s great!! I’ll have to write this up in a tutorial so that it’s documented properly now.
(Shameless donation request)
If your company finds value in the FOG management tools, please consider donating to the FOG Project. The level of support, bug fixes, and rapid feature advancement that companies receive from the FOG Project is much higher than what almost any Tier 1 support group delivers. If all companies and organizations that productively use FOG within their site(s) just contributed $50USD to the FOG Project, it would really help to support a very worthwhile ecosystem.
-
Thank you guys for working together and getting this fixed in just about 24 hours! This is amazing!
-
@george1421 I restored the image as multicast and it errored on one machine after the image was downloaded. I forget the specific error it threw, but it complained about not being able to mount or unmount a partition.
That machine rebooted, then advanced to the same spot where the other was sitting.
It was a partclone screen saying “restoring image (-) to drive/device (/dev/sda1)” and I wasn’t able to hang out long enough to see if it advanced.
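For anyone who hits the same thing later, a rough way to check whether the array actually assembled (a sketch, run from a FOG debug task on the affected host; if it never assembled, the bare member disk /dev/sda is all the client would see):

```bash
# If mdraid assembled correctly, md126 should be listed as active here
cat /proc/mdstat

# Prints ARRAY lines for anything mdadm has assembled; empty output would
# suggest the client fell back to the bare member disk (hence /dev/sda1)
mdadm --detail --scan
```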
I’ll be back in the office in an hour or so; I can give an update then and provide any additional information that may be helpful if the imaging process didn’t finish. I’m hoping it figured itself out after I left for a meeting.
I’m an intern exploring the FOG Project as a potential solution to some of our imaging issues. I’ll be sure to make the point that if we end up implementing it, we should make a donation to support the effort.
-
@jpmartin Are you trying to restore a RAID-based image to a regular disk, or did you maybe forget to update the primary device on that host?
-
The image was a Resizable Single Disk image being restored in RAID0 to machines identical to the one it was captured from (other than the RAID volume name in the Intel RAID Configuration, which may differ). The machine that didn’t show the error was the machine the image was captured from. If one of the RAID volumes has a different name in the Intel RAID Config, it’d be the one that errored.
During the restore, both machines showed the image being restored to /dev/md126p1 (I think the name is correct; either way, it was restoring to the same location the resizable image was captured from).
I believe the Host Kernel Arguments and Host Primary Disk were updated to mdraid=true and /dev/md126.
Will double check when I get back to the office.
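One way to double-check from the client side is a FOG debug task; roughly (md126 and the mdraid flag are assumed from the earlier posts):

```bash
# Host Kernel Arguments get appended to the FOS kernel command line,
# so the mdraid flag should show up here if the web UI setting took effect
grep -o 'mdraid=[^ ]*' /proc/cmdline

# The Host Primary Disk setting should point at an assembled device
ls -l /dev/md126
```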
-
@jpmartin How many of these systems are you trying to restore at one time?
Understand that each of these systems will need to be registered in FOG and their “Host Kernel Arguments” and “Host Primary Disk” updated to the new settings. I would recommend you create a new group, assign these hosts to that group, and then use the group update function to set these parameters for all hosts in that group. That way you can be sure they are all the same.
Please also understand we are walking the bleeding edge here. We did just prove that unicast imaging worked; throwing multicasting into the picture may expose some other bugs (since that wasn’t tested). The random /dev/sda1 is troubling. Now, I didn’t go in and change the volume name to see if it messed with the /dev/md126 naming (I’ll do that later for completeness). I did notice under /dev/md/ there was a device with the name of the volume I created, Volume0_0, but again we are referencing the logical name /dev/md126, so the actual volume name should not matter.
The Windows boot partition did get restored to /dev/md126p1 and the drive to /dev/md126p2.
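To make that concrete, the mapping should be visible as a symlink (a sketch; Volume0_0 is just the volume name from this test):

```bash
# The entry under /dev/md/ carries the volume name from the Intel RAID config,
# but it is only a symlink to the kernel's logical device, e.g.
#   Volume0_0 -> ../md126
# which is why the volume name itself shouldn't matter to FOG
ls -l /dev/md/
```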
-
@george1421 said in Intel Raid0 Image Capture:
@jpmartin How many of these systems are you trying to restore at one time?
Just 2 right now. If it makes it to production, it could easily be 25+ at a time.
Understand that each of these systems will need to be registered in FOG and their “Host Kernel Arguments” and “Host Primary Disk” updated to the new settings. I would recommend you create a new group, assign these hosts to that group, and then use the group update function to set these parameters for all hosts in that group. That way you can be sure they are all the same.
This is actually what I did. When I tried to edit those settings for the group, the Primary Disk and BIOS Exit Type (Grub_First_HDD) didn’t “stick”. I’d click update and those fields would return to default values. When I viewed the 2 hosts individually, the values above were also cleared/reset to default, so that very easily could have been the problem right there.
I set them back to the correct settings individually and just finished a unicast deployment to the machine the image was created with.
I also deleted the existing RAID0 array on that machine and recreated it with a different name. FOG didn’t care, so I don’t think that was the issue.
Please also understand we are walking the bleeding edge here. We did just prove that unicast imaging worked; throwing multicasting into the picture may expose some other bugs (since that wasn’t tested). The random /dev/sda1 is troubling. Now, I didn’t go in and change the volume name to see if it messed with the /dev/md126 naming (I’ll do that later for completeness). I did notice under /dev/md/ there was a device with the name of the volume I created, Volume0_0, but again we are referencing the logical name /dev/md126, so the actual volume name should not matter.
Good to know. I’ll do what I can to break stuff and report what happened.
EDIT: Just to update, I corrected the settings for each host individually instead of using group management, created a multicast task for the group (I didn’t change any host settings on the group management page, hoping it would use the host-specific settings), and successfully imaged both hosts via multicast.