
    Posts made by jpmartin

    • RE: Intel Raid0 Image Capture

      @george1421

      @george1421 said in Intel Raid0 Image Capture:

      @jpmartin How many of these systems are you trying to restore at one time?

      Just 2 right now. If it makes it to production it could easily be 25+ at a time.

      Understand that each of these systems will need to be registered in FOG and their “Host Kernel Arguments” and “Host Primary Disk” updated to the new settings. I would recommend you create a new group, assign these hosts to that group, and then use the group update function to set these parameters for all hosts in that group. That way you can be sure they are all the same.

      This is actually what I did. When I tried to edit those settings for the group, the Primary Disk and Bios Exit Type (Grub_First_HDD) didn’t “stick”. I’d click update and those fields would return to default values. When I viewed the 2 hosts individually, the values above were also cleared/reset to default so that very easily could have been the problem right there.

      I set them back to the correct settings individually and just finished a unicast deployment to the machine the image was created with.

      I also deleted the existing RAID0 array on that machine and recreated it with a different name. Fog didn’t care, so I don’t think that was the issue.

      Please also understand we are walking the bleeding edge here. We did just prove that unicast imaging worked. Throwing multicasting into the picture may expose some other bugs (since that wasn’t tested). The random /dev/sda1 is troubling. Now, I didn’t go in and change the volume name to see if it messed with the /dev/md126 naming (I’ll do that later for completeness). I did notice under /dev/md/ there was a device with the name of the volume I created, Volume0_0, but again we are referencing the logical names of /dev/md126, so the actual volume name should not matter.

      Good to know. I’ll do what I can to break stuff and report what happened.
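      The point above about /dev/md/Volume0_0 vs. /dev/md126 comes down to a symlink: mdadm/udev normally create /dev/md/&lt;VolumeName&gt; as a link back to the kernel’s stable /dev/mdN node, which is consistent with renaming the volume having no effect. A hardware-free sketch of that indirection (a temp directory stands in for /dev, since we can’t assume fake-RAID hardware here):

```shell
# Mimic /dev/md/Volume0_0 -> ../md126 in a temp dir; paths are illustrative only.
tmp=$(mktemp -d)
mkdir "$tmp/md"
touch "$tmp/md126"                      # stand-in for the kernel node /dev/md126
ln -s ../md126 "$tmp/md/Volume0_0"      # stand-in for the named symlink
readlink -f "$tmp/md/Volume0_0"         # resolves to .../md126, not Volume0_0
```

      Anything that opens /dev/md/Volume0_0 ends up on the same device node as /dev/md126, so tooling that references the logical /dev/md126 name never sees the volume name at all.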

      EDIT: Just to update, I corrected the settings for each host individually instead of using group management, created a multi-cast task for the group (I didn’t change any host settings on the group management page hoping it would use the host specific settings) and successfully imaged both hosts via multicast.

      posted in FOG Problems
    • RE: Intel Raid0 Image Capture

      @Tom-Elliott

      The image was a Resizable Single Disk image, restored to machines in Raid0 identical to the one it was captured from (other than the RAID volume name in the Intel RAID configuration, which may differ). The machine that didn’t show the error was the one the image was captured from. If one of the RAID volumes has a different name in the Intel Raid Config, it’d be the one that errored.

      During the restore of the image, both machines showed that the image was being restored to /dev/md126p1 (I think the name is correct, either way it was restoring to the same location that the resizable image was captured from).

      I believe the kernel host arguments and primary disk were updated to mdraid=true and /dev/md126.

      Will double check when I get back to the office.

      posted in FOG Problems
    • RE: Intel Raid0 Image Capture

      @george1421 I restored the image as multicast and it errored on one machine after the image was downloaded. I forget the specific error it threw, but it complained about not being able to mount or unmount a partition.

      That machine rebooted, then advanced to the same spot that the other was sitting.

      It was a partclone screen saying “restoring image (-) to drive/device (/dev/sda1)”, and I wasn’t able to hang around long enough to see if it advanced.

      I’ll be back in the office in an hour or so, I can give an update then and provide any additional information that may be helpful if the imaging process didn’t finish. I’m hoping it figured itself out after I left for a meeting.

      I’m an intern exploring Fog Project as a potential solution to some of our imaging issues, I’ll be sure to make the point that if we end up implementing Fog Project that we should make a donation to support the efforts.

      posted in FOG Problems
    • RE: Intel Raid0 Image Capture

      Resizable capture worked perfectly.

      Running deploy now, should be finished in ~6 minutes, all is looking good so far.

      posted in FOG Problems
    • RE: Intel Raid0 Image Capture

      Running a Resizable Capture now.

      Fog is resizing the file system currently.

      Will test deploy after this image is captured.

      posted in FOG Problems
    • RE: Intel Raid0 Image Capture

      @Tom-Elliott Excellent. I’ll update and report back.

      posted in FOG Problems
    • RE: Intel Raid0 Image Capture

      @Tom-Elliott That’s what I was thinking too. But was just checking to make sure I didn’t miss anything.

      Have the changes been pushed to the svn trunk so I can update and test?

      posted in FOG Problems
    • RE: Intel Raid0 Image Capture

      @george1421 where does “Here” come from in “/dev/md126Here” in your post just below?

      posted in FOG Problems
    • RE: Intel Raid0 Image Capture

      @george1421 Have you been able to capture an image from the machine?

      Do you think it’d be possible to capture a Single Disk Resizable image from these fake raid machines?

      posted in FOG Problems
    • RE: Intel Raid0 Image Capture

      @Sebastian-Roth said in Intel Raid0 Image Capture:

      @jpmartin George’s mdadm -D /dev/md126 seems to be very handy and informative. Give that a try!

      No image file found that would match the partitions to be restored
      args passed /dev/md126 /images/WIN7ENTX64 all
      

      Guess that’s just a matter of tuning the init scripts to make this work. Will have a look tomorrow. Marking this unread… 😉

      Here you go:

      mdadm -D /dev/md126.txt

      posted in FOG Problems
    • RE: Intel Raid0 Image Capture

      @george1421 We’re on Raid0, if that makes a difference.

      posted in FOG Problems
    • RE: Intel Raid0 Image Capture

      @Sebastian-Roth said in Intel Raid0 Image Capture:

      The tools we use in the inits are not the ones you usually have on your normal linux desktop or server, as this is a buildroot “toolchain”. The options are mostly similar to the conventional linux tools, but some are different. lsblk is definitely one with quite different options. While you are in debug mode, run lsblk --help to see its options.

      Great you found my old post and tried all the commands. While it’s still true that I don’t have a system to test I guess we can take this one step further if you keep on posting valuable information.

      • mdadm examine output looks ok to me.
      • mdstat not great as md127 is inactive. Maybe this helps?
      • possibly this is just an issue because we don’t have /etc/mdadm.conf - maybe? Take a look at that file on your system before running the debug upload task. Maybe put a copy on your FOG server (/images/dev), boot in FOG debug and copy from FOG server to the client. Then see if you can assemble the array properly?!

      These are Windows 7 systems, so I don’t have a mdadm.conf to go off of.

      I went to that link and started down the list, but it got over my head pretty quickly.

      These are the results of mdadm --examine of sda and sdb.

      mdadm_examine_sdb.txt
      mdadm_examine_sda.txt
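      Since these are Windows systems with no existing mdadm.conf to copy, one option (a sketch of standard mdadm usage, untested on this hardware) is to have mdadm generate the config itself from the IMSM metadata on the member disks while in the FOG debug shell:

```shell
# Sketch, not verified on these machines: build mdadm.conf from the
# metadata mdadm finds on the disks, then assemble using that config.
mdadm --examine --scan > /etc/mdadm.conf
mdadm --assemble --scan
cat /proc/mdstat    # check whether md126 now shows up as active
```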

      posted in FOG Problems
    • RE: Intel Raid0 Image Capture

      @Tom-Elliott said in Intel Raid0 Image Capture:

      if you can reboot a few times and you have the mdraid='true' argument set on this host, can you verify the name '/dev/md126' is always the same?

      If it is, you will need to have '/dev/md126' added to the host’s primary device as well.

      I didn’t specify in my earlier posts, but I believe /dev/md126 was consistent. I set Host primary disk to /dev/md126 and tested a couple different times.

      As Multiple Partition - Single Disk: It could find the disk (array), but not the partitions.
      As Multiple Partition - All Disks: It was unable to find the disk (array).

      @george1421 said in Intel Raid0 Image Capture:

      @jpmartin It would be interesting to know the output of

      mdadm -D /dev/md126

      Will get that output for you shortly.

      I’m going to run a bunch of different tests and dump the outputs to .txt files. Let me know if there is a command you need me to run in addition to the one just above, fdisk -l, and those from this post:

      @Sebastian-Roth said in Intel RAID:

      I am sorry but I feel pretty lost with this as I don’t have such a machine here to test!

      The only thing I can offer is to go through this again from the start step by step. Just an offer. It’s up to you if you want to.

      Let’s start with the client you are getting the image from. Configure a completely new image for that client in FOG and make it Multiple partition image - single disk (not resizable). Then run a debug session for that client, boot it up and wait till you get to the command shell. Then run the following commands and post the full output here in the forums or upload the text files. Replace x.x.x.x with the IP address of your FOG server.

      mkdir -p /mnt
      mount -t nfs -o rw,nolock x.x.x.x:/images/dev /mnt
      mdadm --examine /dev/sd? > /mnt/mdadm_examine.txt
      cat /proc/mdstat > /mnt/mdstat.txt
      mdadm --assemble --scan --verbose > /mnt/mdadm_assemble.txt
      ls -al /dev/md* > /mnt/md_devices.txt
      umount /mnt
      

      I just typed and copied those commands without testing. There might be typos.
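      One subtle catch in the list above: mdadm sends its --verbose assembly messages to stderr, not stdout, so the plain > redirect on the --assemble line produces an empty mdadm_assemble.txt, which matches what was observed. A hardware-free demonstration using a stand-in function in place of mdadm:

```shell
# mdadm's --verbose chatter goes to stderr; '>' only redirects stdout.
# fake_mdadm is a stand-in so this runs without any RAID hardware.
tmp=$(mktemp -d)
fake_mdadm() { echo "mdadm: looking for devices for further assembly" >&2; }
fake_mdadm > "$tmp/plain.txt"        # stderr bypasses the redirect; file stays empty
fake_mdadm > "$tmp/both.txt" 2>&1    # 2>&1 folds stderr into the file as well
[ -s "$tmp/plain.txt" ] || echo "plain.txt is empty"
[ -s "$tmp/both.txt" ] && echo "both.txt captured the message"
```

      So capturing the assemble output should work with: mdadm --assemble --scan --verbose > /mnt/mdadm_assemble.txt 2>&1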

      posted in FOG Problems
    • RE: Intel Raid0 Image Capture

      @george1421 said in Intel Raid0 Image Capture:

      I can say that dealing with the hybrid software / hardware raid (fake raid) is kind of a pita.

      Yes it is.

      What you say when you reference /sda and /sdb is that you are talking to the individual disks of the array, not the array. This is true. While I haven’t attempted to mess with this, is the logical array referenced with /dev/md126 from above? I might expect the array itself to be something like /dev/md/{array0} or something similar.

      /dev/md126 is, as best I can tell, the Raid array. mdstat.txt in the OP is the output of:

      cat /proc/mdstat
      

      while in debug capture mode.

      I was hoping that if I switched to multiple disks, it would look for the individual disks and image each of them instead of looking for the Raid array. But it seems as though partclone is looking for the total number of blocks for the entire array and trying to read/write all of them from/to sda1.

      posted in FOG Problems
    • Intel Raid0 Image Capture

      Original topic is here: https://forums.fogproject.org/topic/4218/intel-raid?page=1

      I’ve done everything in that topic and am still having issues.

      I’m attempting to capture an image of a Lenovo Thinkpad W530 in Raid0.

      Here are the .txt files you requested (I’m not the OP of that topic, but I’m having the same trouble imaging an Intel Raid0 system).

      On latest trunk as of today at 9am.

      mdstat.txt
      mdadm_examine.txt
      mdadm_assemble.txt (This file is empty. I ran the command several times and it never dumped any data into the .txt file)
      md_devices.txt

      Host options are:

      kernel arguments: mdraid=true
      primary disk: /dev/md126

      When running as “single disk - multiple partitions”, it goes great until it checks for partitions and then returns:

      “Could not find Partitions”

      I’m not very experienced with linux (specifically .sh scripting), but the portion of the functions.sh script that looks for partitions doesn’t make much sense to me, especially this part:

      diskSize=$(lsblk --bytes -dplno SIZE -I 3,8,9,179,259 $hd)
      [[ $diskSize -gt 2199023255552 ]] && layPartSize="2tB"
      echo " * Using Disk: $hd"

      I looked everywhere for the -dplno switch, but haven’t found any documentation of it.

      functions.txt
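      For what it’s worth, -dplno isn’t a single undocumented switch but five stacked short options from lsblk(8): -d/--nodeps (no partition rows), -p/--paths (full device paths), -l/--list, -n/--noheadings, and -o/--output SIZE; -I restricts output to the listed device major numbers. The magic number is 2 TiB expressed in bytes. A standalone sketch of the same check, with the lsblk call replaced by a hard-coded example value so it runs on any machine:

```shell
# Stand-in for: diskSize=$(lsblk --bytes -dplno SIZE -I 3,8,9,179,259 $hd)
diskSize=3298534883328                 # hypothetical 3 TiB disk, in bytes
layPartSize=""
# 2199023255552 bytes = 2 TiB; FOG flags larger disks with layPartSize="2tB"
[ "$diskSize" -gt 2199023255552 ] && layPartSize="2tB"
echo "layPartSize=$layPartSize"
```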

      Running a capture now as “Multiple Partitions - All Disks”

      Host options:

      kernel arguments: mdraid=true
      primary disk: [Empty]

      This run only imaged sda1 and didn’t touch sdb, so I essentially got a “half” image that was the size of the single HDD.

      Would love to be able to image RAID0 systems as resizable so I’m not generating ~1TB RAW images to deploy.

      Will run another test where I disable the Intel raid in the BIOS and remove “mdraid=true” from the kernel arguments. Hopefully this will allow partclone to RAW image both disks.

      Any input or help is appreciated.

      posted in FOG Problems
    • RE: Intel RAID

      @Developers, @Senior-Developers

      I’m attempting to capture an image of a Lenovo Thinkpad W530 in Raid0.

      On latest trunk as of today at 9am.

      mdstat.txt
      mdadm_examine.txt
      mdadm_assemble.txt (This file is empty. I ran the command several times and it never dumped any data into the .txt file)
      md_devices.txt

      Host options are:

      kernel arguments: mdraid=true
      primary disk: /dev/md126

      When running as “single disk - multiple partitions”, it goes great until it checks for partitions and then returns:

      “Could not find Partitions”

      Running a capture now as “Multiple Partitions - All Disks”

      Host options:

      kernel arguments: mdraid=true
      primary disk: [Empty]

      Only showing sda1 in the partclone view; hopefully it will start sdb immediately after.

      EDIT: It appears it didn’t. Only made a RAW image of sda1.

      posted in Hardware Compatibility