Host Startup; Booting into LVM Disk Fails
-
@dholtz-docbox Well, that's a bit unexpected.
sda is an unpartitioned ~1TB disk.
sdb is a bit confusing, since there are two small partitions and then a large partition (where the LVM probably is). The two small partitions are what I'm questioning. One I can understand, since you need a boot partition if you are using LVM on sdb3.
sdc looks like maybe an 8GB flash drive?
Is this an accurate assessment?
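For reference, that assessment can be read straight out of `lsblk -ln -o NAME,SIZE,TYPE,FSTYPE` on the machine. The sketch below uses sample data modeled on this thread (not a live capture, and the exact sizes are assumptions) and filters out where the LVM physical volume sits:

```shell
# Sample lsblk-style listing (assumed layout: sda bare, sdb carrying
# boot partitions plus an LVM PV on sdb3, sdc a small flash drive).
layout='NAME   SIZE TYPE FSTYPE
sda  931.5G disk
sdb  931.5G disk
sdb1   512M part vfat
sdb2   488M part ext2
sdb3 930.5G part LVM2_member
sdc    7.5G disk vfat'

# The FSTYPE column makes the LVM physical volume easy to spot:
lvm_part=$(printf '%s\n' "$layout" | awk '$4 == "LVM2_member" {print $1}')
echo "LVM physical volume on: $lvm_part"
```

On the real machine, `sudo pvs` and `sudo lvs` would confirm the same thing from the LVM side.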
-
@george1421 said in Host Startup; Booting into LVM Disk Fails:
Second, when you installed Ubuntu, are you sure that you didn't install any type of boot loader onto that first drive?
When I install Ubuntu, I go through the guided installation of LVM, using the entire disk. From there I let it create and write the partitions. I only install Ubuntu on the second disk.
I haven't done anything to the other disk yet - it's still just a factory-default drive. I don't get an option to format this disk when I go through the process, or at least I don't feel like I do. I am used to the desktop installer, I guess. That said, should I try to format this drive as well?
@george1421 said in Host Startup; Booting into LVM Disk Fails:
Third, in the AMD BIOS for this, when you tell it to boot from the disk with Ubuntu on it, is that disk listed in the BIOS as the second hard drive? Is the BIOS seeing it as disk 2, or as some other number? (I'm not sure if the numbering is base 0 or base 1 for this discussion.)
I posted this earlier in the thread, let me dig it up. That said, I am pretty sure it’s…
- NIC
- ubuntu (not sure who this one is, but it’s the only one I can boot from)
- ubuntu (not sure who these guys are)
- sata drive (which should be /dev/sdb)
@george1421 said in Host Startup; Booting into LVM Disk Fails:
I still plan on testing a similar setup with a dell 780 and 2 hard drives to see if I can boot off the second drive.
Oh, that’s awesome! Thank you very much.
-
@george1421 said in Host Startup; Booting into LVM Disk Fails:
@dholtz-docbox Well, that's a bit unexpected.
sda is an unpartitioned ~1TB disk.
sdb is a bit confusing, since there are two small partitions and then a large partition (where the LVM probably is). The two small partitions are what I'm questioning. One I can understand, since you need a boot partition if you are using LVM on sdb3.
sdc looks like maybe an 8GB flash drive?
Is this an accurate assessment?
Pretty spot-on
Let me take a snapshot of what the partitioner does to the disk…
Edit> Installer Album
Edit> This is how it looks after installing ubuntu, all I do is move the IBA option to the top for network boot priority
-
@dholtz-docbox can you move the values at option 3 and option 4 into the slot 1 and 2 positions?
With a quick registration, then a boot to iPXE and then to the local hard drive, the Ubuntu boot loader started correctly with the default sanboot exit.
-
This is just a wild idea that probably will not work, but what if you set the primary HDD in FOG to sdb for capturing, set another host's primary disk to sda, and deploy? Would the deployed machine pass through FOG's HDD exit process?
-
@george1421 : I feel I have tried this, but let me give it a go. I have tried so many permutations of these settings, it's hard to say which combinations haven't been checked, heh. Do you have the Host Primary Disk set up too?
-
@Wayne-Workman After I finish a project I’m working on I’ll document what I did, and it worked correctly. The system always booted ubuntu from a second disk even when I swapped the initial /dev/sda out with a virgin hard drive.
-
@dholtz-docbox said in Host Startup; Booting into LVM Disk Fails:
@george1421 : I feel I have tried this, but let me give it a go. I have tried so many permutations of these settings, it's hard to say which combinations haven't been checked, heh. Do you have the Host Primary Disk set up too?
I did nothing more than install a second hard drive in the 780; the original first hard drive had Windows on it. I installed Ubuntu on the second hard drive and changed the boot order to put the second hard drive first, then the NIC. On boot, Ubuntu came up. I then swapped the NIC and the Ubuntu disk in the boot order, PXE booted, registered using quick registration, rebooted into the iPXE menu, and let it time out to boot the hard drive. It booted Ubuntu. From there (and just to make sure it didn't install a boot loader on disk 1) I swapped disk 1 out with a virgin SSD, PXE booted, let the timeout happen, and it booted into Ubuntu. I never touched the FOG web GUI; my default BIOS exit mode in FOG is sanboot.
-
@george1421 : Hmm… There must be something on /dev/sda, because quick registration handled it as /dev/sda instead of /dev/sdb. That said, when it boots, it boots to an empty cursor. lsblk would show it if there were a boot partition, right? I feel we are close…
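Not necessarily: lsblk lists partitions, but legacy boot code lives in the first 440 bytes of the MBR, outside any partition, so a loader on /dev/sda would not show up there. A minimal sketch of a direct check is below, demonstrated on a scratch image file rather than a real disk (on real hardware you would point IMG at /dev/sda and run it read-only as root):

```shell
# Build a blank 512-byte "MBR" and stamp only the 0x55AA signature at
# offset 510, leaving the boot-code area (bytes 0-439) all zeros.
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" bs=512 count=1 status=none
printf '\x55\xaa' | dd of="$IMG" bs=1 seek=510 conv=notrunc status=none

# A valid MBR ends with the two-byte 0x55AA signature:
sig=$(od -An -tx1 -j510 -N2 "$IMG" | tr -d ' ')
echo "signature: $sig"

# If the first 440 bytes are all zeros, no boot loader was installed:
if od -v -An -tx1 -N440 "$IMG" | grep -qv '^\( 00\)*$'; then
  bootcode="boot code present"
else
  bootcode="no boot code installed"
fi
echo "$bootcode"
```

If /dev/sda shows non-zero boot code, the installer (or something else) did touch it, even though lsblk reports no partitions.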
-
I wanted to update the thread on where we currently are with this issue.
I had a several-hour chat session with the OP. It appears the device they are building is UEFI based, and the disk also appears to be set up as GPT. What is really confusing me right now is that the OP said this device boots with both undionly.kpxe and ipxe.efi being sent to it. This is the first time I've ever seen a device be dynamic like this. But if the device is in UEFI mode, that would explain the firmware configuration boot order and why ubuntu is specifically called out as a boot device. I asked the OP to contact the device manufacturer to find out for sure what mode/format this device uses. When I say device, this is not a typical computer but an embedded device in some kind of equipment that runs stock Ubuntu (14.04, I think). So if this is UEFI, then playing with the BIOS exit modes will have no impact on boot behavior, since the UEFI exit mode setting in FOG would be the one in play. Setting the UEFI exit mode to rEFInd did not help.
My personal opinion is that there is a combination of issues here, including the way the image is created on the destination disk (this is a personal opinion). As currently set up, the OP has 2 disks installed in this device (that is OK), and they are deploying Ubuntu to the second disk (still OK). But from there the OP created a single LVM partition and installed the OS and swap on that single LVM. This in itself is not bad, but in my opinion I would just create the disks using traditional partitions and leave the LVM overhead out of the picture, since the Ubuntu disk space will never grow to where you might need the LVM features. There is no right or wrong way here; I would just use normal partitions to make it easier to manage.
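One way to settle the UEFI-vs-BIOS and GPT-vs-MBR questions from a running Linux system is sketched below. The firmware check relies on the kernel exposing /sys/firmware/efi only when booted via UEFI; the partition-table check looks for the GPT signature "EFI PART" at byte offset 512 (LBA 1), demonstrated here on a scratch image file (on real hardware you would point DISK at e.g. /dev/sdb as root):

```shell
# 1) Firmware mode of the running system:
if [ -d /sys/firmware/efi ]; then
  firmware=UEFI
else
  firmware=BIOS
fi
echo "firmware: $firmware"

# 2) Partition-table type: fake a GPT disk by writing the "EFI PART"
#    signature at LBA 1 of a scratch image, then detect it.
DISK=$(mktemp)
dd if=/dev/zero of="$DISK" bs=512 count=2 status=none
printf 'EFI PART' | dd of="$DISK" bs=1 seek=512 conv=notrunc status=none

if dd if="$DISK" bs=1 skip=512 count=8 status=none | grep -q 'EFI PART'; then
  table=GPT
else
  table="MBR (or blank)"
fi
echo "partition table: $table"
```

On a real box, `sudo gdisk -l /dev/sdb` or `sudo parted /dev/sdb print` would report the same thing with more detail.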
I left the OP with the task of setting up a similar configuration using a VM, to prove out whether their design and FOG will be successful. I recommended that they use a VM to create their golden image anyway; that is how we create them in the Windows world. The VM will also abstract the golden image from the underlying hardware, plus it will give them the ability to create snapshots and roll them back as they perfect their golden image. Once they have it perfected, they can capture it with FOG and deploy to the target hardware (once they fully understand how that device is defined).
At this point FOG is working as it was designed; there are just some unknowns with the hardware that need to be worked out before he can start imaging these devices.
-
@george1421 From my testing, FOG is working as designed. I built up a similar configuration (as I understood it at the time) using a Dell o780 in BIOS mode (UEFI is not supported on this model either, but at the time I set it up I was under the impression the device was a BIOS system).
The o780 was one that was pulled from our production environment, so hard disk 1 has a functioning copy of Windows 7 on it. I removed the SATA cable from the CD-ROM and plugged it into a laptop hard drive I had handy. This gave me a 250GB disk 1 and a 500GB disk 2. The size really doesn't matter; I just picked different drive sizes so I could tell them apart during imaging. Since I used the CD-ROM SATA cable for this second hard drive, I used a USB CD-ROM to load Ubuntu Desktop 15.10 onto it. I USB booted Ubuntu and selected install. When I got to the point of hard drive selection, I picked "other" (because I wanted to control which drive the installer used) and then manually deleted the contents of the second hard drive. From there I created a 4GB swap partition and then the rest as an LVM partition (trying to mimic the target device). I ensured the boot loader was installed on the second hard drive, then let the installer continue to completion.
Upon reboot I went into the BIOS and changed the boot order so the second hard drive was the only option; no other boot devices were selected. When I rebooted the computer, Ubuntu booted from the second hard drive (the boot loader gave me the option to boot Windows from the first hard drive). I rebooted again and made the first hard drive the boot device, and Windows booted. Again I rebooted and made the second hard drive first and the NIC second; Ubuntu booted. I rebooted once more and made the NIC first and the second hard drive second. This time the 780 booted into the FOG iPXE menu. From there I quick registered the 780 and rebooted back into the iPXE boot menu. After the timeout, iPXE exited to boot Ubuntu (from the second hard drive). My default exit mode for BIOS mode in FOG is sanboot. Just to be sure no boot loader had been accidentally installed on disk 1 (the Windows disk), I removed the Windows disk, installed a virgin SSD, and again rebooted the 780. The system PXE booted into the iPXE menu, then timed out and booted Ubuntu as it should. I never touched the FOG web GUI, so no host-specific changes were made; I just registered the target computer with FOG and then PXE booted it.
So at this point FOG is working correctly and it will exit to and boot from second hard drive using sanboot without any intervention in my testlab.
Now, since I created this POC, new information came out that this device is kind of, sort of, UEFI on a GPT disk. So that negates what I proved to work. But now we know FOG is solid!
-
@dholtz-docbox On Dells, the system can be in UEFI mode, but if legacy option ROMs are enabled, that UEFI system will still boot from something like undionly.kpxe. When this happens, though, it will never properly exit to a GPT disk. I've seen this before at work.
Most likely, the answer here is to configure DHCP to hand out only ipxe.efi.
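A common way to arrange that without breaking other clients (a sketch based on the usual ISC dhcpd approach; server and subnet addresses are placeholders) is to key the boot file on the client architecture reported in DHCP option 93, so UEFI machines get ipxe.efi and any remaining legacy clients still get undionly.kpxe:

```
# Hypothetical ISC dhcpd fragment: declare DHCP option 93 and pick the
# boot file by client architecture. Addresses below are placeholders.
option arch code 93 = unsigned integer 16;

subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  next-server 192.168.1.10;           # FOG/TFTP server (placeholder)

  if option arch = 00:07 {
    filename "ipxe.efi";              # UEFI x64 client
  } elsif option arch = 00:09 {
    filename "ipxe.efi";              # UEFI x64 (alternate ID)
  } else {
    filename "undionly.kpxe";         # legacy BIOS client
  }
}
```

If every client on the subnet is known to be UEFI, simply setting `filename "ipxe.efi";` unconditionally achieves the same thing.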