Dual boot (2 disks) unable to boot grub
-
@ClementK Which version of FOG do you use?
So when you look at the second disk using systemrescuecd, do you see the partitions at all or is the disk still empty?
Did you see any errors on screen when deploying, or did it go all the way through without an issue?
As you mention UEFI, I think this could just be the UEFI boot entry missing for the Debian Linux install. FOG does not touch the UEFI boot entries; this is something you need to take care of yourself.
-
@Sebastian-Roth I have FOG version 1.5.9.
Yes, all the partitions are there and the files are present.
There was no error when deploying.
When I installed Debian on the master PC, I could boot GRUB and from there Windows or Debian. I deployed the image back onto the computer I used to create the master and had zero problems. It's only when I deploy the image to another computer that the problem occurs.
Maybe I need to run a postinstall script to update the UEFI boot entries?
Thanks for your reply,
ClémentK.
-
@ClementK said in Dual boot (2 disks) unable to boot grub:
Maybe I need to run a postinstall script to update the UEFI boot entries?
Exactly. Look into using the efibootmgr tool.
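For example, from a live Linux environment (or a FOG debug task) you can list the current UEFI boot entries first to confirm the Debian entry really is missing before creating one:
efibootmgr -v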
-
@Sebastian-Roth
I added this command as a postinstall script:
efibootmgr -c -d /dev/nvme0n1 -p 1 -L "Debian" -l "\EFI\debian\shimx64.efi"
and it worked.
Thanks for your help!
-
@ClementK Well done! We shall add this to the docs!
-
I believe this lack of a UEFI entry is the same reason I am finding that I need to enable CSM booting after I image Windows.
Is there a way to set up this script so that it always uses the correct device? For instance, some machines are NVMe, some SSD, some spinning rust, etc., so they would have different device names and partition numbers.
On one motherboard I just set up FOG on, I couldn't even see the NVMe drive as a boot option in the BIOS until I enabled CSM boot.
Thanks.
-
@Flyer said in Dual boot (2 disks) unable to boot grub:
Is there a way to set up this script so that it always uses the correct device? For instance, some machines are NVMe, some SSD, some spinning rust, etc., so they would have different device names and partition numbers.
Have you looked into using the magic post deploy scripts yet? While I can't give you a full script example from memory (and don't have one at hand either), I am pretty sure you have some valuable environment variables available when the script is called. So it might be as easy as
efibootmgr -c -d $hd -p 1 -L "Debian" -l "\EFI\debian\shimx64.efi"
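A minimal sketch of how that could look in the default postdownload script location (/images/postdownloadscripts/fog.postdownload), assuming the sourced FOG function library provides getHardDisk/handleError and sets $hd, as the full script later in this thread does:
#!/bin/bash
# Minimal sketch - assumes FOG's funcs.sh provides getHardDisk/handleError and sets $hd
. /usr/share/fog/lib/funcs.sh
getHardDisk                     # sets $hd to the detected target disk, e.g. /dev/nvme0n1 or /dev/sda
[[ -z $hd ]] && handleError "No disk found for EFI boot entry"
# Partition 1 is an assumption; point -p at wherever the ESP actually lives on your layout
efibootmgr -c -d $hd -p 1 -L "Debian" -l "\EFI\debian\shimx64.efi"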
-
@Sebastian-Roth Oh, nice! I had not seen that. Still pretty new to the ecosystem. Trying to learn all I can.
Will check it out. Thanks.
-
@Sebastian-Roth When I try to run efibootmgr from the postdownload script I am getting modprobe telling me it can’t find the efivars kernel module. Any idea as to the best way to get that where it needs to be, and more importantly where does it need to be?
I am curious how @ClementK was able to run it.
Anyone?
Edit: Perhaps I don't need to run modprobe, and the motherboard truly does not support EFI variables, which is the error I am getting from efibootmgr without first calling modprobe.
Errno 2 “EFI variables are not supported on this system.”
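A quick sanity check (independent of modprobe) is whether the client even booted in UEFI mode, since /sys/firmware/efi only exists on UEFI boots. Something like this could guard the script:
# Sketch: only attempt EFI boot entries when the client booted in UEFI mode
if [[ -d /sys/firmware/efi ]]; then
    efibootmgr -v    # safe to list/create entries here
else
    echo "Legacy BIOS/CSM boot - EFI variables not available, skipping"
fi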
-
@Flyer said in Dual boot (2 disks) unable to boot grub:
Any idea as to the best way to get that where it needs to be
While you figured out why this failed on that particular machine, I might still add that the FOS kernel lacks module loading support. All the needed modules are compiled into the kernel binary.
-
@Sebastian-Roth OK on the monolithic kernel vs. module loading. That explains the modprobe error I received (I had tried adding a manual call to modprobe in my script).
It turns out the issue must have been with that particular motherboard/BIOS. It is a much older machine (for anyone searching it was an ASUS Sabertooth X79 with BIOS version 3501). But there may have been other issues as well. I have yet to go back and try my now-seemingly-working script on that machine.
Below is the postdownload script I am now using to update the UEFI boot records on some newer machines (7950X on Gigabyte X670 boards). So far it is working well. I still need to test it against an existing boot record with the same name (which is the default Windows name); I know efibootmgr will throw a warning in that case. It took me a bit to realize that my Windows image was in MBR format and needed to be converted to GPT (web search for mbr2gpt).
#!/bin/bash
#
# Note that the Windows disk needs to be in GPT format, not MBR, or this
# will appear to succeed but still not boot.
#
# Public Domain - Modify and use at will. No Warranty.
#
. /usr/share/fog/lib/funcs.sh
[[ -z $postdownpath ]] && postdownpath="/images/postdownloadscripts/"
case $osid in
    5|6|7|9)
        clear
        echo "OS Type is Windows"
        getHardDisk
        if [[ -z $hd ]]; then
            handleError "Could not find hdd to use for EFI Boot Entry"
        fi
        echo "Found Install Disk: $hd"
        dots "Creating Win EFI Boot Entry for Disk: $hd Part: 1 using efibootmgr"
        # Windows UEFI partition is usually partition 1
        efibootmgr -c -L "Windows Boot Manager" -l "\\EFI\\Microsoft\\Boot\\bootmgfw.efi" -d $hd -p 1
        rc=$?
        if [[ ! $rc -eq 0 ]]; then
            echo "Failed. Error: $rc"
            # We opt not to fail here, as some machines may not support EFI boot entries.
            #debugPause
            #handleError "Failed to Create Windows EFI Boot Entry. Error: $rc"
            echo " *** Failed to Create Windows EFI Boot Entry. Error: $rc *** (pause 15sec)"
            sleep 15
        fi
        echo "done"
        echo ""
        # Output the resulting boot records for reference/debug
        efibootmgr -v
        echo ""
        debugPause
        ;;
    *)
        echo "Non-Windows Deployment"
        debugPause
        return
        ;;
esac
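For the existing-entry-with-the-same-name case mentioned above, one (so far untested) option would be to delete any entries with that label before the -c call, relying on efibootmgr listing entries as lines like "Boot0001* Windows Boot Manager":
# Sketch: remove any pre-existing "Windows Boot Manager" entries before creating a new one
for bootnum in $(efibootmgr | grep "Windows Boot Manager" | sed 's/^Boot\([0-9A-Fa-f]*\).*/\1/'); do
    efibootmgr -b "$bootnum" -B
done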
I do think that the postdownload scripts should be per host or per image. A global postdownload script means you then need to check the image or host in the script, which is not a clean way to do it. It would be better if this were moved into the UI so there were a way to set postdownload scripts for images and for hosts; that way you could run separate scripts for either or both.
-
@Flyer You can do quite a lot with post install scripts to interrogate the hardware, so if you need to make hardware-specific changes in your script, you can.
I have a few tutorials on this. In this example you can get the model number and system manufacturer of the computer the script is running on using dmidecode: https://forums.fogproject.org/post/88293
Also, here is a post install script for installing the proper Windows drivers onto the target hardware: https://forums.fogproject.org/topic/8889/fog-post-install-script-for-win-driver-injection The point of this script isn't to show you how to inject drivers, but the method used to extract data to make your post install script more flexible.
This post is just random snippets of code that look at the IP address and show how to get FOG runtime variables into your post install script: https://forums.fogproject.org/post/69725
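For example (just a sketch, not lifted verbatim from those tutorials), a post install script could branch on the values dmidecode reports:
# Sketch: make the script hardware-aware using dmidecode
sysman=$(dmidecode -s system-manufacturer)
sysmodel=$(dmidecode -s system-product-name)
echo "Running on: $sysman $sysmodel"
case $sysmodel in
    *SABERTOOTH*|*Sabertooth*) echo "Older ASUS board - handle EFI entries differently here" ;;
    *)                         echo "Default handling for $sysman $sysmodel" ;;
esac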
While I agree that having a post install script per host might be a good idea, I think it would only cover a narrow edge case and would become more of a management nightmare if you had a campus with thousands of hosts.
-
@george1421 Thanks for the reply. I had seen a lot of your (and others’) scripts. It does look like there are methods to do it from the script, no doubt.
It just seems like the scripts are generally either host or image specific, which is why I was thinking it would be good to link those in the DB for easy management. Perhaps they could be applied to groups or some other collection of hosts/images such that they could be applied to many hundreds or thousands of hosts at once. I have not looked at groups, so I have no clue what functionality they offer at this point. I do understand it may be more trouble than it's worth, since you can do it right now using the methods in the script.
As a side note, is there a way to drop into a shell in the OS some time between when the image is written and the reboot? I guess the postdownload script could just invoke a shell? It would be useful for debugging without having to re-image repeatedly.
-
@Flyer said in Dual boot (2 disks) unable to boot grub:
As a side note, is there a way to drop into a shell in the OS some time between when the image is written and the reboot? I guess the postdownload script could just invoke a shell? It would be useful for debugging without having to re-image repeatedly.
When I'm debugging post install scripts I do this in debug mode. You will actually go through image deployment, but it is a bit faster.
Schedule a deploy task, but before you hit the schedule task button, tick the debug checkbox. Then schedule the task and PXE boot the target computer. This will drop you to a command prompt after a few screens of text that you need to clear with the enter key.
Sidebar for remote editing: now that you are at the FOS Linux command prompt, get the IP address of the machine with the ip a s command. Give root a password with the passwd command. Make the password something simple like hello. Now you can connect to the target computer using putty or ssh from a comfortable location.
At the FOS Linux command prompt key in fog to start the imaging process. You will have to hit enter at every breakpoint in the deployment script. Hint: add a custom echo comment at the beginning of your post install script so you know when it starts. Use debugPause breakpoints in your script at important locations. If needed you can open up a new command shell, either via the console or a new ssh connection. If you get to a point where your script fails, hit ctrl-c to exit the deployment. Fix what you need, then enter fog again to start the deployment over again. Hint: it helps if you have a small image for deployment testing to make the deployment part go quicker.
-
@george1421 Great stuff. Thanks!
-