How to Add Boot to MEMDISK Option - Syntax Question
-
@cmcgonag It would be a small step from here: https://forums.fogproject.org/topic/10944/using-fog-to-pxe-boot-into-your-favorite-installer-images to provide a diskless netboot environment. As Sebastian said, this is outside the scope of FOG, but I’ve seen people do some pretty amazing things with FOG that aren’t related to system imaging, because it is that flexible and built on open-source tools.
iSCSI booting is just ensuring you have the right bits in order when you call sanboot in iPXE. The FOG iPXE menuing environment could do the rest.
-
Thanks, George. I have read that tutorial. What I was seeing, though, was this (Ubuntu 17 section):
“In the fog WebGUI go to FOG Configuration->iPXE New Menu Entry
Set the following fields
Menu Item: os.Ubuntu.Desktop.17.10
Description: Ubuntu Desktop 17.10
Parameters:
kernel tftp://${fog-ip}/os/ubuntu/Desk17.10/vmlinuz.efi
initrd tftp://${fog-ip}/os/ubuntu/Desk17.10/initrd.lz
imgargs vmlinuz.efi root=/dev/nfs boot=casper netboot=nfs nfsroot=${fog-ip}:/images/os/ubuntu/Desk17.10/ locale=en_US.UTF-8 keyboard-configuration/layoutcode=us quiet splash ip=dhcp rw
boot || goto MENU
Menu Show with: All Hosts”
None of that code specifies a memdisk or a ramdisk. I assumed I needed to chain “memdisk iso raw” onto that tutorial. As it stands, I assume it would load to sda.
-
Also I apologize if this question is stupid. I am a mechanical engineer by training and have had to teach myself all this stuff.
-
@cmcgonag said in How to Add Boot to MEMDISK Option - Syntax Question:
None of that code specifies a memdisk or a ramdisk. I assumed I needed to chain “memdisk iso raw” to that tutorial. As it stands, I assume it would load to sda.
You are right; we are netbooting Linux rather than using memdisk to load an ISO image. You may ask why? Because memdisk loads the entire ISO image into RAM and then executes it from RAM, reducing the memory available to Linux.
Also, memdisk is BIOS mode only. There is no equivalent tool for UEFI systems (AFAIK).
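For contrast, a memdisk-style menu entry (BIOS clients only) would look roughly like the sketch below. The ISO path here is made up for illustration, and memdisk itself would need to be available to iPXE on the server:

```
kernel memdisk iso raw
initrd tftp://${fog-ip}/os/isos/example.iso
boot || goto MENU
```

This loads the whole ISO into RAM up front, which is exactly the cost the netboot approach avoids.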
-
@cmcgonag said in How to Add Boot to MEMDISK Option - Syntax Question:
Also I apologize if this question is stupid. I am a mechanical engineer by training and have had to teach myself all this stuff.
No problem, that is why we are here. Mainly to help with FOG imaging, but the platform is flexible and open source, so you can make it into what you need if you have enough ambition.
-
@cmcgonag said in How to Add Boot to MEMDISK Option - Syntax Question:
kernel tftp://${fog-ip}/os/ubuntu/Desk17.10/vmlinuz.efi
initrd tftp://${fog-ip}/os/ubuntu/Desk17.10/initrd.lz
This right here is telling… vmlinuz.efi is what you might think of, in Windows terms, as the operating system or kernel. Think of initrd.lz as a virtual hard drive. To get Linux to boot you need a kernel (OS) and a hard drive (initrd). Together, that’s called the boot loader; that tiny OS has enough brains to reach back out to the NFS server (FOG) to get the rest of the operating system and load it into memory. That is netbooting.
You could also take the sanboot approach (I don’t have an example of that), but have iPXE basically mount the iSCSI volume and start executing the boot blocks as if it were a local hard drive. Of course, the OS that you boot via sanboot needs to be smart enough to take over the booting process once iPXE releases control to the main OS. Sanbooting is a bit complicated to understand, but once you are there you can do interesting things with it. Sanbooting is harder to set up initially than netbooting, but both have their positives and negatives.
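As a rough sketch only (the target IQN below is made up, and each booted node would need its own target), a sanboot entry in iPXE looks something like:

```
sanboot iscsi:${fog-ip}::::iqn.2018-01.org.example:node01 || goto MENU
```

iPXE attaches the iSCSI volume and starts executing its boot blocks as if it were a local disk, which is the behavior described above.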
-
@george1421 said in How to Add Boot to MEMDISK Option - Syntax Question:
Thanks, George! That is super helpful. I am trying to stay away from iSCSI (I don’t know much about it either); if I am not mistaken, each node would need its own iSCSI instance to run? I was worried that multiple clients connected to the same iSCSI target would have issues.
So in your netbooting example:
kernel tftp://${fog-ip}/os/ubuntu/Desk17.10/vmlinuz.efi
initrd tftp://${fog-ip}/os/ubuntu/Desk17.10/initrd.lz
“This right here is telling… vmlinuz.efi is… what you might think in windows is the operating system or kernel. initrd.lz think of it as a virtual hard drive. To get linux to boot you need a kernel (OS) and a hard drive (initrd). From there… that’s call the boot loader, that tiny OS has enough brains to reach back out to the NFS server (FOG) to get the rest of the operating system and to load it into memory. That is netbooting.”
I guess what I was confused about is how the calls differ. If I PXE boot my node now, it completely wipes the drive and loads the OS over top, versus trying to boot to memory. How does it know to store the image on sda (currently) versus store it in RAM (in your example)? My assumption was that the base call (as you listed it) would AUTOMATICALLY overwrite sda (or whatever drive was specified in the host primary disk parameter) and move on. But maybe that is not the case?
-
@cmcgonag said in How to Add Boot to MEMDISK Option - Syntax Question:
@george1421 said in How to Add Boot to MEMDISK Option - Syntax Question:
I am trying to stay away from iSCSI (dont know much about it either); if I am not mistaken, each node would need its own iSCSI instance to run? I was worried that multiple clients connected to the same iSCI target would have issues.
You mentioned sanboot in a previous post, so I assumed you were trying to create a persistent VM environment like VMware Horizon. Yes, each iSCSI device is a block-level device, so each booted system would need its own iSCSI volume. Block-level sharing is not allowed, just as with a physical hard drive.
I guess what I was confused on is how do the calls differ, like if I wanted to PXE boot my node now it completely wipes the drive and loads the OS over top, vs trying to boot to memory. How does it know to store the image on sda (currently) vs how to store it in ram (in your example)? My assumption was the base call (as you listed it) would AUTOMATICALLY overwrite sda (or whatever drive was specified in the host primary disk parameter), and move on. But maybe that is not the case?
Well I think you have a few concepts confused (or I’m confused on what your goal is here). What is your end goal?
Do you want to make a traditional FOG imaging environment where you create a master image and deploy it to the hard drive of many computers?
Or do you want to make a diskless (in reference to the target nodes) netboot system that loads and executes everything out of RAM (actually the hard drive on the target computer is not needed at all)?
-
I started using fog to “to create a master image and deploy it to the hard drive of many computers.” But now I want to transition to “a diskless (in reference to the target nodes) netboot system that loads and executes everything out of RAM (actually the hard drive on the target computer is not needed at all).”
So whatever I need to do to make that transition, I want to do, even if it means I shouldn’t be using FOG. I just know FOG the best, hence why I am trying to use it.
Ultimate goal is to be diskless.
Thanks.
-
@cmcgonag said in How to Add Boot to MEMDISK Option - Syntax Question:
Ultimate goal is to be diskless.
Ok great now we have a target.
Next question: What are you going to be doing or how will you use these diskless client computers?
Will they be character-based computing nodes, or full X Window System client computers?
Have you picked a linux distribution?
-
@george1421 said in How to Add Boot to MEMDISK Option - Syntax Question:
character based computing nodes
They will only run ubuntu as a node.
-
@cmcgonag said in How to Add Boot to MEMDISK Option - Syntax Question:
They will only run ubuntu as a node.
Stick with me here: so it will be character-mode Ubuntu? Is there a reason to use Ubuntu over any other distro?
What will these nodes do?
What special services will run on this node?
What resources will these nodes need? (disk, memory, applications,??)
Are these nodes all x86/amd64 processors, or will they be ARM based?
What is your scale on the number of nodes in this cluster?
I am driving to a destination with these questions as random as they appear.
-
Sorry I was away on vacation. Just now getting back to this.
All nodes currently are on Intel 64-bit processors. We use AMD GPUs mostly, some NVIDIA to test, for compute. 105 nodes are currently deployed (more or less 100 kW).
We are “trying” to design them to run various compute-level code: mining, blockchain test code, AI, whatever; if it runs on Ubuntu it should run [I totally get that there is a myriad of ways this can fail]. I have tested a ton of different packages out there with more or less success with my current configuration.
I essentially want someone to give me an image; I then push it to the cluster and it runs. So far it has worked great with FOG (minus my SSD failures).
I use Ubuntu because it is what I am most familiar with, and it seems to have the best driver-level support all around. And the use case is mostly Ubuntu.
I have been running them in GUI mode. If something fails, I have a KVM I can link over to and see what happened. That isn’t to say I couldn’t run character mode, but I don’t know why I would.
My goal is to scale to 1000 nodes and then lease out the capacity, think a kind of bare-metal AWS, but way more ghetto.
That being said, a LUN may be the way to go, but I don’t see why I need that. I should be able to do all this in RAM, but then again, I am just the engineer trying to figure out how to cool all this stuff down (hence the engineering part). I also don’t want my network getting bogged down with iSCSI traffic; I am only running 1 GbE in a pretty limited spanning-tree config. I want to be like a refinery for processing data: maybe Monday I do some compute, Tuesday AI, Wednesday blockchain, all for way cheaper than anybody else (at least that is my goal, lol).
-
@cmcgonag So, as far as I understand you want to load a fully installed Ubuntu purely into client memory and run off that.
I don’t know if this is the best strategy, but modifying the LiveCD/USB and then booting those as per earlier instructions over PXE should work fine.
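One way to make the LiveCD approach fully diskless is casper’s `toram` boot option, which copies the live filesystem into RAM so the client no longer depends on the NFS mount once it is up. A sketch based on the tutorial entry quoted earlier (same paths, untested here):

```
kernel tftp://${fog-ip}/os/ubuntu/Desk17.10/vmlinuz.efi
initrd tftp://${fog-ip}/os/ubuntu/Desk17.10/initrd.lz
imgargs vmlinuz.efi root=/dev/nfs boot=casper netboot=nfs nfsroot=${fog-ip}:/images/os/ubuntu/Desk17.10/ toram ip=dhcp
boot || goto MENU
```

Note the trade-off: with `toram` the client needs enough RAM to hold the whole squashfs plus the running workload.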
-
@cmcgonag Well, where I was heading with this is that you can build your own customized version of Linux (akin to Red Hat or Ubuntu) using a tool called buildroot. The advantage of buildroot is that you are able to customize your target OS to include only the features you need. This makes for a very resource-light and fast Linux OS. The FOG Project uses buildroot to build FOS (the customized Linux OS that captures and deploys images to target computers). From a size standpoint, the FOS Linux kernel is 8MB and the virtual hard drive (initrd/init.xz) is 19MB. So the FOG developers have created an entire Linux OS that fits into 30MB of memory. Understand this isn’t a general-purpose Linux distribution with X Windows and such; you can build that with buildroot if needed. The idea is, if you are building a virtualization environment, you want the compute nodes that run under your hypervisor to be as small and as fast as possible; that way you can run more compute nodes on the same physical hardware than with a traditional general-purpose Linux like Ubuntu.
Understand that creating a customized Linux using buildroot is not hard, but it’s not easy either. Having a fast buildroot compiler node is very helpful.
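Roughly, the buildroot workflow looks like the following; the defconfig name is just one of buildroot’s shipped examples, and a real node image would start from a config matched to your hardware:

```shell
# Rough buildroot quickstart (example config, not a recipe for these nodes)
git clone https://git.buildroot.net/buildroot
cd buildroot
make qemu_x86_64_defconfig   # start from a known minimal x86_64 config
make menuconfig              # select only the packages/services your nodes need
make                         # produces the kernel and rootfs under output/images/
```

The resulting kernel and initramfs can then be served from the FOG server with a kernel/initrd menu entry like the netboot examples above.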
With FOG you can change the default iPXE menu to your custom ipxe menu entry to boot your nodes. I think that even works if you hide the iPXE menu so it jumps right to booting your node.
-
George,
Thanks for the ideas. I am going to try to build my own customized LiveCD version and see if I can get it to run. Will report back (it will most likely take me a while!).
Thanks for the insight.
Colin