• Has anyone gotten a subset of this: https://askubuntu.com/questions/1235723/automated-20-04-server-installation-using-pxe-and-live-server-image working with FOG yet? I’m currently only trying to make it live-boot. I haven’t tested an autoinstall yet, since that is new to me with 20.04; I’m used to preseeds.

    I get as far as an initramfs shell telling me “Unable to find a live file system on the network”

    my initial parameters:
    kernel tftp:///${fog-ip}/os/ubuntu/20.04/casper/vmlinuz
    initrd tftp:///${fog-ip}/os/ubuntu/20.04/casper/initrd
    imgargs vmlinuz initrd=initrd ip=dhcp root=/dev/nfs boot=casper netboot=nfs nfsroot=/${fog-ip}:/images/os/ubuntu/20.04/ locale=en_US.UTF-8 quiet splash ip=dhcp rw
    boot || goto MENU

    any help is greatly appreciated!

    Edit:
    I updated my parameters to closely match what was listed in the link above and got it to boot to a live disk.

    kernel tftp://${fog-ip}/os/ubuntu/20.04/casper/vmlinuz
    initrd tftp://${fog-ip}/os/ubuntu/20.04/casper/initrd
    imgargs vmlinuz initrd=initrd ip=dhcp url=http://${fog-ip}/os/20.04/ubuntu-20.04-live-server-amd64.iso locale=en_US.UTF-8 quiet splash ip=dhcp rw
    boot || goto MENU

  • Senior Developer

    @faboulous said in 20.04 autoinstall:

    If someone manages to do so, it would be interesting to check what is_casper_path and matches_uuid do and return.

    I don’t have a setup to dive into this right now, so I just started looking at the code. But maybe that’s of some help to you too.

    Looking at the definition of is_casper_path I get the impression that there is no possible way for this to return 0 (true in shell logic) and actually proceed to the next check.

    Edit: Ok, forget what I said here. This uses shell globbing to expand filenames when a file matching the glob exists.

    Anyhow, I figured out why it would not return successfully from the do_nfsmount() call. It checks UUIDs that are stored within the ISO under .../.disk/casper-uuid*. When we prepare things and copy contents from the ISO, we miss that hidden folder (starting with a dot)! To fix that, run the following command on your FOG server with the ISO mounted at /mnt/loop:

    cp -R /mnt/loop/.disk /images/os/mint/20
    

    I’m able to PXE boot into the Mint20 XFCE Live system with that fix on a VirtualBox VM (hosted on Debian 10) with only 768 MB of RAM set for the VM.

    @george1421 We might think about switching to rsync -a /mnt/loop/ /images/os/... in your great tutorials on PXE booting installers to prevent that from happening. What do you think?
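    The reason a glob-style copy loses .disk is plain shell globbing: * does not match names starting with a dot. A throwaway sketch of the difference (temporary directories stand in for the mounted ISO and the image path; cp -R src/. copies hidden entries just like rsync -a src/ would):

```shell
#!/bin/sh
# Throwaway directories stand in for the mounted ISO and the image path.
set -e
src=$(mktemp -d); dst_glob=$(mktemp -d); dst_dot=$(mktemp -d)
mkdir -p "$src/casper" "$src/.disk"
echo "deadbeef" > "$src/.disk/casper-uuid-generic"

cp -R "$src"/* "$dst_glob"/   # '*' does not match dotfiles: .disk is skipped
cp -R "$src"/. "$dst_dot"/    # 'src/.' copies hidden entries, as rsync -a would

test ! -e "$dst_glob/.disk" && echo "glob copy missed .disk"
test -f "$dst_dot/.disk/casper-uuid-generic" && echo "dot copy kept .disk"
rm -rf "$src" "$dst_glob" "$dst_dot"
```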

    @londonfog I just tried Ubuntu Server 20.04 as well and it is working fine too, even using the kernel and initrd from the ISO rather than downloading the netboot ones! The important part is that you need to copy the .disk folder over, as mentioned above.


  • Hi, I ran into the same issue trying to boot Linux Mint 20 (and also Ubuntu 20). After some digging, it appears that “Unable to find a live file system on the network” shows up because do_nfsmount() from the casper script shipped inside the initrd isn’t able to mount the NFS share correctly.

    I extracted the nfsmount binary (using unmkinitramfs) to test it standalone on a working NFS share, using the same options this casper script provides (nfsmount -o nolock -o ro ${NFSOPTS} ${NFSROOT} ${mountpoint}), and it worked well.

    I also tried to use the initrd from Linux Mint 19 to load the Linux Mint 20 ISO and… it worked, but to no big surprise I ran into issues when I tried to install the OS.

    The difference between Mint 19 and Mint 20 is some checks performed after the nfsmount call:

    do_nfsmount from mint 20

    do_nfsmount() {
        rc=1
        modprobe "${MP_QUIET}" nfs
        if [ -z "${NFSOPTS}" ]; then
            NFSOPTS=""
        else
            NFSOPTS=",${NFSOPTS}"
        fi
    
        [ "$quiet" != "y" ] && log_begin_msg "Trying nfsmount -o nolock -o ro ${NFSOPTS} ${NFSROOT} ${mountpoint}"
        # FIXME: This while loop is an ugly HACK round an nfs bug
        i=0
        while [ "$i" -lt 60 ]; do
            if nfsmount -o nolock -o ro${NFSOPTS} "${NFSROOT}" "${mountpoint}"; then
                if is_casper_path $mountpoint && matches_uuid $mountpoint; then
                    rc=0
                else
                    umount $mountpoint
                fi
                break
            fi
            sleep 1
            i="$(($i + 1))"
        done
        return ${rc}
    }
    

    do_nfsmount from mint 19

    do_nfsmount() {
        rc=1
        modprobe "${MP_QUIET}" nfs
        if [ -z "${NFSOPTS}" ]; then
            NFSOPTS=""
        else
            NFSOPTS=",${NFSOPTS}"
        fi
    
        [ "$quiet" != "y" ] && log_begin_msg "Trying nfsmount -o nolock -o ro ${NFSOPTS} ${NFSROOT} ${mountpoint}"
        # FIXME: This while loop is an ugly HACK round an nfs bug
        i=0
        while [ "$i" -lt 60 ]; do
            nfsmount -o nolock -o ro${NFSOPTS} "${NFSROOT}" "${mountpoint}" && rc=0 && break
            sleep 1
            i="$(($i + 1))"
        done
        return ${rc}
    }
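    For reference, the extra gate in the Mint 20 version boils down to comparing a UUID recorded inside the initrd against the .disk/casper-uuid-* files on the mounted share. Below is a hedged sketch of that logic, not casper’s actual code (the real check lives in the casper scripts inside the initrd, and the uuid file path is an assumption):

```shell
# Sketch only: approximates what a matches_uuid-style check does.
matches_uuid_sketch() {
    mountpoint="$1"
    uuid_file="$2"                   # e.g. /conf/uuid.conf inside the initrd
    [ -r "$uuid_file" ] || return 0  # nothing recorded: accept the mount
    uuid=$(cat "$uuid_file")
    for candidate in "$mountpoint"/.disk/casper-uuid*; do
        [ -r "$candidate" ] || continue
        [ "$(cat "$candidate")" = "$uuid" ] && return 0
    done
    return 1                         # no matching uuid: reject the mount
}
```

    This is why copying the hidden .disk folder matters: without it the glob finds nothing and the mount is rejected even though the NFS share itself is fine.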
    

    I tried to add some debugging to identify what is going wrong, but I couldn’t manage to “repack” the extracted initrd into a clean working one, ending up with a kernel panic (I spent quite some time trying; the initrd archive structure seems to have recently evolved to ship Intel and AMD firmware inside). The only change I made was adding some echoes to the casper file. If someone manages to do so, it would be interesting to check what is_casper_path and matches_uuid do and return.
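    For anyone who wants to retry the repack: recent Ubuntu/Mint initrds are several concatenated cpio archives (uncompressed early segments carrying Intel/AMD microcode, then a compressed main archive), which is why a naive single-archive repack panics. Here is a hedged sketch of the round trip as a helper function; the segment names early/early2/main and the gzip compression are assumptions that vary by release, so inspect your extracted tree first, and call it with absolute paths:

```shell
# Sketch: unpack an initrd, edit it, then re-concatenate its cpio segments.
repack_initrd() {
    initrd="$1"; out="$2"            # absolute paths
    work=$(mktemp -d)
    ( cd "$work" && unmkinitramfs "$initrd" . )
    # ... edit "$work/main/scripts/casper" here, e.g. add debug echoes ...
    : > "$out"
    for seg in early early2; do      # uncompressed microcode segments first
        [ -d "$work/$seg" ] || continue
        ( cd "$work/$seg" && find . | LC_ALL=C sort | cpio -o -H newc ) >> "$out"
    done
    ( cd "$work/main" && find . | LC_ALL=C sort | cpio -o -H newc | gzip -9 ) >> "$out"
    rm -rf "$work"
}
```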

    As I wasn’t able to produce a custom initrd, I tried to configure an HTTP server to serve the ISO in a similar way as suggested in the comment below by @pacman366. But the Ubuntu/Mint desktop ISOs are quite big (2 GB), and 4 GB of RAM isn’t enough to extract the ISO. I also tried to boot from HTTP following these instructions: https://www.plop.at/en/ploplinux/live/networkboot-linux.html#pxel61 without any success. I’m not sure how the | in the URL works (no request received on the nginx web server).

    I’d really like to be able to fix this, any idea/help would be appreciated 🙂

  • Moderator

    @fogman4 The problem is that neither iPXE nor FOS Linux is signed. That is where the problem comes in.

    I can PXE boot a signed kernel using GRUB, with shimx64.efi and grubx64.efi signed and GRUB then loading the kernel. But that won’t help with doing anything with FOG.


  • @george1421 Thank you very much for those suggestions. I’ll dig in and share a way to do it.

    My goal is a fully automated Ubuntu 20.04.2 Desktop (UEFI) installation via FOG’s iPXE.

    I have more than 100 workstations and need to make it viable with an Ansible post-installation.

    At first I wanted to do it with preseed, but autoinstall seems to be the way forward, so I’ll try it out.

    You’re right that to do Secure Boot with Ubuntu 20.04 we need the signed kernels.

    I need to RTM and do some tests.

    Don’t you think chainloading a signed GRUB via iPXE in FOG could be a more efficient and easier way to centralize tasks around all of GRUB’s options?

    For example, those files (chainloaded) could boot a Secure Boot installation via GRUB2:

    grubx64.efi.signed

    grubnetx64.efi.signed
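    For illustration, a chainloaded signed GRUB would then need a menu entry along these lines (a hedged sketch only: the server IP, TFTP paths, and ISO name are placeholders, and the ds= value is quoted because GRUB would otherwise treat the ; as a statement separator):

```
menuentry "Ubuntu 20.04 autoinstall" {
    linux (tftp,192.168.1.10)/os/ubuntu/20.04/casper/vmlinuz ip=dhcp url=http://192.168.1.10/ubuntu-20.04-live-server-amd64.iso autoinstall "ds=nocloud-net;s=http://192.168.1.10/2004/"
    initrd (tftp,192.168.1.10)/os/ubuntu/20.04/casper/initrd
}
```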

    Regards.

  • Moderator

    @fogman4 Very close to what you are asking: https://forums.fogproject.org/topic/15129/preseeded-unattended-netboot-uefi-debian-installation Since Ubuntu is based on Debian, the logic should be pretty close.

    It seems url= is only for http and ftp, am I wrong?

    You are correct

    Is there a way to chainload GRUB2 in FOG? It could be the easiest.

    Yes you can, but to what end?

    To leave Secure Boot enabled? If yes, that will work until you attempt to run a kernel that has not been signed, at which point booting will stop.


  • Hi.

    I’m wondering if anyone has succeeded in a fully automated iPXE Ubuntu 20.04 install via FOG.

    I tried many parameters with no success.

    It seems url= is only for http and ftp, am I wrong?

    By the way, I managed to boot Fedora Workstation and Debian via a regular PXE with GRUB chainloaded in EFI Secure Boot with the signed shim.

    Is there a way to chainload GRUB2 in FOG? It could be the easiest.


  • @pacman366 This is great, thank you! I had some other projects to work on, so I hadn’t touched this in a while. Where did you get your vmlinuz and initrd files from? Did you extract them from the ISO?


  • @londonfog I’ve also gotten autoinstall to work with the following imgargs:

    imgargs vmlinuz initrd=initrd root=/dev/ram0 ramdisk_size=1800000 ip=dhcp url=http://172.16.13.3/ubuntu-20.04.1-live-server-amd64.iso net.ifnames=0 autoinstall ds=nocloud-net;s=http://172.16.13.3/2004/ ro
    boot
    

    172.16.13.3 is my FOG server’s IP address; emphasis on this string:

    autoinstall ds=nocloud-net;s=http://172.16.13.3/2004/
    

    That part of the string tells subiquity where to find the user-data and meta-data files. http://172.16.13.3/2004/user-data has the YAML containing all the autoinstall parameters, and http://172.16.13.3/2004/meta-data is just a blank file. Both files need to exist in order for subiquity to perform an autoinstall. subiquity is also extremely strict about the formatting of the YAML within the user-data file. I found this out when attempting to copy/paste another user-data example from the internet, only to crash the live installer. I had to hand-write all the parameters in user-data in order for it all to work. This is what my user-data file looks like:

    #cloud-config
    autoinstall:
      interactive-sections:
        - storage
      apt:
        geoip: true
        preserve_sources_list: false
        primary:
        - arches: [amd64, i386]
          uri: http://us.archive.ubuntu.com/ubuntu
        - arches: [default]
          uri: http://ports.ubuntu.com/ubuntu-ports
      identity: {hostname: 69changeme69, password:  PASSWORD HASH,
        realname: companyadmin, username: companyadmin}
      keyboard: {layout: us, toggle: null, variant: ''}
      locale: C
      network:
        ethernets:
          eth0: {dhcp4: true, dhcp-identifier: mac}
        version: 2
      ssh:
        allow-pw: true
        authorized-keys: []
        install-server: true
      packages:
        - ubuntu-desktop
        - landscape-client
        - openjdk-8-jdk
        - libpwquality-tools
        - wpasupplicant
        - python
        - python-dbus
        - python-argparse
      late-commands:
        - |
          cat <<EOT >> /target/lib/systemd/system/postinstall.service
          [Unit]
          Wants=network-online.target
          [Service]
          Type=oneshot
          ExecStart=/opt/postinstall.sh
          RemainAfterExit=true
          [Install]
          WantedBy=multi-user.target
          EOT
        - systemctl daemon-reload
        - wget http://172.16.13.3/2004/postinstall.sh -P /target/opt/
        - chmod +x /target/opt/postinstall.sh
        - echo 'companyadmin ALL=(ALL) NOPASSWD:ALL' > /target/etc/sudoers.d/companyadmin
      version: 1
    

    This installs mostly automatically. I have mine set to prompt for partitioning. Hope this helps.
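    For anyone setting this up from scratch: the nocloud-net seed is just two files served under the s= URL. A quick sketch (the docroot here is a temporary stand-in; in a real setup it would be your web server’s root, and user-data would be the full cloud-config above):

```shell
# Lay out the seed directory that ds=nocloud-net;s=... points at.
docroot=$(mktemp -d)                 # stand-in for e.g. /var/www/html
mkdir -p "$docroot/2004"
printf '#cloud-config\nautoinstall:\n  version: 1\n' > "$docroot/2004/user-data"
: > "$docroot/2004/meta-data"        # blank, but subiquity requires it to exist
ls "$docroot/2004"
```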

  • Moderator

    @pacman366 said in 20.04 autoinstall:

    url=http://172.16.13.3/ubuntu-20.04-live-server-amd64.iso

    I haven’t had a chance to work on this yet, but I did help someone get an HP server update ISO to PXE boot. It was a problem, but we got it working via PXE to NFS. The point is that url= reminded me of that experience: in that case we had to provide the URI in a format similar to url=nfs://${fog-ip}/images/os/ubuntu/20.04/, which replaced the url=http:// protocol with the NFS URI url=nfs://. I don’t know if that is what Ubuntu did here or not.


  • @pacman366 this is pretty similar to how I’ve gotten it to “work”. Have you tried setting up the autoinstall method? I just started attempting this today, and no luck so far.


  • I wasn’t able to get 20.04 to PXE boot using the traditional NFS methods outlined in these forums, but I did get it booting with the new 20.04 approach. I copied ubuntu-20.04-live-server-amd64.iso to /var/www, mounted the ISO, copied the kernel and initrd from the ISO to /tftpboot/os/ubuntu/Server20.04/, and created the following menu entry using these parameters:

    kernel tftp://${fog-ip}/os/ubuntu/Server20.04/vmlinuz
    initrd tftp://${fog-ip}/os/ubuntu/Server20.04/initrd
    imgargs vmlinuz initrd=initrd root=/dev/ram0 ramdisk_size=1800000 ip=dhcp url=http://172.16.13.3/ubuntu-20.04-live-server-amd64.iso ro
    boot
    

    One note when using this method: it only works on systems with >= 4 GB of RAM because it loads the whole ISO into RAM. Otherwise it works pretty well for me. The Desktop and Mint variants of 20.04 also boot using this method.
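    Because the whole ISO lands in a ramdisk, the ramdisk_size argument (given in KiB) must be at least the ISO’s size, on top of the RAM the live system itself needs. A quick sanity check (the ISO here is a fake 2 MiB file created for the example; point iso at the real image on your server):

```shell
# Check that ramdisk_size (KiB) covers the ISO you are serving.
iso=$(mktemp)                        # stand-in for the real ISO path
dd if=/dev/zero of="$iso" bs=1024 count=2048 2>/dev/null
iso_kib=$(( $(stat -c %s "$iso") / 1024 ))
ramdisk_size=1800000                 # value from the imgargs above
if [ "$iso_kib" -le "$ramdisk_size" ]; then
    echo "ramdisk_size covers the ISO (${iso_kib} KiB <= ${ramdisk_size} KiB)"
else
    echo "increase ramdisk_size: ISO is ${iso_kib} KiB"
fi
rm -f "$iso"
```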


  • @george1421 said in 20.04 autoinstall:

    https://forums.fogproject.org/topic/10944/using-fog-to-pxe-boot-into-your-favorite-installer-images/2?_=1594759012162

    I wonder if it will be different though, maybe not initially, but soon. See below:

    http://archive.ubuntu.com/ubuntu/dists/focal/main/installer-amd64/current/ points to a legacy version, and if you click it you end up at http://archive.ubuntu.com/ubuntu/dists/focal/main/installer-amd64/current/legacy-images/, which mentions that these images will no longer be supported in the future and points to the new version, stating that it now supports all Server hardware platforms, unattended autoinstall, offline installation, network-gapped install, PXE and HTTP boot, RAID, LVM, LUKS, among other things.

  • Moderator

    @londonfog I haven’t tried 20.04 yet, but here is the solution for 19.10. I don’t expect 20.04 to be too much different.

    https://forums.fogproject.org/topic/10944/using-fog-to-pxe-boot-into-your-favorite-installer-images/2?_=1594759012162

    The NFS route is what you want. Also be aware that you need the netboot versions of the kernel and initrd, not the ones off the ISO image.
