Run a post deploy script



  • My goal is to format and mount a second hard disk after the deploy is done.

    Right now I can’t really find a way to do it with FOG.
    I tried using a snapin and a postdownload script; neither works.
    When using a snapin, nothing really happens.
    And when using a postdownload script, it complains that it can’t find the sh script.

    Right now I’m using Ansible for this task, but it would be nice to have one solution for both deployment and post tasks.

    The target machines are Ubuntu.



  • Yes, I’m aware of this behavior; I’m working on it from my side…


  • Developer

    @obeh said in Run a post deploy script:

    It’s not always sda and nvme, I have 3 combinations:
    nvme (system), ssd (storage)
    nvme (system), nvme (storage)
    ssd (system), ssd (storage)

    Hmmmm, that might turn out to be troublesome. We have seen that systems with two NVMe drives can randomly change the device enumeration across boots. So on one boot you might have /dev/nvme0n1p1 as the system drive, while on the next boot /dev/nvme1n1p1 (the second one) might be the system drive and nvme0n1p1 the storage. See a lengthy discussion on this topic here: https://forums.fogproject.org/topic/12959/dell-7730-precision-laptop-deploy-gpt-error-message

    This is not something FOG is causing; it’s simply the Linux kernel enumerating NVMe drives in an unreliable order. This is not much trouble if you have Linux installed on the drive, because it can use UUIDs in fstab. But as we can’t use UUID identifiers in the FOG world, it’s pretty much impossible to reliably work with two NVMe drives in one machine.
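    One way to sidestep the unstable enumeration is to match disks by their serial number (as reported by `lsblk -dno NAME,SERIAL`) instead of by device name. A minimal sketch with canned output; the device names and serials below are invented for illustration:

```shell
#!/bin/bash
# find_disk_by_serial SERIAL reads "NAME SERIAL" pairs on stdin
# (the format of `lsblk -dno NAME,SERIAL`) and prints the matching name.
find_disk_by_serial() {
    awk -v want="$1" '$2 == want { print $1 }'
}

# Canned example output -- on a real machine you would pipe in
# `lsblk -dno NAME,SERIAL` instead.
sample="nvme0n1 S4EVNF0M111111
nvme1n1 S4EVNF0M222222"

find_disk_by_serial "S4EVNF0M222222" <<<"$sample"   # -> nvme1n1
```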



  • I’ll give it a try tomorrow.
    Thank you


  • Moderator

    @obeh I took your script and tried to integrate it into my existing post install script.

    #!/bin/bash
    . /usr/share/fog/lib/funcs.sh
    [[ -z $postdownpath ]] && postdownpath="/images/postdownloadscripts/"
    case $osid in
        5|6|7|9)
            clear
            [[ ! -d /ntfs ]] && mkdir -p /ntfs
            getHardDisk
            if [[ -z $hd ]]; then
                handleError "Could not find hdd to use"
            fi
            getPartitions $hd
            for part in $parts; do
                umount /ntfs >/dev/null 2>&1
                fsTypeSetting "$part"
                case $fstype in
                    ntfs)
                        dots "Testing partition $part"
                        ntfs-3g -o force,rw $part /ntfs
                        ntfsstatus="$?"
                        if [[ ! $ntfsstatus -eq 0 ]]; then
                            echo "Skipped"
                            continue
                        fi
                        if [[ ! -d /ntfs/windows && ! -d /ntfs/Windows && ! -d /ntfs/WINDOWS ]]; then
                            echo "Not found"
                            umount /ntfs >/dev/null 2>&1
                            continue
                        fi
                        echo "Success"
                        break
                        ;;
                    *)
                        echo " * Partition $part not NTFS filesystem"
                        ;;
                esac
            done
            if [[ ! $ntfsstatus -eq 0 ]]; then
                echo "Failed"
                debugPause
                handleError "Failed to mount $part ($0)\n    Args: $*"
            fi
            echo "Done"
            debugPause
            . ${postdownpath}fog.copydrivers
            # . ${postdownpath}fog.updateunattend
            umount /ntfs
            ;;
        50)
            clear
            case $img in
                "UBN1704")
                    echo "Creating and formatting the second disk partition"
                    debugPause
                    # Assumes a blank second disk: give it a label before partitioning
                    parted -s -a opt /dev/sdc mklabel gpt mkpart primary ext4 0% 100%
                    # parted only creates the partition; mkfs actually formats it
                    mkfs.ext4 -F /dev/sdc1

                    echo "Mounting the primary disk"
                    debugPause
                    mkdir -p /linfs
                    mount /dev/sda1 /linfs
                    mkdir -p /linfs/disk2
                    echo "Patching Ubuntu's fstab"
                    debugPause
                    echo "/dev/sdc1 /disk2 ext4 defaults 0 2" >>/linfs/etc/fstab
                    echo "Unmounting the primary disk"
                    debugPause
                    umount /linfs
                    ;;
                *)
                    echo "Nothing to do for this image"
                    ;;
            esac
            ;;
        *)
            echo "Non-Windows Deployment"
            debugPause
            return
            ;;
    

    So what is missing from your script is that you need to mount the deployed OS disk and write the mount entry into the target OS’s fstab, not into FOS’ fstab.
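    To make that concrete, here is a minimal sketch; the device names (/dev/sda1 for the OS, /dev/sdc1 for storage) and the /linfs mount point are the ones assumed in the example script above, and a UUID-based entry is used so device enumeration order does not matter:

```shell
#!/bin/bash
# Build a UUID-based fstab entry destined for the *deployed* OS's fstab.
fstab_line() {
    # $1 = filesystem UUID, $2 = mount point inside the target OS
    printf 'UUID=%s %s ext4 defaults 0 2\n' "$1" "$2"
}

# On the target this would be used roughly like this (not run here):
#   mkdir -p /linfs && mount /dev/sda1 /linfs       # mount the OS disk
#   uuid=$(blkid -s UUID -o value /dev/sdc1)        # UUID of the storage partition
#   fstab_line "$uuid" /disk2 >> /linfs/etc/fstab   # target fstab, not FOS'
#   umount /linfs
fstab_line "0a1b2c3d-0000-1111-2222-333344445555" /disk2
```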



  • It’s not always sda and nvme, I have 3 combinations:
    nvme (system), ssd (storage)
    nvme (system), nvme (storage)
    ssd (system), ssd (storage)

    So I need something to determine which is which.


  • Moderator

    @obeh So your system has 2 disks (no surprise here). One is an NVMe disk and the other is a SATA-attached disk. In your environment, will the NVMe disk ALWAYS be the OS disk? If so, you can generalize that /dev/sda will be your data disk. There is no need to do anything fancy; just assign the disk to be /dev/sda.

    I started to mock up the script yesterday and then got sidetracked; your script looks pretty close but is missing a key thing.
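    That rule of thumb can be sketched like this; the device names are assumptions based on the layout described above, not detected from real hardware:

```shell
#!/bin/bash
# If the OS disk is the NVMe device, assume the SATA disk /dev/sda is the
# data disk; otherwise we cannot generalize and return nothing.
pick_data_disk() {
    # $1 = OS disk name (e.g. from `lsblk -no pkname <root partition>`)
    case "$1" in
        nvme*) echo /dev/sda ;;   # OS on NVMe -> SATA disk is the data disk
        *)     echo "" ;;         # mixed layouts need real detection
    esac
}

pick_data_disk nvme0n1   # -> /dev/sda
```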



  • Tell me what information you are missing and I’ll gladly provide it.
    I’ve tried using a very simple script.
    My systems are laptops with 1 or 2 disks, so the script does not expect more than 2 disks:

    ## Get all drives that are not the root drive and not a USB drive
    ## (if there is only one drive this returns an empty string)
    disk=$(lsblk -e7,11 -lpdn -o NAME,TRAN | grep -v usb | grep -v $(lsblk -no pkname $(lsblk -l -o NAME,MOUNTPOINT -e7 -e11 -p | grep -w '\/$' | awk '{print $1}')) | awk '{print $1}')

    ## Now I'm creating the partitions:
    if [[ ! -z $disk ]]; then
        parted -s $disk mklabel gpt mkpart pri 0% 100%

    ## Now I'm formatting it:
        mkfs.ext4 -F ${disk}1

    ## Getting its UUID:
        UUID=$(blkid ${disk}1 -s UUID -o value)

    ## Inserting the disk into fstab:
        echo -e "UUID=${UUID} \t /storage \t ext4 \t defaults \t 0 \t 0" | tee -a /etc/fstab

    ## Mounting it:
        mount -a
    fi
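    The long pipeline above is hard to debug in one piece; here is the same filtering idea broken into a testable step, using canned `lsblk -lpdn -o NAME,TRAN` output (the device names are examples):

```shell
#!/bin/bash
# non_root_disks ROOTDISK reads "NAME TRAN" lines on stdin and prints
# every disk that is neither a USB disk nor the given root disk.
non_root_disks() {
    grep -v usb | awk -v root="$1" '$1 != root { print $1 }'
}

# Canned example; on a real machine pipe in `lsblk -lpdn -o NAME,TRAN`.
sample="/dev/nvme0n1 nvme
/dev/sda sata
/dev/sdb usb"

non_root_disks /dev/nvme0n1 <<<"$sample"   # -> /dev/sda
```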
    
    

    Now, looking at the pictures, I see that lsblk does show the second disk, but the disks are not mounted, so my script fails…
    VideoCapture_20190710-225624.jpg

    VideoCapture_20190710-225637.jpg


  • Moderator

    @obeh said in Run a post deploy script:

    So I’ve tried several things, but in the end I realized that the 2nd drive is not loaded while deploying, so I can’t do anything with it.

    Frankly, I can’t understand this unless there is special hardware in front of that disk.

    Let’s start with this:

    1. Schedule another deployment/capture task (either one is fine), but tick the debug checkbox before submitting the task.
    2. PXE boot the target computer.
    3. After a few screens of text that require the Enter key to clear, you will be dropped to the FOS Linux command prompt.
    4. At the FOS Linux prompt, give root a password with passwd. Make it something simple like hello. The password is reset at the next reboot, so it only matters for this debugging session.
    5. Get the IP address of the target computer with ip addr show.
    6. With those set, you can now connect to FOS Linux using PuTTY or ssh. (This makes it easier to copy and paste into FOS Linux.)
    7. Key in the following and post the results here:
      7.1 lsblk
      7.2 df -h
      7.3 lspci -nn
    8. Please identify the manufacturer and model of this target computer.

    Let’s see the structure of this target computer before picking the next steps.


  • Developer

    @obeh We make extensive use of grep and also have lsblk in that environment. I can’t think of many issues that could prevent FOG from seeing your second disk, and I am fairly sure user-land tools are not to blame.

    But this is just me guessing, because we really don’t have enough information to help you much. Tell us more about the hardware, mainly the second disk. Is it connected to a RAID controller? That might be a solvable issue.



  • So I’ve tried several things, but in the end I realized that the 2nd drive is not loaded while deploying, so I can’t do anything with it.
    Also, BusyBox has a very limited grep binary,
    and does not include lsblk (it only gets used when piping, as far as I could see).
    For now I’ll stick with Ansible to do the rest of the work.

    Unless you have any advice that can help me achieve the target…



  • @Sebastian-Roth I’m not in front of the computer right now; I’ll provide both tomorrow.

    @george1421 I’ll give it a try tomorrow.

    Thank you both for the quick response.


  • Moderator

    This is possible to do if you can craft a bash shell script.

    The script goes in the /images/postdownloadscripts directory.

    I have many examples (targeted to MS Windows, but in your case not much different)
    https://forums.fogproject.org/topic/7740/the-magical-mystical-fog-post-download-script
    https://forums.fogproject.org/topic/8889/fog-post-install-script-for-win-driver-injection
    https://forums.fogproject.org/topic/11126/using-fog-postinstall-scripts-for-windows-driver-injection-2017-ed

    The intention in linking those scripts is to show you the structure and format you need for your bash script.

    While the files are stored on the server, the bash script executes on the target computer within the FOS Linux environment. Depending on your needs, you could even write the script so that the disk-creation code only runs for a specific image name: for image X it would create the second disk, while for the other images it would bypass the disk-creation code. There are a lot of things you can do in a post install script.
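    Branching on the image name can be sketched like this; `$img` is the variable the example script earlier in the thread keys on, and the image names are examples:

```shell
#!/bin/bash
# Sketch: run the second-disk setup only for a specific image name.
# $img is assumed to be set by FOS for postdownload scripts.
action_for_image() {
    case "$1" in
        UBN1704) echo "create second disk" ;;
        *)       echo "skip" ;;
    esac
}

action_for_image UBN1704   # -> create second disk
action_for_image WIN10     # -> skip
```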


  • Developer

    @obeh Please post your script and a picture of the error.

