Run a post deploy script
-
So I’ve tried several things, but in the end I realized that the 2nd drive is not loaded while deploying, so I can’t do anything with it.
Also, BusyBox has a very limited grep binary.
And it does not include lsblk (it only gets used when piping, as far as I could see).
For now I’ll stick with Ansible to do the rest of the work, unless you have any advice that can help me achieve the goal…
-
@obeh We make extensive use of grep and also have lsblk in that environment. I can’t think of many issues that could prevent FOG from seeing your second disk, though I am fairly sure userland tools are not to blame.
But this is just me guessing, because we really don’t get enough information to be able to help you much. Tell us more about the hardware, mainly the second disk. Is it connected to a RAID controller? That might be a solvable issue.
-
@obeh said in Run a post deploy script:
So I’ve tried several things, but in the end I realized that the 2nd drive is not loaded while deploying, so I can’t do anything with it.
Frankly I can’t understand this unless there is special hardware in front of that disk.
Let’s start with this:
- Schedule a debug deployment/capture (either works). Schedule the task as usual but tick the debug checkbox before submitting it.
- PXE boot the target computer.
- After a few screens of text that require the enter key to clear, you will be dropped to the FOS Linux command prompt.
- At the FOS Linux prompt, give root a password with
passwd
Make it something simple like hello. The password will be reset at the next reboot, so it only matters for this debugging session.
- Get the IP address of the target computer with
ip addr show
- With those set, you can now connect to FOS Linux using PuTTY or ssh. (This makes it easier to copy and paste into FOS Linux.)
- Key in the following and post the results here:
  - lsblk
  - df -h
  - lspci -nn
- Please identify the manufacturer and model of this target computer.
Let’s see the structure of this target computer before picking the next steps.
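Put together, the debug session would look roughly like this (just a sketch; replace <target-ip> with the address the previous step printed):

```bash
# on the target's console after PXE booting the debug task
passwd            # set a throwaway root password, e.g. hello
ip addr show      # note the target's IP address

# then from your workstation
ssh root@<target-ip>
lsblk             # disk and partition layout as FOS sees it
df -h             # what is currently mounted
lspci -nn         # PCI devices, including the disk/RAID controller
```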
-
Tell me what information you are missing and I’ll gladly provide it.
I’ve tried using a very simple script:
My systems are laptops with 1 or 2 disks (so my script is not expecting more than 2 disks):

```bash
## get all drives that are not the root drive and not a usb drive (if there is only one drive this returns an empty string)
disk=$(lsblk -e7,11 -lpdn -o NAME,TRAN | grep -v usb | grep -v $(lsblk -no pkname $(lsblk -l -o NAME,MOUNTPOINT -e7 -e11 -p | grep -w '\/$' | awk '{print $1}')) | awk '{print $1}')
## now I'm creating the partitions:
if [[ ! -z $disk ]]; then
    parted -s $disk mklabel gpt mkpart pri 0% 100%
    ## now I'm formatting it:
    mkfs.ext4 -F ${disk}1
    ## getting its UUID
    UUID=$(blkid ${disk}1 -sUUID -ovalue)
    ## insert the disk into fstab
    echo -e "UUID=${UUID} \t /storage \t ext4 \t defaults \t 0 \t 0" | tee -a /etc/fstab
    ## mount it
    mount -a
fi
```
Now, looking at the pictures, I see that lsblk does show the second disk, but the disks are not mounted, so my script fails…
-
@obeh So your system has 2 disks (no surprise here). One is an NVMe disk and the other is a SATA-attached disk. In your environment, will the NVMe disk ALWAYS be the OS disk? If so, you can then generalize that /dev/sda will be your data disk. There is no need to do anything fancy, just assign disk to be /dev/sda.
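If that assumption holds for all your machines, the whole disk-detection pipeline collapses to a single hard-coded line, roughly:

```bash
# assuming NVMe is always the OS disk, the SATA data disk is always /dev/sda
disk=/dev/sda
```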
I started to mock up the script yesterday and then got sidetracked; your script looks pretty close but is missing a key thing.
-
It’s not always sda and nvme; I have 3 combinations:
- nvme (system), ssd (storage)
- nvme (system), nvme (storage)
- ssd (system), ssd (storage)

So I need something to determine which is which.
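For reference, I can tell the disks apart by hand with something like the following (the TRAN column shows nvme vs sata), but I still need the script to pick the storage disk automatically:

```bash
# physical disks only (no partitions, no loop/sr devices), with transport, size and model
lsblk -e7,11 -dpn -o NAME,TRAN,SIZE,MODEL
```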
-
@obeh Here I tried to integrate it into my existing post install script:
```bash
#!/bin/bash
. /usr/share/fog/lib/funcs.sh
[[ -z $postdownpath ]] && postdownpath="/images/postdownloadscripts/"
case $osid in
    5|6|7|9)
        clear
        [[ ! -d /ntfs ]] && mkdir -p /ntfs
        getHardDisk
        if [[ -z $hd ]]; then
            handleError "Could not find hdd to use"
        fi
        getPartitions $hd
        for part in $parts; do
            umount /ntfs >/dev/null 2>&1
            fsTypeSetting "$part"
            case $fstype in
                ntfs)
                    dots "Testing partition $part"
                    ntfs-3g -o force,rw $part /ntfs
                    ntfsstatus="$?"
                    if [[ ! $ntfsstatus -eq 0 ]]; then
                        echo "Skipped"
                        continue
                    fi
                    if [[ ! -d /ntfs/windows && ! -d /ntfs/Windows && ! -d /ntfs/WINDOWS ]]; then
                        echo "Not found"
                        umount /ntfs >/dev/null 2>&1
                        continue
                    fi
                    echo "Success"
                    break
                    ;;
                *)
                    echo " * Partition $part not NTFS filesystem"
                    ;;
            esac
        done
        if [[ ! $ntfsstatus -eq 0 ]]; then
            echo "Failed"
            debugPause
            handleError "Failed to mount $part ($0)\n Args: $*"
        fi
        echo "Done"
        debugPause
        . ${postdownpath}fog.copydrivers
        # . ${postdownpath}fog.updateunattend
        umount /ntfs
        ;;
    50)
        clear
        case $img in
            "UBN1704")
                echo "Creating the second disk partition and format it"
                debugPause
                parted -a opt /dev/sdc mkpart primary ext4 0% 100%
                echo "Mounting the primary disk"
                debugPause
                mkdir /linfs
                mount /dev/sda1 /linfs
                mkdir /linfs/disk2
                echo "Patching Ubuntu's fstab"
                debugPause
                echo "/dev/sdc1 /disk2 ext4 defaults 0 1" >>/linfs/etc/fstab
                echo "Unmounting the primary disk"
                debugPause
                umount /linfs
                ;;
            *)
                echo "nothing to do with this image"
                ;;
        esac
        ;;
    *)
        echo "Non-Windows Deployment"
        debugPause
        return
        ;;
esac
```
So what is missing from your script is that you need to mount the OS disk and insert the mount entry into the eventual OS’s fstab, not into FOS’s fstab.
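Untested, but the missing piece could look roughly like this. It assumes $hd has been set by getHardDisk as in my script above, that the Linux root you just deployed is the first partition on $hd, and that you want the data disk mounted at /storage as in your script:

```bash
# the storage disk is whatever non-USB disk lsblk reports that is NOT the disk FOG deployed to ($hd)
disk=$(lsblk -e7,11 -lpdn -o NAME,TRAN | grep -v usb | awk '{print $1}' | grep -v "^${hd}$")
if [[ -n $disk ]]; then
    # NVMe partitions are named nvme0n1p1, SATA ones sda1 - pick the right suffix for each disk
    dsep=""; [[ $disk == *nvme* ]] && dsep="p"
    hsep=""; [[ $hd == *nvme* ]] && hsep="p"
    # partition and format the storage disk
    parted -s $disk mklabel gpt mkpart pri 0% 100%
    command -v partprobe >/dev/null && partprobe $disk   # let the kernel re-read the new partition table
    mkfs.ext4 -F ${disk}${dsep}1
    UUID=$(blkid -s UUID -o value ${disk}${dsep}1)
    # mount the freshly deployed OS root and create the mount point inside it ...
    mkdir -p /linfs
    mount ${hd}${hsep}1 /linfs
    mkdir -p /linfs/storage
    # ... then write the entry into the deployed OS's fstab, not into FOS's /etc/fstab
    echo -e "UUID=${UUID}\t/storage\text4\tdefaults\t0\t0" >> /linfs/etc/fstab
    umount /linfs
fi
```

Using the UUID in the fstab entry also sidesteps the question of what device name the data disk will get once the installed OS boots.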
-
I’ll give it a try tomorrow.
Thank you
-
@obeh said in Run a post deploy script:
It’s not always sda and nvme; I have 3 combinations:
- nvme (system), ssd (storage)
- nvme (system), nvme (storage)
- ssd (system), ssd (storage)

Hmmmm, that might turn out to be troublesome. We have seen that systems with two NVMe drives can randomly change the device enumeration between boots. So on one boot you might have /dev/nvme0n1p1 as the system drive, while on the next boot /dev/nvme1n1p1 (the second one) might be the system drive and nvme0n1p1 the storage. See a lengthy discussion on this topic here: https://forums.fogproject.org/topic/12959/dell-7730-precision-laptop-deploy-gpt-error-message
This is not something FOG is causing; it’s simply the Linux kernel enumerating NVMe drives in an unreliable order. This is not much trouble once Linux is installed on the drive, because it can use UUIDs in fstab. But as we can’t use UUID identifiers in the FOG world, it’s pretty much impossible to reliably work with two NVMe drives in one machine.
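Just to illustrate why this is a non-issue inside the installed OS: the filesystem UUIDs stay constant even when the nvme0/nvme1 names swap between boots, so the system (or you, in a shell) can always resolve them. The UUID below is made up:

```bash
lsblk -o NAME,UUID,MOUNTPOINT                     # shows which device name each filesystem got on this boot
blkid -U 3f1b2c4d-0000-4abc-9def-123456789abc     # made-up UUID; prints whichever device currently carries it
```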
-
Yes, I’m aware of this behavior; I’m working on that from my side…