No space left on device
-
Ok, so here is an interesting one. We use the golden image, postdownloadscripts method of driver deployment. Everything works great until I encounter an OptiPlex 790 (and I think 990). When it comes time to copy the drivers over, it appears to mount /dev/sda2, which is correct, then errors out saying there is no space left on the device. Now I did encounter this with M.2 SATA drives trying to copy to the 1st partition and not the 2nd; however, in this case it IS mounting the 2nd partition, which is correct.
I will also add that we altered the script to be able to distinguish between the normal sda device and the NVMe version. It works for all other devices so far, except these two models.
I could post the script if needed. Anyone seen this before?
As a side note, I can image 790s at other locations that do not have the special script section to tell the drives apart, but I need it in there as we have an entire lab that uses the M.2 SATAs.
-
What version are you running?
-
@adukes40 If you were to schedule a debug deployment, that would drop you to a linux prompt on the target machine. Key in
fog
and then single-step through the deployment until you get to your post install script. Once you are at that point, hit ctrl-c and break out of the installer script. This will give you a chance to mount the windows disk (as your script would); then you will be in a position to make sure your script is in sync with the reality of the drive.
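For example, from the FOS prompt (assuming a BIOS/MBR layout where Windows lives on /dev/sda2, as in the postdownload script), the manual version of what the script does would be:
mkdir /ntfs
mount.ntfs-3g /dev/sda2 /ntfs
ls /ntfs        # should list Windows, Users, Program Files, etc.
If the ls comes back empty, the mount did not actually take.
-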
@Tom-Elliott 8649
-
@george1421 When I get a few minutes I will give this a try.
-
@adukes40 how much space is on the mount point? I’m driving, so you can get that while in debug mode.
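For reference, something like this from the debug prompt once the partition is mounted (a plain df, nothing FOG-specific):
df -h /ntfs
If /ntfs reports the size of the Windows partition, the mount is good; if it reports a small RAM-sized filesystem instead, the mount silently failed and the copy is landing in memory.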
-
@george1421 when I type fog… it says… An error has been detected… fatal error unknown request type null
-
@adukes40 did you schedule a debug deploy? I’ve only seen this error before when you use the USB boot stick. The fog server should set the type kernel parameter.
-
@george1421 I got in earlier. I did a normal debug, and not a debug deploy… I am trying to putty into the machine now. I have done it before, during debug, but can’t remember the credentials I used.
-
@adukes40 From the FOS engine console, set root’s password with
passwd
. Once that is set, you should be able to putty in as root with whatever password you set. Yeah, the normal debug doesn’t pass the required parameters you need. The deploy debug will drop you at a Linux prompt. Then key in fog<cr> and it will single-step you through the deployment. This is what you want, because you want to break out of the deployment script where your post install script will run. This will give you a chance to check the drive geometry and actually key in (copy/paste) your script until you reach the spot where it fails.
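So the whole sequence looks roughly like this (<target-ip> is a placeholder for whatever address the target machine pulled from DHCP):
passwd              # on the FOS console: set a temporary root password
# then from your desk, with PuTTY or any ssh client:
ssh root@<target-ip>
fog                 # single-step the deployment; ctrl-c when the postinstall step comes up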
-
@george1421 Ah, so I have to set the password from the machine physically. That will have to wait until Monday, I guess; I don’t think I will be in at all this weekend.
-
@adukes40 Yeah, security, it’s such a PITA sometimes.
This is by design; you really don’t want someone hacking into the image deployment process. So this IS a security measure, but it also gives the IT support people a way to debug their process, under their control and not through some common back door.
-
@george1421 Oh, I’m perfectly fine with the design. I just remembered I had PuTTY’d in before on a debug machine, but could not remember how I did it.
-
Is this the exact same image you’re trying to deploy?
An important thing to note is that if you have UEFI images (with a GPT layout, of course), then you need to mount sda4 instead of sda2.
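A quick way to check which layout you are actually dealing with from the debug prompt (assuming blkid is available in the FOS image, which it normally is in a BusyBox build):
cat /proc/partitions     # how many partitions does the disk really have?
blkid /dev/sda*          # which one carries the NTFS system volume?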
-
@Quazz We only have 3 images, and currently we are deploying the 32-bit one. All the images are the same. Plus, this image in particular is the one that was captured, as it is on the subnet where the master node resides.
Tomorrow I will be back at the building, so I will be able to dive into it more.
-
I am having the same issue. The PC has a 500GB drive, so shouldn’t the mount point show that as available? Am I missing a step?
-
@Quazz I do not have anything further. Here is what I do have. This first script works for all devices with mechanical SATA drives and SSDs, but not M.2 SATA:
#!/bin/sh
osdiskpart="/dev/sda2";
driverver="Win7"
mkdir /ntfs 2>/dev/null
mount.ntfs-3g "${osdiskpart}" /ntfs 2>/tmp/mntfail
mkdir /ntfs/Drivers 2>/dev/null
if [ -d "/ntfs/Windows/SysWOW64" ]
then
setarch="x64";
else
setarch="x86";
fi
machine=`dmidecode -s system-product-name`;
machine="${machine%"${machine##*[![:space:]]}"}";
echo "Detected [${machine}] [${driverver}] with this arch [${setarch}] " >> /ntfs/Drivers/machine.txt
rm -f /tmp/mydrivers;
ln -s "/images/Drivers/${driverver}/${machine}/${setarch}/" /tmp/mydrivers;
if [ -d "/tmp/mydrivers" ]
then
cp -r /tmp/mydrivers/ /ntfs/Drivers;
fi
regfile="/ntfs/Windows/System32/config/SOFTWARE"
key="\Microsoft\Windows\CurrentVersion\DevicePath"
devpath="%SystemRoot%\inf;C:\Drivers";
reged -e "$regfile" &>/dev/null <<EOFREG
ed $key
$devpath
q
y
EOFREG
rm -f /tmp/mydrivers;
umount /ntfs

The following script works with the M.2 SATA drives, the mechanical SATA drives, and the SSDs. HOWEVER, it will not work with the OptiPlex 790s nor the 990s; they are the only two models giving the “space” problem.
#!/bin/sh
if [[ $hd == /dev/sda* ]]
then
osdiskpart="/dev/sda2";
else [[ $hd == /dev/nvme* ]]
osdiskpart="/dev/nvme0n1p2";
fi
driverver="Win7"
mkdir /ntfs 2>/dev/null
mount.ntfs-3g "${osdiskpart}" /ntfs 2>/tmp/mntfail
mkdir /ntfs/Drivers 2>/dev/null
if [ -d "/ntfs/Windows/SysWOW64" ]
then
setarch="x64";
else
setarch="x86";
fi
machine=`dmidecode -s system-product-name`;
machine="${machine%"${machine##*[![:space:]]}"}";
echo "Detected [${machine}] [${driverver}] with this arch [${setarch}] " >> /ntfs/Drivers/machine.txt
rm -f /tmp/mydrivers;
ln -s "/images/Drivers/${driverver}/${machine}/${setarch}/" /tmp/mydrivers;
if [ -d "/tmp/mydrivers" ]
then
cp -r /tmp/mydrivers/ /ntfs/Drivers;
fi
regfile="/ntfs/Windows/System32/config/SOFTWARE"
key="\Microsoft\Windows\CurrentVersion\DevicePath"
devpath="%SystemRoot%\inf;C:\Drivers";
reged -e "$regfile" &>/dev/null <<EOFREG
ed $key
$devpath
q
y
EOFREG
rm -f /tmp/mydrivers;
umount /ntfs

For now I reverted back to the first script for imaging, as we only have 1 lab of the M.2 SATA machines. I have not had the time to look at this yet. If anyone has an idea on what to change in the scripts I would be all ears, but for now I do not have a resolution (due to time).
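One thing worth ruling out with both scripts: the errors from mount.ntfs-3g are redirected to /tmp/mntfail and never checked, so if the mount fails (a dirty NTFS volume, for instance), cp -r copies the whole driver tree into the RAM-backed /ntfs directory instead of onto the disk, and that fills up with exactly a "no space left on device" error. A minimal guard, keeping the rest of the script as-is:
if ! mount.ntfs-3g "${osdiskpart}" /ntfs 2>/tmp/mntfail
then
    echo "Mount of ${osdiskpart} failed:"
    cat /tmp/mntfail
    exit 1
fi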
-
@adukes40 said in No space left on device:
if [[ $hd == /dev/sda* ]]
then
osdiskpart="/dev/sda2";
else [[ $hd == /dev/nvme* ]]
osdiskpart="/dev/nvme0n1p2";
fi
I’m not an expert in bash scripts but this seems incorrect to me.
if [ $hd == /dev/sda* ]
then
osdiskpart="/dev/sda2"
else [ $hd == /dev/nvme* ]
osdiskpart="/dev/nvme0n1p2"
fi
Seems better already. The key part seems to be that the incorrect partition is selected on those OptiPlexes. So perhaps the $hd thingy isn’t super reliable?
I would do something like :
if [ -b /dev/sda2 ]; then
osdiskpart="/dev/sda2"
elif [ -b /dev/nvme0n1p2 ]; then
osdiskpart="/dev/nvme0n1p2"
else
echo "No usable partition detected!"
fi
Seeing as you’ll be using those partitions anyway, you might as well test directly for their existence, since the script won’t be able to do anything if they don’t exist.
You only need ; in specific circumstances as well, if I’m not mistaken; it shouldn’t be necessary for simply setting variables.
-
@Quazz A couple of things come to mind here. First, the script structure looks wrong (maybe I’m a class-A nitpicker).
I would expect a bash script to look something similar to this:
if [ $hd == /dev/sda* ]; then
osdiskpart="/dev/sda2";
elif [ $hd == /dev/nvme* ]; then
osdiskpart="/dev/nvme0n1p2";
fi
From there, I’m not sure if wildcards are supported in the test, and the if comparison should be against a string, not /dev/sda*.
With that said, I might rewrite that code as:
if [[ $hd == *"/dev/nvme"* ]]; then
osdiskpart="${hd}0n1p2";
else
osdiskpart="${hd}2";
fi
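A case statement would sidestep the wildcard-in-test question entirely. A sketch, assuming $hd holds the bare device node FOG detected (e.g. /dev/sda or /dev/nvme0n1) and that Windows always sits on partition 2:
case $hd in
    /dev/nvme*) osdiskpart="${hd}p2" ;;   # nvme partition nodes use a 'p' separator
    *)          osdiskpart="${hd}2"  ;;   # sda-style nodes do not
esac
This also works in plain /bin/sh, where [[ ]] is not guaranteed to exist.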