Ubuntu Help - Boot Problem #N00b
-
Hi All,
I’m a complete noob when it comes to Linux. It was working fine for a while but now won’t “boot” fully. The error shows “OS Error ERRNO 28 No space left on device”.
It’s a VM running on Hyper-V. The VHDX is a 500 GB max disk with a current file size of 97 GB.
fdisk shows:
root@tie-fogdeploy-01:~# fdisk -l
Disk /dev/sda: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disk model: Virtual Disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 62FE1AF7-BC7B-4B14-90C3-A9037D6F882C

Device       Start        End    Sectors  Size Type
/dev/sda1     2048       4095       2048    1M BIOS boot
/dev/sda2     4096    4198399    4194304    2G Linux filesystem
/dev/sda3  4198400 1048573951 1044375552  498G Linux filesystem

Disk /dev/mapper/ubuntu--vg-ubuntu--lv: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
root@tie-fogdeploy-01:~#
What do I do to get this back up? I’m guessing it’s something simple like expanding the OS to see the full 500 GB, but I don’t know how to do this and don’t want to break this server.
Cheers in advance
Roger
-
@RogerBrownTDL Please run the following commands and post output here:
df -h
du -h --max-depth=1 /
mount
lsblk
-
@Sebastian-Roth said in Ubuntu Help - Boot Problem #N00b:
root@tie-fogdeploy-01:~# df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                               5.7G     0  5.7G   0% /dev
tmpfs                              1.2G  118M  1.1G  11% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   98G   98G     0 100% /
tmpfs                              5.8G     0  5.8G   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                              5.8G     0  5.8G   0% /sys/fs/cgroup
/dev/loop0                          92M   92M     0 100% /snap/lxd/23991
/dev/loop1                          64M   64M     0 100% /snap/core20/1822
/dev/loop2                          92M   92M     0 100% /snap/lxd/24061
/dev/loop3                          50M   50M     0 100% /snap/snapd/17950
/dev/loop5                          50M   50M     0 100% /snap/snapd/18357
/dev/loop4                          64M   64M     0 100% /snap/core20/1778
/dev/sda2                          2.0G  207M  1.6G  12% /boot
root@tie-fogdeploy-01:~# du -h --max-depth=1 /
21M     /tftpboot
207M    /boot
118M    /run
72K     /home
4.0K    /mnt
0       /dev
1.4G    /snap
1.7G    /var
48K     /root
1.5G    /opt
4.0K    /tftpboot.prev
du: cannot access '/proc/42729/task/42729/fd/3': No such file or directory
du: cannot access '/proc/42729/task/42729/fdinfo/3': No such file or directory
du: cannot access '/proc/42729/fd/4': No such file or directory
du: cannot access '/proc/42729/fdinfo/4': No such file or directory
0       /proc
0       /sys
2.7G    /usr
89G     /images
7.0M    /etc
4.0K    /media
12K     /srv
16K     /lost+found
28K     /tmp
100G    /
root@tie-fogdeploy-01:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,noexec,relatime,size=5941220k,nr_inodes=1485305,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=1197472k,mode=755)
/dev/mapper/ubuntu--vg-ubuntu--lv on / type ext4 (rw,relatime)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=28,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=16183)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
/var/lib/snapd/snaps/lxd_23991.snap on /snap/lxd/23991 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/core20_1822.snap on /snap/core20/1822 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/lxd_24061.snap on /snap/lxd/24061 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/snapd_17950.snap on /snap/snapd/17950 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/snapd_18357.snap on /snap/snapd/18357 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/core20_1778.snap on /snap/core20/1778 type squashfs (ro,nodev,relatime,x-gdu.hide)
/dev/sda2 on /boot type ext4 (rw,relatime)
tmpfs on /run/snapd/ns type tmpfs (rw,nosuid,nodev,noexec,relatime,size=1197472k,mode=755)
tracefs on /sys/kernel/debug/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
root@tie-fogdeploy-01:~# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0                       7:0    0 91.8M  1 loop /snap/lxd/23991
loop1                       7:1    0 63.3M  1 loop /snap/core20/1822
loop2                       7:2    0 91.9M  1 loop /snap/lxd/24061
loop3                       7:3    0 49.8M  1 loop /snap/snapd/17950
loop4                       7:4    0 63.3M  1 loop /snap/core20/1778
loop5                       7:5    0 49.9M  1 loop /snap/snapd/18357
sda                         8:0    0  500G  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    2G  0 part /boot
└─sda3                      8:3    0  498G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0  100G  0 lvm  /
sr0                        11:0    1 1024M  0 rom
root@tie-fogdeploy-01:~#
-
@RogerBrownTDL Ok, your /images directory is on the root partition and just filled it up. Not too much trouble - let’s hope it didn’t break the database.
Run
ls -al /images/dev
and post output here. Pretty sure there will be old broken image captures that we can delete.
-
@Sebastian-Roth said in Ubuntu Help - Boot Problem #N00b:
ls -al /images/dev
root@tie-fogdeploy-01:~# ls -al /images/dev
total 16
drwxrwxrwx 4 fogproject root 4096 Mar  3 15:43 .
drwxrwxrwx 7 fogproject root 4096 Mar  1 19:31 ..
drwxrwxrwx 2 root       root 4096 Mar  3 15:45 c8d9d2d4c8ac
-rwxrwxrwx 1 fogproject root    0 Sep  7 20:42 .mntcheck
drwxrwxrwx 2 fogproject root 4096 Sep  7 20:42 postinitscripts
root@tie-fogdeploy-01:~#
-
@RogerBrownTDL Ok, there is really only one. Probably the one that failed to upload because it filled the disk. I suggest you remove that because with a full disk that is not going to be a valid image in any case.
rm -rf /images/dev/c8d9d2d4c8ac
Make sure you don’t mess with this command (e.g. use spaces in other places or *) because it can do a lot of harm if used the wrong way. Just wanted to get this out beforehand as you told us you are a Linux beginner.
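If you want to double-check what is actually in there before deleting, something along these lines is harmless (read-only, nothing gets removed; the directory name is taken from your ls output above):
du -sh /images/dev/c8d9d2d4c8ac    # show how much space the broken capture takes up
ls -al /images/dev/c8d9d2d4c8ac    # list the partial image files inside it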
Now run
df -h
to check the disk space available and reboot the whole VM.
-
@Sebastian-Roth said in Ubuntu Help - Boot Problem #N00b:
df -h
Okay, removed the offending one using rm -rf /images/dev/c8d9d2d4c8ac. df -h shows:
root@tie-fogdeploy-01:~# df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                               5.7G     0  5.7G   0% /dev
tmpfs                              1.2G  112M  1.1G  10% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   98G   86G  7.8G  92% /
tmpfs                              5.8G     0  5.8G   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                              5.8G     0  5.8G   0% /sys/fs/cgroup
/dev/loop0                          92M   92M     0 100% /snap/lxd/23991
/dev/loop1                          64M   64M     0 100% /snap/core20/1822
/dev/loop2                          92M   92M     0 100% /snap/lxd/24061
/dev/loop3                          50M   50M     0 100% /snap/snapd/17950
/dev/loop5                          50M   50M     0 100% /snap/snapd/18357
/dev/loop4                          64M   64M     0 100% /snap/core20/1778
/dev/sda2                          2.0G  207M  1.6G  12% /boot
root@tie-fogdeploy-01:~#
Rebooted VM. df -h shows
root@tie-fogdeploy-01:~# df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                               5.7G     0  5.7G   0% /dev
tmpfs                              1.2G  1.1M  1.2G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   98G   86G  7.8G  92% /
tmpfs                              5.8G     0  5.8G   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                              5.8G     0  5.8G   0% /sys/fs/cgroup
/dev/loop1                          64M   64M     0 100% /snap/core20/1778
/dev/loop0                          64M   64M     0 100% /snap/core20/1822
/dev/loop2                          50M   50M     0 100% /snap/snapd/18357
/dev/loop5                          92M   92M     0 100% /snap/lxd/24061
/dev/loop3                          50M   50M     0 100% /snap/snapd/17950
/dev/loop4                          92M   92M     0 100% /snap/lxd/23991
/dev/sda2                          2.0G  207M  1.6G  12% /boot
tmpfs                              1.2G     0  1.2G   0% /run/user/0
root@tie-fogdeploy-01:~#
It’s a 500 GB VHDX though, surely this shouldn’t be full?
-
@RogerBrownTDL In the long run you need to look into purging some of the images and/or extending the disk space.
The other outputs you posted show that the root partition (LVM) really is only 100 GB in size while the whole VHDX container is 500 GB.
Extending the space is a bit more advanced but should be possible for you to achieve as well because things seem to be prepared already. After the reboot of the VM run the following commands and post output here:
pvs
vgs
lvs
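Roughly speaking, extending means growing the logical volume and then the ext4 filesystem on it. This is only a sketch with a placeholder size, not something to run yet; the exact numbers depend on what pvs/vgs/lvs report:
lvextend -L +100G /dev/ubuntu-vg/ubuntu-lv    # grow the logical volume (size here is a placeholder)
resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv   # grow the ext4 filesystem to match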
-
@Sebastian-Roth said in Ubuntu Help - Boot Problem #N00b:
lvs
Output is as follows:
root@tie-fogdeploy-01:~# pvs
  PV         VG        Fmt  Attr PSize    PFree
  /dev/sda3  ubuntu-vg lvm2 a--  <498.00g <398.00g
root@tie-fogdeploy-01:~# lvs
  LV        VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  ubuntu-lv ubuntu-vg -wi-ao---- 100.00g
root@tie-fogdeploy-01:~# vgs
  VG        #PV #LV #SN Attr   VSize    VFree
  ubuntu-vg   1   1   0 wz--n- <498.00g <398.00g
root@tie-fogdeploy-01:~#
-
lvextend -L +398g ubuntu-lv --resizefs
You may need to reboot for it to take effect. This will use all the remaining LVM volume space for your logical volume. You can’t go smaller after the fact, so be sure you want to use it all first.
-
@lukebarone said in Ubuntu Help - Boot Problem #N00b:
lvextend -L +398g ubuntu-lv --resizefs
To confirm, this will mean I can actually use the box to its full 500 GB drive allocation?
-
@RogerBrownTDL Yes.
lvextend - Increase the volume size
-L - Specify the size to increase it by
ubuntu-lv - The name of your logical volume (as reported)
--resizefs - Resize the EXT{2|3|4} file system
-
In which case, based on the output here:
root@tie-fogdeploy-01:~# pvs
PV VG Fmt Attr PSize PFree
/dev/sda3 ubuntu-vg lvm2 a-- <498.00g <398.00g
root@tie-fogdeploy-01:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
ubuntu-lv ubuntu-vg -wi-ao---- 100.00g
root@tie-fogdeploy-01:~# vgs
VG #PV #LV #SN Attr VSize VFree
ubuntu-vg 1 1 0 wz--n- <498.00g <398.00g
root@tie-fogdeploy-01:~# lvextend -L +398g ubuntu-lv --resizefs
Please specify a logical volume path.
Run `lvextend --help’ for more information.
root@tie-fogdeploy-01:~#

Would it be lvextend -L +398g ubuntu-vg --resizefs?
-
@RogerBrownTDL Try this instead:
lvextend -L +398g ubuntu-vg/ubuntu-lv --resizefs
-
@lukebarone response:
root@tie-fogdeploy-01:~# lvextend -L +398g ubuntu-vg/ubuntu-lv --resizefs
Insufficient free space: 101888 extents needed, but only 101887 available
root@tie-fogdeploy-01:~#
-
@lukebarone thoughts?
-
@lukebarone I am wondering if it’s wise to create another volume instead of extending the existing one. This way the images could be put on the separate volume/partition and would not be able to fill the root partition. But that’s just me trying to prevent this from happening again.
The downside of my proposal is that the existing 100 GB root partition can’t be used by images and is therefore kind of wasted, because the FOG server itself will never use 100 GB of disk space with the images on a separate partition.
So maybe it’s actually wise to just extend as proposed and @RogerBrownTDL needs to keep an eye on the disk space from now on.
So if you wanna go this path, use +100%FREE rather than a fixed size (the volume group has just under 398 GB free, which is why +398g came up one extent short) and do:
lvextend --extents +100%FREE ubuntu-vg/ubuntu-lv --resizefs
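If that runs without errors, a quick check afterwards (using the same names reported above) should show the new size:
lvs ubuntu-vg/ubuntu-lv    # LSize should now be close to 498G
df -h /                    # the root filesystem should show the extra free space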
-
@Sebastian-Roth So obviously as a Linux noob, I know very little… I’m purely (I know, insert puke emoji here) a Windows Sysadmin. If I understand it correctly, the way I’ve got it set up is like having a single big C:\ and all I’m wanting to do is get the entire FOG server to see it all rather than just a partition of it… If I follow that command
“lvextend --extents +100%FREE ubuntu-vg/ubuntu-lv --resizefs”
Will it do that, so that the full 500 GB would be visible and the stored images remain intact?
-
@RogerBrownTDL said in Ubuntu Help - Boot Problem #N00b:
Will it do that so that the full 500gb would be visible and the images stored remain intact?
Yes! Although I am not 100% sure the filesystem resize (to full size) will work while the system is running. It should work but I can’t promise you it will. As well, just for safety reasons, I always suggest people take a backup copy before doing these kinds of operations on a production server. It should be really easy to take a snapshot in Hyper-V before going ahead.
The other option I was talking about would be like adding a D:\ drive and moving all the images to that new partition. As I said, it’s wise on the one hand so images can’t fill up your important C:\ drive (with the database on it), but on the other hand you waste free space on C:\…
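Just to illustrate that second option, here is a rough sketch only, not something to run as-is: the volume name images-lv and the 350G size are made-up examples, FOG services should be stopped while copying, and the free space should be checked first.
lvcreate -L 350G -n images-lv ubuntu-vg        # carve a new logical volume out of the VG's free space
mkfs.ext4 /dev/ubuntu-vg/images-lv             # put an ext4 filesystem on it
mount /dev/ubuntu-vg/images-lv /mnt            # mount it somewhere temporary
cp -a /images/. /mnt/                          # copy the existing images across
umount /mnt
mount /dev/ubuntu-vg/images-lv /images         # mount the new volume over /images (old copies stay hidden underneath)
echo '/dev/ubuntu-vg/images-lv /images ext4 defaults 0 2' >> /etc/fstab   # make the mount permanent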
-
@Sebastian-Roth Further output I’ve got, if this makes more sense:
root@tie-fogdeploy-01:~# sudo parted /dev/sda unit MiB print
Model: Msft Virtual Disk (scsi)
Disk /dev/sda: 512000MiB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start    End        Size       File system  Name  Flags
 1      1.00MiB  2.00MiB    1.00MiB                        bios_grub
 2      2.00MiB  2050MiB    2048MiB    ext4
 3      2050MiB  511999MiB  509949MiB

If the system isn’t running, surely I wouldn’t be able to do anything, so it needs to be online?