CentOS 6 GUI login loop.
-
@ManofValor Try to ssh in.
-
@Wayne-Workman
I can do that. Everything seems normal from the command line. -
@ManofValor You need to check things. Seems and is are two different things. The first thing to check is free disk space.
-
@ManofValor
Try logging in to text mode with the root account. When this happens to me, I can only log in with the root account, and only locally, because root login over ssh is disabled in our environment. -
I think turning the NAS back on should have fixed the disk space issue, because I’ve only done a couple of images since I got it going. Here is the df output.
[root@localhost fogadmin]# df
Filesystem                      1K-blocks      Used  Available Use% Mounted on
/dev/mapper/centos00-root00      20535120  20389424          0 100% /
devtmpfs                          1921780         0    1921780   0% /dev
tmpfs                             1936948      2820    1934128   1% /dev/shm
tmpfs                             1936948      9076    1927872   1% /run
tmpfs                             1936948         0    1936948   0% /sys/fs/cgroup
/dev/sda5                          991512    336764     587164  37% /boot
/dev/mapper/fog-opt_fog_images  413199680 233741596  158445604  60% /opt
/dev/sdb1                      8674999312  54524064 8183255204   1% /images
tmpfs                              387392        12     387380   1% /run/user/42
tmpfs
-
@ManofValor Your root partition is 100% full. That’s the problem.
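For anyone hitting this, a quick way to see what is actually eating the root partition (a generic sketch, nothing FOG-specific; run it as root over ssh):

```shell
# List the largest directories on the root filesystem.
# -x stays on one filesystem, so /images, /opt and the other
# mounts are skipped; errors from unreadable dirs are discarded.
du -xh --max-depth=2 / 2>/dev/null | sort -h | tail -n 15
```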
-
@Wayne-Workman
So I can’t log in because it’s full? -
@Wayne-Workman
I don’t understand; I haven’t put anything on there to fill it up. -
@ManofValor Because your /home is on the same partition as /, some files need to be written when you log in (session files, caches); with zero free space those writes fail, and that is what throws you back to the login screen.
-
I ran this:
[root@localhost Downloads]# df -ah --o
Filesystem                     Type            Inodes IUsed IFree IUse% Size  Used  Avail Use% File Mounted on
rootfs                         -                    -     -     -     -    -     -      -    -    - /
sysfs                          sysfs                0     0     0     -    0     0      0    -    - /sys
proc                           proc                 0     0     0     -    0     0      0    -    - /proc
devtmpfs                       devtmpfs          470K   494  469K    1% 1.9G     0   1.9G   0%    - /dev
securityfs                     securityfs           0     0     0     -    0     0      0    -    - /sys/kernel/security
tmpfs                          tmpfs             473K     6  473K    1% 1.9G  2.8M   1.9G   1%    - /dev/shm
devpts                         devpts               0     0     0     -    0     0      0    -    - /dev/pts
tmpfs                          tmpfs             473K   658  473K    1% 1.9G  8.9M   1.9G   1%    - /run
tmpfs                          tmpfs             473K    13  473K    1% 1.9G     0   1.9G   0%    - /sys/fs/cgroup
cgroup                         cgroup               0     0     0     -    0     0      0    -    - /sys/fs/cgroup/systemd
pstore                         pstore               0     0     0     -    0     0      0    -    - /sys/fs/pstore
cgroup                         cgroup               0     0     0     -    0     0      0    -    - /sys/fs/cgroup/cpu,cpuacct
cgroup                         cgroup               0     0     0     -    0     0      0    -    - /sys/fs/cgroup/net_cls
cgroup                         cgroup               0     0     0     -    0     0      0    -    - /sys/fs/cgroup/blkio
cgroup                         cgroup               0     0     0     -    0     0      0    -    - /sys/fs/cgroup/hugetlb
cgroup                         cgroup               0     0     0     -    0     0      0    -    - /sys/fs/cgroup/perf_event
cgroup                         cgroup               0     0     0     -    0     0      0    -    - /sys/fs/cgroup/devices
cgroup                         cgroup               0     0     0     -    0     0      0    -    - /sys/fs/cgroup/memory
cgroup                         cgroup               0     0     0     -    0     0      0    -    - /sys/fs/cgroup/cpuset
cgroup                         cgroup               0     0     0     -    0     0      0    -    - /sys/fs/cgroup/freezer
configfs                       configfs             0     0     0     -    0     0      0    -    - /sys/kernel/config
/dev/mapper/centos00-root00    ext4              1.3M  155K  1.2M   13%  20G   20G      0 100%    - /
systemd-1                      -                    -     -     -     -    -     -      -    -    - /proc/sys/fs/binfmt_misc
mqueue                         mqueue               0     0     0     -    0     0      0    -    - /dev/mqueue
debugfs                        debugfs              0     0     0     -    0     0      0    -    - /sys/kernel/debug
hugetlbfs                      hugetlbfs            0     0     0     -    0     0      0    -    - /dev/hugepages
sunrpc                         rpc_pipefs           0     0     0     -    0     0      0    -    - /var/lib/nfs/rpc_pipefs
nfsd                           nfsd                 0     0     0     -    0     0      0    -    - /proc/fs/nfsd
/dev/sda5                      ext4               63K   365   63K    1% 969M  329M   574M  37%    - /boot
/dev/mapper/fog-opt_fog_images ext4               26M  9.7K   26M    1% 395G  223G   152G  60%    - /opt
/dev/sdb1                      ext4              261M    29  261M    1% 8.1T   52G   7.7T   1%    - /opt/fog/images
/dev/sdb1                      ext4              261M    29  261M    1% 8.1T   52G   7.7T   1%    - /images
tmpfs                          tmpfs             473K    14  473K    1% 379M   12K   379M   1%    - /run/user/42
gvfsd-fuse                     fuse.gvfsd-fuse      0     0     0     - 0.0K  0.0K   0.0K    -    - /run/user/42/gvfs
fusectl                        fusectl              0     0     0     -    0     0      0    -    - /sys/fs/fuse/connections
tmpfs                          tmpfs             473K     1  473K    1% 379M     0   379M   0%    - /run/user/1000
binfmt_misc                    binfmt_misc          0     0     0     -    0     0      0    -    - /proc/sys/fs/binfmt_misc
What does it all mean and how do I know what to delete or rearrange?
How did it fill up when I didn’t do anything? -
@ManofValor Seeing as /images (or whatever your relevant storage point is) is expected to be on the NAS, I’d recommend cleaning out the current storage location as it’s located on your main disk. You could just do something like:
Unmount the current NAS if it is already mounted with:
umount /images
Then check the /images directory and remove any files in there. What you delete is up to you; I’m just giving you the fix. Make sure the NAS is actually unmounted first so you don’t delete all your current NAS images.
rm -rf /images/*
Then try to mount the NAS again.
-
@Tom-Elliott Very good thinking, Tom. If the NAS was off, mounting failed, and any images captured would have gone straight onto the root partition. Especially since I remember ManofValor running the installer several times before the NAS was correctly configured, the needed .mntcheck files and dev folder would absolutely be present, so any upload would keep writing until space ran out.
@ManofValor Try Tom’s advice, but be careful: you MUST successfully unmount the /images directory for the NAS, and you SHOULD confirm and reconfirm that it unmounted correctly. Since you’re new to Linux, I’d even suggest turning the NAS off after unmounting. Then run the delete commands that Tom posted. Then run
df -h
to check free space. If the root partition is substantially freed up, hook everything back up, turn the NAS on, give the server a reboot, and see if things work. Tom’s commands also ensure that uploading will never work without the NAS being properly mounted, unless of course you run the installer while the NAS is unmounted, since the installer would re-create the files/folders needed for image capture on the root partition.
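That confirm-and-reconfirm step can be made mechanical (a sketch; `mountpoint` ships with util-linux on CentOS, and the paths are the ones from this thread):

```shell
# Only delete if /images is verifiably NOT a mount point,
# so the rm cannot reach the NAS contents.
umount /images 2>/dev/null || true   # ignore the error if it was not mounted
if mountpoint -q /images; then
    echo "/images is still mounted -- do NOT delete anything" >&2
else
    rm -rf /images/*   # clears only the stranded copies on the root disk
    df -h /            # confirm the root partition freed up
fi
```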
Let us know how it goes.
-
Also: the problem was found but not yet solved, so I marked the thread unsolved. I don’t want others to skip reading and helping because they think it’s already solved.
-
@Wayne-Workman
Good morning,
Everything on the NAS is just tests, so if I don’t care what’s deleted then I don’t need to unmount, right? -
@ManofValor You would still need to unmount, because the space being consumed is on your main system’s disk, hidden underneath the mount point. While the NAS is mounted over /images, deleting files there only frees space on the NAS; the stranded files on the root partition stay hidden and untouched.
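If in doubt, you can ask df which disk a path is sitting on right now (a sketch; `--output` needs coreutils 8.21 or newer, and the device names are the ones from your listing above):

```shell
IMAGES=/images   # adjust if your storage path differs
# If the source column shows /dev/mapper/centos00-root00, files under
# $IMAGES sit on the full root partition; if it shows /dev/sdb1,
# they are on the NAS and deleting them would destroy NAS data.
df --output=source,target "$IMAGES" 2>/dev/null || echo "$IMAGES is not accessible"
```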
-
@Tom-Elliott
Ok, this thread is solved. After I deleted everything I was able to log in. Thanks again, guys.