How can I troubleshoot the Diskusage Problem on the Webinterface?
-
@Sebastian-Roth said in How can I troubleshoot the Diskusage Problem on the Webinterface?:
@Gamienator Ahhhh, I see. That makes a lot more sense now. But I'm still wondering about some of the things you posted. So let me ask: are there any other FOG servers, or is it just that single “Frankfurt Node”? From my understanding there should be at least one more server for things to fully add up for me. But maybe it's just me not getting my head around it.
No problem. We have two VMs (on an ESXi hypervisor) here in the network, KCNode1 (.103) and KCServer1 (.104). The FOG server installation is on KCServer1, and on the node (how surprising) the storage installation. Since that server has SSDs and an HDD, I set the node up as follows:
1 virtual hard disk with 50 GB on an SSD for the OS (sda)
1 virtual hard disk with 450 GB on the other SSD for the image space on “fast” storage (sdb)
1 virtual hard disk with 900 GB on the HDD as an archive (sdc)
My idea was: image everything to the SSD (mounted on /image1) and replicate to sdc to have a backup. For the moment! It's all work in progress; a RAID card is on the way so the SSDs can run in RAID 1, and so on.
Since our company has more locations, we want to link them and install a node at every location. That's the reason I installed the node here, to test everything.
Please run
lsblk
and
df -h
on your Frankfurt Node and post the output here.
No problem:
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   50G  0 disk
├─sda1   8:1    0    1K  0 part
├─sda2   8:2    0 48,1G  0 part /
└─sda5   8:5    0  1,9G  0 part [SWAP]
sdb      8:16   0  460G  0 disk
└─sdb1   8:17   0  460G  0 part /image1
sdc      8:32   0  870G  0 disk
└─sdc1   8:33   0  870G  0 part /image2
sr0     11:0    1 1024M  0 rom
Filesystem     Size  Used Avail Use% Mounted on
udev           992M     0  992M   0% /dev
tmpfs          201M  8,2M  193M   5% /run
/dev/sda2       48G  3,1G   42G   7% /
tmpfs         1003M     0 1003M   0% /dev/shm
tmpfs          5,0M     0  5,0M   0% /run/lock
tmpfs         1003M     0 1003M   0% /sys/fs/cgroup
/dev/sdb1      452G   30G  400G   7% /image1
/dev/sdc1      856G   77M  812G   1% /image2
tmpfs          201M     0  201M   0% /run/user/0
Cheers,
Gamie -
@Gamienator said in How can I troubleshoot the Diskusage Problem on the Webinterface?:
My idea was: image everything to the SSD (mounted on /image1) and replicate to sdc to have a backup.
Well then that's a totally different story. Still, that wouldn't be the way I'd do it, but that's fine. I'd just do a simple
rsync
to another disk (preferably on a totally different host!) if it's just about backups.
So we are back to the initial question of why disk usage is not being shown properly in your case. Is it only the SSD part not showing? What happens if you select the HDD one? And what about KCServer1's disk usage?
-
@Sebastian-Roth Okay, then maybe our “idea” was wrong the whole time
So like I said before, we have different locations. At our headquarters we wanted to create ONE golden image of Windows 10. After that we wanted to install a storage node at every location, group it correctly, and let it replicate. Then we wanted to let everyone boot to our MAIN FOG server via VPN and have them image from their local node. That way we would have control over all locations AND an awesome inventory of all machines. My colleagues have been working on it for two months now, but they don't have the Linux knowledge, since they only use Windows. Now that I'm here, the project has made more progress. From what I understood, I thought enabling the location plugin with user control would achieve our goal. But it seems like I was wrong
@Sebastian-Roth said in How can I troubleshoot the Diskusage Problem on the Webinterface?:
So we are back to the initial question of why disk usage is not being shown properly in your case. Is it only the SSD part not showing? What happens if you select the HDD one? And what about KCServer1's disk usage?
That's the interesting part: the HDD isn't working either! After that I edited the node to point it at /images, but with the same result, no space shown. Or does it take some time until it gets refreshed? KCServer1's disk usage is shown perfectly after adding it again. Since I didn't want to use KCServer1's space (it only has 50 GB on /), I removed it from FOG as a storage node. Then I edited the node again and let root connect, with the same result. But after you told me about the rsync approach, maybe I'll reinstall the node VM, mount the SSD at /images, and set up an rsync job that clones to /dev/sdc (correctly mounted, of course)
-
@Gamienator said in How can I troubleshoot the Diskusage Problem on the Webinterface?:
So like I said before, we have different locations. At our headquarters we wanted to create ONE golden image of Windows 10. After that we wanted to install a storage node at every location, group it correctly, and let it replicate. Then we wanted to let everyone boot to our MAIN FOG server via VPN and have them image from their local node. That way we would have control over all locations AND an awesome inventory of all machines. My colleagues have been working on it for two months now, but they don't have the Linux knowledge, since they only use Windows. Now that I'm here, the project has made more progress. From what I understood, I thought enabling the location plugin with user control would achieve our goal. But it seems like I was wrong
This is not wrong. This is by design how FOG works.
I haven't read every line in detail, but I think where things went wrong in your design is on the storage node, where you have two data disks. While I haven't looked in the code, I'm pretty sure the storage nodes do not support multiple disks via multiple storage node configurations on the master server. If you did that, replication would fill each disk on the storage node with exactly the same content. If you want multiple disks on a storage node, you need to create an LVM volume (kind of like a software spanned RAID) that can combine multiple disks. I don't want to go down that rabbit hole if I misunderstood what you are doing.
But having a master FOG server at HQ and storage nodes at each location then using the location plugin to assign hosts and storage nodes to locations is how FOG is designed.
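The LVM approach mentioned above could be sketched like this. This is only a sketch: the partition names sdb1/sdc1 come from the lsblk output earlier in the thread, the volume group name fogvg is made up, and these commands destroy any existing data on those partitions.

```shell
# Pool both data disks into one logical volume so a single /images
# export spans them. Run as root; this WIPES sdb1 and sdc1!
pvcreate /dev/sdb1 /dev/sdc1           # mark partitions as LVM physical volumes
vgcreate fogvg /dev/sdb1 /dev/sdc1     # combine them into one volume group
lvcreate -l 100%FREE -n images fogvg   # one logical volume using all free space
mkfs.ext4 /dev/fogvg/images            # create a filesystem on it
mkdir -p /images
mount /dev/fogvg/images /images        # FOG then sees one big image store
```

The trade-off versus the rsync idea: LVM gives you one large store but no backup, since spanning disks means losing either disk loses data.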
-
@Gamienator said in How can I troubleshoot the Diskusage Problem on the Webinterface?:
1 virtual hard disk with 50 GB on an SSD for the OS (sda)
1 virtual hard disk with 450 GB on the other SSD for the image space on “fast” storage (sdb)
1 virtual hard disk with 900 GB on the HDD as an archive (sdc)
My idea was: image everything to the SSD (mounted on /image1) and replicate to sdc to have a backup. For the moment! It's all work in progress; a RAID card is on the way so the SSDs can run in RAID 1, and so on.
For your sanity with this design, I would mount the SSD on /images instead of /image1. As for FOG, it doesn't and shouldn't care about the HDD; that is not in scope for FOG. BUT you can implement the backup pretty easily with a cron job (think Windows scheduled task) that runs rsync to clone /images to /images2. That way you have your backup and FOG won't care.
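The cron-plus-rsync idea could look like the following. A sketch only: it assumes the SSD ends up mounted on /images and the HDD on /images2, as discussed above, and a 02:00 schedule picked arbitrarily.

```shell
# Nightly mirror of the image store to the archive disk.
# Add this line to root's crontab (edit with: crontab -e).
# -a preserves permissions/ownership/timestamps; --delete makes
# /images2 an exact mirror of /images (images deleted from the
# source are also removed from the mirror).
0 2 * * * rsync -a --delete /images/ /images2/
```

Note the trailing slash on /images/: it tells rsync to copy the directory's contents rather than creating /images2/images.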
-
@george1421 Thanks for the additional information! I'll reinstall the complete node VM and try that.
One more question for the node: Is there a reason why the node also starts a MariaDB Server?
-
@Gamienator said in How can I troubleshoot the Diskusage Problem on the Webinterface?:
Is there a reason why the node also starts a MariaDB Server
From a storage node's function, I can't think of a reason why the MariaDB server is installed and running. The storage nodes connect back to the master node and use that database remotely.
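If you do stop it on the node, the usual systemd commands would be something like the following. A sketch, assuming a systemd-based distro where the unit is named mariadb (on some distros it is mysql instead).

```shell
# Stop the local MariaDB instance on the storage node and keep it
# from starting again at boot. Unit may be named "mysql" elsewhere.
systemctl stop mariadb
systemctl disable mariadb
# Check it is no longer running:
systemctl status mariadb --no-pager
```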
-
@george1421 Okay, then after a reinstall and confirming everything works, I'll stop the server and see what happens
-
@george1421 Hi again,
sadly it still doesn't work. It's still the same issue: “can't retrieve server information”. I'll now try a completely different PC for that. But is there a cache somewhere that needs to be cleared?
Guys, I may have found out why it wasn't working. I'm now confirming it with some changes I'll make. By tomorrow at the latest I'll write back what was wrong the whole time!
-
Okay, everyone, I found the reason why it wasn't working. @george1421 and @Sebastian-Roth, it was NOT because I used /image1. It was a completely dumb reason!
I set up the server so that all HTTP traffic gets redirected to HTTPS. My dumb mistake: I forgot to set up SSL on the NODE! So it seems that because I use https://fog.xyz/fog, it tries to reach the nodes via SSL as well! After setting up SSL on the node, boom:
So sorry for sending you down a completely wrong path
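For anyone hitting the same symptom: the kind of redirect that caused this looks roughly like the following. A sketch only; it assumes Apache (which FOG installs by default) and the fog.xyz hostname used above.

```apache
# Hypothetical vhost on the master FOG server: every plain-HTTP request
# is bounced to HTTPS. The web UI is then loaded over https://, so it
# also queries the storage nodes over HTTPS -- which fails with
# "can't retrieve server information" until SSL is set up on each node.
<VirtualHost *:80>
    ServerName fog.xyz
    RewriteEngine On
    RewriteRule ^/?(.*)$ https://fog.xyz/$1 [R=301,L]
</VirtualHost>
```

So the rule of thumb: if the master redirects to HTTPS, every storage node needs a working SSL setup too.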