How can I troubleshoot the Diskusage Problem on the Webinterface?
-
Hi there,
I got the following FOG Setup:
Two VMs, one as the server with a small HDD and one as a storage node. Both run Debian Stretch. Everything works fine: capturing images from the node, deploying images and so on. What's weird is the web interface:
And when I click on it it shows me this:
Can someone give me a hint where I can troubleshoot this?
Cheers,
Gamie
-
@Gamienator The master FOG server sends an HTTP POST request to the storage node to get the disk usage stats. So maybe HTTP communication is blocked between the nodes, or something is failing on the storage node.
You could also try sending that request manually from another PC. Open this URL in your browser: http://ip.of.storage.node/fog/management/index.php?node=home&sub=diskusage&id=2 (the ID is just a guess; it could be a different one in your case. Try a few or look it up in the storage node configuration: the ID is in the URL when you edit the storage node.)
See if you have any errors in the apache logs. Log file paths are mentioned in my signature.
-
@Sebastian-Roth Ah! Thanks! Could that mean I have to set up my additional image folders on the NODE? Because I'm not saving to /images on the node. Two hard disks were added and mounted on /image1 and /image2.
When I open the address you sent me I get the following message:
This is a storage node, please do not access the web ui here!
Cheers,
-
@Gamienator said in How can I troubleshoot the Diskusage Problem on the Webinterface?:
When I open the address you sent me I get the following message:
This is a storage node, please do not access the web ui here!
Ah sorry, as I didn't have a storage node at hand I only tested this on my master node. AFAIK the same code is used for the storage node, but it uses HTTP POST. So you could try sending a request from your master server using curl:
curl -d "sub=diskusage;id=2" "http://ip.of.storage.node/fog/management/index.php?node=home"
Have you checked the apache logs on the storage node? Any errors you see in there?
Could that mean I have to set up my additional image folders on the NODE? Because I'm not saving to /images on the node. Two hard disks were added and mounted on /image1 and /image2.
Personally I'd use LVM to join those two disks into one logical volume and mount that as /images. That causes a lot fewer problems and can be extended with more disks if needed. If you want to stick with what you have, please post a screenshot of the storage node settings you have in the web UI (on the master, that is).
-
Thanks for your answer!
So the SSH output is:
root@KCFOGServer1:~# curl -d "sub=diskusage;id=2" http://172.16.10.103/fog/management/index.php?node=home
This is a storage node, please do not access the web ui here!
root@KCFOGServer1:~#
And here is the screenshot:
I've never used LVM, maybe I'll have a look into that too. Sadly there was nothing in the Apache error log.
-
@Gamienator Hmmm, it’s been a fair while since I last looked into this and somehow got it wrong. Please try the following:
curl "http://172.16.10.103/fog/status/freespace.php?path="$(echo -n "/image1" | base64)
Can you please post a screenshot of the storage nodes and groups overview as well? I wonder a bit about why you have this storage node set to be a master node. Could be fine but I don’t get the full picture yet.
About LVM you might want to read my post here: https://forums.fogproject.org/post/118716 (maybe the whole topic is interesting for you)
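As a side note on that freespace.php call: the path parameter has to be base64-encoded, with no trailing newline (hence echo -n). A quick sketch of the encoding step, using the /image1 path and node IP from this thread:

```shell
# Encode the image path the way freespace.php expects it.
# echo -n avoids a trailing newline sneaking into the encoding.
PATH_B64=$(echo -n "/image1" | base64)
echo "$PATH_B64"   # prints L2ltYWdlMQ==

# The full request would then look like (not run here):
#   curl "http://172.16.10.103/fog/status/freespace.php?path=$PATH_B64"
```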
-
@Sebastian-Roth said in How can I troubleshoot the Diskusage Problem on the Webinterface?:
curl "http://172.16.10.103/fog/status/freespace.php?path="$(echo -n "/image1" | base64)
Oh, there’s now a different result:
root@KCFOGServer1:~# curl "http://172.16.10.103/fog/status/freespace.php?path="$(echo -n "/image1" | base64)
{"free":"428730351616","used":"31650299904"}
root@KCFOGServer1:~#
Sure, here are the screenshots:
It's in preparation; we're new to FOG and planning to distribute images to different locations, all connected with site-to-site VPN and a node at every location.
-
@Gamienator Ahhhhh, I see. That makes more sense now. But I'm still wondering about some of the things you posted. So let me ask: are there any other FOG servers, or is it just that single "Frankfurt Node"? From my understanding there should be at least one more server for things to fully add up for me. But maybe it's just me not getting my head around it.
Anyway, why did you set up two storage node definitions for the same FOG installation (same IP) in the first place? Probably to use the newly added disk space on that second drive. While FOG should be able to handle it, I wouldn't advise it, as it adds complexity and room for errors. To understand the concept of storage nodes in more depth you might want to start reading here: https://wiki.fogproject.org/wiki/index.php/Managing_FOG#Storage_Management
Please run lsblk and df -h on your Frankfurt Node and post the output here.
-
@Sebastian-Roth said in How can I troubleshoot the Diskusage Problem on the Webinterface?:
@Gamienator Ahhhhh, I see. That makes more sense now. But I'm still wondering about some of the things you posted. So let me ask: are there any other FOG servers, or is it just that single "Frankfurt Node"? From my understanding there should be at least one more server for things to fully add up for me. But maybe it's just me not getting my head around it.
No problem. So we have two VMs (ESXi hypervisor) here in the network, KCNode1 (.103) and KCServer1 (.104). The FOG server installation is on the server, and (how surprising) the storage installation is on the node. Since that server has SSDs and an HDD, I set up the node as follows:
1 virtual hard disk with 50 GB on an SSD for the OS (sda)
1 virtual hard disk with 450 GB on the other SSD for the image space on "fast" storage (sdb)
1 virtual hard disk with 900 GB on the HDD as an archive (sdc)
My idea was: image everything onto the SSD (mounted on /image1) and replicate to sdc to have a backup. For the moment! It's all work in progress; a RAID card is on the way to have a RAID 1 with the SSDs and so on.
Since our company has more locations we want to link them and install a node at every location. That's the reason I installed the node here, to test everything.
Please run lsblk and df -h on your Frankfurt Node and post the output here.
No problem!
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0    50G  0 disk
├─sda1   8:1    0     1K  0 part
├─sda2   8:2    0  48,1G  0 part /
└─sda5   8:5    0   1,9G  0 part [SWAP]
sdb      8:16   0   460G  0 disk
└─sdb1   8:17   0   460G  0 part /image1
sdc      8:32   0   870G  0 disk
└─sdc1   8:33   0   870G  0 part /image2
sr0     11:0    1  1024M  0 rom
Filesystem      Size  Used Avail Use% Mounted on
udev            992M     0  992M   0% /dev
tmpfs           201M  8,2M  193M   5% /run
/dev/sda2        48G  3,1G   42G   7% /
tmpfs          1003M     0 1003M   0% /dev/shm
tmpfs           5,0M     0  5,0M   0% /run/lock
tmpfs          1003M     0 1003M   0% /sys/fs/cgroup
/dev/sdb1       452G   30G  400G   7% /image1
/dev/sdc1       856G   77M  812G   1% /image2
tmpfs           201M     0  201M   0% /run/user/0
Cheers,
Gamie
-
@Gamienator said in How can I troubleshoot the Diskusage Problem on the Webinterface?:
My idea was: image everything onto the SSD (mounted on /image1) and replicate to sdc to have a backup.
Well then that's a totally different story. Still, that wouldn't be the way I'd do it, but that's fine. I'd just do a simple rsync to another disk (preferably on a totally different host!) if it's just about backup.
So we are back to the initial question of why disk usage is not being shown properly in your case. Is it only the SSD part not showing? What happens if you select the HDD one? And what about KCServer1's disk usage?
-
@Sebastian-Roth Okay, then maybe our "idea" was wrong the whole time.
So like I said before, we have different locations. In our headquarters we wanted to create ONE golden image of Windows 10. After that we wanted to install a storage node at every location, group it correctly and let it replicate. Then we wanted to let everyone boot to our MAIN FOG server via VPN and have them image from their local node. That way we would have control of all locations AND an awesome inventory of all machines. My colleagues have been working on this for two months now, but they don't have the Linux knowledge, since they only use Windows. Now that I'm here, the project has made more progress. From what I understood, I thought enabling the location plugin with user control would achieve our goal. But it seems like I was wrong.
@Sebastian-Roth said in How can I troubleshoot the Diskusage Problem on the Webinterface?:
So we are back to the initial question of why disk usage is not being shown properly in your case. Is it only the SSD part not showing? What happens if you select the HDD one? And what about KCServer1's disk usage?
That's the interesting part: the HDD is also not working! After that I edited the node to show it at /images, but with the same result, no space shown. Or does it take some time until it gets refreshed? KCServer1's disk usage is shown perfectly after adding it again. Since I didn't want to use KCServer1's space (it only has 50 GB on /) I removed it from FOG as a storage node. Then I edited the node again and let root connect, with the same result. But after you told me about the rsync part, maybe I'll reinstall the node VM, mount the SSD on /images and install an rsync job that clones to /dev/sdc (of course correctly mounted).
-
@Gamienator said in How can I troubleshoot the Diskusage Problem on the Webinterface?:
So like I said before, we have different locations. In our headquarters we wanted to create ONE golden image of Windows 10. After that we wanted to install a storage node at every location, group it correctly and let it replicate. Then we wanted to let everyone boot to our MAIN FOG server via VPN and have them image from their local node. That way we would have control of all locations AND an awesome inventory of all machines. My colleagues have been working on this for two months now, but they don't have the Linux knowledge, since they only use Windows. Now that I'm here, the project has made more progress. From what I understood, I thought enabling the location plugin with user control would achieve our goal. But it seems like I was wrong.
This is not wrong. This is by design how FOG works.
I haven't read every detail, but I think where things went wrong in your design is on the storage nodes, where you had 2 disks. While I haven't looked at the code, I'm pretty sure the storage nodes do not support multiple disks by creating multiple storage node configurations on the master server. If you did that, each disk on the storage node would end up containing exactly the same images. For the storage nodes, if you want multiple disks then you need to create an LVM volume (kind of like a software spanned RAID) to which you can add multiple disks. I don't want to go down that rabbit hole if I misunderstood what you are doing.
But having a master FOG server at HQ and storage nodes at each location then using the location plugin to assign hosts and storage nodes to locations is how FOG is designed.
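To illustrate the LVM approach mentioned above, a rough sketch (assuming two empty disks /dev/sdb and /dev/sdc as in this thread; the volume group and logical volume names are made up, these commands destroy existing data and need root, so treat it as an outline, not a recipe):

```shell
# Turn both disks into LVM physical volumes (wipes them!)
pvcreate /dev/sdb /dev/sdc
# Pool them into one volume group
vgcreate vg_images /dev/sdb /dev/sdc
# One logical volume spanning all the free space of both disks
lvcreate -l 100%FREE -n lv_images vg_images
# Filesystem plus the mount point FOG expects
mkfs.ext4 /dev/vg_images/lv_images
mount /dev/vg_images/lv_images /images
```

Adding another disk later is then just pvcreate, vgextend, lvextend -l +100%FREE and resize2fs.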
-
@Gamienator said in How can I troubleshoot the Diskusage Problem on the Webinterface?:
1 virtual hard disk with 50 GB on an SSD for the OS (sda)
1 virtual hard disk with 450 GB on the other SSD for the image space on "fast" storage (sdb)
1 virtual hard disk with 900 GB on the HDD as an archive (sdc)
My idea was: image everything onto the SSD (mounted on /image1) and replicate to sdc to have a backup. For the moment! It's all work in progress; a RAID card is on the way to have a RAID 1 with the SSDs and so on.
For your sanity with this design I would mount the SSD on /images instead of /image1. As for FOG, it doesn't / shouldn't care about the HDD; that is not in scope for FOG. BUT you can implement this pretty easily with a cron job (think Windows scheduled task) that runs rsync to clone /images to /image2. That way you have your backup and FOG won't care.
-
@george1421 Thanks for the additional information! I'll reinstall the complete node VM and try that.
One more question for the node: Is there a reason why the node also starts a MariaDB Server?
-
@Gamienator said in How can I troubleshoot the Diskusage Problem on the Webinterface?:
Is there a reason why the node also starts a MariaDB Server
From a storage node's perspective, I can't think of a reason why the MariaDB server is installed and running. The storage nodes connect back to the master node to use that database remotely.
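For reference, stopping that service on the node would be along these lines (assuming systemd and the mariadb unit name, as on Debian Stretch; worth checking first that nothing else on the node uses the database):

```shell
# Stop MariaDB now and keep it from starting at boot (needs root)
systemctl stop mariadb
systemctl disable mariadb
# Confirm it is no longer running
systemctl status mariadb --no-pager
```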
-
@george1421 Okay, then after a reinstall and confirming everything works I'll stop the server and see what happens.
-
@george1421 Hi again,
sadly it still doesn't work. Still the same issue: can't retrieve server information. I'll now try a completely different PC. But is there a cache somewhere that needs to be cleared?
Guys, I may have found out why it wasn't working. I'm now confirming it with some changes. By tomorrow at the latest I'll write back what was wrong the whole time!
-
Okay everyone, I found the reason why it wasn't working. @george1421 and @Sebastian-Roth, it was NOT because I used /image1. It was a completely dumb reason!
I set up the server so that all HTTP traffic gets redirected to HTTPS. My dumb mistake: I forgot to set up SSL on the NODE! Because I used https://fog.xyz/fog, the master tries to reach the nodes via SSL as well. After setting up SSL on the node, boom:
So sorry for sending you down a completely wrong path.