Continuous FOG Storage Node Replication problem
Hi all! Let me first say how awesome Fog is and thank you for making it so amazing.
I am having an issue with replication between my primary FOG node (running CentOS 7) and a remote storage node running on a Synology system. I have followed the Synology storage node guides and have everything set up at the remote site. It all works perfectly, except that the main node continually re-replicates the image files to the secondary node even though they have already been copied. Not sure why. Any ideas?
so… as an update…
I stopped/disabled the FOG replication service and used the same command FOG uses to replicate the folder over from the FOG images store to the Synology images store.
After this, imaging from the storage node works as desired.
I’d really like to know how FOG winds up pulling the Synology home page during the file compare. Is it using HTTP? What path is it trying to hit? I can set up a packet trace and try to find out, but I’m hoping someone can just chime in with the info.
Wayne Workman last edited by Wayne Workman
> It appears that the FOG server replication connects to the storage node over FTP and pulls the target file size.
I don’t think that’s how it’s doing it. I think it’s making an HTTP call to a PHP script on the storage node to get the file size, and that’s why it’s outputting an entire web page instead of a size (the NAS is redirecting the call to its landing page). The FOG Image Replicator is supposed to fall back to FTP for the file size if the HTTP call doesn’t work. I believe it’s only checking the HTTP status code and not verifying that the output is a single number - so that’s why an entire web page is being pushed to the replication logs, and why it’s not falling back to FTP for the file size. @developers
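To make that theory concrete, here is a minimal sketch (not FOG’s actual code, which is PHP) of what a safer size check could look like: validate that the HTTP response body is a bare number, and signal an FTP fallback otherwise.

```python
def parse_size_response(body: str):
    """Return the reported file size if the body is a single number, else None.

    A None return tells the caller the HTTP check failed (e.g. the NAS
    answered with its HTML landing page) and it should fall back to FTP.
    """
    text = body.strip()
    if text.isdigit():
        return int(text)
    return None  # HTML, an error page, or an empty body -> not a size
```

With a check like this, a Synology redirect to its login page would trigger the FTP fallback instead of dumping a whole web page into the replication logs.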
@mpsadmin My intent was to set this back up in my lab, but day-to-day work has gotten in the way.
From looking at the code, it appears that the FOG server replication connects to the storage node over FTP and pulls the target file size. If the sizes are not the same, the replicator starts over.
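Under that reading, the decision might be sketched like this (hypothetical helper names; the real logic lives in FOG’s PHP replicator):

```python
from ftplib import FTP, error_perm

def remote_size(ftp, path: str):
    """Ask the storage node for a file's byte count via the FTP SIZE command."""
    try:
        return ftp.size(path)
    except error_perm:
        return None  # file missing on the node, or SIZE not supported

def needs_replication(local_size: int, node_size):
    """Replicate when the target is absent or the byte counts differ."""
    return node_size is None or node_size != local_size
```

If the node can’t report a size (or reports a different one) on every pass, a check like this would restart the transfer every cycle - which matches the behavior described above.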
What protocol does FOG use to verify its images? Somehow FOG is pulling HTML from the Synology… makes me wonder if it is looking for the /fog web path on the Synology?
(The Synology guide by George says this folder doesn’t matter because it’s not a real storage node.)
Wondering if I can ‘create’ a web server on the Synology with the path FOG is looking for? Just a little more patience, guys… I’d appreciate it.
Here are the logs from the replication, if you have time to look at them.
@wayne-workman Yes, I used FTP.
@wayne-workman It’s not a problem FOG itself is causing; it’s a side effect of how FOG goes about things. The request being made to the Synology isn’t reaching a FOG-based system, so the Synology responds with its own information.
This is just a guess on my part, but it’s unlikely to be anything we can fix.
The portions I deleted from the original fogreplicator.log are HTML pages from the Synology. Is there a reason that the FOG server is pulling HTML pages from the Synology?
@Developers I believe that quote is a symptom of the root problem. I am just trying to drill down to why.
@mpsadmin Did you use the FTP option when logging in via WinSCP? I ask because I want to be sure. Photo:
@mpsadmin I have my fog-pi server with me, which has FOG 1.4.4 on it. I’m looking at the code, and it appears to rely on the file byte count being the same to decide whether the file needs to be replicated. I’m not a programmer, so this object-oriented PHP is a bit Greek to me, but so far it looks like byte count is the trigger - specifically the byte count as reported by the FTP session. Also, the log writes to /opt/fog/log are fairly descriptive about what exactly the replicator is doing.
If I remember right, there are 2 or 3 replicator logs: one main replicator log, one for the storage node, and there may be a log file for each image being replicated. I don’t have access to my production FOG server at the moment to confirm, but I thought the logs were pretty descriptive about why it was replicating.
Also, I just checked the d1.mbr file… md5 checksums match.
040d5ad57af1942104ab788d3b12778e  synology web console
d1p1.img… checksums match here too:
79d8b1a77b8111418481c121c246501b  fog server
79d8b1a77b8111418481c121c246501b  synology web console
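For anyone who wants to repeat this comparison, a small script can hash the same image file on each end (the path below is just an example from this thread):

```python
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 of a (potentially large) image file in 1 MiB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Run on the FOG server and again on the Synology (e.g. via SSH), then
# compare the two hex strings:
#   md5_of("/images/G3-0118.1/d1p1.img")
```

Matching digests on both ends, as shown above, mean the transfer itself is not corrupting the files.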
@george1421 Yes, the files are making it to the NAS. FOG is replicating at full speed, up to the configured bandwidth limit. Once the files are replicated, the process begins anew.
@wayne-workman Apologies for not providing clearer info. The paths for the storage node are configured within FOG as follows:
The ftp paths are as follows:
The files being replicated are within the G3-0118.1 folder, as listed below:
The source folder on the FOG server is the /images folder, as shown below.
Additionally, I have verified that both the Synology and the FOG server are using the same NTP source as the rest of the devices on the network (a Windows domain controller). After changing the NTP settings, I stopped the FOG replication service, deleted the files from the Synology folder, and restarted the replication service.
Thanks. I’ll keep digging. Most likely it is something simple that I have done.
I logged in via FTP and was able to browse with no issues.
That’s not what I am referring to - I would like for you to FTP into the NAS and figure out where you land - i.e. the FTP Path directory.
- Is the directory where you land the FTP Path that is set for this node?
- What is your landing directory?
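One way to answer those two questions programmatically is a short ftplib check; the host, account, and expected path below are placeholders - substitute your storage node’s actual settings:

```python
from ftplib import FTP

def check_ftp_path(ftp, expected_path: str):
    """Compare the FTP landing directory with the FTP Path configured in FOG.

    Returns (match, landing_dir) so a mismatch can be reported precisely.
    """
    landing = ftp.pwd().rstrip("/") or "/"
    expected = expected_path.rstrip("/") or "/"
    return landing == expected, landing

# Example usage (placeholder host and credentials):
#   with FTP("synology.example.lan") as conn:
#       conn.login("fogproject", "secret")
#       match, landing = check_ftp_path(conn, "/images")
```

Synology FTP accounts often land in a share root such as /volume1/…, which would not match an FTP Path of /images configured in FOG - exactly the kind of mismatch being asked about here.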
@mpsadmin So are the files actually making it to the NAS? The only reason the FOG server would keep replicating is that the checksums don’t match, so it sends the file over again. Possibly a date/time mismatch might trigger the replication again, but I think they are focusing on the checksum as the key to replicate.
Yes, I did. I logged in via FTP and was able to browse with no issues.
Hi George. I had hoped that you would chime in as I followed your fantastic storage node document when I set this up.
I just noticed that the ‘images’ shared folder didn’t have rights for the fog user.
@mpsadmin But did you check the FTP root by logging in via FTP?