Replication Issue
-
@mronh Ok, nothing serious in the Apache and php-fpm logs. Revisiting the other logs you posted, I just noticed this:
2018-10-05 13:43:26 /images/Sala209RebootRX/d1p1.img -> ftp://fog@YYY.YY.210.208/%2Fimages/Sala209RebootRX/d1p1.img 0-8733059 1.77 MiB/s
2018-10-05 13:53:51 /images/Sala209RebootRX/d1p2.img.084 -> ftp://fog@YYY.YY.210.208/%2Fimages/Sala209RebootRX/d1p2.img.084 0-91065975 1.99 MiB/s
2018-10-05 17:39:43 /images/Sala209RebootRX/d1p2.img.084 -> ftp://fog@YYY.YY.210.208/%2Fimages/Sala209RebootRX/d1p2.img.084 0-64085945 101.21 MiB/s
2018-10-05 17:39:43 /images/Sala209RebootRX/d1p1.img -> ftp://fog@YYY.YY.210.208/%2Fimages/Sala209RebootRX/d1p1.img 0-8290491 82.63 MiB/s
To me it looks like the file sizes differ, and the transfer speeds are way different too. It doesn’t add up for me yet.
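If you want to double-check, comparing size and checksum of one of those files on both ends usually settles it. Just a sketch, with the paths taken from the log above:
# run the same two commands on the master and on the storage node (YYY.YY.210.208)
ls -l /images/Sala209RebootRX/d1p1.img
md5sum /images/Sala209RebootRX/d1p1.img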
-
@Sebastian-Roth Yeah, I see this too. I put my expected “top speed” of the LAN into the speed limit of the replication config (inside the FOG GUI), thinking it was an issue with lftp (with no value set, it takes a default value instead of being limitless).
But since nothing changed for the better (at least), I’ll check with the infrastructure guy here; maybe someone (with no knowledge) tried to “fix” some switch / BGP configs.
I’ll keep in touch, thanks!
-
@mronh You are on Ubuntu on your master node, right? We are tracking down a replication issue, but so far we have only seen it on CentOS and possibly RedHat.
Please run
ps ax | grep defunct
on your master node and let us know the result of it.
-
@Sebastian-Roth Yep, Debian on the server, Ubuntu on the storage node.
Result: 7174 pts/0 S+ 0:00 grep --color=auto defunct (edit: I’ll let the replication process make a full round and then post the result here again)
-
@Sebastian-Roth OK, there are some defunct processes on the FOG server after some replication (deleting parts and so on…):
17914 ? Z 0:00 [sh] <defunct>
20702 ? Z 0:00 [sh] <defunct>
21632 ? Z 0:00 [sh] <defunct>
24658 ? Z 0:00 [sh] <defunct>
27305 ? Z 0:00 [sh] <defunct>
28423 ? Z 0:00 [sh] <defunct>
31260 pts/0 S+ 0:00 grep defunct
-
@Sebastian-Roth Hi again, one “dumb” question: why was lftp chosen instead of rsync?
cheers
-
@mronh Well, that’s interesting. I have only seen this issue on CentOS so far, but it still needs more investigation as I have not found the root cause yet. So possibly this happens on Debian as well?! Also interesting that you have sh(ell) defunct processes instead of lftp ones. I hope to find more time in the next days to figure this issue out.
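In case it helps narrow things down: with zombies the interesting part is which parent left them behind, since the parent is what failed to reap them. Roughly like this (generic ps usage, nothing FOG-specific; <PPID> is a placeholder you fill in from the output):
# list zombies with their parent PID (PPID column)
ps -eo pid,ppid,stat,cmd | awk '$3 ~ /^Z/'
# then see what that parent actually is (replace <PPID> with the value from above)
ps -p <PPID> -o pid,cmd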
Hi again, one “dumb” question: why was lftp chosen instead of rsync?
I can’t say for sure, as this feature was added to FOG before I joined the project. But since FTP is already used/needed (e.g. for moving an uploaded image), I guess the team decided to use the same protocol for replication as well.
While rsync is definitely a great tool, it does need a server part just as FTP does. So we would have to have an rsync daemon (or SSH daemon for tunneled rsync) running to use it. Just another component.
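Just to illustrate the difference, here is roughly what each approach looks like on the command line. This is only a sketch with placeholder credentials and paths, not the exact command FOG runs:
# push an image directory to a node over FTP with lftp (reverse mirror)
lftp -u fog,PASSWORD -e "mirror -R --delete /images/Sala209RebootRX /images/Sala209RebootRX; quit" YYY.YY.210.208
# the rsync equivalent, which would need sshd (or an rsync daemon) running on the node
rsync -av --delete /images/Sala209RebootRX/ fog@YYY.YY.210.208:/images/Sala209RebootRX/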
-
@mronh I have looked into this again and want to ask you to check the Apache error and php-fpm logs of one or two of your storage nodes again. The Apache logs you posted especially don’t look like they came from a storage node!! I need the logs from Storage (YYY.YY.210.208).
We have fixed two issues in the replication code since the 1.5.4 release, so you might want to try using the latest working branch. The working branch also has optimized php-fpm settings, which might help on the storage node side as well.
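If you want to compare before and after on a node, the pool values that usually matter (pm, pm.max_children and friends) can be dumped like this. The path is only a guess and depends on the distro and PHP version installed on that node:
# Debian/Ubuntu style path; adjust the PHP version to what the node actually runs
grep -E '^pm' /etc/php/7.1/fpm/pool.d/www.conf
systemctl status php7.1-fpm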
-
@Sebastian-Roth Hello! Right, I’ll pull the changes and update both the server and the node, let the replication service make a full round, and post the logs.
cheers
-
@Sebastian-Roth Hi again, after a full round of the replication service, here it is:
I uploaded logs from both the server and the storage node:
1_1541078475992_STORAGE_php7.1-fpm.log
0_1541078475992_STORAGE_error.log
1_1541078503271_SERVER_php7.0-fpm.log
0_1541078503271_SERVER_error.log
0_1541078552345_SERVER_fogreplicator.log
0_1541078653124_SERVER_fogreplicator.Sala209RebootRX.transfer.2 - Storage (YYY.YY.210.208).log
-
@mronh Ok, I have dug through a lot of code in the last two days and found and fixed a couple of issues with replication. All of that will be in the next release, hopefully coming soon. Let me know if you are keen to test those changes beforehand.
-
@Sebastian-Roth Sure, right now I’m using only the server because of this issue.
If it all gets fixed, my summer here in the next few months will be sooooo much easier… hahaha
What do I have to do?
-
@mronh The current changes are on a new branch replication (link), which I will merge into working after a first round of feedback.
Not sure if you have ever installed FOG unstable/testing. This is done by using git to check out the current code and installing from that.
git clone https://github.com/FOGProject/fogproject/
cd fogproject
git checkout replication
cd bin
./installfog.sh
Important notice: I had to change some of the hashing code too, and therefore nodes on different versions (1.5.4 or working vs. the replication branch) will end up replicating images over and over again. So you need to have all nodes on the replication branch or set up a separate test environment!!
Please make sure you stop replication first (systemctl stop FOGImageReplicator), then update the storage node, and after that update the master node.
-
@Sebastian-Roth Will those hashing code changes you made help with Ubuntu servers, specifically 16.04? I remember earlier this summer there were replication issues looping due to a hash file not matching, which was resolved to an extent in the working branch. I’m curious because I have many storage nodes and I can switch over from the working branch to replication if your changes help.
-
@JGallo I have tested a fair bit and fixed a couple of issues that were still in the working branch. Also, the replication branch is based on working, so it has even more replication issues fixed compared to 1.5.4!
I can’t promise this is issue-free yet, as I don’t have a test setup with many nodes. But I am sure it’s better than 1.5.4 and the working branch were. So I would be very happy if you’d give it a try and post feedback, and maybe logs if you still see issues.
-
@Sebastian-Roth Of course!! I will update the server and nodes to the replication branch today and get it ready for an image upload. I think the issue was with images being updated and then uploaded over existing images on the FOG server; replication of a new image definition was fine, even to the storage nodes. It will probably be a while before I have concrete information, since I don’t have many images that replicate across all nodes, as I have storage groups defined per image.
-
@Sebastian-Roth Right, I’ll do it next week then.
This week I’m on the leash again… oh boy, how I hate the end of the year… =/ hahahaha
-
@Sebastian-Roth I switched over to the replication branch and updated all storage nodes along with my FOG server. I uploaded an image and it seems to be working fine for the original image. I haven’t updated the image since it’s very new, but when I have a chance (which will be very soon, since a project I’m currently working on will have me update an existing image), I will go ahead and update it and tail the replication log.
I also noticed in a different post that another user did the same thing and tested replication. It looks like the changes in the replication branch have worked. I will post an update as well once I upload an image over an existing one, to see whether the updated image replicates properly to the storage nodes.
-
@JGallo The more feedback I get on this, the better. Looking forward to hearing from you.
-
@Sebastian-Roth Hey man, I made the upgrade on both the server and the storage node (following your previous instructions) and now we have some data to think about. Just to make it clear: I deleted all images on the storage node to force a full replication from the beginning.
I’ll upload the logs from the server and storage again, but the short story is: I see some “Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)” (fatal errors) and “File size mismatch” messages.
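A side note on the 421 error: that message usually means the FTP server is refusing additional connections from the same IP. Assuming the storage node runs vsftpd (the FTP server FOG normally sets up), the per-IP limit could be checked and, if needed, raised roughly like this. Just a sketch, not a confirmed fix:
# on the storage node: see the current limits (if absent, vsftpd's defaults apply)
grep -E '^(max_per_ip|max_clients)' /etc/vsftpd.conf
# if too low, e.g. set max_per_ip=0 (unlimited) in /etc/vsftpd.conf, then:
systemctl restart vsftpd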
Server Side:
2_1542111667714_SERVER_php7.0-fpm.log
1_1542111667714_SERVER_fogreplicator.log
0_1542111667714_SERVER_error.log
Storage Side:
1_1542111678025_STORAGE_php7.1-fpm.log
0_1542111678024_STORAGE_error.log
Besides that, the improvements are really good: the logs are more accurate and the steps of the algorithm are way more “solidified”. Way to go, man!