Replication Issue
-
@mronh Give me 20 minutes to get home and get some commands together to verify that you have the right code running…
-
We figured out that the storage node wasn’t properly updated somehow. Re-running the installer fixed this. Not sure what exactly went wrong, but the logs are looking way better now. We’ll see in the morning. @mronh Please let us know.
-
@JGallo said in Replication Issue:
Should I force the replication or let it run on its own? I’m curious whether it matters how the replication starts.
How do you mean force the replication?
-
@Sebastian-Roth From my point of view, he can do a “service FOGImageReplication restart” and it will force the replication to do the job; otherwise he will need to wait for the service’s scheduled interval.
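For reference, “forcing” replication just means restarting the replicator service so it begins a new pass immediately. A minimal sketch (the systemd unit name FOGImageReplicator and the log path are taken from later posts in this thread; the SysV form is shown as the older equivalent):

```shell
# Force an immediate replication pass by restarting the service.
# On systemd-based installs the unit is FOGImageReplicator:
systemctl restart FOGImageReplicator

# Older SysV-style equivalent:
# service FOGImageReplicator restart

# Then watch the replication log to confirm the pass kicked off
# (per-image transfer logs appear alongside it in /opt/fog/log/):
tail -f /opt/fog/log/fogreplicator.log
```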
-
@Sebastian-Roth Thanks in advance, pal! It will make the beginning of the year much easier for me. haha
-
@Sebastian-Roth What I meant to say is: after I upload an updated image to a master node with the changes in the replication branch, should I let the replication service run on its own, or should I force the replication by restarting the replication service? I figured that restarting the replication service would speed things up so I could check the logs right after I successfully upload the updated image.
-
@JGallo As you typically re-run the FOG installer, the restart of the service is already performed and therefore not necessary. I’d recommend letting it cycle once or twice on its own, then uploading the logs. This will let us know if it’s working as it should. By restarting the service to “speed” things along, we actually only see the “initial startup.” While, functionally, they’re the same thing, it’s good to know the full-time operation is working as expected as well.
-
@JGallo As mentioned earlier: Important notice: I had to change some of the hashing code too, and therefore nodes on different versions (1.5.4 or the working branch vs. the replication branch) will end up replicating images over and over again. So you need to have all nodes on the replication branch or set up a separate test environment!!
Please make sure you stop replication on the master first (systemctl stop FOGImageReplicator), then update the storage node, and after that update the master node. As Tom said, the installer will start the service back up for you at the end. -
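The update order above can be sketched as a shell sequence. This is a sketch under assumptions: the checkout location (/root/fogproject) is illustrative, and `bin/installfog.sh` is the standard FOG installer path; adjust both to your environment.

```shell
# 1. On the master node: stop image replication first, so no transfer
#    runs while the nodes are on mismatched code (the hashing changed).
systemctl stop FOGImageReplicator

# 2. On the storage node: switch to the replication branch and re-run
#    the installer (checkout path is an assumption -- adjust to yours).
cd /root/fogproject && git checkout replication && git pull
cd bin && ./installfog.sh -y

# 3. Back on the master node: same update. The installer starts
#    FOGImageReplicator again at the end, so no manual start is needed.
cd /root/fogproject && git checkout replication && git pull
cd bin && ./installfog.sh -y
```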
@Sebastian-Roth Yup. I read about that earlier. I followed your instructions, and all nodes and the FOG server are updated with the replication branch. I’m currently uploading an updated image over one that already exists. Waiting for it to finish uploading and tailing the replication log.
-
Looks like it works. Here are my logs. Once the upload completed, it took about 15 minutes for the replicator to begin. Once it pushed files to the slave, I did a FOGImageReplicator restart and it looks good.
[11-13-18 10:22:26 am] | Image Name: BCS-Velocity
[11-13-18 10:22:27 am] # BCS-Velocity: No need to sync d1.fixed_size_partitions (BCS-Slave)
[11-13-18 10:22:27 am] # BCS-Velocity: No need to sync d1.mbr (BCS-Slave)
[11-13-18 10:22:28 am] # BCS-Velocity: No need to sync d1.minimum.partitions (BCS-Slave)
[11-13-18 10:22:28 am] # BCS-Velocity: No need to sync d1.original.fstypes (BCS-Slave)
[11-13-18 10:22:28 am] # BCS-Velocity: No need to sync d1.original.swapuuids (BCS-Slave)
[11-13-18 10:22:28 am] # BCS-Velocity: No need to sync d1.partitions (BCS-Slave)
[11-13-18 10:22:29 am] # BCS-Velocity: No need to sync d1p1.img (BCS-Slave)
[11-13-18 10:22:30 am] # BCS-Velocity: No need to sync d1p2.img (BCS-Slave)
[11-13-18 10:22:30 am] * All files synced for this item.
[11-13-18 1:22:11 pm] * Starting Image Replication.
[11-13-18 1:22:11 pm] * We are group ID: 6. We are group name: BCS
[11-13-18 1:22:11 pm] * We are node ID: 9. We are node name: BCS-Master
[11-13-18 1:22:11 pm] * Attempting to perform Group -> Group image replication.
[11-13-18 1:22:11 pm] | Replicating postdownloadscripts
[11-13-18 1:22:12 pm] * Found Image to transfer to 1 node
[11-13-18 1:22:12 pm] | File Name: postdownloadscripts
[11-13-18 1:22:13 pm] # postdownloadscripts: No need to sync fog.postdownload (BCS-Slave)
[11-13-18 1:22:13 pm] * All files synced for this item.
[11-13-18 1:22:13 pm] | Replicating postinitscripts
[11-13-18 1:22:15 pm] * Found Image to transfer to 1 node
[11-13-18 1:22:15 pm] | File Name: dev/postinitscripts
[11-13-18 1:22:16 pm] # dev/postinitscripts: No need to sync fog.postinit (BCS-Slave)
[11-13-18 1:22:16 pm] * All files synced for this item.
[11-13-18 1:22:16 pm] | Not syncing Image: 32-Dell-790
[11-13-18 1:22:16 pm] | This is not the primary group.
[11-13-18 1:22:16 pm] | Not syncing Image: 64-Dell-790
[11-13-18 1:22:16 pm] | This is not the primary group.
[11-13-18 1:22:17 pm] * Not syncing Image between groups
[11-13-18 1:22:17 pm] | Image Name: BCS-Velocity
[11-13-18 1:22:17 pm] | There are no other members to sync to.
[11-13-18 1:22:17 pm] * Attempting to perform Group -> Nodes image replication.
[11-13-18 1:22:18 pm] * Found Image to transfer to 1 node
[11-13-18 1:22:18 pm] | Image Name: 32-Dell-790
[11-13-18 1:22:19 pm] # 32-Dell-790: No need to sync d1.fixed_size_partitions (BCS-Slave)
[11-13-18 1:22:20 pm] # 32-Dell-790: No need to sync d1.mbr (BCS-Slave)
[11-13-18 1:22:20 pm] # 32-Dell-790: No need to sync d1.minimum.partitions (BCS-Slave)
[11-13-18 1:22:20 pm] # 32-Dell-790: No need to sync d1.original.fstypes (BCS-Slave)
[11-13-18 1:22:20 pm] # 32-Dell-790: No need to sync d1.original.swapuuids (BCS-Slave)
[11-13-18 1:22:20 pm] # 32-Dell-790: No need to sync d1.partitions (BCS-Slave)
[11-13-18 1:22:21 pm] # 32-Dell-790: No need to sync d1p1.img (BCS-Slave)
[11-13-18 1:22:23 pm] # 32-Dell-790: No need to sync d1p2.img (BCS-Slave)
[11-13-18 1:22:23 pm] * All files synced for this item.
[11-13-18 1:22:24 pm] * Found Image to transfer to 1 node
[11-13-18 1:22:24 pm] | Image Name: 64-Dell-790
[11-13-18 1:22:25 pm] # 64-Dell-790: No need to sync d1.fixed_size_partitions (BCS-Slave)
[11-13-18 1:22:25 pm] # 64-Dell-790: No need to sync d1.mbr (BCS-Slave)
[11-13-18 1:22:26 pm] # 64-Dell-790: No need to sync d1.minimum.partitions (BCS-Slave)
[11-13-18 1:22:26 pm] # 64-Dell-790: No need to sync d1.original.fstypes (BCS-Slave)
[11-13-18 1:22:26 pm] # 64-Dell-790: No need to sync d1.original.swapuuids (BCS-Slave)
[11-13-18 1:22:26 pm] # 64-Dell-790: No need to sync d1.partitions (BCS-Slave)
[11-13-18 1:22:27 pm] # 64-Dell-790: No need to sync d1p1.img (BCS-Slave)
[11-13-18 1:22:28 pm] # 64-Dell-790: No need to sync d1p2.img (BCS-Slave)
[11-13-18 1:22:28 pm] * All files synced for this item.
[11-13-18 1:22:29 pm] * Found Image to transfer to 1 node
[11-13-18 1:22:29 pm] | Image Name: BCS-Velocity
[11-13-18 1:22:30 pm] # BCS-Velocity: No need to sync d1.fixed_size_partitions (BCS-Slave)
[11-13-18 1:22:30 pm] # BCS-Velocity: File hash mismatch - d1.mbr: 89b972e8f6585f2606a6658d58b9f66d57957ac7d57fc2f7fd7d8882a12d8722 != 341041528cb53b70422e1c39270490452de62ad764c72541e4f6eb1890f3365d
[11-13-18 1:22:30 pm] # BCS-Velocity: Deleting remote file d1.mbr
[11-13-18 1:22:30 pm] # BCS-Velocity: File hash mismatch - d1.minimum.partitions: 23b505385e9008070c65c42d950dff96d5cf39e99478b6b81c7a867e8bcadb02 != 899d69e652f3c9683d83deeec82f231bba2f4df0a01d706b5acbba9992a10861
[11-13-18 1:22:30 pm] # BCS-Velocity: Deleting remote file d1.minimum.partitions
[11-13-18 1:22:31 pm] # BCS-Velocity: No need to sync d1.original.fstypes (BCS-Slave)
[11-13-18 1:22:31 pm] # BCS-Velocity: No need to sync d1.original.swapuuids (BCS-Slave)
[11-13-18 1:22:31 pm] # BCS-Velocity: File hash mismatch - d1.partitions: ac70ba6fe1d57bf4a8ba01459f85f075f3df10bdcbd99a368ec1523078b8fde6 != ff0c6a27b7627ad4416fa46da7f57d2c4b0f4a621d2e7ca5414fa2faa5d43a96
[11-13-18 1:22:31 pm] # BCS-Velocity: Deleting remote file d1.partitions
[11-13-18 1:22:31 pm] # BCS-Velocity: File size mismatch - d1p1.img: 8699649 != 8696814
[11-13-18 1:22:31 pm] # BCS-Velocity: Deleting remote file d1p1.img
[11-13-18 1:22:31 pm] # BCS-Velocity: File size mismatch - d1p2.img: 36002135558 != 41888768241
[11-13-18 1:22:31 pm] # BCS-Velocity: Deleting remote file d1p2.img
[11-13-18 1:22:32 pm] | CMD: lftp -e 'set xfer:log 1; set xfer:log-file "/opt/fog/log/fogreplicator.BCS-Velocity.transfer.BCS-Slave.log";set ftp:list-options -a;set net:max-retries 10;set net:timeout 30; mirror -c --parallel=20 -R --ignore-time -vvv --exclude ".srvprivate" "/images/BCS-Velocity" "/images/BCS-Velocity"; exit' -u fog,[Protected] 10.210.100.62
[11-13-18 1:22:32 pm] | Started sync for Image BCS-Velocity - Resource id #20268
[11-13-18 1:29:35 pm] | Sync finished - Resource id #20268
Here is the log after the FOGImageReplicator restart occurred.
[11-13-18 1:31:02 pm] Interface Ready with IP Address: 10.210.100.61
[11-13-18 1:31:02 pm] Interface Ready with IP Address: 127.0.0.1
[11-13-18 1:31:02 pm] Interface Ready with IP Address: 127.0.1.1
[11-13-18 1:31:02 pm] * Starting ImageReplicator Service
[11-13-18 1:31:02 pm] * Checking for new items every 10800 seconds
[11-13-18 1:31:02 pm] * Starting service loop
[11-13-18 1:31:05 pm] * Starting Image Replication.
[11-13-18 1:31:05 pm] * We are group ID: 6. We are group name: BCS
[11-13-18 1:31:05 pm] * We are node ID: 9. We are node name: BCS-Master
[11-13-18 1:31:06 pm] * Attempting to perform Group -> Group image replication.
[11-13-18 1:31:06 pm] | Replicating postdownloadscripts
[11-13-18 1:31:08 pm] * Found Image to transfer to 1 node
[11-13-18 1:31:08 pm] | File Name: postdownloadscripts
[11-13-18 1:31:09 pm] # postdownloadscripts: No need to sync fog.postdownload (BCS-Slave)
[11-13-18 1:31:10 pm] * All files synced for this item.
[11-13-18 1:31:10 pm] | Replicating postinitscripts
[11-13-18 1:31:11 pm] * Found Image to transfer to 1 node
[11-13-18 1:31:11 pm] | File Name: dev/postinitscripts
[11-13-18 1:31:12 pm] # dev/postinitscripts: No need to sync fog.postinit (BCS-Slave)
[11-13-18 1:31:12 pm] * All files synced for this item.
[11-13-18 1:31:12 pm] | Not syncing Image: 32-Dell-790
[11-13-18 1:31:12 pm] | This is not the primary group.
[11-13-18 1:31:12 pm] | Not syncing Image: 64-Dell-790
[11-13-18 1:31:12 pm] | This is not the primary group.
[11-13-18 1:31:13 pm] * Not syncing Image between groups
[11-13-18 1:31:13 pm] | Image Name: BCS-Velocity
[11-13-18 1:31:13 pm] | There are no other members to sync to.
[11-13-18 1:31:13 pm] * Attempting to perform Group -> Nodes image replication.
[11-13-18 1:31:14 pm] * Found Image to transfer to 1 node
[11-13-18 1:31:14 pm] | Image Name: 32-Dell-790
[11-13-18 1:31:16 pm] # 32-Dell-790: No need to sync d1.fixed_size_partitions (BCS-Slave)
[11-13-18 1:31:16 pm] # 32-Dell-790: No need to sync d1.mbr (BCS-Slave)
[11-13-18 1:31:16 pm] # 32-Dell-790: No need to sync d1.minimum.partitions (BCS-Slave)
[11-13-18 1:31:16 pm] # 32-Dell-790: No need to sync d1.original.fstypes (BCS-Slave)
[11-13-18 1:31:17 pm] # 32-Dell-790: No need to sync d1.original.swapuuids (BCS-Slave)
[11-13-18 1:31:17 pm] # 32-Dell-790: No need to sync d1.partitions (BCS-Slave)
[11-13-18 1:31:18 pm] # 32-Dell-790: No need to sync d1p1.img (BCS-Slave)
[11-13-18 1:31:19 pm] # 32-Dell-790: No need to sync d1p2.img (BCS-Slave)
[11-13-18 1:31:19 pm] * All files synced for this item.
[11-13-18 1:31:20 pm] * Found Image to transfer to 1 node
[11-13-18 1:31:20 pm] | Image Name: 64-Dell-790
[11-13-18 1:31:21 pm] # 64-Dell-790: No need to sync d1.fixed_size_partitions (BCS-Slave)
[11-13-18 1:31:22 pm] # 64-Dell-790: No need to sync d1.mbr (BCS-Slave)
[11-13-18 1:31:22 pm] # 64-Dell-790: No need to sync d1.minimum.partitions (BCS-Slave)
[11-13-18 1:31:22 pm] # 64-Dell-790: No need to sync d1.original.fstypes (BCS-Slave)
[11-13-18 1:31:22 pm] # 64-Dell-790: No need to sync d1.original.swapuuids (BCS-Slave)
[11-13-18 1:31:23 pm] # 64-Dell-790: No need to sync d1.partitions (BCS-Slave)
[11-13-18 1:31:23 pm] # 64-Dell-790: No need to sync d1p1.img (BCS-Slave)
[11-13-18 1:31:24 pm] # 64-Dell-790: No need to sync d1p2.img (BCS-Slave)
[11-13-18 1:31:24 pm] * All files synced for this item.
[11-13-18 1:31:26 pm] * Found Image to transfer to 1 node
[11-13-18 1:31:26 pm] | Image Name: BCS-Velocity
[11-13-18 1:31:27 pm] # BCS-Velocity: No need to sync d1.fixed_size_partitions (BCS-Slave)
[11-13-18 1:31:27 pm] # BCS-Velocity: No need to sync d1.mbr (BCS-Slave)
[11-13-18 1:31:27 pm] # BCS-Velocity: No need to sync d1.minimum.partitions (BCS-Slave)
[11-13-18 1:31:28 pm] # BCS-Velocity: No need to sync d1.original.fstypes (BCS-Slave)
[11-13-18 1:31:28 pm] # BCS-Velocity: No need to sync d1.original.swapuuids (BCS-Slave)
[11-13-18 1:31:28 pm] # BCS-Velocity: No need to sync d1.partitions (BCS-Slave)
[11-13-18 1:31:29 pm] # BCS-Velocity: No need to sync d1p1.img (BCS-Slave)
[11-13-18 1:31:30 pm] # BCS-Velocity: No need to sync d1p2.img (BCS-Slave)
[11-13-18 1:31:30 pm] * All files synced for this item.
-
@JGallo Thanks, sounds great. So I think we are only left with what @mronh saw in the logs even after we fixed the storage node installation. Information from the chat session:
hey man… the max-retries happened again
mirror: d1p2.img.014: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img.019: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img.003: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p1.img: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img.011: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img.005: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img.001: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img.007: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img.006: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img.004: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img.002: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img.008: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img.009: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
[11-13-18 4:03:56 pm] | Sync finished - Resource id #4282
[11-13-18 4:00:05 pm] | Sync finished - Resource id #4847
[11-13-18 3:52:01 pm] | Sync finished - Resource id #3131
[11-13-18 3:51:37 pm] | Sync finished - Resource id #2339
[11-13-18 3:50:38 pm] | Sync finished - Resource id #2047
Trying to figure this out before merging all the code back into our official working branch.
-
@mronh said:
Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
I have tested a lot now and was only ever able to replicate the issue by allowing only very few simultaneous FTP connections in my test setup. Please edit your vsftpd.conf file and add the following line, then restart vsftpd (systemctl restart vsftpd):
max_per_ip=200
Seems like the default of 50 is not enough for the number of images you have. Although I am wondering about this. We don’t keep too many FTP connections open from what I see in my tests, but on the other hand we do use quite a few connections just for the file checks as well. So maybe in your environment we simply need to increase max_per_ip to make it happy. -
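The change above can be applied with a short shell sequence. A minimal sketch, assuming the config lives at /etc/vsftpd.conf (on RHEL/CentOS-style systems it is typically /etc/vsftpd/vsftpd.conf instead):

```shell
# Path to the vsftpd config -- adjust for your distro.
CONF=/etc/vsftpd.conf

# Append max_per_ip=200 only if the setting is not already present,
# so re-running this snippet does not duplicate the line.
grep -q '^max_per_ip=' "$CONF" || echo 'max_per_ip=200' >> "$CONF"

# Restart vsftpd so the new per-IP connection limit takes effect.
systemctl restart vsftpd
```

The guard around the append makes the snippet idempotent; if you already have a `max_per_ip` line, edit its value by hand instead.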
@Sebastian-Roth said in Replication Issue:
max_per_ip=200
Changes in the conf are done. I will let the replication job run 2 full rounds and edit this post with the results.
“Seems like the default of 50 is not enough for the number of images you have” — yeah, as I said in the chat, here we use FOG A LOT: about 7 different images in frequent use and another 4 used from time to time. F* good tool indeed. hahaha
Edit: Hell yeah! Running like a charm! 3 rounds of the replication job so far, no mismatch, no max connections reached… beautiful.
Thanks dude! Appreciate your support!