Replication Issue
-
@Sebastian-Roth I switched over to the replication branch and updated all storage nodes along with my FOG server. I uploaded an image and it seems to be working fine for the original image. I haven’t updated the image yet since it’s very new, but a project I’m currently working on will very shortly have me updating an existing image, so when that happens I will go ahead and update it and tail the replication log.
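(For anyone following along, the replicator activity can be tailed live with something like the following, assuming the default log location on the master node:)
tail -f /opt/fog/log/fogreplicator.log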
I also noticed in a different post that another user did the same thing and tested replication. It looks like the changes in the replication branch have worked. I will update here as well once I upload an image over an existing one, to see if the updated image replicates properly to the storage node.
-
@JGallo The more feedback I get on this the better. Looking forward to hearing from you.
-
@Sebastian-Roth Hey man, I made the upgrade on both the server and the storage node (following your previous instructions) and now we have some data to think about. Just to make it clear: I deleted all images on the storage node to force a full replication from the beginning.
I’ll upload the logs from server and storage again, but long story short: I see some “Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)” (“Erro fatal” is the Portuguese locale’s “Fatal error”) and “File size mismatch” entries.
Server Side:
2_1542111667714_SERVER_php7.0-fpm.log
1_1542111667714_SERVER_fogreplicator.log
0_1542111667714_SERVER_error.log
Storage Side:
1_1542111678025_STORAGE_php7.1-fpm.log
0_1542111678024_STORAGE_error.log
Besides that, the improvements are really good: the logs are more accurate and the steps of the algorithm are way more “solidified”. Way to go, man!
-
@mronh Thanks for testing and reporting back. The first thing that jumps out at me in the logs is the many hash mismatch lines like this:
File hash mismatch - d1p2.img.002: c8a2b5f37de6e0c7a5eeb0843b9164bac05cc984cada2cfb8da6132ba938bc2a != 7e56e1209070f2b8494e3d60cb6a27c103925bb442056ba43438c456126f027849baf5547ca1e0fec8accc309aae64ba1ae569e8698fe5e8041052cb627ed6b1
See the different lengths of the hash sums (64 vs. 128 hex characters, i.e. two different hash algorithms). I am fairly sure the storage node is not updated to the latest replication commit!!
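(To illustrate the length difference, a hypothetical check on one of the image files; note these sums only demonstrate the two digest lengths and are not necessarily computed over the same bytes FOG hashes:)
sha256sum /images/SomeImage/d1p2.img.002   # prints 64 hex characters (256-bit digest)
sha512sum /images/SomeImage/d1p2.img.002   # prints 128 hex characters (512-bit digest)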
Please check your web directory, maybe there is some link issue and you have two different versions mixed up. Run
ls -al /var/www /var/www/html /var/www/fog
and post the results here. Besides that, I’d stop replication on your master node for now and maybe try upgrading the storage node to the replication branch again!
-
@Sebastian-Roth on the server side
ls -al /var/www /var/www/html /var/www/fog
/var/www:
total 20
drwxr-xr-x 4 root root 4096 nov 12 16:06 .
drwxr-xr-x 12 root root 4096 ago 28 13:16 ..
drwxr-xr-x 10 www-data www-data 4096 nov 12 16:14 fog
drwxr-xr-x 2 root root 4096 ago 28 13:22 html
-rw-r--r-- 1 root root 41 out 10 11:10 index.php

/var/www/fog:
total 408
drwxr-xr-x 10 www-data www-data 4096 nov 12 16:14 .
drwxr-xr-x 4 root root 4096 nov 12 16:06 ..
drwxr-xr-x 2 www-data www-data 4096 nov 12 16:06 api
drwxr-xr-x 2 www-data www-data 4096 nov 12 16:06 client
drwxr-xr-x 2 www-data www-data 4096 nov 12 16:06 commons
-rw-r--r-- 1 www-data www-data 370070 nov 12 16:06 favicon.ico
lrwxrwxrwx 1 www-data www-data 13 nov 12 16:06 fog -> /var/www/fog/
drwxr-xr-x 2 www-data www-data 4096 nov 12 16:06 fogdoc
-rw-r--r-- 1 www-data www-data 572 nov 12 16:06 index.php
drwxr-xr-x 13 www-data www-data 4096 nov 12 16:06 lib
drwxr-xr-x 10 www-data www-data 4096 nov 12 16:06 management
drwxr-xr-x 3 www-data www-data 4096 nov 12 16:06 service
drwxr-xr-x 2 www-data www-data 4096 nov 12 16:06 status

/var/www/html:
total 20
drwxr-xr-x 2 root root 4096 ago 28 13:22 .
drwxr-xr-x 4 root root 4096 nov 12 16:06 ..
lrwxrwxrwx 1 root root 13 ago 28 13:22 fog -> /var/www/fog/
-rw-r--r-- 1 root root 10701 ago 28 13:17 index.html
on the storage side
ls -al /var/www /var/www/html /var/www/fog
/var/www:
total 16
drwxr-xr-x 4 root root 4096 nov 12 15:59 .
drwxr-xr-x 13 root root 4096 jul 18 11:56 ..
drwxr-xr-x 10 www-data www-data 4096 nov 12 16:00 fog
drwxr-xr-x 2 root root 4096 jul 18 12:03 html

/var/www/fog:
total 408
drwxr-xr-x 10 www-data www-data 4096 nov 12 16:00 .
drwxr-xr-x 4 root root 4096 nov 12 15:59 ..
drwxr-xr-x 2 www-data www-data 4096 nov 12 15:59 api
drwxr-xr-x 2 www-data www-data 4096 nov 12 15:59 client
drwxr-xr-x 2 www-data www-data 4096 nov 12 15:59 commons
-rw-r--r-- 1 www-data www-data 370070 nov 12 15:59 favicon.ico
lrwxrwxrwx 1 www-data www-data 13 nov 12 15:59 fog -> /var/www/fog/
drwxr-xr-x 2 www-data www-data 4096 nov 12 15:59 fogdoc
-rw-r--r-- 1 www-data www-data 572 nov 12 15:59 index.php
drwxr-xr-x 13 www-data www-data 4096 nov 12 15:59 lib
drwxr-xr-x 10 www-data www-data 4096 nov 12 15:59 management
drwxr-xr-x 3 www-data www-data 4096 nov 12 15:59 service
drwxr-xr-x 2 www-data www-data 4096 nov 12 15:59 status

/var/www/html:
total 20
drwxr-xr-x 2 root root 4096 jul 18 12:03 .
drwxr-xr-x 4 root root 4096 nov 12 15:59 ..
lrwxrwxrwx 1 root root 13 jul 18 12:03 fog -> /var/www/fog/
-rw-r--r-- 1 root root 10701 jul 18 11:56 index.html
I’ll do a git pull from the replication repo and install again on the storage node, then report back here.
-
@Sebastian-Roth right… look at this
Server side:
“git checkout replication
Already on ‘replication’
Your branch is up-to-date with ‘origin/replication’.”
Storage side:
“git checkout replication
Already on ‘replication’
Your branch is up-to-date with ‘origin/replication’.”
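(Side note: “Already on ‘replication’” only confirms which branch is checked out; it doesn’t prove both machines are on the same, latest commit. A quick check, run inside the fogproject clone on each box; a generic git sketch, nothing FOG-specific:)
git fetch origin
git log -1 --oneline   # the commit hash should match on master and storage node
git status             # should report: nothing to commit, working tree clean
-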
@Sebastian-Roth I will be updating an image definition this week. I ran into an issue with imaging a lab that has storage nodes. I’m testing the solution out today, and then I will be uploading an updated image to a storage group that has storage nodes. Should I force the replication or let it run on its own? I’m curious whether it matters how the replication starts.
-
@mronh Can’t see an issue in the output you posted. Can we do a TeamViewer session today? I will be available for the next few hours.
-
@Sebastian-Roth unfortunately remote sessions are not an option here; outside traffic is controlled/blocked, beyond my jurisdiction =/
-
@mronh Give me 20 minutes to get home and get some commands together to verify that you have the right code running…
-
We figured out that the storage node somehow wasn’t properly updated. Re-running the installer fixed this. Not sure what exactly went wrong, but the logs are looking way better now. We’ll see in the morning. @mronh Please let us know.
-
@JGallo said in Replication Issue:
Should I force the replication or let it run on its own? I’m curious whether it matters how the replication starts.
What do you mean by forcing the replication?
-
@Sebastian-Roth From my point of view, he can do a “service FOGImageReplicator restart” and it will force the replication to do the job; otherwise he will need to wait out the service’s sleep interval.
-
@Sebastian-Roth thanks in advance, pal! It will make the beginning of the year much easier for me. haha
-
@Sebastian-Roth What I meant to say is: after I upload an updated image to a master node, with the changes in the replication branch, should I let the replication service run on its own, or should I force the replication by restarting the replication service? I figured that restarting the replication service would speed things up so I can check the logs right after I successfully upload the updated image.
-
@JGallo As you typically re-run the FOG installer, the restart of the service is already performed and is therefore not necessary. I’d recommend letting it cycle once or twice on its own, then uploading the logs. This will let us know if it’s working as it should. By restarting the service to “speed” things along, we actually only see the “initial startup.” While, functionally, they’re the same thing, it’s just good to know that the normal, timed operation works as expected as well.
-
@JGallo As mentioned earlier: Important notice: I had to change some of the hashing code too, and therefore nodes being on different versions (1.5.4 or working vs. the replication branch) will end up replicating images over and over again. So you need to have all nodes on the replication branch, or set up a separate test environment!!
Please make sure you stop replication first on the master (
systemctl stop FOGImageReplicator
), then update the storage node, and after that update the master node. As Tom said, the installer will start the service up for you at the end.
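(For completeness, a sketch of the whole sequence, assuming a git-based install; the clone path below is a placeholder:)
# 1. On the master node: stop replication first
systemctl stop FOGImageReplicator
# 2. On the storage node: pull the replication branch and re-run the installer
cd /path/to/fogproject && git checkout replication && git pull
cd bin && ./installfog.sh
# 3. Repeat step 2 on the master node; the installer starts FOGImageReplicator again at the end
-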
@Sebastian-Roth Yup, I read about that earlier. I followed your instructions, and all nodes and the FOG server are updated to the replication branch. I’m currently uploading an updated image over an image that already exists. Now waiting for it to finish uploading while tailing the replication log.
-
Looks like it works. Here are my logs. Once the upload completed, it took about 15 minutes for the replicator to begin. Once it had pushed the files to the slave, I did a FOGImageReplicator restart and it looks good.
[11-13-18 10:22:26 am] | Image Name: BCS-Velocity
[11-13-18 10:22:27 am] # BCS-Velocity: No need to sync d1.fixed_size_partitions (BCS-Slave)
[11-13-18 10:22:27 am] # BCS-Velocity: No need to sync d1.mbr (BCS-Slave)
[11-13-18 10:22:28 am] # BCS-Velocity: No need to sync d1.minimum.partitions (BCS-Slave)
[11-13-18 10:22:28 am] # BCS-Velocity: No need to sync d1.original.fstypes (BCS-Slave)
[11-13-18 10:22:28 am] # BCS-Velocity: No need to sync d1.original.swapuuids (BCS-Slave)
[11-13-18 10:22:28 am] # BCS-Velocity: No need to sync d1.partitions (BCS-Slave)
[11-13-18 10:22:29 am] # BCS-Velocity: No need to sync d1p1.img (BCS-Slave)
[11-13-18 10:22:30 am] # BCS-Velocity: No need to sync d1p2.img (BCS-Slave)
[11-13-18 10:22:30 am] * All files synced for this item.
[11-13-18 1:22:11 pm] * Starting Image Replication.
[11-13-18 1:22:11 pm] * We are group ID: 6. We are group name: BCS
[11-13-18 1:22:11 pm] * We are node ID: 9. We are node name: BCS-Master
[11-13-18 1:22:11 pm] * Attempting to perform Group -> Group image replication.
[11-13-18 1:22:11 pm] | Replicating postdownloadscripts
[11-13-18 1:22:12 pm] * Found Image to transfer to 1 node
[11-13-18 1:22:12 pm] | File Name: postdownloadscripts
[11-13-18 1:22:13 pm] # postdownloadscripts: No need to sync fog.postdownload (BCS-Slave)
[11-13-18 1:22:13 pm] * All files synced for this item.
[11-13-18 1:22:13 pm] | Replicating postinitscripts
[11-13-18 1:22:15 pm] * Found Image to transfer to 1 node
[11-13-18 1:22:15 pm] | File Name: dev/postinitscripts
[11-13-18 1:22:16 pm] # dev/postinitscripts: No need to sync fog.postinit (BCS-Slave)
[11-13-18 1:22:16 pm] * All files synced for this item.
[11-13-18 1:22:16 pm] | Not syncing Image: 32-Dell-790
[11-13-18 1:22:16 pm] | This is not the primary group.
[11-13-18 1:22:16 pm] | Not syncing Image: 64-Dell-790
[11-13-18 1:22:16 pm] | This is not the primary group.
[11-13-18 1:22:17 pm] * Not syncing Image between groups
[11-13-18 1:22:17 pm] | Image Name: BCS-Velocity
[11-13-18 1:22:17 pm] | There are no other members to sync to.
[11-13-18 1:22:17 pm] * Attempting to perform Group -> Nodes image replication.
[11-13-18 1:22:18 pm] * Found Image to transfer to 1 node
[11-13-18 1:22:18 pm] | Image Name: 32-Dell-790
[11-13-18 1:22:19 pm] # 32-Dell-790: No need to sync d1.fixed_size_partitions (BCS-Slave)
[11-13-18 1:22:20 pm] # 32-Dell-790: No need to sync d1.mbr (BCS-Slave)
[11-13-18 1:22:20 pm] # 32-Dell-790: No need to sync d1.minimum.partitions (BCS-Slave)
[11-13-18 1:22:20 pm] # 32-Dell-790: No need to sync d1.original.fstypes (BCS-Slave)
[11-13-18 1:22:20 pm] # 32-Dell-790: No need to sync d1.original.swapuuids (BCS-Slave)
[11-13-18 1:22:20 pm] # 32-Dell-790: No need to sync d1.partitions (BCS-Slave)
[11-13-18 1:22:21 pm] # 32-Dell-790: No need to sync d1p1.img (BCS-Slave)
[11-13-18 1:22:23 pm] # 32-Dell-790: No need to sync d1p2.img (BCS-Slave)
[11-13-18 1:22:23 pm] * All files synced for this item.
[11-13-18 1:22:24 pm] * Found Image to transfer to 1 node
[11-13-18 1:22:24 pm] | Image Name: 64-Dell-790
[11-13-18 1:22:25 pm] # 64-Dell-790: No need to sync d1.fixed_size_partitions (BCS-Slave)
[11-13-18 1:22:25 pm] # 64-Dell-790: No need to sync d1.mbr (BCS-Slave)
[11-13-18 1:22:26 pm] # 64-Dell-790: No need to sync d1.minimum.partitions (BCS-Slave)
[11-13-18 1:22:26 pm] # 64-Dell-790: No need to sync d1.original.fstypes (BCS-Slave)
[11-13-18 1:22:26 pm] # 64-Dell-790: No need to sync d1.original.swapuuids (BCS-Slave)
[11-13-18 1:22:26 pm] # 64-Dell-790: No need to sync d1.partitions (BCS-Slave)
[11-13-18 1:22:27 pm] # 64-Dell-790: No need to sync d1p1.img (BCS-Slave)
[11-13-18 1:22:28 pm] # 64-Dell-790: No need to sync d1p2.img (BCS-Slave)
[11-13-18 1:22:28 pm] * All files synced for this item.
[11-13-18 1:22:29 pm] * Found Image to transfer to 1 node
[11-13-18 1:22:29 pm] | Image Name: BCS-Velocity
[11-13-18 1:22:30 pm] # BCS-Velocity: No need to sync d1.fixed_size_partitions (BCS-Slave)
[11-13-18 1:22:30 pm] # BCS-Velocity: File hash mismatch - d1.mbr: 89b972e8f6585f2606a6658d58b9f66d57957ac7d57fc2f7fd7d8882a12d8722 != 341041528cb53b70422e1c39270490452de62ad764c72541e4f6eb1890f3365d
[11-13-18 1:22:30 pm] # BCS-Velocity: Deleting remote file d1.mbr
[11-13-18 1:22:30 pm] # BCS-Velocity: File hash mismatch - d1.minimum.partitions: 23b505385e9008070c65c42d950dff96d5cf39e99478b6b81c7a867e8bcadb02 != 899d69e652f3c9683d83deeec82f231bba2f4df0a01d706b5acbba9992a10861
[11-13-18 1:22:30 pm] # BCS-Velocity: Deleting remote file d1.minimum.partitions
[11-13-18 1:22:31 pm] # BCS-Velocity: No need to sync d1.original.fstypes (BCS-Slave)
[11-13-18 1:22:31 pm] # BCS-Velocity: No need to sync d1.original.swapuuids (BCS-Slave)
[11-13-18 1:22:31 pm] # BCS-Velocity: File hash mismatch - d1.partitions: ac70ba6fe1d57bf4a8ba01459f85f075f3df10bdcbd99a368ec1523078b8fde6 != ff0c6a27b7627ad4416fa46da7f57d2c4b0f4a621d2e7ca5414fa2faa5d43a96
[11-13-18 1:22:31 pm] # BCS-Velocity: Deleting remote file d1.partitions
[11-13-18 1:22:31 pm] # BCS-Velocity: File size mismatch - d1p1.img: 8699649 != 8696814
[11-13-18 1:22:31 pm] # BCS-Velocity: Deleting remote file d1p1.img
[11-13-18 1:22:31 pm] # BCS-Velocity: File size mismatch - d1p2.img: 36002135558 != 41888768241
[11-13-18 1:22:31 pm] # BCS-Velocity: Deleting remote file d1p2.img
[11-13-18 1:22:32 pm] | CMD: lftp -e 'set xfer:log 1; set xfer:log-file "/opt/fog/log/fogreplicator.BCS-Velocity.transfer.BCS-Slave.log";set ftp:list-options -a;set net:max-retries 10;set net:timeout 30; mirror -c --parallel=20 -R --ignore-time -vvv --exclude ".srvprivate" "/images/BCS-Velocity" "/images/BCS-Velocity"; exit' -u fog,[Protected] 10.210.100.62
[11-13-18 1:22:32 pm] | Started sync for Image BCS-Velocity - Resource id #20268
[11-13-18 1:29:35 pm] | Sync finished - Resource id #20268
Here is the log after the ImageReplicator restart occurred.
[11-13-18 1:31:02 pm] Interface Ready with IP Address: 10.210.100.61
[11-13-18 1:31:02 pm] Interface Ready with IP Address: 127.0.0.1
[11-13-18 1:31:02 pm] Interface Ready with IP Address: 127.0.1.1
[11-13-18 1:31:02 pm] * Starting ImageReplicator Service
[11-13-18 1:31:02 pm] * Checking for new items every 10800 seconds
[11-13-18 1:31:02 pm] * Starting service loop
[11-13-18 1:31:05 pm] * Starting Image Replication.
[11-13-18 1:31:05 pm] * We are group ID: 6. We are group name: BCS
[11-13-18 1:31:05 pm] * We are node ID: 9. We are node name: BCS-Master
[11-13-18 1:31:06 pm] * Attempting to perform Group -> Group image replication.
[11-13-18 1:31:06 pm] | Replicating postdownloadscripts
[11-13-18 1:31:08 pm] * Found Image to transfer to 1 node
[11-13-18 1:31:08 pm] | File Name: postdownloadscripts
[11-13-18 1:31:09 pm] # postdownloadscripts: No need to sync fog.postdownload (BCS-Slave)
[11-13-18 1:31:10 pm] * All files synced for this item.
[11-13-18 1:31:10 pm] | Replicating postinitscripts
[11-13-18 1:31:11 pm] * Found Image to transfer to 1 node
[11-13-18 1:31:11 pm] | File Name: dev/postinitscripts
[11-13-18 1:31:12 pm] # dev/postinitscripts: No need to sync fog.postinit (BCS-Slave)
[11-13-18 1:31:12 pm] * All files synced for this item.
[11-13-18 1:31:12 pm] | Not syncing Image: 32-Dell-790
[11-13-18 1:31:12 pm] | This is not the primary group.
[11-13-18 1:31:12 pm] | Not syncing Image: 64-Dell-790
[11-13-18 1:31:12 pm] | This is not the primary group.
[11-13-18 1:31:13 pm] * Not syncing Image between groups
[11-13-18 1:31:13 pm] | Image Name: BCS-Velocity
[11-13-18 1:31:13 pm] | There are no other members to sync to.
[11-13-18 1:31:13 pm] * Attempting to perform Group -> Nodes image replication.
[11-13-18 1:31:14 pm] * Found Image to transfer to 1 node
[11-13-18 1:31:14 pm] | Image Name: 32-Dell-790
[11-13-18 1:31:16 pm] # 32-Dell-790: No need to sync d1.fixed_size_partitions (BCS-Slave)
[11-13-18 1:31:16 pm] # 32-Dell-790: No need to sync d1.mbr (BCS-Slave)
[11-13-18 1:31:16 pm] # 32-Dell-790: No need to sync d1.minimum.partitions (BCS-Slave)
[11-13-18 1:31:16 pm] # 32-Dell-790: No need to sync d1.original.fstypes (BCS-Slave)
[11-13-18 1:31:17 pm] # 32-Dell-790: No need to sync d1.original.swapuuids (BCS-Slave)
[11-13-18 1:31:17 pm] # 32-Dell-790: No need to sync d1.partitions (BCS-Slave)
[11-13-18 1:31:18 pm] # 32-Dell-790: No need to sync d1p1.img (BCS-Slave)
[11-13-18 1:31:19 pm] # 32-Dell-790: No need to sync d1p2.img (BCS-Slave)
[11-13-18 1:31:19 pm] * All files synced for this item.
[11-13-18 1:31:20 pm] * Found Image to transfer to 1 node
[11-13-18 1:31:20 pm] | Image Name: 64-Dell-790
[11-13-18 1:31:21 pm] # 64-Dell-790: No need to sync d1.fixed_size_partitions (BCS-Slave)
[11-13-18 1:31:22 pm] # 64-Dell-790: No need to sync d1.mbr (BCS-Slave)
[11-13-18 1:31:22 pm] # 64-Dell-790: No need to sync d1.minimum.partitions (BCS-Slave)
[11-13-18 1:31:22 pm] # 64-Dell-790: No need to sync d1.original.fstypes (BCS-Slave)
[11-13-18 1:31:22 pm] # 64-Dell-790: No need to sync d1.original.swapuuids (BCS-Slave)
[11-13-18 1:31:23 pm] # 64-Dell-790: No need to sync d1.partitions (BCS-Slave)
[11-13-18 1:31:23 pm] # 64-Dell-790: No need to sync d1p1.img (BCS-Slave)
[11-13-18 1:31:24 pm] # 64-Dell-790: No need to sync d1p2.img (BCS-Slave)
[11-13-18 1:31:24 pm] * All files synced for this item.
[11-13-18 1:31:26 pm] * Found Image to transfer to 1 node
[11-13-18 1:31:26 pm] | Image Name: BCS-Velocity
[11-13-18 1:31:27 pm] # BCS-Velocity: No need to sync d1.fixed_size_partitions (BCS-Slave)
[11-13-18 1:31:27 pm] # BCS-Velocity: No need to sync d1.mbr (BCS-Slave)
[11-13-18 1:31:27 pm] # BCS-Velocity: No need to sync d1.minimum.partitions (BCS-Slave)
[11-13-18 1:31:28 pm] # BCS-Velocity: No need to sync d1.original.fstypes (BCS-Slave)
[11-13-18 1:31:28 pm] # BCS-Velocity: No need to sync d1.original.swapuuids (BCS-Slave)
[11-13-18 1:31:28 pm] # BCS-Velocity: No need to sync d1.partitions (BCS-Slave)
[11-13-18 1:31:29 pm] # BCS-Velocity: No need to sync d1p1.img (BCS-Slave)
[11-13-18 1:31:30 pm] # BCS-Velocity: No need to sync d1p2.img (BCS-Slave)
[11-13-18 1:31:30 pm] * All files synced for this item.
-
@JGallo Thanks, sounds great. So I think we are only left with what @mronh saw in the logs, even after we fixed the storage node installation. Information from the chat session:
hey man… the max-retries happened again
mirror: d1p2.img.014: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img.019: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img.003: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p1.img: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img.011: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img.005: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img.001: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img.007: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img.006: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img.004: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img.002: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img.008: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
mirror: d1p2.img.009: Erro fatal: max-retries exceeded (421 There are too many connections from your internet address.)
[11-13-18 4:03:56 pm] | Sync finished - Resource id #4282
[11-13-18 4:00:05 pm] | Sync finished - Resource id #4847
[11-13-18 3:52:01 pm] | Sync finished - Resource id #3131
[11-13-18 3:51:37 pm] | Sync finished - Resource id #2339
[11-13-18 3:50:38 pm] | Sync finished - Resource id #2047
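One hypothesis worth checking (an assumption on my part, not verified): the replicator’s lftp command mirrors with --parallel=20, so a single sync can open well over 20 data connections, while the FTP server on the receiving node caps connections per source IP and replies with 421. If that node runs vsftpd, the relevant knob would be max_per_ip in /etc/vsftpd.conf:
# /etc/vsftpd.conf on the receiving storage node (assuming vsftpd is the FTP daemon)
# 0 = unlimited connections per source IP; alternatively pick a value
# comfortably above the 20 parallel transfers lftp opens
max_per_ip=0
Then restart the daemon with systemctl restart vsftpd.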
Trying to figure this out before merging all the code back into our official working branch.