Posts made by JGallo
-
RE: Overflow imaging from storage node that's not a master in a storage group
@Sebastian-Roth So I went ahead and switched to the dev-branch and upgraded the nodes along with the FOG server. Everything seems to be working fine and the storage group info is now displayed. Thank you again for your help with this.
-
RE: Overflow imaging from storage node that's not a master in a storage group
@Sebastian-Roth Fair enough. I will switch over to the dev-branch on nodes and fog server once we’re done with these projects. I should be able to do it prior to our winter break. Thank you again for your help on this.
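For anyone following along, the branch switch discussed here is roughly the standard FOG git workflow. This is only a sketch under my assumptions: it assumes the server and each storage node were installed from a git checkout of the fogproject repository (the path below is a placeholder), and that re-running the installer is what actually applies the new branch.
cd /root/fogproject          # path to the existing git checkout (assumption; use your own)
git fetch --all
git checkout dev-branch      # or 'working' to stay on the working branch
git pull
cd bin
sudo ./installfog.sh         # re-run the installer; repeat the same steps on every storage node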
-
RE: Overflow imaging from storage node that's not a master in a storage group
@Sebastian-Roth Cool. We have several projects in the mix right now, but once it slows down I plan on upgrading to 1.5.5 since I haven’t had a chance to do so. Would that fix happen to be in the working branch by any chance, or should I stay on the dev branch?
-
RE: Overflow imaging from storage node that's not a master in a storage group
@Tom-Elliott @Wayne-Workman Worked like a charm. The lab imaged with 20 clients going at once across both active nodes, and the remaining clients were placed in line to wait until slots opened up. Thank you both for all your help.
-
RE: Replication Issue
Looks like it works. Here are my logs. Once the upload completed, it took about 15 minutes for the replicator to begin. Once it had pushed the files to the slave, I did a FOGImageReplicator restart and it looks good.
[11-13-18 10:22:26 am] | Image Name: BCS-Velocity
[11-13-18 10:22:27 am] # BCS-Velocity: No need to sync d1.fixed_size_partitions (BCS-Slave)
[11-13-18 10:22:27 am] # BCS-Velocity: No need to sync d1.mbr (BCS-Slave)
[11-13-18 10:22:28 am] # BCS-Velocity: No need to sync d1.minimum.partitions (BCS-Slave)
[11-13-18 10:22:28 am] # BCS-Velocity: No need to sync d1.original.fstypes (BCS-Slave)
[11-13-18 10:22:28 am] # BCS-Velocity: No need to sync d1.original.swapuuids (BCS-Slave)
[11-13-18 10:22:28 am] # BCS-Velocity: No need to sync d1.partitions (BCS-Slave)
[11-13-18 10:22:29 am] # BCS-Velocity: No need to sync d1p1.img (BCS-Slave)
[11-13-18 10:22:30 am] # BCS-Velocity: No need to sync d1p2.img (BCS-Slave)
[11-13-18 10:22:30 am] * All files synced for this item.
[11-13-18 1:22:11 pm] * Starting Image Replication.
[11-13-18 1:22:11 pm] * We are group ID: 6. We are group name: BCS
[11-13-18 1:22:11 pm] * We are node ID: 9. We are node name: BCS-Master
[11-13-18 1:22:11 pm] * Attempting to perform Group -> Group image replication.
[11-13-18 1:22:11 pm] | Replicating postdownloadscripts
[11-13-18 1:22:12 pm] * Found Image to transfer to 1 node
[11-13-18 1:22:12 pm] | File Name: postdownloadscripts
[11-13-18 1:22:13 pm] # postdownloadscripts: No need to sync fog.postdownload (BCS-Slave)
[11-13-18 1:22:13 pm] * All files synced for this item.
[11-13-18 1:22:13 pm] | Replicating postinitscripts
[11-13-18 1:22:15 pm] * Found Image to transfer to 1 node
[11-13-18 1:22:15 pm] | File Name: dev/postinitscripts
[11-13-18 1:22:16 pm] # dev/postinitscripts: No need to sync fog.postinit (BCS-Slave)
[11-13-18 1:22:16 pm] * All files synced for this item.
[11-13-18 1:22:16 pm] | Not syncing Image: 32-Dell-790
[11-13-18 1:22:16 pm] | This is not the primary group.
[11-13-18 1:22:16 pm] | Not syncing Image: 64-Dell-790
[11-13-18 1:22:16 pm] | This is not the primary group.
[11-13-18 1:22:17 pm] * Not syncing Image between groups
[11-13-18 1:22:17 pm] | Image Name: BCS-Velocity
[11-13-18 1:22:17 pm] | There are no other members to sync to.
[11-13-18 1:22:17 pm] * Attempting to perform Group -> Nodes image replication.
[11-13-18 1:22:18 pm] * Found Image to transfer to 1 node
[11-13-18 1:22:18 pm] | Image Name: 32-Dell-790
[11-13-18 1:22:19 pm] # 32-Dell-790: No need to sync d1.fixed_size_partitions (BCS-Slave)
[11-13-18 1:22:20 pm] # 32-Dell-790: No need to sync d1.mbr (BCS-Slave)
[11-13-18 1:22:20 pm] # 32-Dell-790: No need to sync d1.minimum.partitions (BCS-Slave)
[11-13-18 1:22:20 pm] # 32-Dell-790: No need to sync d1.original.fstypes (BCS-Slave)
[11-13-18 1:22:20 pm] # 32-Dell-790: No need to sync d1.original.swapuuids (BCS-Slave)
[11-13-18 1:22:20 pm] # 32-Dell-790: No need to sync d1.partitions (BCS-Slave)
[11-13-18 1:22:21 pm] # 32-Dell-790: No need to sync d1p1.img (BCS-Slave)
[11-13-18 1:22:23 pm] # 32-Dell-790: No need to sync d1p2.img (BCS-Slave)
[11-13-18 1:22:23 pm] * All files synced for this item.
[11-13-18 1:22:24 pm] * Found Image to transfer to 1 node
[11-13-18 1:22:24 pm] | Image Name: 64-Dell-790
[11-13-18 1:22:25 pm] # 64-Dell-790: No need to sync d1.fixed_size_partitions (BCS-Slave)
[11-13-18 1:22:25 pm] # 64-Dell-790: No need to sync d1.mbr (BCS-Slave)
[11-13-18 1:22:26 pm] # 64-Dell-790: No need to sync d1.minimum.partitions (BCS-Slave)
[11-13-18 1:22:26 pm] # 64-Dell-790: No need to sync d1.original.fstypes (BCS-Slave)
[11-13-18 1:22:26 pm] # 64-Dell-790: No need to sync d1.original.swapuuids (BCS-Slave)
[11-13-18 1:22:26 pm] # 64-Dell-790: No need to sync d1.partitions (BCS-Slave)
[11-13-18 1:22:27 pm] # 64-Dell-790: No need to sync d1p1.img (BCS-Slave)
[11-13-18 1:22:28 pm] # 64-Dell-790: No need to sync d1p2.img (BCS-Slave)
[11-13-18 1:22:28 pm] * All files synced for this item.
[11-13-18 1:22:29 pm] * Found Image to transfer to 1 node
[11-13-18 1:22:29 pm] | Image Name: BCS-Velocity
[11-13-18 1:22:30 pm] # BCS-Velocity: No need to sync d1.fixed_size_partitions (BCS-Slave)
[11-13-18 1:22:30 pm] # BCS-Velocity: File hash mismatch - d1.mbr: 89b972e8f6585f2606a6658d58b9f66d57957ac7d57fc2f7fd7d8882a12d8722 != 341041528cb53b70422e1c39270490452de62ad764c72541e4f6eb1890f3365d
[11-13-18 1:22:30 pm] # BCS-Velocity: Deleting remote file d1.mbr
[11-13-18 1:22:30 pm] # BCS-Velocity: File hash mismatch - d1.minimum.partitions: 23b505385e9008070c65c42d950dff96d5cf39e99478b6b81c7a867e8bcadb02 != 899d69e652f3c9683d83deeec82f231bba2f4df0a01d706b5acbba9992a10861
[11-13-18 1:22:30 pm] # BCS-Velocity: Deleting remote file d1.minimum.partitions
[11-13-18 1:22:31 pm] # BCS-Velocity: No need to sync d1.original.fstypes (BCS-Slave)
[11-13-18 1:22:31 pm] # BCS-Velocity: No need to sync d1.original.swapuuids (BCS-Slave)
[11-13-18 1:22:31 pm] # BCS-Velocity: File hash mismatch - d1.partitions: ac70ba6fe1d57bf4a8ba01459f85f075f3df10bdcbd99a368ec1523078b8fde6 != ff0c6a27b7627ad4416fa46da7f57d2c4b0f4a621d2e7ca5414fa2faa5d43a96
[11-13-18 1:22:31 pm] # BCS-Velocity: Deleting remote file d1.partitions
[11-13-18 1:22:31 pm] # BCS-Velocity: File size mismatch - d1p1.img: 8699649 != 8696814
[11-13-18 1:22:31 pm] # BCS-Velocity: Deleting remote file d1p1.img
[11-13-18 1:22:31 pm] # BCS-Velocity: File size mismatch - d1p2.img: 36002135558 != 41888768241
[11-13-18 1:22:31 pm] # BCS-Velocity: Deleting remote file d1p2.img
[11-13-18 1:22:32 pm] | CMD: lftp -e 'set xfer:log 1; set xfer:log-file "/opt/fog/log/fogreplicator.BCS-Velocity.transfer.BCS-Slave.log";set ftp:list-options -a;set net:max-retries 10;set net:timeout 30; mirror -c --parallel=20 -R --ignore-time -vvv --exclude ".srvprivate" "/images/BCS-Velocity" "/images/BCS-Velocity"; exit' -u fog,[Protected] 10.210.100.62
[11-13-18 1:22:32 pm] | Started sync for Image BCS-Velocity - Resource id #20268
[11-13-18 1:29:35 pm] | Sync finished - Resource id #20268
Here is the log after the ImageReplicator restart occurred.
[11-13-18 1:31:02 pm] Interface Ready with IP Address: 10.210.100.61
[11-13-18 1:31:02 pm] Interface Ready with IP Address: 127.0.0.1
[11-13-18 1:31:02 pm] Interface Ready with IP Address: 127.0.1.1
[11-13-18 1:31:02 pm] * Starting ImageReplicator Service
[11-13-18 1:31:02 pm] * Checking for new items every 10800 seconds
[11-13-18 1:31:02 pm] * Starting service loop
[11-13-18 1:31:05 pm] * Starting Image Replication.
[11-13-18 1:31:05 pm] * We are group ID: 6. We are group name: BCS
[11-13-18 1:31:05 pm] * We are node ID: 9. We are node name: BCS-Master
[11-13-18 1:31:06 pm] * Attempting to perform Group -> Group image replication.
[11-13-18 1:31:06 pm] | Replicating postdownloadscripts
[11-13-18 1:31:08 pm] * Found Image to transfer to 1 node
[11-13-18 1:31:08 pm] | File Name: postdownloadscripts
[11-13-18 1:31:09 pm] # postdownloadscripts: No need to sync fog.postdownload (BCS-Slave)
[11-13-18 1:31:10 pm] * All files synced for this item.
[11-13-18 1:31:10 pm] | Replicating postinitscripts
[11-13-18 1:31:11 pm] * Found Image to transfer to 1 node
[11-13-18 1:31:11 pm] | File Name: dev/postinitscripts
[11-13-18 1:31:12 pm] # dev/postinitscripts: No need to sync fog.postinit (BCS-Slave)
[11-13-18 1:31:12 pm] * All files synced for this item.
[11-13-18 1:31:12 pm] | Not syncing Image: 32-Dell-790
[11-13-18 1:31:12 pm] | This is not the primary group.
[11-13-18 1:31:12 pm] | Not syncing Image: 64-Dell-790
[11-13-18 1:31:12 pm] | This is not the primary group.
[11-13-18 1:31:13 pm] * Not syncing Image between groups
[11-13-18 1:31:13 pm] | Image Name: BCS-Velocity
[11-13-18 1:31:13 pm] | There are no other members to sync to.
[11-13-18 1:31:13 pm] * Attempting to perform Group -> Nodes image replication.
[11-13-18 1:31:14 pm] * Found Image to transfer to 1 node
[11-13-18 1:31:14 pm] | Image Name: 32-Dell-790
[11-13-18 1:31:16 pm] # 32-Dell-790: No need to sync d1.fixed_size_partitions (BCS-Slave)
[11-13-18 1:31:16 pm] # 32-Dell-790: No need to sync d1.mbr (BCS-Slave)
[11-13-18 1:31:16 pm] # 32-Dell-790: No need to sync d1.minimum.partitions (BCS-Slave)
[11-13-18 1:31:16 pm] # 32-Dell-790: No need to sync d1.original.fstypes (BCS-Slave)
[11-13-18 1:31:17 pm] # 32-Dell-790: No need to sync d1.original.swapuuids (BCS-Slave)
[11-13-18 1:31:17 pm] # 32-Dell-790: No need to sync d1.partitions (BCS-Slave)
[11-13-18 1:31:18 pm] # 32-Dell-790: No need to sync d1p1.img (BCS-Slave)
[11-13-18 1:31:19 pm] # 32-Dell-790: No need to sync d1p2.img (BCS-Slave)
[11-13-18 1:31:19 pm] * All files synced for this item.
[11-13-18 1:31:20 pm] * Found Image to transfer to 1 node
[11-13-18 1:31:20 pm] | Image Name: 64-Dell-790
[11-13-18 1:31:21 pm] # 64-Dell-790: No need to sync d1.fixed_size_partitions (BCS-Slave)
[11-13-18 1:31:22 pm] # 64-Dell-790: No need to sync d1.mbr (BCS-Slave)
[11-13-18 1:31:22 pm] # 64-Dell-790: No need to sync d1.minimum.partitions (BCS-Slave)
[11-13-18 1:31:22 pm] # 64-Dell-790: No need to sync d1.original.fstypes (BCS-Slave)
[11-13-18 1:31:22 pm] # 64-Dell-790: No need to sync d1.original.swapuuids (BCS-Slave)
[11-13-18 1:31:23 pm] # 64-Dell-790: No need to sync d1.partitions (BCS-Slave)
[11-13-18 1:31:23 pm] # 64-Dell-790: No need to sync d1p1.img (BCS-Slave)
[11-13-18 1:31:24 pm] # 64-Dell-790: No need to sync d1p2.img (BCS-Slave)
[11-13-18 1:31:24 pm] * All files synced for this item.
[11-13-18 1:31:26 pm] * Found Image to transfer to 1 node
[11-13-18 1:31:26 pm] | Image Name: BCS-Velocity
[11-13-18 1:31:27 pm] # BCS-Velocity: No need to sync d1.fixed_size_partitions (BCS-Slave)
[11-13-18 1:31:27 pm] # BCS-Velocity: No need to sync d1.mbr (BCS-Slave)
[11-13-18 1:31:27 pm] # BCS-Velocity: No need to sync d1.minimum.partitions (BCS-Slave)
[11-13-18 1:31:28 pm] # BCS-Velocity: No need to sync d1.original.fstypes (BCS-Slave)
[11-13-18 1:31:28 pm] # BCS-Velocity: No need to sync d1.original.swapuuids (BCS-Slave)
[11-13-18 1:31:28 pm] # BCS-Velocity: No need to sync d1.partitions (BCS-Slave)
[11-13-18 1:31:29 pm] # BCS-Velocity: No need to sync d1p1.img (BCS-Slave)
[11-13-18 1:31:30 pm] # BCS-Velocity: No need to sync d1p2.img (BCS-Slave)
[11-13-18 1:31:30 pm] * All files synced for this item.
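For reference, the per-image transfer in the log above writes its own lftp log, and that path is visible in the CMD line. Here is a minimal sketch of watching the sync from the master node while it runs; the transfer-log path is taken straight from the output above, while the main fogreplicator.log path is my assumption about a default install under /opt/fog/log.
tail -f /opt/fog/log/fogreplicator.log                                          # main replication service log (assumed default location)
tail -f /opt/fog/log/fogreplicator.BCS-Velocity.transfer.BCS-Slave.log          # per-image lftp transfer log named in the CMD line above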
-
RE: Replication Issue
@Sebastian-Roth Yup, I read about that earlier. I followed your instructions, and all nodes and the fog server are updated to the replication branch. I’m currently uploading an updated image over an image definition that already exists, waiting for it to finish and tailing the replication log.
-
RE: Overflow imaging from storage node that's not a master in a storage group
@Wayne-Workman @Tom-Elliott I just imaged one computer with the seemingly empty storage group in the location plugin, and it pulled the image from the slave node. I’m going to leave it like that for now, and when we image the lab later today I will report back. Like I said before, it is strange that the storage group is not visually represented on the location management page, but once you click on the location you can see that the storage group is defined.
-
RE: Replication Issue
@Sebastian-Roth What I meant to say is: after I upload an updated image to a master node, with the changes in the replication branch, should I let the replication service run on its own, or should I force the replication by restarting the replication service? I figured that restarting the replication service would speed things up so I can check the logs right after I successfully upload the updated image.
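To make the trade-off concrete: per the log in the earlier post, the replicator only wakes up on its check interval ("Checking for new items every 10800 seconds", i.e. every 3 hours), so restarting the service simply forces an immediate pass instead of waiting. A rough sketch, assuming a systemd-based install where the service is named FOGImageReplicator as mentioned above and the log lives in the default /opt/fog/log location:
sudo systemctl restart FOGImageReplicator    # force a replication pass now instead of waiting up to 3 hours
tail -f /opt/fog/log/fogreplicator.log       # confirm the pass picks up the updated image (assumed default log path)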
-
RE: Overflow imaging from storage node that's not a master in a storage group
@Wayne-Workman Here’s an interesting bug that I think may exist. I went into the location plugin and set the ‘node’ field to exactly what you said, ‘please select an option’, and hit update. Then, going to the location management page, I no longer see a storage group set for the location. The strange thing is that opening the location itself does show a storage group defined; it’s just the location management page that doesn’t show it. So by setting the ‘node’ field to blank, it appears that no storage group is defined, at least according to the main menu of the location plugin.
Here is what it looks like. Hopefully you understand what I’m trying to describe.
-
RE: Replication Issue
@Sebastian-Roth I will be updating an image definition this week. I ran into an issue with imaging a lab using storage nodes; I’m testing the solution today, and then I will upload the updated image to a storage group that has storage nodes. Should I force the replication or let it run on its own? I’m curious whether it matters how the replication is started.
-
RE: Overflow imaging from storage node that's not a master in a storage group
@Wayne-Workman Thank you. I will try it tomorrow and observe when we deploy another image to a lab we have scheduled. We had a three-day weekend, so my apologies for the late response.
-
RE: Overflow imaging from storage node that's not a master in a storage group
@p4cm4n Both the master node and the slave node in the storage group are storage node installs. The master node is the “master” for that particular storage group. Replication is not an issue; the image gets to the slave fine. I’m having issues getting clients to pull from the slave once the master node’s slots become unavailable.
-
RE: Overflow imaging from storage node that's not a master in a storage group
@Wayne-Workman Ok, that makes sense now. Could you guide me on how to set the slave node to the same location as the master node? I’m still confused about how to do this exactly, because looking in the location plugin settings, the main menu of location management only offers “create new location”. I don’t see a way to add the slave node to the same location as the master node. Thank you once again for your help.
-
RE: Overflow imaging from storage node that's not a master in a storage group
@Wayne-Workman No. I was imagining that, by having the master node set in the location plugin, the storage group would handle the overflow. Do I have to set the location on both the master and the slave node for overflow imaging to occur? If I do, when I go to register clients, would one or two of the same locations show up when assigning a location to them?
-
Overflow imaging from storage node that's not a master in a storage group
Re: Storage nodes not deploying images
@george1421 In the post I referenced, you mentioned that the storage node concept doesn’t allow for overflow imaging. I have a question about that concept; maybe my current FOG system is incorrectly configured, or something is not working properly.
Here is what I have: one FOG server with multiple storage nodes across school sites, and I use the location plugin. Each school has a location defined and a master node associated with it. Each location also has a “slave” node that is replicated from that site’s master node. I set the default storage group on an image and let replication carry it from the location’s master node to the location’s slave node.
My question is: when I image a lab through group management in FOG, why would only the master node deploy the image? Wouldn’t the slave node take on any overflow requests? Say I have 30 machines and the master node is set to the default of 10 clients; the other 20 completely ignore the slave node, which should provide at least an additional 10 open slots. We just imaged a lab and that was exactly my observation: even though the slave node has the image files, the clients completely ignored the fact that the slave storage node was available.
I have Ubuntu 16.04 with FOG 1.5.4.8 on the replication branch. Now that our projects are at that point, I’m also planning to test (probably next week) whether uploading an image that is already defined and needs updating avoids the replication loop on the replication branch. That’s a separate issue, but I’m curious why the slave storage node in a storage group is ignored during an imaging task with the location plugin.
Thank you.
-
RE: Replication Issue
@Sebastian-Roth I switched over to the replication branch and updated all storage nodes along with my fog server. I uploaded an image and it seems to be working fine for the original image. I haven’t updated that image since it’s very new, but when I have a chance, which should be very shortly since a project I’m currently working on will let me update an existing image, I will go ahead and update it and tail the replication log.
I also noticed in a different post that another user did the same thing and tested replication. It looks like the changes in the replication branch have worked. I will update here as well once I upload an image over an existing one, to see whether the updated image replicates properly to the storage node.
-
RE: Replication Issue
@Sebastian-Roth Of course!! I will update the server and nodes to the replication branch today and get it ready for an image upload. I think the issue was with images being updated and then uploaded over existing images on the FOG server; replication of a new image definition was fine, even to the storage nodes. It will probably be a bit before I have some concrete information, since I don’t have many images that replicate across all nodes; I have storage groups defined per image.
-
RE: Replication Issue
@Sebastian-Roth Will those hashing code changes you made help with Ubuntu servers, specifically 16.04? I remember that earlier this summer there were replication issues looping due to a file hash not matching, which were resolved to an extent in the working branch. I’m curious because I have many storage nodes and I can switch over from the working branch to the replication branch if your changes help.
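The hashes in the replication log earlier in this thread are 64 hex characters, i.e. SHA-256-sized. Purely as an illustration of what a “file hash mismatch” check amounts to, and not FOG’s actual replication code, here is a hedged sketch comparing the master’s copy of a file against the slave’s copy over SSH. The file, image name, and slave IP are taken from the log above; SSH access as the fog user is an assumption.
LOCAL=$(sha256sum /images/BCS-Velocity/d1.mbr | awk '{print $1}')                        # hash of the master's copy
REMOTE=$(ssh fog@10.210.100.62 "sha256sum /images/BCS-Velocity/d1.mbr" | awk '{print $1}')  # hash of the slave's copy (assumes SSH access)
if [ "$LOCAL" = "$REMOTE" ]; then
    echo "No need to sync"
else
    echo "File hash mismatch - file would be re-transferred"
fi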
-
Upgrading/Downgrading Kernels for Storage Nodes
https://forums.fogproject.org/topic/6909/kernel-update-storage-nodes
With the current issues with the latest kernels and FOG v1.5.4, is there an easy way to downgrade kernels on storage nodes? I know the post I linked asks for that as a feature, but upgrading storage node kernels is easy given the instructions from here
https://wiki.fogproject.org/wiki/index.php?title=Kernel_Update
but is there a procedure for downgrading storage node kernels to a specific version, especially when using the location plugin along with inits and kernels served from the storage nodes? I keep looking in the forums, but the wiki link is only good for upgrading to the latest kernel. Thank you.
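Not an official procedure, just a hedged sketch of the manual approach implied by the wiki page linked above: FOG’s published kernels can be fetched by version and dropped into the node’s ipxe directory under the bzImage/bzImage32 names the clients boot from. The web-root path, the download URL pattern, and the example version number are all assumptions that can differ per install and distro, so verify them against the kernels listing before relying on this.
cd /var/www/html/fog/service/ipxe            # web root path is an assumption; some installs use /var/www/fog
cp bzImage bzImage.bak && cp bzImage32 bzImage32.bak    # keep the current kernels as a fallback
# example older kernel; URL pattern and version are assumptions - check https://fogproject.org/kernels/ for what is actually published
wget -O bzImage   https://fogproject.org/kernels/Kernel.TomElliott.4.15.2.64
wget -O bzImage32 https://fogproject.org/kernels/Kernel.TomElliott.4.15.2.32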
-
RE: Add a menu item through web GUI
@klkwarrior May I suggest the following:
-
If the machines in your imaging project are all on the same network, maybe image them as a group based on computer model? Or are the computers at various sites?
-
I’m not sure if the latest working branch has this, but I know that 1.6 has a deploy-image option at the client, which basically lets you choose the image to deploy rather than defining anything. I think that is probably what will accomplish this for you, and you don’t have to add any extra functions in the web UI. Just define the image, upload it, and then begin imaging.
-