Snapins don't deploy from Storage Nodes. Hash Mismatch
-
@Tom-Elliott So previously, snapins were pulled from the main.
Now, with locations, they are pulled from the nodes, which basically blocks deploying snapins completely at any location besides the main location.
-
@Wayne-Workman No, snapins were not being pulled from the main. The main was sending the file, but the file was being downloaded from the location. It was a “middle man” if you will.
Now the files are downloading, but something’s not allowing the file to download fully, or the hash is being calculated improperly.
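In other words, if the transfer is cut short, the node ends up hashing a truncated file, and hashing the same truncated file always yields the same digest. A minimal sketch (not FOG's actual code) of verify-by-hash, showing why a partial download mismatches reproducibly:

```python
import hashlib
import os
import tempfile

def sha512_of(path, chunk_size=65536):
    """Stream a file through SHA-512, as a verifier might when checking a snapin."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

tmp = tempfile.mkdtemp()
full_path = os.path.join(tmp, "snapin.full")
partial_path = os.path.join(tmp, "snapin.partial")

payload = b"snapin-payload" * 1000
with open(full_path, "wb") as f:
    f.write(payload)
with open(partial_path, "wb") as f:
    f.write(payload[: len(payload) // 2])   # transfer cut off midway

expected = sha512_of(full_path)
actual = sha512_of(partial_path)
print(expected == actual)                   # False: hash mismatch
print(actual == sha512_of(partial_path))    # True: same wrong hash on every attempt
```

So a mismatch that repeats with identical hash values is exactly what an interrupted download would look like.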
-
@Tom-Elliott Even so, the point is we can't deploy any snapins anywhere except the main location if locations are set up.
-
Moving this to bugs, since people are unable to deploy some or all snapins to remote locations while locations are set up.
-
@Wayne-Workman
I found that if you change the location to the main location after imaging, the snapins will deploy from the main site. So basically you do an image without snapins; once it's imaged, change the location to the main location, then run the advanced task "All-Snapins". It's the workaround we are currently using.
-
Based on our findings this morning, I'm leaning toward this being a networking issue.
@Greg-Plamondon can you please see if this happens with ALL nodes and ALL snapins? I know that may be asking a lot, but we found out that the issues you're experiencing may be specific to your environment. There are other potentials as well, and that's not to say there isn't necessarily a bug here. However, it now seems more likely related to a potential networking issue.
For that reason, I've moved this out of bugs and back into FOG Problems. Joe and I still want to help figure this out; we just want to first ensure this isn't simply a bad patch cable, or some other firewall filter causing issues.
-
I am not sure what kind of network issue would cause a hash mismatch with the exact same hash values every time. If you know what kind of network issues I should be looking at, I will investigate further. I don't have any firewalls between the nodes and the master server; iptables is stopped and disabled on both the nodes and the master. This issue happens for me on all 5 remote nodes, which are in different locations connected via MPLS. Thanks.
-
@Greg-Plamondon I had the same issue you're having. It turned out a fiber line that supported our entire VM platform was literally cut and left lying for a week by a contractor, so all VM platform traffic was going across a 100 meg copper link. Once that was fixed, everything immediately started working perfectly.
I'm happy to try to help diagnose what's going on via TeamViewer. I don't know if you're still online or not; this sort of troubleshooting may take time, and I can't do it while at work.
-
@Wayne-Workman said in Snapins dont deploy from Storage Nodes. Hash Mismatch:
a fiber line tha
Thanks Wayne. If you're available and willing some time this weekend, let me know.
-
@Greg-Plamondon Hey buddy I’ll be at my laptop most of the day. Message me, and I should get back to you sometime soonish.
-
@Greg-Plamondon Any word on this?
-
Fairly sure this is fixed now. I don't know why the hashes were all sorts of strange, but I believe this was partially related to the timeout seen here: https://forums.fogproject.org/topic/8278/rc6-snapins-no-longer-working/13.
With any luck, this was all it was, and I'm going to guess that what was seen from the start was a download that timed out, while the hashing mechanism was able to run much faster and return within its own allotted time.
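If that guess is right, the behavior fits: a client-side timeout aborts the transfer at roughly the same point on each attempt, so the truncated file hashes to the same wrong digest every time. A toy sketch of deadline-limited reading (assumed logic for illustration, not FOG's actual downloader):

```python
import time

def download_with_timeout(chunks, timeout, delay_per_chunk):
    """Append chunks until done or a client-side deadline passes.
    A toy stand-in for a download that a timeout can cut short."""
    deadline = time.monotonic() + timeout
    received = b""
    for chunk in chunks:
        if time.monotonic() > deadline:
            break                       # timeout hit: left with a partial file
        time.sleep(delay_per_chunk)     # simulate a slow link
        received += chunk
    return received

payload = [b"x" * 10] * 10              # a 100-byte "snapin"
full = b"".join(payload)

# Generous timeout: the whole file arrives and the hash would match.
print(download_with_timeout(payload, timeout=5.0, delay_per_chunk=0.001) == full)  # True

# Tight timeout on a slow link: the file is truncated, so hashing it
# would give the same "wrong" digest on each retry.
partial = download_with_timeout(payload, timeout=0.05, delay_per_chunk=0.02)
print(len(partial) < len(full))         # True
```

Raising the timeout (as the linked fix did) gives slow links time to finish the transfer, after which the hashes line up.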