100 Mb/s speed limit, please help
-
@Wayne-Workman mea culpa, I stand corrected
-
OK, on my clean build of 1.2.0 the compression setting is 9.
On my production instance, which is on 1.2.0 trunk 5070, the compression is 6.
On my dev instance, which is on 1.2.0 trunk 5676, the compression is 6 (must be the default), because I know I haven’t tweaked this system.
Changing this now shouldn’t have an impact, since the image that has already been captured will stay compressed at the higher value. Something I don’t know is where the decompression happens; I suspect the client computer. If that’s the case, the slower (CPU-wise) the computer is, the slower the transfer rate will be.
-
Compression and decompression happen on the host. The default compression value was changed from 9 to 6 because 6 is faster (in probably 99% of cases).
If the OP can change his compression to 6 and then re-upload the image, he might find that his image deploys much faster afterwards.
@ch3i posted a script that will change the compression of an image without deploying/capturing it again. It does it all on the server. It’s in the forums somewhere here. It needs to go in the wiki. Somebody hash tag it if they find it.
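In case that thread can’t be found, here’s a rough sketch of the same idea, assuming the image files are gzip-compressed partclone files (the stock FOG 1.2.0 format) and that pigz is installed on the server. The image path below is hypothetical, and this is not @ch3i’s actual script; back the image up first.

```bash
#!/bin/bash
# Sketch: recompress an existing FOG image on the server at gzip level 6,
# so no re-capture is needed. Assumes gzip-compressed partclone files
# (stock FOG 1.2.0) and pigz installed. IMAGE_DIR is a hypothetical path.
IMAGE_DIR="/images/MyImage"
NEW_LEVEL=6

for f in "$IMAGE_DIR"/*; do
    # Only touch files that are actually gzip data.
    if file "$f" | grep -q 'gzip compressed'; then
        echo "Recompressing $f at level $NEW_LEVEL ..."
        pigz -dc "$f" | pigz -"$NEW_LEVEL" > "$f.tmp" && mv "$f.tmp" "$f"
    fi
done
```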
-
I have an image with compression at 0 or 1 (I can’t recall what it was for sure), and an image with compression at 3, but both have about the same speeds. I have also tried compression 9 just to try the other side of things, and unsurprisingly, things were much slower.
Recapturing the image is not a huge deal for me, and I am open to trying anything right now.
-
@stowtwe Try with it set to 6.
-
@stowtwe said:
I have an image with compression at 0 or 1 (I can’t recall what it was for sure), and an image with compression at 3, but both have about the same speeds. I have also tried compression 9 just to try the other side of things, and unsurprisingly, things were much slower.
Just to be clear here, an image with compression of 0 or 1 gave the same transfer speeds as a 3 or 9? I just want to make sure we are not chasing the wrong assumption.
-
Is the speed “limiting” on all systems? Have you attempted imaging in other locations?
From the sounds of things, things are working exactly as intended. That means, if there’s a 10/100 SWITCH between any of the gigabit parts and you’re imaging to a system on the other side of that “middleman” switch, it would give you exactly this kind of result.
Speed isn’t always related to decompression/compression, though it does play into it. Uploads would be the operation most often affected by compression, since the client machine is not only transferring the data but also compressing it before it goes up the wire.
I think we need to trace where the 10/100 link starts. It could even be as simple as an improper punch down.
-
All of our connections are gigabit running to the computers we are trying to image. The only connection that is 100 Mb is the line running from the switch to our router, but that should be irrelevant in this case because the packets do not need to go through the router to reach the hosts we are imaging. This WOULD be a bottleneck if we were imaging to other rooms in the building.
Here is a picture (There is another switch before the router, but you get the idea):
@george1421 Compression 9 was much slower than the others. I have also just recently tried compression 3, and it yielded similar results to compression 1 or 0. I will be trying 6 next to see how it goes.
-
That’s why we need to narrow down what, when, and where things are happening.
Just because you “know” doesn’t mean that it couldn’t be doing something unexpected.
Seeing as it’s consistently showing the same results, it leads me to believe there is a 10/100 connection somewhere in between. Maybe the imaging computer is on a different subnet than the FOG server? If it is, it would have to pass through the router and back to reach the FOG server to begin with.
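One quick way to check that from the client side (in a FOG debug boot or any Linux environment; the address below is a hypothetical stand-in for your FOG server) is to ask the kernel how it would route traffic to the server:

```bash
# 10.0.0.10 stands in for the FOG server's IP; adjust to your environment.
ip route get 10.0.0.10
# "10.0.0.10 dev eth0 src 10.0.0.55 ..." -> same segment, no router involved
# "10.0.0.10 via 10.0.0.1 dev eth0 ..."  -> traffic crosses the gateway
```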
-
Also,
Could you try a TOTALLY different system and image it to see if you see the same limiting factors?
What are the systems that you’re seeing the 100 Mb/s issue on (specifically)? Age, NIC, BIOS, etc., in as much detail as possible?
Are the hosts’ NICs gigabit or 10/100?
There are a lot of variables, and testing the same systems over and over seems a bit much when trying to find a good solution.
Maybe let’s take the problem systems out of the picture for now and find out whether it is indeed the network, or the system.
-
@Tom-Elliott Looking back over this thread I have a question for Tom; actually, a confirmation is all I need. Does image deployment use NFS to move the file from the server to the client?
@stowtwe You have done tests with ftp and iptraf which gave the expected results. Did you do those tests between the FOG server and the same network jack where these target devices are connected?
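If it’s easier than setting up another ftp/iptraf run, a raw throughput test with iperf3 (a different tool than the ones used above; the server address below is hypothetical) from that same jack would answer the same question:

```bash
# On the FOG server (or any box on its segment):
iperf3 -s

# From a laptop plugged into the same jack the target devices use
# (10.0.0.10 stands in for the FOG server's IP):
iperf3 -c 10.0.0.10
# ~940 Mbits/sec -> the path really is gigabit end to end
# ~94 Mbits/sec  -> something in the path is negotiating 100 Mb/s
```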
Lastly, we haven’t considered that both the OP and Tom are right. Let’s assume that for some reason the target computers are only negotiating 100 Mb/s instead of GbE. That would make both people correct. With an unmanaged switch it would be difficult to tell, except maybe from the lights on the front of the switch.
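One way around the guessing (assuming the target can be booted into a FOG debug session or any Linux environment; eth0 is an assumed interface name) is to ask the NIC directly what it negotiated:

```bash
ethtool eth0 | grep -E 'Speed|Duplex'
# A healthy gigabit link should report:
#   Speed: 1000Mb/s
#   Duplex: Full
```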
Something else to consider is just change out the switch to a different model to see if there is something in the switch going wrong.
[Edit] I see that Tom was thinking along the same lines, that it could possibly be the NIC too. It just took me a bit to write my last post. [/Edit]
-
@george1421 Imaging uses NFS.
NFS is utilized for upload and download.
/images is mounted as read-only via NFS for download.
/images/dev is mounted via NFS as read/write for upload (after upload completes, FTP is used to move the image from /images/dev to /images)
Settings for NFS can be found in
/etc/exports
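For reference, on a stock FOG 1.2.0 install the exports usually look something like the sketch below (options and fsid values may differ on a customized setup):

```bash
# Typical contents of /etc/exports on a stock FOG 1.2.0 server:
cat /etc/exports
# /images *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=0)
# /images/dev *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=1)

# Apply changes without restarting the NFS server after editing the file:
exportfs -ra
```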
Here’s a related article with lots of good commands and info in it: https://wiki.fogproject.org/wiki/index.php/Troubleshoot_NFS
-
@Tom-Elliott The issue was with subnetting. We combined two subnets not long ago, and the server still had the old subnet mask. I changed the mask and it is now seeing everything on the same network, as it should. I am getting 3-4 GB/min now, MUCH faster than before.
Thank you everyone for the help. This was something I should have been able to catch, and was definitely not a direct problem with FOG.
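For anyone who hits the same thing later, the check is quick (hypothetical addresses and interface name; the /23 here is just an example of a mask that covers both former /24 networks):

```bash
# Confirm what netmask the FOG server is actually using (eth0 assumed):
ip addr show eth0 | grep 'inet '
#   inet 10.0.0.10/24 ... -> clients in 10.0.1.x get reached via the router
#   inet 10.0.0.10/23 ... -> both 10.0.0.x and 10.0.1.x are local, no router hop
```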
-
What on earth has subnetting to do with speed? I am not convinced that this issue was fixed by changing the subnet mask on the FOG server. Well, unless the wrong subnet mask made the server send all the traffic to the client(s) through a gateway, which would then have been the bottleneck. But… Anyhow, great you got this fixed.
-
@Sebastian-Roth said:
What on earth has subnetting to do with speed?
That’s what I thought. But if it’s fixed it’s fixed.