Another slow Deployment question
-
@Wayne-Workman Sorry, I concatenated @Sebastian-Roth’s reply and yours in my head.
-
@Arsenal101 said in Another slow Deployment question:
If we could ever get 10Gb to the end PC, the only thing slowing the process down would be the human factor!
While 10G to the desktop would be really nice, it’s not necessary and a bit of a waste, because I suspect that on the target end the disk or CPU is your limiting factor, not the network. For the server managing multiple data streams, I can see the network and then the disk subsystem being the bottleneck.
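To put rough numbers behind that (back-of-envelope only; the disk figures are typical spec-sheet values, not measurements from this thread):

# Line rate vs. typical sequential disk throughput, in MB/s
echo "1GbE  : $(( 1000 / 8 )) MB/s"    # ~125 MB/s on the wire
echo "10GbE : $(( 10000 / 8 )) MB/s"   # ~1250 MB/s on the wire
# A laptop HDD sustains roughly 80-130 MB/s and a SATA III SSD ~500-550 MB/s,
# so a 10GbE link to a single desktop would sit mostly idle.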
-
@george1421 Pretty much. It would just be cool to say we have 10Gb to the desktop… An SSD would be sweet, though!
-
@Wayne-Workman said in Another slow Deployment qustion:
@george1421 Take this video for example; look at it @ 5:34.
We see the space used on the image is 31.7GB and the write speed is 2.29GB/min.
We see the elapsed time is 13 minutes and 31 seconds.
31.7 divided by 2.29 = 13.84, or 13 minutes and 50 seconds.
Therefore the rate that Partclone displays is write speed (or read speed), not network transfer rate.
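You can sanity-check that arithmetic in a shell (a trivial sketch; the 31.7GB and 2.29GB/min figures are the ones quoted above):

# Expected elapsed time if Partclone's rate were the disk write speed:
# size (GB) / rate (GB/min) = minutes
awk 'BEGIN { t = 31.7 / 2.29; m = int(t); s = (t - m) * 60; printf "%d min %d sec\n", m, s }'
# Prints "13 min 50 sec", close to the observed 13:31, so the displayed
# rate tracks disk writes, not network transfer.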
Wayne, I fully agree with you and understand that the Partclone display is not actual network usage, but it’s the best metric we have without getting into too much tech. So the point is, it’s not accurate, but it’s the best we have (like the Windows Experience Index: at least it’s some metric we can use as a baseline).
In my testing I used an Intel NUC as the FOG server and deployed to an e6400 with both an HDD and an SSD. You can see the speed difference just by switching the target from HDD to SSD, with everything else being the same.
https://forums.fogproject.org/topic/6373/fog-mobile-deployment-server-for-200-usd-finished/3
“I replaced the Seagate rotating hard drive in the e6400 with a Crucial MX100 256GB SSD I had laying around. I again redeployed the same image as in the previous tests; this time the transfer rate was 7.8GB/min (130MB/s {!!faster than wire speed!!}) according to Partclone, as compared to 5.1GB/min with a rotating disk in the target. I booted the e6400 back into debug mode and ran hdparm -Tt /dev/sda; hdparm reported 242MB/s for buffered disk reads as compared to 80MB/s with rotating media.”
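For anyone who wants to reproduce that comparison from a FOG debug session, a minimal sketch (the device and mount point are placeholders; mount a partition of the disk under test rather than writing into the client’s RAM-backed /tmp):

# Buffered and cached read speeds of the target disk (the test quoted above)
hdparm -Tt /dev/sda

# Rough sequential write test: 1GB of zeros, bypassing the page cache
mkdir -p /mnt/disk && mount /dev/sda1 /mnt/disk
dd if=/dev/zero of=/mnt/disk/ddtest bs=1M count=1024 oflag=direct conv=fsync
rm -f /mnt/disk/ddtest && umount /mnt/disk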
-
@george1421 We can get an exact metric if we turn on FTP_Image_Size on the server; it’ll display the image size. Then we can use the total time elapsed during imaging to calculate throughput.
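Something like this does the math once you have both numbers (a sketch; the 31.7GB size and 13:31 elapsed time are just the figures from the video above, and GB is treated as GiB for simplicity):

# True average throughput = data actually moved / wall-clock imaging time
SIZE_GB=31.7                   # image size as reported on the server
ELAPSED_SEC=$(( 13*60 + 31 ))  # 13 minutes 31 seconds
awk -v gb="$SIZE_GB" -v s="$ELAPSED_SEC" \
    'BEGIN { printf "%.1f MB/s average\n", (gb * 1024) / s }'
# ~40 MB/s here, comfortably under gigabit wire speed (~125 MB/s)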
-
@Wayne-Workman Throughput is already displayed on the dashboard (granted, not per host), but if the network is NOT saturated, as this discussion suggests, you should see “plenty” of available bandwidth there.
The part that bothers me is that this still feels more like a networking issue than a seek/IO issue. While I totally understand that IO is a part of this, your network is most likely the first culprit, primarily because the mount point is accessed across the network to begin with.
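One quick way to rule the network in or out, if iperf3 is available in your environment (a sketch; substitute your FOG server’s address for 192.168.1.10):

# On the FOG server: start a listener
iperf3 -s

# On a client (e.g. from a FOG debug session): push traffic for 30 seconds
iperf3 -c 192.168.1.10 -t 30
# A healthy gigabit link should report ~940 Mbit/s; anything well below that
# points at the network (or the NFS mount) rather than the target disk.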
-
@Arsenal101 said:
Maybe at some point I could convince my boss to put a 10Gb NIC and some decent hardware for our FOG server in the budget.
As George already said, this is not very wise. I might add that I am pretty sure multicast would solve all of that. I don’t understand why everyone is so afraid to get multicast running. What’s the drawback that I don’t seem to see? Please tell me…
-
@Sebastian-Roth The only downside is reduced flexibility; that is to say, if you have 20 computers and they need 7 different images, multicast won’t really be that useful.
Other than that, multicast is pretty great. Is there any word on the state of the torrent mechanic, or is that on hold/abandoned? I personally think multicast would be better than torrenting from a network saturation/resources perspective, but maybe I’m wrong?
-
@Quazz said:
The only downside is reduced flexibility; that is to say, if you have 20 computers and they need 7 different images, multicast won’t really be that useful.
While you are right that you can’t deploy different images via multicast, I don’t see this as a fair argument. A network/switch can handle unicast and multicast at the same time, so it’s not an either/or thing. Use whichever is appropriate for the task you want to run. Multicasting when you actually want different images is stupid, and unicasting when you want the same image is not very wise either.
Other than that, multicast is pretty great. Is there any word on the state of the torrent mechanic, or is that on hold/abandoned? I personally think multicast would be better than torrenting from a network saturation/resources perspective, but maybe I’m wrong?
There have been discussions about it on and off on the forum. Just search for ‘torrent’ and I am sure you’ll find them.
-
I still stand by what I’ve already said (just try multicast and see if it works) for pushing the same image to several machines at once. Stick the computers into a group, and initiate the multicast from the group.
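For reference, FOG drives its multicast sessions with udpcast under the hood; a minimal hand-rolled equivalent looks roughly like this (a sketch; the interface name, image path, and receiver count are placeholders):

# Sender (server side): wait for 20 receivers to join, then stream the file once
udp-sender --interface eth0 --file /images/win7base/d1p2.img \
           --min-receivers 20 --portbase 9000

# Receiver (each client): join the session and write the stream to disk
udp-receiver --interface eth0 --portbase 9000 --file /tmp/d1p2.img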