Is it normal for a multicast deploy session to be 30x slower than a “normal” deploy? I hope not. Maybe something is not set up right. I tried with the same (one) laptop, the same image, the same server, everything the same. Compression was set to 4.
PS: unicast deploy goes really fast, probably at the max my hardware is capable of. Windows 10 deployment in 3 minutes, I like that…
@coco65 There’s been research in the area of FOG’s compression and delivery times.
The conclusion: levels 5 to 7 are best.
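The level-vs-time tradeoff behind that conclusion can be sketched with Python's zlib, which uses the same 0-9 scale as the pigz compressor FOG runs its images through. The payload below is a made-up, highly compressible stand-in for image data, not a FOG benchmark:

```python
import time
import zlib

# Repetitive sample payload, loosely mimicking compressible disk-image data.
payload = (b"FOG image block " * 4096) * 16  # ~1 MB

for level in (1, 4, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(payload, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(compressed):>8} bytes in {elapsed:.4f}s")
```

Typically the output size barely shrinks past the middle levels while the compression time keeps climbing, which is why a mid-range setting wins once imaging time matters more than disk space.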
@Wayne-Workman : You were right. I set up the following: a physical FOG server (a laptop) connected to a Netgear Pro switch, then captured and deployed another laptop, first with unicast and then with multicast. Now there is almost no difference in deployment time; multicast takes about 20 seconds longer. Nothing else was connected to the test network.
Conclusion: a good switch does make a difference. Thanks for the tip.
I wonder if the compression factor used makes a big difference in speed. Storage is cheap, time is money… Going to test that next.
@Wayne-Workman : At the moment I am using a tplink 8-port switch. Just ordered a netgear pro 8 port managed switch, let’s see if it goes better with that.
@coco65 Since this was at home, I’m guessing you were using a consumer-grade, underpowered switch, or worse, the integrated switch on an AP/switch/router combination device.
Those devices can do multicast because they can forward broadcast traffic correctly, but their onboard chips don’t have the power to replicate such heavy broadcast traffic to all ports, so things slow way down.
You won’t see such bad performance on a Cisco Catalyst 1Gbps switch.
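Whether a given switch handles multicast at all can be sanity-checked without FOG. Here is a minimal sketch using Python's standard multicast sockets; the group address and port are arbitrary test values, and it pins everything to the loopback interface so it runs self-contained (point the interface addresses at a real NIC's IP to actually exercise the switch):

```python
import socket
import struct

GROUP, PORT = "239.255.10.1", 5007  # arbitrary test group and port

# Receiver: bind and join the multicast group on the loopback interface.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("127.0.0.1"))
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
rx.settimeout(2)

# Sender: direct multicast out of loopback so the demo needs no LAN.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
              socket.inet_aton("127.0.0.1"))
tx.sendto(b"multicast-ok", (GROUP, PORT))

data, _ = rx.recvfrom(1024)
print(data.decode())
rx.close()
tx.close()
```

FOG itself uses udpcast (udp-sender/udp-receiver) for multicast sessions, so if a raw test like this crawls between two machines on your switch, the problem is the network gear rather than the FOG server.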
@Tom-Elliott : The test I did was from a virtual server (Debian 8 inside Hyper-V) to a physical machine (a laptop). See the report picture above, “Acer” is the brand of the laptop.
@coco65 It could be any number of things. I do, however, believe Hyper-V may be the largest cause at this point. Can you try a multicast image to a physical machine?
@Tom-Elliott : The test was done at home; both the FOG server (running on Debian in a Hyper-V virtual machine) and the laptop are connected to a gigabit switch. There was almost zero network load during the tests. With unicast (a normal deploy) I got 3 GB/min. With multicast (to one client, the same laptop) I got not even 200 MB/min. The firewall in Debian is turned off. Could the difference be because of Hyper-V? Next week at work I can try it in a different environment. Here is a photo taken during the multicast deploy:
Multicast is its own beast, especially if you’re running multicast over the same network as all the rest of your traffic.
If I deploy two systems at the same time via multicast, I get about 1.25 GB/min (or just about 100 Mbps), whereas I might get 3 GB/min (or just about 200 Mbps) running the exact same image to two separate machines.
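As a sanity check on those figures: the GB/min rates FOG reports are disk-write rates, while the wire carries the compressed stream. A rough conversion, where the 2:1 compression ratio is an assumed round figure rather than anything FOG reports:

```python
def wire_mbps(gb_per_min: float, compression_ratio: float = 2.0) -> float:
    """Convert a reported disk-write rate (GB/min) to approximate
    on-the-wire Mbps, assuming the stream is compressed by
    compression_ratio (2:1 is an assumption, not a measured value)."""
    mb_per_s = gb_per_min * 1000 / 60        # decimal GB/min -> MB/s
    return mb_per_s * 8 / compression_ratio  # bits/s on the wire, compressed

print(round(wire_mbps(3.0)))   # 200 -> matches the ~200 Mbps unicast figure
print(round(wire_mbps(1.25)))  # 83  -> in the "just about 100 Mbps" range
```

Under that assumption the GB/min and Mbps numbers quoted above are consistent with each other.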
I don’t know your environment so I’m unable to see why/what’s going on to cause such a large impact.
Multicast has a little bit of overhead, but not that much.
I’ve seen around 2.8 GB/min on a Dell Vostro 1220 (Celeron CPU, weak-ass hard drives), so your speeds are definitely abnormal.