• Hi everyone,

    I’ve been using this forum to help me build an imaging system for my company for the past several years and you have all been very helpful.

    I work on the help desk and have managed imaging for my company for the past several years (2k employees, 1000+ machines for customer use). I recently decided it would be beneficial to upgrade from 1Gbps to 10Gbps, as we sometimes need to image several dozen machines in a timely manner.

    My (desktop) server is set up on an isolated network. All machines are on site and hands-on when being imaged.

    My server has a 10Gbps PCIe NIC, and my switch (Juniper EX2300-48P) has a 10Gbps SFP+ uplink. Both the server and the switch report a 10Gbps link; however, the computers I’m imaging are only pulling 1Gbps collectively (0.25Gbps per machine if 4 are imaging).

    My networking team is swamped and can’t afford the time to help me improve this system. I’m lost here and could use some advice.

    Thank you all so much!

  • Moderator

    You didn’t mention any baseline numbers for what you are seeing today with imaging on a 10Gbps network.

    In my office we have a 10G core network with 1GbE going to the communication closets. I get between 13 and 15 GB/min (single unicast) transfer rates to modern target computers. On a well-managed pure 1GbE network you should be seeing about 6GB/min (single unicast) transfer rates. I have not tested transfer rates on a pure 10GbE network, but I suspect the bottleneck will be the disk controller on the server or the VM itself, not the network.
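    Those GB/min figures are easy to compare against link speed once converted to Gbps. A quick sketch of the arithmetic (1 GB = 8 Gb, 1 min = 60 s):

```python
def gb_per_min_to_gbps(gb_per_min: float) -> float:
    """Convert an imaging rate in GB/min to line rate in Gbps.

    1 GB of data is 8 Gb on the wire; one minute is 60 seconds.
    """
    return gb_per_min * 8 / 60

# The rates quoted above, expressed as network throughput:
print(f"{gb_per_min_to_gbps(6):.2f} Gbps")   # 0.80 Gbps — fits inside a 1GbE link
print(f"{gb_per_min_to_gbps(15):.2f} Gbps")  # 2.00 Gbps — already past what 1GbE can carry
```

    So 6GB/min is roughly what a saturated 1GbE link delivers, and 13–15GB/min already implies a 10G path somewhere.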

  • Moderator

    It’s worth noting that a marginal cable will sometimes let the link negotiate at “10Gbps” but in actuality only carry about “1Gbps” reliably.

    Can you run a test to another computer with a 10Gbps link and confirm you can actually transfer at that speed?

    And as others have mentioned, if you are using SATA drives, then naturally they’ll be far slower than your network throughput regardless!
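    iperf3 is the usual tool for that kind of point-to-point test. If you can’t install anything on the isolated network, a bare Python socket transfer gives a rough number. A minimal sketch (the port number and 64 MiB payload size are arbitrary choices; run the server half on the remote 10Gbps machine and point the client at its address):

```python
import socket
import threading
import time

CHUNK = 1 << 20       # 1 MiB per send
TOTAL = 64 * CHUNK    # 64 MiB test payload; use more for a 10Gbps link

def run_server(port, ready):
    """Accept one connection, receive the payload, and discard it."""
    srv = socket.socket()
    srv.bind(("", port))
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    received = 0
    while received < TOTAL:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    conn.close()
    srv.close()
    return received

def run_client(host, port):
    """Send TOTAL bytes and return the measured throughput in Gbps."""
    payload = b"\x00" * CHUNK
    sock = socket.create_connection((host, port))
    start = time.monotonic()
    sent = 0
    while sent < TOTAL:
        sock.sendall(payload)
        sent += CHUNK
    sock.close()
    elapsed = time.monotonic() - start
    return sent * 8 / elapsed / 1e9  # bits per second -> Gbps

if __name__ == "__main__":
    # Demo on localhost; in practice the two halves run on two machines.
    ready = threading.Event()
    t = threading.Thread(target=run_server, args=(5001, ready), daemon=True)
    t.start()
    ready.wait()
    print(f"throughput: {run_client('127.0.0.1', 5001):.2f} Gbps")
```

    If two 10Gbps-linked machines can’t get well past 1Gbps with a test like this, the problem is the path (cable, transceiver, or a 1GbE hop), not the imaging software.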

  • I guess I’m not sure what’s wrong.

    There are many different scenarios that might cause this.

    The most prominent:

    1Gbps NICs are the most common in consumer/user-grade machines.

    So just because your server has 10G capability does not mean the individual hosts do. With 1Gbps NICs in the clients, the best you could see is 4Gbps aggregate for 4 machines imaging at the same time. This could very easily be disk IO, though: most disk IO is capped around 6Gbps (the SATA III interface limit), and real drives deliver less than that.

    @Sebastian-Roth suggested using multicast. Have you tried this? Remember, while the disk interface might be rated for 6Gbps of throughput, there are many variables at play. It’s one thing to have 4 machines imaging from 4 different images versus 4 machines imaging from the same image (you’re requesting the same data from the same location 4 separate times). Using multicast, you’d only request that data once.
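    The unicast-vs-multicast trade-off above can be put into back-of-the-envelope numbers. A sketch (the 25GB image size and link speeds are assumptions for illustration):

```python
def server_read_load(image_gb: float, clients: int, multicast: bool) -> float:
    """GB the server's disk must read to image `clients` machines from the
    same image: unicast reads it once per client, multicast reads it once."""
    return image_gb if multicast else image_gb * clients

def client_rate_gbps(server_uplink_gbps: float, client_nic_gbps: float,
                     clients: int, multicast: bool) -> float:
    """Best-case per-client rate, ignoring disk IO entirely."""
    if multicast:
        # One shared stream, capped by the slowest link in the path.
        return min(server_uplink_gbps, client_nic_gbps)
    # Unicast: the clients split the server's uplink between them.
    return min(server_uplink_gbps / clients, client_nic_gbps)

# Hypothetical 25GB image, 4 clients with 1GbE NICs, 10Gbps server uplink:
print(server_read_load(25, 4, multicast=False))      # 100 GB read from disk
print(server_read_load(25, 4, multicast=True))       # 25 GB read from disk
print(client_rate_gbps(10, 1, 4, multicast=False))   # each client NIC-bound at 1 Gbps
```

    With dozens of machines the unicast numbers get ugly fast, which is why multicast helps even when the network itself isn’t the bottleneck.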

  • Moderator

    @rvelasco It could be disk IO just as well. Why do you think it’s the network?

  • I have the same issue with both unicast and multicast. The bottleneck doesn’t seem to be the HDD but the network: no matter how many computers I have hooked up, the combined transfer rate always maxes out at 1Gbps.

  • Moderator

    @rvelasco You’re doing unicast deploys, by the sound of what you wrote. I can imagine the disk subsystem of your desktop server being the bottleneck here. Have you considered deploying via multicast?