Slow Unicast Deploy on new Machines
-
@tomierna Once you have Windows up and running, if you download and upload a 1GB+ file, do you get about the same transfer rates (remembering our point of view has changed from the server to the client)? If you have an M.2 SATA/NVMe drive, I would still expect better download-from-the-network results (which is kind of the opposite flow from what you are seeing today with imaging).
-
@george1421 - our imaging network is not routed, and I don’t have access to file servers on the rest of our LAN.
So, to test download speeds, I created a 2GB (random) file in /var/www/fog/client/ and added a case to download.php to allow me to download directly from the FOG Server with Chrome.
I was getting consistently 85-95MB/s.
I’m not sure of the best way to test a file upload that would show me speed, short of installing an FTP program and doing SFTP transfers back and forth, but upload speed isn’t a problem in the FOG-booted scenario.
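For anyone repeating that download test: the random file can be made on the FOG server with dd, something like the line below (the filename is just an example; point download.php at whatever you create):
dd if=/dev/urandom of=/var/www/fog/client/test2g.bin bs=1M count=2048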
-
I reset the statistics in the GC728X Switch before starting the most recent batch of unicast deployments (I’m doing 9 right now).
There is a column in the port statistics for the switch called “Link Down Events”, and those counters are climbing. 15% into the deploy, most of them are at 11, and some are higher, in the 20s.
There is a thread on Spiceworks saying this card was flapping on Cisco switches. The poster ruled out Green Ethernet; it’s off on my switch too.
Tomorrow I will reset the statistics again and then capture an image to see if the link-down events count up during a capture. I’m also preparing a new t410i image, so I’ll be able to test if the ports flap with those machines on either capture or deploy.
I’ll try turning off the various auto-negotiation stuff to see if that makes the flapping cease.
-
@tomierna Be aware that you WILL see 2 link transitions when PXE booting into the FOS kernel. One will happen when the iPXE kernel takes over from the PXE ROM, and the second will happen when FOS takes over control of the NIC from iPXE. This is why FOG imaging has an issue with standard STP. For FOG you need to run RSTP on your building switch.
-
@george1421 that makes sense, but for one imaging session each port saw at least 21 link down events during this 9-unit unicast.
One was 25 link down events, one was 32, and one was 33.
Doesn’t that seem a little odd?
-
Also, for what it’s worth, “Spanning Tree State” is set to Disable and the operation mode is set to RSTP on the switch.
-
@tomierna That IS very strange; once the kernel is booted, you should not see any link transitions.
A bit from the far side: I have no basis for this vision, but I’m seeing something about TCP offloading and ethtool.
We’ll be digging in the weeds a bit with this one.
- Set up a debug deploy (tick the check box for debug before you submit the deploy task on one of these 480s).
- PXE boot the computer; after a few pages of text it should drop you to a linux command prompt on the target computer.
- Look through /var/log to see if there are any error messages related to the network adapter.
[sidebar] Give root a password with passwd - use a simple password like hello. Then get the IP address of the FOS linux with ip addr show. From there you can use PuTTY on a Windows computer to connect to the FOS engine: log in as root with the password hello. Now you can interface with the FOS engine from the comfort of your Windows computer, which also makes it much easier to copy and paste text into the FOS engine. [/sidebar]
- After reviewing the logs and recording anything suspicious, continue on to the next step.
- Using my vision above, let’s use ethtool to shut off some of the advanced features of the Intel NIC (a quick way to verify the change is shown below):
ethtool -K eth0 sg off tso off gro off
- Now comes the time-consuming bit: by entering fog at the linux command prompt you can single-step through image deployment. You will need to press enter at each breakpoint. It will be interesting to see if turning off all of the advanced features of the NIC has any impact on image deployment.
We can do some other benchmarking from the FOS debug prompt, but let’s see how this goes.
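To confirm which offloads actually took effect, ethtool with a lowercase -k lists the current state of every offload feature:
ethtool -k eth0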
-
@george1421 I’ve started a debug session, turned off as many of those advanced features as it would let me, and am currently tailing messages.
ethtool gave the following error:
Cannot get device udp-fragmentation-offload settings: Operation not supported
Cannot get device udp-fragmentation-offload settings: Operation not supported
Actual changes:
scatter-gather: off
	tx-scatter-gather: off
tcp-segmentation-offload: off
	tx-tcp-segmentation: off
	tx-tcp6-segmentation: off
generic-segmentation-offload: off [requested on]
generic-receive-offload: off
The only text relating to the device in messages (first two lines are for a different driver in the kernel, right?):
e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
e1000: Copyright(c) 1999-2006 Intel Corporation.
e1000e: Intel(R) PRO/1000 Network Driver - 3.2.6-k
e1000e: Copyright(c) 1999-2015 Intel Corporation.
e1000e 0000:00:1f.6 0000:00:1f.6 (uninitialized): registered PHC clock
e1000e 0000:00:1f.6 eth0: (PCI Express:2.5GT/s:Width x1) MY:MA:CA:DD:RE:SS
e1000e 0000:00:1f.6 eth0: Intel(R) PRO/1000 Network Connection
e1000e 0000:00:1f.6 eth0: MAC: 12, PHY: 12, PBA No: 1000FF-0FF
e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Looking at the output of ifconfig -a for the device while the deploy is running:
eth0      Link encap:Ethernet  HWaddr MY:MA:CA:DD:RE:SS
          inet addr:10.0.0.179  Bcast:10.0.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2923202 errors:0 dropped:657 overruns:0 frame:0
          TX packets:493054 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4385852212 (4.0 GiB)  TX bytes:50905921 (48.5 MiB)
          Interrupt:16 Memory:ed200000-ed220000
That dropped RX packets sure looks suspicious!
Is there anything else I can poke while the deploy is running? I’m prepared to abandon this deploy and twist other knobs…
-
The deploy of the main partition is finished and I’m holding off on finishing it up to get some more stats.
There haven’t been any substantive additional lines in /var/log/messages.
ifconfig -a shows a couple thousand dropped packets.
ethtool -S eth0 shows:
NIC statistics:
     rx_packets: 14019324
     tx_packets: 2266954
     rx_bytes: 21033100462
     tx_bytes: 236885691
     rx_broadcast: 845
     tx_broadcast: 4
     rx_multicast: 14
     tx_multicast: 0
     rx_errors: 0
     tx_errors: 0
     tx_dropped: 0
     multicast: 14
     collisions: 0
     rx_length_errors: 0
     rx_over_errors: 0
     rx_crc_errors: 0
     rx_frame_errors: 0
     rx_no_buffer_count: 0
     rx_missed_errors: 2672
     tx_aborted_errors: 0
     tx_carrier_errors: 0
     tx_fifo_errors: 0
     tx_heartbeat_errors: 0
     tx_window_errors: 0
     tx_abort_late_coll: 0
     tx_deferred_ok: 0
     tx_single_coll_ok: 0
     tx_multi_coll_ok: 0
     tx_timeout_count: 0
     tx_restart_queue: 0
     rx_long_length_errors: 0
     rx_short_length_errors: 0
     rx_align_errors: 0
     tx_tcp_seg_good: 0
     tx_tcp_seg_failed: 0
     rx_flow_control_xon: 0
     rx_flow_control_xoff: 0
     tx_flow_control_xon: 0
     tx_flow_control_xoff: 0
     rx_csum_offload_good: 14018965
     rx_csum_offload_errors: 0
     rx_header_split: 0
     alloc_rx_buff_failed: 0
     tx_smbus: 1
     rx_smbus: 46
     dropped_smbus: 0
     rx_dma_failed: 0
     tx_dma_failed: 0
     rx_hwtstamp_cleared: 0
     uncorr_ecc_errors: 0
     corr_ecc_errors: 0
     tx_hwtstamp_timeouts: 0
     tx_hwtstamp_skipped: 0
rx_missed_errors corresponds roughly with the ifconfig dropped-packets count. (As I understand it, on the e1000e that counter means the NIC’s receive FIFO filled before the host could drain it, so frames were dropped at the adapter.)
-
After closely watching the deploy process a few times with statistics resets in between, I can confirm that 16-24 link-down events per port is normal, given the number of boots and reboots involved, including the Snap-In runs.
Some of these machines still have their BIOS date set incorrectly, which makes KMS activation fail, so the 24 count includes the initial Snap-In to activate, then the subsequent reboots and a re-run of the Snap-In to activate properly once the time is coherent.
I’m going to run another debug session soon and this time I’m going to increase the RX Ring Buffer to maximum - I’ve seen some chatter about this helping mitigate dropped packets with the e1000e card.
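For anyone following along, the ring sizes can be checked and raised with ethtool; 4096 is a common e1000e maximum, but check the “Pre-set maximums” section of the -g output before setting it:
ethtool -g eth0
ethtool -G eth0 rx 4096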
-
Changing the ring buffer size to maximum didn’t do anything to help the speed, and the number of dropped packets is climbing.
In the meantime, I’ve also captured a new image from a t410i and deployed it to five machines as unicast. I was getting a solid 5GB/min for all of them, so my server and 10GbE link are working swimmingly. The deploy of all five of those took about seven minutes.
-
@tomierna As basic as this sounds, would it be simple enough to test with a new patch cable?
-
For comparison purposes, from a debug deploy’s shell prompt, I forced a t480’s port to 100Mbit/Full Duplex and then continued the deploy.
It started at roughly 550MB/min, and has settled to 405MB/min.
ethtool -S eth0 shows no increase in rx_missed_errors, but shows a large number of rx_crc_errors: 247402.
The switch port has a similar number of errors shown as “Collision Frames”.
Midstream, I set the port back to 1GbE/auto-negotiate. The switch port agreed that it had re-negotiated at 1000Mb. There has been no increase in deploy speed, but now the CRC errors aren’t increasing; instead the dropped-packets count is climbing slowly (as before with 1000Mbit negotiated).
It seems like even though the physical link negotiates at 1000Mbit, something is throttling the transfer until it settles at around 55Mbit/sec.
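For anyone reproducing this, forcing and un-forcing the link can be done from the FOS shell with ethtool (eth0 assumed); the first line forces 100/full, the second returns the port to auto-negotiation:
ethtool -s eth0 speed 100 duplex full autoneg off
ethtool -s eth0 autoneg on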
To try to test things another way, I created an Ubuntu 18.04 USB boot drive and booted a t480 from that. Then, with my 2GB random-data file on the FOG server, I downloaded it via Firefox. It gave me a solid 100MB/sec.
dmesg on Ubuntu verified it’s using the same e1000e driver as the FOG kernel is using.
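For reference, the driver and version are easy to check on either system; grepping dmesg works, and ethtool -i does too (Ubuntu’s interface name will differ from FOS’s eth0, the one below is just an example):
dmesg | grep e1000e
ethtool -i enp0s31f6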
My next step is to fully install Ubuntu so I can create a 24GB random file to better test a download of that duration and size.
@Tom-Elliott - re: new patch cable - I’ll try, but this was happening with all of the stations, and at least two of the cables were brand new. The Ubuntu test was on the same port and with the same cable, and it was getting proper 1GbE speeds.
-
Sooooo, here’s where it gets weird.
I’m currently deploying to a t480 in debug mode.
It’s showing about 480MB/minute or 64Mbit/sec.
I’ve also got a fast USB3 SSD, formatted as EXT4, connected.
In a shell connected to the t480, I’ve used both cURL and wget to copy my 32GB randomized test file from the FOG server’s web server at 100+MB/sec.
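Roughly like this, with a placeholder server name and URL parameter (the file sits behind the download.php case I added earlier, and the USB3 drive is assumed mounted at /mnt/usb3), watching the rate wget reports as it runs:
wget -O /mnt/usb3/test32g.bin 'http://<fog-server>/fog/client/download.php?file=test32g.bin'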
I think this excludes the network card?
Thinking maybe it is an NFS issue, I used rsync to copy the same image that was being deployed to the internal m.2 SSD over to the external USB3 SSD, so I could see the copy speed. It copied in 3 minutes. That’s a 19GB .img file.
So, it’s not NFS?
How about partimage vs. partclone? Nope: the old t410i image was partimage. The new one is partclone. All five simultaneous unicast deploys from earlier today of the new t410i image were 1GbE speed.
I’m really flummoxed at this point. Right now I’m getting GbE speeds on network copies to the external USB3 SSD while a deploy is running at less than 100Mb/sec writing to the internal m.2 drive.
Is it some sort of incompatibility with the m.2 ssd? How would I test that?
-
I can now rule out the m.2 SSD as being the problem, even while booted into the FOG deploy kernel.
I stopped the deploy, formatted the largest partition with ext4, and then re-did my rsync test from the NFS images share.
Solidly 100MB/s, or 3 minutes for the whole image to copy.
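For completeness, that re-test amounted to something like this (the device, mount point, and image names are placeholders; the assumption is the images share is still mounted at /images, where FOS puts it during a deploy):
mkfs.ext4 /dev/nvme0n1p3
mount /dev/nvme0n1p3 /mnt/m2
rsync --progress /images/<imagename>/d1p2.img /mnt/m2/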
-
I have to hand it to you, you’ve tried about every permutation I could think of.
I finally found my old post about making fog go fast (benchmarking). https://forums.fogproject.org/topic/10459/can-you-make-fog-imaging-go-fast
So now the question in my mind is: is it the network, the linux driver, or the local hard disk? The go-fast post shows the commands needed to test the network and local disk write speeds from a FOS engine debug session.
But I’m suspecting that the issue is related to your collisions and packet drops. I reset the port counters on the switch in the build-up room today and deployed to 10 different systems with no collisions or CRC errors. I’m just wondering: do you see the same errors if you connect the 480 to your core switch, the one with the 10GbE uplinks? It could be the switch too. (??)
For your m.2 test, you can use the dd commands from my post to create a local 1GB file, and compare both the read and write stats to the numbers in my post too. But I’m really leaning towards the network being at fault somehow.
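A minimal version of that disk test, assuming the m.2’s test partition is mounted at /mnt/m2 (oflag=direct and iflag=direct bypass the page cache so the throughput dd reports reflects the drive rather than RAM); write test first, then read:
dd if=/dev/zero of=/mnt/m2/test1g.bin bs=1M count=1024 oflag=direct
dd if=/mnt/m2/test1g.bin of=/dev/null bs=1M iflag=direct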
-
@george1421 I’m wrapping up for today, and I’ll work on it some more Monday.
I’ll read through your post for sure, so thanks for that!
The collisions/CRC errors only happened when I forced the port into 100Mb/Full link mode.
RX packet drops are what accrued while the link was at 1GbE, and there weren’t nearly as many of those.
I’ll test again next week, but I don’t think the dropped packets counter went up when I was doing straight network copies - I think that counter only went up during the deploys.
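An easy way to confirm that will be to watch the counter live from a second shell during both a deploy and a straight network copy, something like:
watch -n1 sh -c 'ethtool -S eth0 | grep rx_missed_errors'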
I’m satisfied that the m.2 is not the bottleneck, based on my final test today.
You asked me to connect to my core switch, but the topology is much flatter than that: Server 10GbE Fiber -> Imaging Station Switch -> Imaging client machines. There is no other network hardware in between.
-
@tomierna said in Slow Unicast Deploy on new Machines:
You asked me to connect to my core switch, but the topology is much flatter than that: Server 10GbE Fiber -> Imaging Station Switch -> Imaging client machines. There is no other network hardware in between.
The 480s did this before you upgraded to the 10GbE links too?
-
@george1421 Yes, there was a single GbE connection between the server and the previous switch, an unmanaged Netgear GS116.
The first set of deploys I tried were similarly slow on the previous switch. The project to change out the switch for a managed one and add the 10GbE link was long planned, but since I couldn’t get any info out of the GS116, I figured having a management console would help debug things.
Doing five unicast t410i’s each at gigabit speeds makes me think the 10GbE link and VM are not the problem.
-
@tomierna You’ve done a great job debugging that issue and trying to nail it down.
@tomierna said in Slow Unicast Deploy on new Machines:
dmesg on Ubuntu verified it’s using the same e1000e driver as the FOG kernel is using
Ubuntu, just as other distros do, adds many custom kernel patches. Possibly one of those patches fixes exactly the issue you see on those machines, be it the network driver, m.2 drivers, or whatever! Within FOS (the FOG mini linux OS) we use a plain vanilla kernel with a handful of patches added. Whatever you find, we are happy to add a patch to our kernels to help you make this run faster!