
Slow Unicast Deploy on new Machines

FOG Problems
slow deploy unicast 1.5.0
    george1421 Moderator @tomierna
    last edited by george1421 Jul 3, 2018, 10:15 PM

    @tomierna If you have a managed switch, do you see any collisions or other errors when you look at the port counters?

    Tell us a bit more about your FOG server itself.

    Is it physical or virtual?
    What host OS is running on the FOG server?
    How much RAM is in the FOG server?
    What does the disk subsystem look like? Is it a single SATA HDD, an SSD, or RAID?
    If you only deploy a single unicast stream to a single target computer, do you still get 500MB/m transfer rates?
    What happens if you run 2 unicast deployments at the same time, does it change your throughput?
    Are the 410s and 480s in the same firmware mode (BIOS or UEFI)?
    Also make sure the 480s have the latest firmware installed.

    OK also just for clarity: if you use the same network port, do the T480 and T410 provide the same throughput?

    Please help us build the FOG community with everyone involved. It's not just about coding - way more we need people to test things, update documentation and most importantly work on uniting the community of people enjoying and working on FOG!

      tomierna
      last edited by Jul 3, 2018, 10:21 PM

      Deploying t410i gets 5-7GB/min, or around 900Mbit/second.

      Deploying t480 on the same switch port and cable gets 400-500MB/min, or around 60Mbit/second.

      Capture of images is 5-7GB/min from either model. That’s what is so strange.

      Thanks for the note about the newest kernels, I’ll downgrade.

      Re: Partimage vs. Partclone compression, I can try, but I don’t think that accounts for a 15-20x speed differential.

      Re: Perceived transfer rates vs. actuals: the image for our t410i is 24GB in size. The t480 image is 19GB. A t410i will deploy in less than 10 minutes, and a t480 will take over an hour.

        george1421 Moderator @tomierna
        last edited by Jul 3, 2018, 10:31 PM

        @tomierna OK just because I’m a type ‘A’ person.

        500MB/m translates to 8.3MB/s

        A single 100Mb/s link moves about 12.5MB/s theoretical maximum.

        A 1GbE link has a theoretical throughput of 125MB/s.

        (this is still a process of finding out where the problem isn’t. I’m still trying to build a truth table in my head).
        So you can put a T410 and a T480 on the same network jack, deploy the same image to each, with both in the same firmware mode (UEFI or BIOS), and the 410 gets 6GB/m while the 480 gets 500MB/m?
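For reference, the unit conversions quoted above can be sanity-checked with a quick awk one-liner (the figures are the ones from this post):

```shell
# Convert the quoted transfer rates; awk does the floating-point arithmetic.
awk 'BEGIN {
    printf "500 MB/min = %.1f MB/s\n", 500/60      # the slow T480 rate
    printf "100 Mb/s link = %.1f MB/s max\n", 100/8   # fast-ethernet ceiling
    printf "1 GbE link = %.1f MB/s max\n", 1000/8     # gigabit ceiling
}'
```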

          Tom Elliott @george1421
          last edited by Jul 4, 2018, 12:47 AM

          @george1421 Based on this, it would seem, to me, the 480 has a 10/100 NIC (possibly) vs the 410 having a 10/100/1000 NIC?

          Just my thoughts on the whole thing.

          Typically, because of the compression applied, you will see faster than your network speeds, though not by too much. For example, on a 1Gb network (both sides) and using SSD (both sides) you could see 13-18 GB/min, where on a 1Gb network the theoretical (goldilocks?) maximum (translated) would be 7.5 GB/min.

          So compression is important here, since CPU decompression and disk writes are often much faster than the network itself. (This is also partly why Network->Disk is faster than Disk->Disk: a disk-to-disk copy has to spin up and seek back and forth between the source and destination areas, whether on the same disk or not.)
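The 7.5 GB/min figure is just the gigabit wire rate re-expressed per minute; at a hypothetical 2:1 compression ratio (the ratio here is an assumption for illustration, not a measured value) the effective rate lands in the range quoted above:

```shell
# 1 GbE wire rate per minute, and the effective on-disk rate at an
# assumed 2:1 compression ratio.
awk 'BEGIN {
    wire = 1000/8 * 60 / 1000     # 125 MB/s * 60 s, in GB/min
    printf "wire maximum: %.1f GB/min\n", wire
    printf "with 2:1 compression: %.1f GB/min effective\n", wire * 2
}'
```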

          It really seems that the NIC on the 480’s is different than the 410’s, or some other variable. Seeing as things seem normal on one, and not on the other, it really points to the machine being the problem, not something fog is doing.

          Please help us build the FOG community with everyone involved. It's not just about coding - way more we need people to test things, update documentation and most importantly work on uniting the community of people enjoying and working on FOG! Get in contact with me (chat bubble in the top right corner) if you want to join in.

          Web GUI issue? Please check apache error (debian/ubuntu: /var/log/apache2/error.log, centos/fedora/rhel: /var/log/httpd/error_log) and php-fpm log (/var/log/php*-fpm.log)

          Please support FOG if you like it: https://wiki.fogproject.org/wiki/index.php/Support_FOG

            Sebastian Roth Moderator
            last edited by Jul 4, 2018, 6:00 AM

            @tomierna Interesting case you have there. From the information given so far I would suspect the Intel I219-LM NIC and/or its driver to be the problem. But it’s kinda strange you see the slowness only when deploying. I will try to investigate and see if we can find any known driver issues.

            Would be interesting to know if downgrading the kernel as suggested by George will also help with this issue. I doubt it, but sure, give it a go. There is nothing to lose.

              tomierna
              last edited by Jul 5, 2018, 3:28 PM

              Thanks for the responses so far. I’ll try to answer the questions from everyone.

              @george1421 - My FOG Server is a VM on a XenServer (7.3). The VM is running CentOS 7.4.1708, and has 4GB RAM and 2 CPUs allocated. Looking at the memory usage on the server, it doesn’t appear critical, but I have plenty of RAM in the master, so I can certainly try adding more. Disk subsystem is a large number of 2TB drives (24 I think?) in RAID configuration, though I’d have to check the management console to say which config. It’s hardware raid though, and the XenCenter for that VM doesn’t seem to show that it is taxing the disk subsystem. This is a pretty beefy VM server.

              The t410i machines have 7200RPM 500GB drives. The t480 machines have 256GB M.2 SSDs.

              Deploying one t480 ends up between 400-500MB/m.

              Deploying one t410i shows expected throughput from a 1Gb port.

              Deploying multiple t480 (unicast) ends up between 400-500MB/m on each machine.

              I’ve not deployed multiple t410i (unicast) since trading out the switch and going to a 10GbE link to the server, but with the previous hardware, they shared a single 1GbE link to the server and it showed in the statistics.

              Looking at the BIOS on the t410i, I don’t see any UEFI switch, so I presume it’s running in legacy (BIOS) boot.

              With the t480s, UEFI is set to “only” and CSM Support has to be on, otherwise rEFInd complains and forces a “press any key” screen to appear after imaging completes.

              For testing purposes, I changed the boot setting to Legacy Only on one t480, and it hasn’t made a difference in Deploy speed.

              I will check the firmware version, but it is probably current since the machines were built-to-order and were shipped directly to us only a couple of weeks ago.

              Deploying a t410i on a particular network port and then trying a t480 on the same port shows the t410i with proper throughput, and the t480 with very low throughput.

              Re: 1GbE vs 100Mb/s link, I have checked in the switch management interface when a t480 deploy is running, and the link speed is listed as 1GbE.

              @Tom-Elliott - I’m also leaning toward the t480s having some sort of strange issue. It could be BIOS-level, or maybe client kernel level. The fact that it captures at a normal fraction of link speed but deploys at much reduced speed makes me think it’s not the FOG Server.

              @sebastian-roth - I started with whatever client kernel was installed with 1.5.0, and updated to 4.17.0 in an attempt to debug. Should I try going back farther?

              Some additional data and stuff I’ve tried today:

              • The 8-machine multicast that I mentioned starting never completed. It stalled at 22%, and I let it sit for a bit to see if it would recover. It never did.

              • As mentioned above, I changed one to Legacy BIOS mode, and that didn’t change the deploy throughput.

              • I’ve looked through the t480 BIOS config pretty closely, and I don’t see anything related to network that I think would make a difference.

              • Early on when starting to make these machines image smoothly, I had to turn off the IP6 stack for netboot.

                george1421 Moderator
                last edited by Jul 5, 2018, 4:01 PM

                @tomierna Very well then.

                Your FOG server is sufficient and probably can be ruled out as the root of your issues here. I’m also leaning towards something unique with this new hardware.

                Since you were/are using unmanaged switches this is probably not the issue, but we have seen on enterprise managed switches that the “green ethernet” (IEEE 802.3az) settings sometimes get confused and cause the communications to drop into a backup mode. But one would think this would happen in both directions (capture and deploy), not just deploy. Unmanaged switches typically don’t support this green function, so I don’t think that’s the case here.

                  tomierna
                  last edited by Jul 5, 2018, 6:29 PM

                  There was a BIOS update for the t480 machines, but after installing it on one, and running a unicast deploy, it doesn’t seem to have fixed anything.

                  1.12 was the original and 1.14 is the current BIOS, and there is a note about Ethernet instability when net booting before Windows starts, but alas, deploying is still slow.

                  Also, I don’t know if I answered it, but the switch statistics show very few packet or frame errors. I also checked that Green Ethernet was disabled on the switch.

                    george1421 Moderator @tomierna
                    last edited by Jul 5, 2018, 6:35 PM

                    @tomierna While this won’t fix anything, can you go into windows device manager and record the hardware ID here?

                    It should look something like this:

                    PCI\VEN_8086&DEV_1502&SUBSYS_05D21028&REV_06
                    

                    The above is for an Intel 82579LM network adapter. That ID translates to a Linux ID of [8086:1502]. With that ID we can search to see if other Linux folks are seeing similar results. But since the 480s are so new, there may be some undiscovered issue in the Linux driver for that NIC.
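The translation george describes can also be scripted; a sketch using the 82579LM example above (any NIC's hardware ID string from Device Manager works the same way):

```shell
# Extract VEN_xxxx and DEV_yyyy from a Windows hardware ID and print
# them as a lowercase Linux [vendor:device] pair.
hwid='PCI\VEN_8086&DEV_1502&SUBSYS_05D21028&REV_06'
printf '%s' "$hwid" \
  | sed -E 's/.*VEN_([0-9A-F]{4})&DEV_([0-9A-F]{4}).*/[\1:\2]/' \
  | tr 'A-F' 'a-f'
# prints [8086:1502]
```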

                      tomierna @george1421
                      last edited by george1421 Jul 5, 2018, 6:40 PM

                      @george1421

                      The I219-LM shows the following under Hardware Ids in the Details tab of Device Manager:

                      PCI\VEN_8086&DEV_15D7&SUBSYS_225D17AA&REV_21
                      PCI\VEN_8086&DEV_15D7&SUBSYS_225D17AA
                      PCI\VEN_8086&DEV_15D7&CC_020000
                      PCI\VEN_8086&DEV_15D7&CC_0200

                      [MOD Note] linux device translation [8086:15D7] - Geo

                        george1421 Moderator @tomierna
                        last edited by Jul 5, 2018, 6:52 PM

                        @tomierna Once you have Windows up and running, if you download and upload a 1GB+ file, do you get about the same transfer rates (remembering our point of view has changed from the server to the client)? If you have an M.2 SATA/NVMe drive, I would still expect better download-from-the-network results (which is kind of the opposite flow from what you are seeing today with imaging).

                          tomierna @george1421
                          last edited by Jul 5, 2018, 8:52 PM

                          @george1421 - our imaging network is not routed, and I don’t have access to file servers on the rest of our LAN.

                          So, to test download speeds, I created a 2GB (random) file in /var/www/fog/client/ and added a case to download.php to allow me to download directly from the FOG Server with Chrome.

                          I was getting consistently 85-95MB/s.

                          I’m not sure of the best way to test a file upload that would show me speed, short of installing an FTP program and doing SFTP back and forth, but upload speed isn’t a problem in the FOG-booted scenario.
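For what it's worth, one hedged option for an upload number without installing anything extra, assuming sshd on the FOG server is reachable (the `user@fogserver` login is a placeholder):

```shell
# Push 1 GiB of zeros to the server over SSH and discard it there;
# dd reports the achieved throughput on stderr when it finishes.
dd if=/dev/zero bs=1M count=1024 | ssh user@fogserver 'cat > /dev/null'
```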

                            tomierna
                            last edited by Jul 5, 2018, 9:35 PM

                            I reset the statistics in the GC728X Switch before starting the most recent batch of unicast deployments (I’m doing 9 right now).

                             There is a column in the port statistics for the switch called “Link Down Events”, and the counts are climbing. 15% into the deploy, most of the counters are at 11, and some are higher, in the 20s.

                            There is a thread on Spiceworks saying this card was flapping on Cisco switches. The poster ruled out green ethernet. It’s off on my switch too.

                            Tomorrow I will reset the statistics again and then capture an image to see if the link-down events count up during a capture. I’m also preparing a new t410i image, so I’ll be able to test if the ports flap with those machines on either capture or deploy.

                            I’ll try turning off the various auto-negotiation stuff to see if that makes the flapping cease.
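For reference, the auto-negotiation knobs live in ethtool (a sketch to run from a FOS debug shell; `eth0` is an assumption, and note that 1000BASE-T requires auto-negotiation, so forcing the link off autoneg generally means pinning both ends at 100/full):

```shell
# Show the current negotiation state and advertised link modes.
ethtool eth0
# Pin the port at 100 Mb/s full duplex with auto-negotiation off.
ethtool -s eth0 speed 100 duplex full autoneg off
# Put auto-negotiation back afterwards.
ethtool -s eth0 autoneg on
```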

                              george1421 Moderator @tomierna
                              last edited by Jul 5, 2018, 10:49 PM

                               @tomierna Be aware that you WILL see 2 link transitions when PXE booting into the FOS kernel. One happens when the iPXE kernel takes over from the PXE ROM, and the second when FOS takes control of the NIC from iPXE. This is why FOG imaging has an issue with standard STP. For FOG you need to run RSTP on your building switch.

                                tomierna @george1421
                                last edited by Jul 5, 2018, 11:11 PM

                                @george1421 that makes sense, but for one imaging session each port saw at least 21 link down events during this 9-unit unicast.

                                One was 25 link down events, one was 32, and one was 33.

                                Doesn’t that seem a little odd?

                                  tomierna @tomierna
                                  last edited by Jul 5, 2018, 11:13 PM

                                   Also, for what it’s worth, “Spanning Tree State” is set to Disable and operation mode is set to RSTP on the switch.

                                    george1421 Moderator @tomierna
                                    last edited by george1421 Jul 5, 2018, 11:37 PM

                                     @tomierna That IS very strange; once the kernel is booted, you should not see any link transitions.

                                     A bit from the far side: I have no basis for this vision, but I’m seeing something about TCP offloading and ethtool.

                                    This will be digging in the weeds a bit with this one.

                                     1. Set up a debug deploy (tick the debug checkbox before you submit the deploy task on one of these 480s).
                                     2. PXE boot the computer; after a few pages of text it should drop you to a Linux command prompt on the target computer.
                                     3. Look through /var/log to see if there are any error messages related to the network adapter.
                                       [sidebar] If you give root a password with passwd (a simple one like hello) and get the IP address of the FOS Linux with ip addr show, you can then use PuTTY on a Windows computer to connect to the FOS engine. Log in as root with the password hello. Now you can interact with the FOS engine from the comfort of your Windows computer, which also makes it easier to copy and paste text into the FOS engine. [/sidebar]
                                     4. After reviewing the logs and recording anything suspicious, continue on to the next step.
                                     5. Using my vision above, let’s use ethtool to shut off some advanced features of the Intel NIC:
                                       ethtool -K eth0 sg off tso off gro off
                                     6. Now comes the time-consuming bit: by entering fog at the Linux command prompt you can single-step through image deployment. You will need to press enter at each breakpoint. It will be interesting to see whether turning off all of the advanced features of the NIC has any impact on image deployment.

                                     We can do some other benchmarking from the FOS debug prompt, but let’s see how this goes.

                                      tomierna @george1421
                                      last edited by Jul 6, 2018, 1:51 PM

                                      @george1421 I’ve started a debug session, turned off as many of those advanced features as it would let me, and am currently tailing messages.

                                      ethtool gave the following error:

                                      Cannot get device udp-fragmentation-offload settings: Operation not supported
                                      Cannot get device udp-fragmentation-offload settings: Operation not supported
                                      Actual changes:
                                      scatter-gather: off
                                              tx-scatter-gather: off
                                      tcp-segmentation-offload: off
                                              tx-tcp-segmentation: off
                                              tx-tcp6-segmentation: off
                                      generic-segmentation-offload: off [requested on]
                                      generic-receive-offload: off
                                      

                                      The only text relating to the device in messages (first two lines are for a different driver in the kernel, right?):

                                      e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
                                      e1000: Copyright(c) 1999-2006 Intel Corporation.
                                      e1000e: Intel(R) PRO/1000 Network Driver - 3.2.6-k
                                      e1000e: Copyright(c) 1999-2015 Intel Corporation.
                                      e1000e 0000:00:1f.6 0000:00:1f.6 (uninitialized): registered PHC clock
                                      e1000e 0000:00:1f.6 eth0: (PCI Express:2.5GT/s:Width x1) MY:MA:CA:DD:RE:SS
                                      e1000e 0000:00:1f.6 eth0: Intel(R) PRO/1000 Network Connection
                                      e1000e 0000:00:1f.6 eth0: MAC: 12, PHY: 12, PBA No: 1000FF-0FF
                                      e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
                                      

                                       Looking at the output of ifconfig -a for the device while the deploy is running:

                                      eth0      Link encap:Ethernet  HWaddr MY:MA:CA:DD:RE:SS  
                                                inet addr:10.0.0.179  Bcast:10.0.0.255  Mask:255.255.255.0
                                                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                                                RX packets:2923202 errors:0 dropped:657 overruns:0 frame:0
                                                TX packets:493054 errors:0 dropped:0 overruns:0 carrier:0
                                                collisions:0 txqueuelen:1000 
                                                RX bytes:4385852212 (4.0 GiB)  TX bytes:50905921 (48.5 MiB)
                                                Interrupt:16 Memory:ed200000-ed220000
                                      

                                      That dropped RX packets sure looks suspicious!

                                      Is there anything else I can poke while the deploy is running? I’m prepared to abandon this deploy and twist other knobs…

                                        tomierna
                                        last edited by Jul 6, 2018, 3:08 PM

                                        The deploy of the main partition is finished and I’m holding off on finishing it up to get some more stats.

                                        There haven’t been any substantive additional lines in /var/log/messages.

                                        ifconfig -a shows a couple thousand dropped packets.

                                        ethtool -S eth0 shows:

                                        NIC statistics:
                                             rx_packets: 14019324
                                             tx_packets: 2266954
                                             rx_bytes: 21033100462
                                             tx_bytes: 236885691
                                             rx_broadcast: 845
                                             tx_broadcast: 4
                                             rx_multicast: 14
                                             tx_multicast: 0
                                             rx_errors: 0
                                             tx_errors: 0
                                             tx_dropped: 0
                                             multicast: 14
                                             collisions: 0
                                             rx_length_errors: 0
                                             rx_over_errors: 0
                                             rx_crc_errors: 0
                                             rx_frame_errors: 0
                                             rx_no_buffer_count: 0
                                             rx_missed_errors: 2672
                                             tx_aborted_errors: 0
                                             tx_carrier_errors: 0
                                             tx_fifo_errors: 0
                                             tx_heartbeat_errors: 0
                                             tx_window_errors: 0
                                             tx_abort_late_coll: 0
                                             tx_deferred_ok: 0
                                             tx_single_coll_ok: 0
                                             tx_multi_coll_ok: 0
                                             tx_timeout_count: 0
                                             tx_restart_queue: 0
                                             rx_long_length_errors: 0
                                             rx_short_length_errors: 0
                                             rx_align_errors: 0
                                             tx_tcp_seg_good: 0
                                             tx_tcp_seg_failed: 0
                                             rx_flow_control_xon: 0
                                             rx_flow_control_xoff: 0
                                             tx_flow_control_xon: 0
                                             tx_flow_control_xoff: 0
                                             rx_csum_offload_good: 14018965
                                             rx_csum_offload_errors: 0
                                             rx_header_split: 0
                                             alloc_rx_buff_failed: 0
                                             tx_smbus: 1
                                             rx_smbus: 46
                                             dropped_smbus: 0
                                             rx_dma_failed: 0
                                             tx_dma_failed: 0
                                             rx_hwtstamp_cleared: 0
                                             uncorr_ecc_errors: 0
                                             corr_ecc_errors: 0
                                             tx_hwtstamp_timeouts: 0
                                             tx_hwtstamp_skipped: 0
                                        

                                        rx_missed_errors corresponds roughly with the ifconfig dropped packets.

                                          tomierna
                                          last edited by Jul 6, 2018, 3:48 PM

                                           After closely watching the deploy process a few times, with statistics resets in between, I can confirm that 16-24 link-down events is normal, given the number of boots and reboots involved, including Snap-In runs.

                                          Some of these machines still have their BIOS date set incorrectly, and that makes KMS activation not work, so the 24-count includes the initial Snap-In to activate, and then the subsequent reboots and re-Snap-In to activate properly once the time is coherent.

                                           I’m going to run another debug session soon, and this time I’m going to increase the RX ring buffer to its maximum - I’ve seen some chatter about this helping mitigate dropped packets with the e1000e driver.
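For reference, the ring-buffer change can be made from a FOS debug shell with ethtool (a sketch; `eth0` and the 4096 maximum are assumptions - check what the `-g` output actually reports before setting it):

```shell
# Show current vs. pre-set maximum ring sizes for the NIC.
ethtool -g eth0
# Grow the RX ring to the reported maximum (4096 is common for e1000e).
ethtool -G eth0 rx 4096
```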

                                          1 Reply Last reply Reply Quote 0
                                          • 1
                                          • 2
                                          • 3
                                          • 1 / 3
                                          1 / 3
                                          • First post
                                            13/54
                                            Last post

                                          238

                                          Online

                                          12.0k

                                          Users

                                          17.3k

                                          Topics

                                          155.2k

                                          Posts
                                          Copyright © 2012-2024 FOG Project