
Slow Deploy Speed.

FOG Problems
Dragnous
Sep 23, 2014, 7:13 PM

[quote=“VincentJ, post: 36878, member: 8935”]Not exactly solving the problem of the slow speed, but could you do twice as many with an extra storage node?

I’ve found with multiple clients there is a point at which the slowdown really does ramp up. would be interesting if someone had FOG on SSD and could see if the same thing occurred.[/quote]

Yeah, I am starting to think machines with only 10/100 Ethernet may be my problem. When I was imaging the SL510s they had 10/100/1000 Ethernet, so the speeds seemed good.

VincentJ (Moderator)
Sep 23, 2014, 9:08 PM

On machines with 10/100 I would expect about 500 MB/minute. On my gigabit machines I can push 5+ GB per minute to a few at a time… but if I pile on more, it can throw in the towel and really drag the speed down.
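A rough sanity check on those figures (my own back-of-the-envelope sketch, not part of the original post): converting raw link speed to MB per minute shows that both numbers sit just under what the wire can carry at all.

```python
# Rough ceiling on imaging throughput from link speed alone.
# The 0.9 "efficiency" factor is an assumption standing in for
# protocol overhead; real results also depend on disks and client count.

def max_mb_per_minute(link_mbits: float, efficiency: float = 0.9) -> float:
    """Approximate MB/minute ceiling for a given link rate in Mbit/s."""
    mb_per_sec = link_mbits / 8 * efficiency   # bits -> bytes, minus overhead
    return mb_per_sec * 60

print(f"100 Mbit/s ~ {max_mb_per_minute(100):.0f} MB/min")          # ~675 MB/min
print(f"1 Gbit/s   ~ {max_mb_per_minute(1000) / 1000:.2f} GB/min")  # ~6.75 GB/min
```

So 500 MB/min on 10/100 and 5+ GB/min on gigabit are both close to the practical limit of the link itself.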

Also consider checking the disk I/O statistics on your drives… HDDs aren’t good at random I/O, and with enough sequential transfers running at once, the combined access pattern becomes effectively random.
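If you want to put numbers on the disk side, here is a minimal monitoring sketch (mine, assuming a Linux FOG server whose data disk is `sda`; `iostat -x` from the sysstat package reports the same counters). High IOPS with only modest MB/s is the "sequential turned random" pattern described above.

```python
import time

DEVICE = "sda"        # assumption: change to your FOG server's data disk
SECTOR_BYTES = 512    # /proc/diskstats counts 512-byte sectors

def read_stats(device: str):
    """Return (reads completed, sectors read) for one block device."""
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return int(fields[3]), int(fields[5])
    raise ValueError(f"{device!r} not found in /proc/diskstats")

r1, s1 = read_stats(DEVICE)
time.sleep(5)
r2, s2 = read_stats(DEVICE)

print(f"read IOPS : {(r2 - r1) / 5:.0f}")
print(f"read MB/s : {(s2 - s1) * SECTOR_BYTES / 5 / 1e6:.1f}")
```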

Jaymes Driver (Developer)
Sep 24, 2014, 11:34 AM

Not to waste a post, I hate doing this, but I am in the same boat as Vincent here. On my gigabit equipment (so long as I keep the 10/100 gear out of the way) I can push almost 5 GB/min, and sometimes 6.5, but that’s very rare and only with NO ONE in my building but me.

I have some labs that have not been upgraded and still sit at 10/100, and I only image them in sets of 10 (30 computers, 3 sessions) to keep the load down. I also take my FOG machine to that lab to take care of it. I know this isn’t an option in some installs; just letting you know how I deal with the issue 🙂

Keep testing and let us know; we will do what we can to help you fix the issue.

WARNING TO USERS: My comments are written completely devoid of emotion; do not mistake my concise, to-the-point manner for a personal insult or attack.

Dragnous
Sep 24, 2014, 3:36 PM

So I dumped multicast, as the speeds were terrible; I’ve been using UDPCast and have been able to push more at a time.

I am going to duplicate and add a node so I can have two different sets going at a time, and see if I can push 32 at a time with two nodes using UDP.

Edit: can I connect to the FOG database using a MySQL client so I can export a more custom report? If so, what do I need to do?
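One way to do this, as a hedged sketch: FOG keeps its data in a MySQL database (named `fog` by default), and any MySQL client with valid credentials can query it and dump a custom report. The hostname, account, and `hosts` columns below are illustrative assumptions; check the real schema with SHOW TABLES and DESCRIBE first.

```python
# Hedged sketch: export a simple custom report from the FOG database to CSV.
# Requires the mysql-connector-python package. Credentials, hostname, and
# the column names are placeholders -- verify against your own install.
import csv
import mysql.connector

conn = mysql.connector.connect(
    host="fog-server.example.com",   # placeholder FOG server address
    user="fogreport",                # placeholder read-only MySQL account
    password="secret",
    database="fog",                  # default FOG database name
)

cur = conn.cursor()
cur.execute("SELECT hostID, hostName, hostImage FROM hosts ORDER BY hostName")

with open("host_report.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["hostID", "hostName", "hostImage"])
    writer.writerows(cur.fetchall())

cur.close()
conn.close()
```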

Trevelyan
Sep 25, 2014, 1:31 PM

Just as a note: if you are multicasting to a set of 10/100 hosts from a gigabit server, I reckon you will get slow speeds as a result of a whole load of re-transmits. If you were to force the server to run at 10/100, I reckon you would get the same performance as you would for unicast.

b.martin
Sep 27, 2014, 9:37 AM

Hello, after several tests on my FOG QNAP server, I noticed that I had to reduce the Gzip compression level in FOG Settings\FOG Boot Settings. Set it to 7 instead of the default 9. Download and upload speeds are much better. Example: 3 minutes to download a backup of a 5,500 MB disk for Windows XP, or 15 minutes for a 42,000 MB disk for Windows 7.
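The trade-off described here is easy to measure for yourself. Below is a minimal sketch (using Python's gzip module on an arbitrary sample file rather than FOG's own imaging pipeline) that compares compression time and output size at levels 7 and 9; typically the size difference is small while the time difference is not, which also speaks to the space question in the reply below.

```python
# Compare gzip levels 7 and 9: compression time vs. compressed size.
# Illustrative only -- FOG compresses images itself, but the
# level-vs-speed trade-off behaves the same way.
import gzip
import time

SAMPLE = "sample.img"   # placeholder: any reasonably large file

with open(SAMPLE, "rb") as f:
    data = f.read()

for level in (7, 9):
    start = time.perf_counter()
    compressed = gzip.compress(data, compresslevel=level)
    elapsed = time.perf_counter() - start
    ratio = 100 * len(compressed) / len(data)
    print(f"level {level}: {elapsed:6.1f} s, {ratio:5.1f}% of original size")
```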

VincentJ (Moderator)
Sep 27, 2014, 2:46 PM

Are your client CPUs low-end? Decompression is usually quite easy for CPUs to do fast (faster than gigabit, anyway).

Have you looked at how much space you lose from making that change?

b.martin
Sep 27, 2014, 5:26 PM

Little space is wasted, but a lot of throughput is gained. Run your own tests to verify it.

Steven B
Sep 30, 2014, 5:09 PM

[quote=“VincentJ, post: 36878, member: 8935”]Not exactly solving the problem of the slow speed, but could you do twice as many with an extra storage node?

I’ve found with multiple clients there is a point at which the slowdown really does ramp up. would be interesting if someone had FOG on SSD and could see if the same thing occurred.[/quote]

I’m running FOG and imaging a bunch of HP t820s with 128 GB SSD drives. They are Linux-based, so the upload/download is raw, not resizable. I can do one system in about 15 minutes, averaging 7 GB/min on the push down. I was able to push 16 systems last week and averaged 4.6 GB/min; this week the throughput to those 16 systems drops to about 1 GB/min across all of them.
That is similar throughput to traditional Windows systems, though NTFS performance is better. I’m monitoring eth0 now to see. I may have issues with the university’s network getting UDP traffic to and from other IPs.
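For the eth0 monitoring, a minimal sketch (assuming a Linux machine whose imaging NIC really is named `eth0`; adjust IFACE otherwise) that samples the kernel byte counters once a second and prints throughput, which makes drops like 4.6 GB/min down to 1 GB/min easy to spot while a deploy runs:

```python
# Print per-second rx/tx throughput for one network interface by sampling
# the byte counters under /sys/class/net. Stop with Ctrl-C.
import time

IFACE = "eth0"   # assumption: change if your interface is eno1, enp3s0, etc.

def rx_tx_bytes(iface: str):
    base = f"/sys/class/net/{iface}/statistics"
    with open(f"{base}/rx_bytes") as rx, open(f"{base}/tx_bytes") as tx:
        return int(rx.read()), int(tx.read())

prev = rx_tx_bytes(IFACE)
while True:
    time.sleep(1)
    cur = rx_tx_bytes(IFACE)
    print(f"rx {(cur[0] - prev[0]) / 1e6:7.1f} MB/s   "
          f"tx {(cur[1] - prev[1]) / 1e6:7.1f} MB/s")
    prev = cur
```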

VincentJ (Moderator)
Sep 30, 2014, 8:46 PM

Is your server SSD as well?

The interesting bit would be imaging from a server with an SSD, to allow multiple images to get to multiple clients from different parts of the disk at the same time.

Steven B
Oct 1, 2014, 3:53 PM

[quote=“Steven B, post: 37172, member: 24174”]I’m running FOG and imaging a bunch of HP t820s with 128 GB SSD drives. They are Linux-based, so the upload/download is raw, not resizable. I can do one system in about 15 minutes, averaging 7 GB/min on the push down. I was able to push 16 systems last week and averaged 4.6 GB/min; this week the throughput to those 16 systems drops to about 1 GB/min across all of them.
That is similar throughput to traditional Windows systems, though NTFS performance is better. I’m monitoring eth0 now to see. I may have issues with the university’s network getting UDP traffic to and from other IPs.[/quote]
No, the server is not SSD. The Ubuntu image has four VMs with networking. I have 3 SSD drives in the thin client: 16 GB Windows 7 for Horizon View, and 80 GB with dual-boot Windows 8 / Server 2008.
