
    Hardware upgrades for server

• george1421 Moderator @tesparza

@tesparza Those numbers are not bad. They are a little lower than expected, but more than adequate. With those numbers, that disk subsystem can produce a theoretical max of about 13 GB/min, so your bottleneck is not the disk subsystem.
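The conversion from a raw disk benchmark to the GB/min figure FOG reports is simple arithmetic. A rough sketch (the 220 MB/s input is an assumed benchmark result, since the actual numbers are upthread; substitute your own):

```python
# Convert a disk benchmark result (MB/s) into the GB/min units FOG uses.
# 220 MB/s is an assumed example figure, not tesparza's actual result.
disk_mb_per_s = 220
gb_per_min = disk_mb_per_s * 60 / 1000  # 60 s/min, 1000 MB/GB
print(f"{gb_per_min:.1f} GB/min")       # 13.2 GB/min
```

A disk that sustains roughly 220 MB/s sequential throughput is what lands you in the ~13 GB/min range quoted above.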

      The next step is to check the network stack. Ideally we’d like to get a computer connected to the same physical network switch as your FOG server.

Please help us build the FOG community with everyone involved. It's not just about coding: we also need people to test things, update documentation, and most importantly work on uniting the community of people enjoying and working on FOG!

• tesparza @george1421

@george1421 Okay, I'm going to try to capture again; this time I'm directly connected to the same switch as the FOG server (1 Gb/s link).

• george1421 Moderator @tesparza

@tesparza Understand that capture rates will differ from deployment rates. On capture you take a penalty in exchange for faster deployments.

I haven't had a chance to get back to this thread, but the next step is to test the network stack. Here is the concept: we will use a target computer connected to the same switch as your FOG server. We will register the target computer, then schedule a debug capture or deploy (it doesn't matter which). Once we are at the command prompt, we will manually mount the /images/dev share from the FOG server and then use the iperf3 utility to measure bandwidth between the target computer and the FOG server. That will give us an idea of bandwidth availability. Then you will repeat the same process from the far end of your network to see if there is a difference.
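The steps above would look roughly like this from the debug task's shell on the target computer. This is a sketch, not the promised official instructions; the server IP is a placeholder for your FOG server's address:

```shell
# Run from the debug task's shell on the target computer.
# 192.168.1.10 is a placeholder -- use your FOG server's IP.

# 1. Manually mount the /images/dev share from the FOG server
mkdir -p /mnt/imagesdev
mount -t nfs -o nolock 192.168.1.10:/images/dev /mnt/imagesdev

# 2. Measure raw TCP bandwidth with iperf3
#    (start a listener on the FOG server first: iperf3 -s)
iperf3 -c 192.168.1.10
```

Repeating the `iperf3 -c` run from a computer at the far end of the network shows whether the intermediate switches are eating your bandwidth.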

The last bit is testing NFS performance, but let's see what the network tests tell us first. I need to write up clear instructions for the above so we get the numbers we expect.


• tesparza @george1421

@george1421
My core switches are 1 Gb/s and my classroom switches are 100 Mb/s. That's the bottleneck. I can't do anything about it until I get my upgrades later this year.

• george1421 Moderator @tesparza

@tesparza OK, knowing that, I would expect to see roughly 700 MB/min transfer rates using FOG.
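That ~700 MB/min estimate falls out of simple arithmetic on the 100 Mb/s link speed (the 90% efficiency factor below is an assumption to account for protocol overhead, not a measured FOG figure):

```python
# 100 Mb/s link: bits to bytes, seconds to minutes, minus overhead.
link_mbit_per_s = 100
mb_per_s = link_mbit_per_s / 8     # 12.5 MB/s theoretical
mb_per_min = mb_per_s * 60         # 750 MB/min theoretical
practical = mb_per_min * 0.9       # assumed ~90% efficiency
print(practical)                    # 675.0, in the ballpark of ~700 MB/min
```

The same arithmetic on a 1 Gb/s link gives ~6.75 GB/min, which matches the ~8 GB/min ballpark reported later in the thread once you allow for measurement slack.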

              So what can you do?

1. Upgrade your network <smirk>
2. Change your image compression from standard gzip to zstd and set the compression level to 11-15. zstd is a newer, tighter, and faster-decompressing tool than gzip. The tighter you can pack the image, the easier it will be on bandwidth. But it's also a sliding scale: the tighter you pack the data, the more CPU the client will use during image decompression.

We don't have any baseline numbers for 100 Mb/s networking, so you will have to find the right fit for your setup. In your case a faster server won't help. If you have multiple computers to image at the same time, I would try multicasting the image to several at once. Since you don't have speed, you have to manage the quantity.
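The bandwidth trade-off in point 2 can be illustrated with rough numbers. Everything here is assumed for illustration (image size, compression ratios, and usable link rate are not measured FOG figures):

```python
# On a network-bound link, transfer time scales with compressed size,
# so a tighter compressor wins even if it compresses more slowly.
# Assumed: a 20 GB raw image over a 100 Mb/s link (~675 MB/min usable).
raw_gb = 20
link_mb_per_min = 675

for label, ratio in [("gzip (assumed 2.0:1)", 2.0),
                     ("zstd -11 (assumed 2.6:1)", 2.6)]:
    compressed_mb = raw_gb * 1000 / ratio
    minutes = compressed_mb / link_mb_per_min
    print(f"{label}: {compressed_mb:.0f} MB, ~{minutes:.0f} min on the wire")
```

With those assumed ratios, the tighter zstd image spends about three fewer minutes on the wire per client, which is exactly the "pack it tighter to save bandwidth" argument above.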


• tesparza @george1421

@george1421 Yeah, I just tried a deployment on the switch where the FOG server is connected and I'm getting 8 GB/min.
I'm just going to have to wait for the upgrade we are getting. For now I'll register the hosts and use the snapins; multicast and cloning will have to wait until later this year. Thank you so much, guys.

• george1421 Moderator @tesparza

@tesparza On the plus side, you know that your server is fine and up to the job. No expense needed there.


• VincentJ Moderator

                    I use a VM as my central fog server.

The storage 'nodes' are NAS boxes (usually FreeNAS), so if you have a Synology, FreeNAS, or QNAP unit, you could set those up as the place the data actually moves from.

My FOG server is on the other side of an IPsec VPN, so I cannot pull images directly from it.

• compman @george1421

@george1421 zstd decompression speed is supposed to stay more or less the same regardless of the compression level; only compression becomes slower.

• Junkhacker Developer @compman

@compman While that's true, in practice zstd at a higher compression level gives a faster overall process when the bottleneck is the network.

Junkhacker
We are here to help you. If you are unresponsive to our questions, don't expect us to be responsive to yours.

                        Copyright © 2012-2024 FOG Project