
    Posts made by VincentJ

    • RE: Hardware upgrades for server

      I use a VM as my central fog server.

      The storage ‘nodes’ are NAS boxes (usually FreeNAS), so if you have a Synology, FreeNAS, or QNAP unit you could just set those up as the machines the data actually moves from.

      My FOG server is on the other side of an IPsec VPN, so I cannot pull images directly from it.

      posted in General
    • RE: Location Plugin 'ID Must be set to edit'

      git checkout dev-branch
      git pull

      Now running 1.5.0 RC11.
      Interesting new GUI.

      I added a location - it said it was successful, however it seems no location was actually created.
      When I clicked to list locations, none appeared.

      I uninstalled the plugin and reinstalled it, and it now works.

      Thank you for the quick reply 🙂

      posted in Bug Reports
    • Location Plugin 'ID Must be set to edit'

      ‘ID Must be set to edit’

      Getting this error when trying to add a location on my new central server.

      Using FOG from Git, SVN revision 6077.

      posted in Bug Reports
    • RE: ZSTD Compression

      @Junkhacker Yes it was. All of my images are.

      posted in Feature Request
    • RE: ZSTD Compression

      So… deploy with the old FOG, then capture with the new FOG to a different image definition set to zstd 11.

      I noticed that during the upload the screen still says:
      Starting to clone device (/dev/sda2) to image (/tmp/pigz1)

      The image is Multiple Partition Single Disk (non-resizable).

      My old image has two files… but the new one has three…

      I am pulling the image off the NAS now to unpack it and test it, but I thought I would pitch in on what I’ve seen so far.

      SVN revision 6066

      posted in Feature Request
    • RE: ZSTD Compression

      We already have the checkbox for ‘legacy images’ which the admin can use… There’s no reason the checkbox on the image couldn’t say which compression method it’s using. A rough sketch of what the deploy side could do with that is below.
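
      Purely to illustrate the idea - the variable names, partition file name, and target device here are made up, and FOG’s real init scripts will be structured differently - the deploy path could branch on a per-image compression field something like this:

      # Hypothetical sketch: $imgFormat would come from the image record,
      # $img and d1p2* are placeholder names for the image directory/files.
      case "$imgFormat" in
          zstd) decomp="zstd -d -c -T0" ;;   # zstd, streaming decompress
          *)    decomp="pigz -dc"       ;;   # current gzip/pigz behaviour
      esac
      cat /images/$img/d1p2* | $decomp | partclone.restore -s - -O /dev/sda2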

      posted in Feature Request
    • RE: ZSTD Compression

      What is in that image? 2.6GB compressed is very small. Does that image download in under a minute normally?

      I have a base Windows 10 + updates image I can also try. The one I used in my numbers previously had applications in it for a complete system. I will see if I can get that to compress down to something similar.

      While my image is a lot bigger, if I scale yours up to the size of mine, I am saving a lot more space.

      posted in Feature Request
    • RE: ZSTD Compression

      @Tom-Elliott Thanks for putting it into the init.

      Would it be as simple as searching through the code for the imaging commands and changing them to use zstd instead of pigz, or would there be more complicated things involved due to the way the commands are generated? (See the sketch at the end of this post.)

      Do you know whether most people use multicast or just do multiple unicasts for deployments? I have never got multicast to work fully and always end up with each client downloading on its own. I have usually had my server set to 4 clients at once, except when I had 10GbE and 2Gbit links between the MDF and IDF… On that machine I used 8, and with ZFS caching I had no problems with the disk I/O of so many transfers.

      If we can get improvements by increasing those numbers, then the effort to speed up people’s deployments becomes a bit more worthwhile.

      As for uploading… I also have to upload every month or so, and with one of my clients I have a 2-hour window to do all maintenance, so uploading sometimes gets delayed as it can take a considerable amount of time.

      The reduced file size would also help, in my case, by reducing the sync time between sites over the WAN.

      As people’s machines become more powerful we can scale with them instead of being held back by the lack of speed in pigz. 10GbE is coming down in price, and SSDs/NVMe/HDDs are getting better all the time.
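
      To make the first question concrete, this is the kind of swap I have in mind on the capture side - a sketch only, assuming the generated commands pipe partclone output through a compressor (the partition, level, and output path here are placeholders):

      # Current style (gzip via pigz):
      partclone.ntfs -c -s /dev/sda2 -o - | pigz -6 > /images/dev/win10/d1p2.img
      # Possible zstd version (level 11, -T0 = use all threads):
      partclone.ntfs -c -s /dev/sda2 -o - | zstd -c -11 -T0 > /images/dev/win10/d1p2.img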

      posted in Feature Request
    • RE: ZSTD Compression

      Maybe you just saw the note about 1 vCPU. I only reduced to 1 vCPU because the numbers with 4 vCPUs were all so close together.

      It also might help to simulate a ‘low-end’ machine…

      posted in Feature Request
    • RE: ZSTD Compression

      The version of zstd I’ve been using uses all my threads 🙂

      posted in Feature Request
    • RE: ZSTD Compression

      1 vCPU at 1.6GHz - the system can no longer saturate gigabit over network shares…
      Down from 110MB/s to 82MB/s.

      Compression - Compressed size - Decompression time
      zstd lvl1 - 7,940,779KB - 131 seconds
      zstd lvl3 - 7,420,268KB - 134 seconds
      zstd lvl11 - 6,967,155KB - 139 seconds
      zstd lvl22 - 6,214,702KB - 157 seconds

      pigz.exe --keep -6 a:\d1p2 - 7,535,149KB - 247 seconds

      On my quad-core VM, pigz -6 only managed 50MB/s on decompression; zstd level 11 on a single-core VM hits the same 50MB/s…
      On the single-core VM, pigz -6 is only 30MB/s; the lowest zstd gets, at level 22, is 39.5MB/s.

      If we use the single-core numbers, writing the whole image out in 247 seconds (which isn’t too much faster than expected anyway) is around 66MB/s to disk; with zstd 11, writing it in 139 seconds is 117MB/s. Most SATA disks should be able to do this… It will be a push for some 2.5" disks… (I checked numbers for 2.5" and 3.5" WD Greens.)
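
      (Those MB/s figures are just the unpacked size from my earlier tests divided by the decompression time - a quick shell check:)

      echo $(( 16390624 / 247 ))   # ~66358 KB/s, about 66MB/s for pigz -6
      echo $(( 16390624 / 139 ))   # ~117918 KB/s, about 117MB/s for zstd lvl 11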

      posted in Feature Request
    • RE: ZSTD Compression

      So, both of us have results that show zstd decompressing quicker and having better ratios.

      I’m going to reconfigure my VM to only have 1 vCPU at 1.6GHz to see if I can get more useful decompression results.

      I redid the pigz -3 decompression test twice to confirm it was slower than the others… Not what I was expecting, but that is what happened.

      In my compression tests the standard pigz -6 is beaten on ratio by zstd lvl3, which also completes 443 seconds faster… We could go up to zstd lvl11 and still be 93 seconds quicker while saving around 550MB.

      posted in Feature Request
    • RE: ZSTD Compression

      So…

      A 16,390,624KB file, unpacked from the compressed Windows 10 image, sitting on a 34GB RAMdisk.

      Copying the file on the RAMdisk runs at 740MB/s, so that is well above what we need for imaging for most people.

      Let’s try some things to get some numbers.

      Compression - Compressed size - Compression time - Decompression time
      zstd lvl1 - 7,940,779KB - 50 seconds - 38 seconds
      zstd lvl3 - 7,420,268KB - 75 seconds - 40 seconds
      zstd lvl5 - 7,286,951KB - 128 seconds - 40 seconds
      zstd lvl8 - 7,070,670KB - 261 seconds - 41 seconds
      zstd lvl11 - 6,967,155KB - 425 seconds - 41 seconds
      zstd lvl14 - 6,942,360KB - 674 seconds - 42 seconds
      zstd lvl17 - 6,781,375KB - 1,618 seconds - 42 seconds
      zstd lvl20 - 6,471,945KB - 2,416 seconds - 43 seconds
      zstd lvl22 - 6,214,702KB - 3,970 seconds - 45 seconds

      pigz.exe --keep -0 a:\d1p2 - 16,393,125KB - 72 seconds - 80 seconds
      pigz.exe --keep -3 a:\d1p2 - 7,783,303KB - 292 seconds - 158 seconds (157 seconds)
      pigz.exe --keep -6 a:\d1p2 - 7,535,149KB - 518 seconds - 149 seconds
      pigz.exe --keep -9 a:\d1p2 - 7,512,046KB - 1,370 seconds - 149 seconds

      Windows 10 Pro, 4 vCPUs, 42GB RAM with a 34GB RAMdisk.
      Host: XenServer 7.0, dual E5-2603 v3, 64GB RAM, HDD RAID 1.
      Other VMs moved to the other hosts in the pool.

      Decompression with pigz seems not to use all the CPU… around 50%…
      Compression does use 100% CPU.
      Decompression with zstd does use all the CPU, but most levels were around 400MB/s, so possibly I’m hitting some other limit.
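
      If anyone wants to reproduce this with less manual work, the standalone zstd CLI has a built-in benchmark mode (I used the 7-Zip zstd build above, so this is an alternative rather than exactly what I ran):

      # Benchmark levels 1 through 22 on the unpacked partition file;
      # prints ratio plus compression/decompression speed for each level.
      zstd -b1 -e22 d1p2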

      posted in Feature Request
    • RE: ZSTD Compression

      🙂 I have more results in progress on the RAMdisk. Hopefully I’ll get them done tonight.

      posted in Feature Request
    • RE: ZSTD Compression

      I am trying to set up something to test speed with local disks and a RAMdisk…

      I have a copy of pigz for Windows and the zstd build of 7-Zip…

      I’m setting up a quad-core VM on one of my hosts with over 32GB RAM. The host has dual E5-2603 v3 CPUs and its storage is HDD RAID 1.

      Which compression levels in pigz do you want me to try and test?

      I am not entirely sure this gets around all of the speed-related problems that could make the results less than ideal… but it’s the best I can do.

      posted in Feature Request
    • RE: Synology NAS as FOG Storage node

      If you set up NFS and FTP, then it should work perfectly.

      I use Synology and FreeNAS boxes as storage nodes. My FOG server doesn’t even serve images.
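
      For reference, the NFS side is roughly the stock FOG exports below - this is a from-memory sketch, so check your own /etc/exports (or the equivalent NFS share settings in the Synology/FreeNAS UI) - plus an FTP account that can write to /images:

      # /etc/exports (or the NAS UI equivalent) for a FOG storage node
      /images     *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=0)
      /images/dev *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=1)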

      posted in Tutorials
    • RE: ZSTD Compression

      https://code.facebook.com/posts/1658392934479273/smaller-and-faster-data-compression-with-zstandard/

      They did a comparison between the gzip CLI and the zstd CLI; compression with zstd was around 5 times faster and decompression was around 3.5 times faster…

      zstd is tunable, which is why I included four levels, all of which beat gzip -9 for compression ratio. If we use zstd, we can deliver the image faster (because it’s smaller) and decompress the data faster once it arrives at the client… so where is the loss?

      I don’t have a testing infrastructure I can use at the moment that would yield verifiable speed results. My storage array is not built for IOPS, and there would be live VMs also consuming bandwidth on the array and the network, which would affect the results. I would also be reading an image off a drive while trying to write the same image to the same drive within a VM.

      posted in Feature Request
    • RE: ZSTD Compression

      Disk image of a Windows 10 system with some applications - 7,512,046KB (pigz -9)
      Unpacked - 16,390,624KB (I can’t unpack further with the tools on the system)
      zstd lvl5 - 7,286,951KB
      zstd lvl11 - 6,967,155KB
      zstd lvl17 - 6,781,375KB
      zstd lvl22 - 6,214,702KB

      I tested in a VM so compression/decompression times/speeds are not useful measurements in my case.

      posted in Feature Request
    • ZSTD Compression

      Is there any way we could drop zstd compression into FOG and see if we get faster imaging and better compression ratios?

      http://facebook.github.io/zstd/
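
      For a quick feel before touching FOG at all, comparing the two tools on an existing uncompressed image file is enough. A sketch (file names are placeholders; -T0 lets zstd use every thread):

      time pigz -6 -k -c d1p2 > d1p2.gz        # current-style gzip compression
      time zstd -11 -T0 -k d1p2 -o d1p2.zst    # candidate zstd level
      ls -l d1p2.gz d1p2.zst                   # compare the resulting sizes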

      posted in Feature Request
    • RE: AD Join Not Functioning (Code 87)

      Strange update…

      Four VMs that I’ve been playing with.

      Two of them got manually joined to the domain.

      I reimaged the VMs, as I had been playing with the registry to no avail… and suddenly two were able to join the domain via the client… I added computer objects in AD for the last two, and the remaining two also joined after a few minutes.

      Tested the dedicated FOG user as well… success at domain join.

      It seems that problem is resolved and another has poked its head up. The client does not seem to be able to join the domain without a pre-staged computer object - even when FOG has the domain administrator’s credentials to join the domain.
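
      The workaround for now is to pre-stage the account before imaging. A hedged example with the AD PowerShell module (the name and OU path are examples only; run on a DC or a machine with RSAT):

      # Pre-stage the computer account so the FOG client can then join it to the domain
      New-ADComputer -Name "LAB-VM-01" -Path "OU=Workstations,DC=example,DC=local"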

      posted in Windows Problems