
    Best posts made by Junkhacker

    • RE: PartImage faster than PartClone?

      @scgsg If you want to optimize for speed, I suggest switching to zstd compression; it makes better use of modern multi-core, multi-threaded processors than the previously used compression types. As for whether partimage or partclone is faster by itself, I consider it a moot point, since partimage has not been under active development in 7 years. partclone might be slower due to its built-in checksums.
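
      If you want to see the difference on your own hardware, here's a quick benchmark sketch (placeholder image file; -T0 tells zstd to use every core, and pigz is multi-threaded by default):

          # compare multi-threaded zstd against pigz/gzip at their default levels
          time zstd -T0 -3 < disk.img > /dev/null
          time pigz -6 < disk.img > /dev/null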

      posted in General
    • RE: Advice on specs for new setup

      @candidom I think one FOG server configured that way should be able to handle it. You might want to consider trying it with one before buying the other two. I was doing 30+ at a time with traditional drives in a RAID 5 at speeds I found quite adequate.

      posted in General
    • RE: Storage Node Pxe not working

      For whatever reason, the fog/service/ipxe/boot.php web address on your storage node wasn't working correctly. There are a lot of reasons that might happen (Apache config problems, PHP config problems, DB connection problems, firewalls, etc.), but if your hosts can reach the primary FOG server, they can just load the one there. This URL points to the PHP file that dynamically generates the text used in the PXE boot menu.
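
      As a quick sanity check (a sketch; substitute your own storage node and primary server addresses), you can request that URL from both boxes and compare what comes back:

          # the same boot.php request against the storage node and the primary FOG server
          curl -i "http://192.168.1.20/fog/service/ipxe/boot.php"
          curl -i "http://192.168.1.10/fog/service/ipxe/boot.php"
          # a 404, a PHP error, or a timeout from the storage node points at the
          # Apache/PHP/DB/firewall issues mentioned above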

      posted in FOG Problems
    • RE: PartImage faster than PartClone?

      @scgsg Since we no longer support capturing with partimage, an image set to partimage is captured using partclone with literally the same code as if it had been properly set to partclone to begin with. I suspect external factors are causing any difference in speed you are seeing.

      posted in General
    • RE: How do you re-compress an image file?

      @Tom-Elliott In fact, in my testing, it was 10% faster.

      posted in General
    • RE: Small Image taking up Big space.

      A good tool for that would be WinDirStat.

      posted in FOG Problems
    • RE: PartImage faster than PartClone?

      @Wayne-Workman Let me save you the trouble of testing:

          case $imgFormat in
              6)
                  # ZSTD Split files compressed.
                  zstdmt --ultra $PIGZ_COMP < $fifo | split -a 3 -d -b 200m - ${file}. &
                  ;;
              5)
                  # ZSTD compressed.
                  zstdmt --ultra $PIGZ_COMP < $fifo > ${file}.000 &
                  ;;
              4)
                  # Split files uncompressed.
                  cat $fifo | split -a 3 -d -b 200m - ${file}. &
                  ;;
              3)
                  # Uncompressed.
                  cat $fifo > ${file}.000 &
                  ;;
              2)
                  # GZip/piGZ Split file compressed.
                  pigz $PIGZ_COMP < $fifo | split -a 3 -d -b 200m - ${file}. &
                  ;;
              *)
                  # GZip/piGZ Compressed.
                  pigz $PIGZ_COMP < $fifo > ${file}.000 &
                  ;;
          esac
      

      This is the code the image format setting feeds into on uploads. The default partclone image format is “1” and partimage is “0”; since neither has its own case, both fall through to the default branch, so literally the same thing is done on an upload with either of those two settings with regard to how the image is captured.

      posted in General
    • RE: On-demand image deployment at boot time?

      Another note: the deploy times when using FOG should be similar to what you’re seeing now if you put those SSDs in the FOG server instead of using them as a second SSD in the PCs.

      Alternatively, if the current system is working for you and part of your difficulty in maintaining it is updating the “imaging” drive of the systems, FOG could be used to push out updated images to the first drive of the systems.

      posted in General
    • RE: Small Image taking up Big space.

      Does that computer have 32 GB of RAM, by chance?
      By default, I believe Windows creates a pagefile equal in size to the amount of RAM in the computer.
      By the way, FOG automatically deletes the pagefile and hibernation files before uploading the image to the server, since they will be recreated automatically and they waste a lot of space.
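
      The cleanup itself is conceptually simple; this is just an illustration of the idea with made-up device and mount paths, not FOG’s actual init code:

          # mount the Windows partition read-write and drop the files Windows will recreate
          ntfs-3g /dev/sda2 /mnt/win
          rm -f /mnt/win/pagefile.sys /mnt/win/hiberfil.sys
          umount /mnt/win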

      posted in FOG Problems
    • RE: PartImage faster than PartClone?

      @scgsg I would like to point out that unless the client you’re using for testing is similar to the clients you’re deploying to, your benchmarks aren’t going to be very useful to you. Your test client is using a processor that was an economy model when it was released almost 7 years ago. zstd and pigz are optimized for modern, efficient multi-threading systems, and I suspect your Pentium isn’t taking advantage of them very well.

      Personally, I use zstd compression level 11, as I find it has nearly the same upload speed as gzip at compression level 6 while making the images 26% smaller and deploying 36% faster. Again, that is on more modern hardware than you’re using; your results will vary.

      zstd compression level 19 is the highest normal compression level. Above 19 are “ultra” compression levels that require massive amounts of RAM.
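
      If you want a feel for the trade-offs, here is a rough sketch with placeholder file names (-T0 uses all cores, the same idea as zstdmt):

          # level 11: my usual pick, close to gzip -6 capture speed with much smaller images
          zstd -T0 -11 < disk.img > disk.img.11.zst
          # level 19: highest normal level, noticeably slower to compress
          zstd -T0 -19 < disk.img > disk.img.19.zst
          # above 19 requires --ultra and considerably more memory
          zstd -T0 --ultra -22 < disk.img > disk.img.22.zst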

      posted in General
    • RE: On-demand image deployment at boot time?

      Here’s a demo video using standard 1GbE, going from a server with a hard disk RAID array (no SSDs) to a Dell Optiplex 3020 with a cheap SSD: https://youtu.be/gHNPTmlrccM

      posted in General
    • RE: Hosts being randomly deleted from hostsMAC table..

      This problem is believed to be resolved now; thank you for the bug report.

      posted in FOG Problems
    • RE: Performance Monitoring tools

      @Fernando-Gietz We use Icinga for all of our servers, but for monitoring a single server in detail like this I like Monitorix. You just choose which services you want to monitor and you get feedback screens like these: https://www.monitorix.org/imgs/mysql.png and https://www.monitorix.org/imgs/apache.png
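
      Setup is quick; as a sketch (assuming a systemd-based distro and Monitorix’s built-in web server at its defaults, which I believe listens on port 8080 under /monitorix):

          # install monitorix from your distro's repositories, then:
          systemctl enable --now monitorix
          # pick which services/graphs to collect in the config, then restart to apply
          vi /etc/monitorix/monitorix.conf
          systemctl restart monitorix
          # browse to http://<server>:8080/monitorix for the graphs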

      posted in General
    • RE: The future of partclone and therefore FOG as it is

      The reason the partitions aren’t usable is a bug in partclone, present since at least 3.11, involving --ignore_crc, which we use in our scripting; if you remove that flag, it works fine (see the sketch below). I’ve been testing partclone 3.12 for a while and it is completely backward compatible with the existing images. Sorry I didn’t see this discussion until now.

      Also, there are new options in the latest versions of zstd and partclone that I think we need to discuss; interesting and potentially very important options have been opened up.
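
      For reference, this is the kind of restore I mean; a sketch with placeholder image and device paths, simply leaving --ignore_crc off so partclone verifies its own checksums:

          # decompress the captured image and hand the stream to partclone to restore
          zstd -d -c /images/win10/d1p2.img | partclone.ntfs -r -s - -o /dev/sda2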

      posted in General
    • RE: Windows 7 dual boot image clears Windows boot manager

      Try setting the Operating System on the image to “Windows Other”.

      posted in FOG Problems
    • RE: Client Side Scene Fog?

      @tutu10 I think you posted this to the wrong forum…

      posted in General
    • RE: The future of partclone and therefore FOG as it is

      @george1421 Well, as I alluded to in my last comment, adding checksums doesn’t make sense since we pipe the data right into a compressor that adds its own checksums.
      … Actually, you can refer to my post here https://forums.fogproject.org/topic/12750/file-format-and-compression-option-request/ for my arguments for change, but the short of it is:

      disable checksums with the flag -a0 (see the sketch below)

      The rest of my changes actually have to do with the settings we use for compression, come to think of it.
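
      To illustrate the capture side (a sketch only, with placeholder paths and an arbitrary zstd level, not the actual FOG script):

          # clone with partclone's checksums disabled (-a0) and let the compressor's
          # own integrity checking cover the stream
          partclone.ntfs -c -s /dev/sda2 -a0 -o - | zstd -T0 -11 > /images/win10/d1p2.img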

      posted in General
    • RE: Host details

      If anyone still wants this, here’s how you get it: add this file to your web-root/fog/lib/hooks folder.
      Most of the credit goes to Tom and Rowlett on this one.

      AddHostSerial.hook.php (attachment: /_imported_xf_attachments/0/714_AddHostSerial.hook.php)
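
      For example, assuming a typical FOG web root of /var/www/html/fog (adjust to your install):

          # copy the hook into the hooks directory so FOG picks it up
          cp AddHostSerial.hook.php /var/www/html/fog/lib/hooks/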

      posted in Feature Request
    • RE: How to edit the pxe boot menu in FOG 1.2.0

      @chriscarman This can be done in the trunk version of FOG from the web interface.

      posted in FOG Problems
    • RE: The future of partclone and therefore FOG as it is

      @george1421 pigz still outperforms gzip. gzip still doesn’t support multi-threading, and I don’t think they have any plans to implement it. Also, pigz uses a slightly different formula for its rsyncable implementation, which I have tested to be slightly better at creating chunk data that deduplicates.

      Blame me for the 200 MB split code. It was an experiment that got pushed into the mainline code, but the idea was to make it easier for people running FOG to make backups to CD/DVD/external hard drive.
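
      Since the pieces are just a byte-split of the compressed stream, reassembling a backup is only a matter of concatenating them in order (placeholder paths; use zstd -d instead of pigz -d for zstd-format images):

          # the zero-padded .000/.001/... suffixes keep the shell glob in the right order
          cat /images/mymachine/d1p2.img.* | pigz -d > /tmp/restored_partition.img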

      posted in General