
    Posts made by Junkhacker

    • RE: Access Control

      @NT_Tech the mobile interface has been removed, and the “mobile only” user type can’t be created in the interface anymore, but you can still create that type of user by entering it directly in the database. that user will only be able to use the PXE boot menu.
      i know this is not an ideal solution.
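      in case it helps, a direct insert would look something like the sketch below. i’m writing this from memory, so treat the table and column names (users, uName, uPass, uType) and the type value as assumptions and check your actual schema first:

      # hypothetical sketch only -- verify the users table layout and the
      # “mobile only” uType value against your own fog database before running
      mysql -u root -p fog -e "INSERT INTO users (uName, uPass, uType) VALUES ('pxeonly', MD5('ChangeMe'), '1');"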

      posted in General Problems
    • RE: Captured Image is about the size of the Hard Drive

      @KevFlwrs the “size on disk” is the partition size that partclone is trying to capture. fog tries to resize the partition down before capturing it, but it can only shrink down to the last used block on the partition. you might be able to get it to shrink down if you defrag the drive before you attempt to capture the image.
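      if it helps, the stock Windows command line defragmenter can also consolidate free space toward the front of the partition, which is exactly what matters here. from an elevated command prompt on the machine you’re about to capture (/X consolidates free space, /U and /V just print progress):

      defrag C: /X /U /V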

      posted in Windows Problems
    • RE: Captured Image is about the size of the Hard Drive

      @KevFlwrs again, what settings did you use for your image? what you’re seeing would be perfectly normal if you are using a nonresizable image type.

      posted in Windows Problems
    • RE: Captured Image is about the size of the Hard Drive

      the images didn’t attach to your post; it seems we’ve had a problem with that on the forums lately. i don’t understand what the issue is. what settings did you use for your image?

      posted in Windows Problems
    • RE: file format and compression option request

      @Junkhacker further testing suggests that everything works fine without the --ignore_crc flag we have set in all restore operations. i’ve also learned that, due to a bug in partclone 0.2.89, the checksums it was making were practically useless. if/when we upgrade to the newer version of partclone, we might want to consider making disabled checksums the default, since we’ve been, in essence, operating without checksums all along.
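      if anyone wants to check their own images, partclone ships a chkimg tool that reads an image back and should flag crc problems. something like this on a decompressed image (paths are just examples):

      # decompress a captured image, then verify it with partclone’s checker
      pigz -dc /images/win7/d1p2.img > /tmp/d1p2.raw
      partclone.chkimg -s /tmp/d1p2.raw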

      posted in Feature Request
    • RE: Host Server OS Suggestions

      @Scott-B said in Host Server OS Suggestions:
      fog will work on any of them, really.
      I recommend Debian or CentOS, whichever you’re most comfortable with.
      I don’t recommend Arch for production. Only crazy people do that.

      posted in General
    • RE: After installing FOG, I seem to get locked out of Ubuntu.

      @dws88 the installer creates a user named fog. if a user named fog already exists, it ends up changing that user’s password. the install instructions used to specifically say not to create a user named fog yourself, but it looks like that warning has been removed. guess we need to add it back.
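      if you’re not sure whether a machine already has one, it only takes a second to check before running the installer:

      # prints a passwd entry if a “fog” account already exists on the box
      getent passwd fog

      if that prints a line, the installer is going to reuse (and re-password) that account.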

      posted in FOG Problems
    • RE: After installing FOG, I seem to get locked out of Ubuntu.

      @dws88 when you set up the server, did you use an account with the username “fog”?

      posted in FOG Problems
    • just sharing an appreciation post

      https://www.reddit.com/r/sysadmin/comments/a727a8/fog_appreciation_post/

      posted in General
    • RE: How do you re-compress an image file?

      @Tom-Elliott in fact, in my testing, it was 10% faster

      posted in General
    • RE: file format and compression option request

      @Junkhacker interesting. no problems with images that are told not to generate checksums. no problem with images with checksums if i remove the --ignore_crc parameter.

      posted in Feature Request
    • RE: file format and compression option request

      i’ve had a chance to do just a little testing, and something definitely isn’t working right. images i had converted using partclone 0.3.11 deploy fine with the standard init, but using your build with 0.3.12 i can’t get a successful deploy of existing images (they boot to a blinking cursor), and i can’t deploy what i capture with it (it flashes by fast, but it looks like a crc error at the point where you would normally get a “syncing” message from partclone, even though we are ignoring checksums).

      posted in Feature Request
    • RE: file format and compression option request

      @Sebastian-Roth huh, i didn’t even realize it was on an “unstable” build. Tsai doesn’t differentiate on his github https://github.com/Thomas-Tsai/partclone/. It has a “release” tag.
      i can confirm that the images i created with 0.3.12, with the checksum removed, still deploy with the 0.2.89 build, so at least there’s not much of a backwards compatibility concern, for what it’s worth.
      i’ll try to do some testing with your init build when I get back to work.
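      for anyone else doing cross-version testing, partclone’s info tool can confirm what an image file was written with (run it against a decompressed image; i’m going from memory on the exact invocation, so double check it):

      partclone.info -s /tmp/d1p2.raw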

      posted in Feature Request
    • RE: file format and compression option request

      @Sebastian-Roth said in file format and compression option request:
      i have not used zbackup because it hasn’t been in active development in years, i find it slow, and microsoft’s dedup evaluation tool (DDPEval.exe) actually works pretty well for determining how dedupable a data set is.

      I am wondering if this is actually the case

      i posted my benchmark results…

      those posts are about adding compression with --rsyncable to clonezilla, but they mention that they aren’t getting the results they expect and conclude “I think it could be due to how Clonezilla stores data.” well, they’re right. partclone includes a rolling checksum in the file that kills dedup. the partclone developers have since added the ability to save the image without a checksum, and i don’t think anyone has tested its dedup potential until now.
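      for anyone who wants to replicate the measurements below: decompress a fog capture to get the raw partclone stream, recompress the no-checksum variants with the candidate flags, put each set in its own folder somewhere a Windows box can reach, and point DDPEval at it. roughly (file names are just examples):

      # decompress the original fog capture
      pigz -dc d1p2.img > d1p2.raw
      # recompress a no-checksum variant with the proposed flag
      pigz -6 --rsyncable -c d1p2-nochecksum.raw > d1p2-rsyncable.img

      then, on the Windows side:

      DDPEval.exe E:\images\test-set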

      here are the results for the same 2 Windows 7 images from the benchmarks in my original post.
      partclone images with checksum - uncompressed (originally captured by fog, just decompressed):

      Evaluated folder size: 29.70 GB
      Files in evaluated folder: 16
      
      Processed files: 6
      Processed files size: 29.70 GB
      Optimized files size: 15.16 GB
      Space savings: 14.54 GB
      Space savings percent: 48
      
      Optimized files size (no compression): 29.70 GB
      Space savings (no compression): 4.13 MB
      Space savings percent (no compression): 0
      
      Files excluded by policy: 10
           Small files (<32KB): 10
      Files excluded by error: 0
      

      notice that all of the potential space savings reported by the tool come from compression.

      partclone images without checksum - uncompressed:

      Evaluated folder size: 29.66 GB
      Files in evaluated folder: 16
      
      Processed files: 6
      Processed files size: 29.66 GB
      Optimized files size: 8.64 GB
      Space savings: 21.02 GB
      Space savings percent: 70
      
      Optimized files size (no compression): 17.39 GB
      Space savings (no compression): 12.27 GB
      Space savings percent (no compression): 41
      
      Files excluded by policy: 10
           Small files (<32KB): 10
      Files excluded by error: 0
      

      of the ways fog could store images, this offers the best possible dedup, but imaging would take a long time without compression, so…
      partclone images without checksum, pigz -6 --rsyncable:

      Evaluated folder size: 12.53 GB
      Files in evaluated folder: 16
      
      Processed files: 6
      Processed files size: 12.53 GB
      Optimized files size: 7.93 GB
      Space savings: 4.60 GB
      Space savings percent: 36
      
      Optimized files size (no compression): 7.93 GB
      Space savings (no compression): 4.60 GB
      Space savings percent (no compression): 36
      
      Files excluded by policy: 10
           Small files (<32KB): 10
      Files excluded by error: 0
      

      i have more benchmark results, but this post is already getting long.

      posted in Feature Request
    • file format and compression option request

      oh great, Junkhacker hasn’t been around in forever and now here he is requesting a bunch of stuff…

      i have a few inter-related requests:

      1. fog is still using v0.2.89 of partclone. i’d like it to be upgraded to a newer version, such as v0.3.11, so that a new capture argument can be added

      2. I would like the -aX0 argument added to the partclone capture command, as either the new default or included in a new Image Manager option. this argument tells partclone to not roll a checksum into the image as it’s being captured. we use the --ignore_crc argument on restores anyway, so this should have no detrimental effects.

      3. I would like the --rsyncable argument added to the pigz compression, either as the default or as part of a new Image Manager option. this periodically resets the internal structure of the compressed data stream and only adds approximately 1% to the size of the image. that 1% increase in size, combined with removing the checksums, turns fog images into data that can be transferred more efficiently with rsync or, more importantly for my purposes, deduplicated. (see the sketch just after this list.)
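      to make requests 2 and 3 concrete, here’s roughly what a capture pipeline would look like with both arguments in place. this is just an illustration, not fog’s actual capture code, and i’m assuming partclone accepts -o - for stdout here, the way it does when clonezilla pipes it into a compressor:

      partclone.ntfs -c -s /dev/sda2 -o - -aX0 | pigz -6 --rsyncable > /images/win7/d1p2.img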

      fog images as they are created now dedupe almost not at all on filesystems and backup systems that support deduplication. my proposed changes should allow images to be deduplicated quite well. I’m working up some data to show to what degree this is possible, using the Windows DDPEval tool and a number of manually converted image files for testing. so far the results are promising, and i thought i would put my ideas out there for others to review and possibly replicate.

      here are the kinds of results i’ve seen with my testing so far using 2 Windows 7 images,

      compressed as originally by fog with pigz -6:

      Processed files size: 12.47 GB
      Optimized files size: 12.44 GB
      Space savings: 33.09 MB
      Space savings percent: 0
      
      Optimized files size (no compression): 12.47 GB
      Space savings (no compression): 1.51 MB
      Space savings percent (no compression): 0
      

      and here, converted to no-checksum pigz -6 --rsyncable:

      Processed files size: 12.53 GB
      Optimized files size: 7.93 GB
      Space savings: 4.60 GB
      Space savings percent: 36
      
      Optimized files size (no compression): 7.93 GB
      Space savings (no compression): 4.60 GB
      Space savings percent (no compression): 36
      

      zstandard has just added an rsyncable option to its compression (not yet released, but it’s in the dev code if you build it). it doesn’t offer as much dedup, though, and since it’s not in an actual release yet, i want to hold off on adding that code for now, but eventually add it as well.
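      for reference, once that flag makes it into a release, the drop-in change would presumably just be swapping the compressor, something like the line below. again, it only exists in zstandard’s dev code right now:

      partclone.ntfs -c -s /dev/sda2 -o - -aX0 | zstd -3 --rsyncable > /images/win7/d1p2.img.zst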

      *Edited for typos and formatting.

      posted in Feature Request
    • RE: Add a menu item through web GUI

      @klkwarrior why not do full registration?

      posted in General Problems
    • RE: Integrating clonezilla into fog

      @alouis here’s what i use

      :MENU
      menu
      item --gap -- ---------------- iPXE boot menu ----------------
      item clonezilla_backup Clonezilla backup
      item clonezilla_restore Clonezilla restore
      item return return to previous menu
      choose --default return target && goto ${target}
      
      :clonezilla_backup
      # prompt for credentials; iPXE stores them in ${username} and ${password}
      login
      # boot Clonezilla Live, mounting a per-user smb folder read-write as /home/partimag
      kernel http://${fog-ip}/ipxe/clonezilla/live/vmlinuz initrd=http://${fog-ip}/ipxe/clonezilla/live/initrd.img ocs_live_run="ocs-live-general" boot=live live-config noswap nolocales keyboard-layouts=NONE edd=on nomodeset ocs_daemonon="ssh" ocs_lang="en_US.UTF-8" vga=788 nosplash fetch=http://${fog-ip}/ipxe/clonezilla/live/filesystem.squashfs ocs_prerun="mount.cifs -o user=${username},dom=<domain>,pass=${password},vers=2.1,rsize=32768,wsize=32768,rw, //server/share/clonezilla/${username} /home/partimag "
      initrd http://${fog-ip}/ipxe/clonezilla/live/initrd.img
      boot ||
      goto MENU
      
      :clonezilla_restore
      login
      # same boot, but the shared “restore” folder is mounted read-only
      kernel http://${fog-ip}/ipxe/clonezilla/live/vmlinuz initrd=http://${fog-ip}/ipxe/clonezilla/live/initrd.img ocs_live_run="ocs-live-general" boot=live live-config noswap nolocales keyboard-layouts=NONE edd=on nomodeset ocs_daemonon="ssh" ocs_lang="en_US.UTF-8" vga=788 nosplash fetch=http://${fog-ip}/ipxe/clonezilla/live/filesystem.squashfs ocs_prerun="mount.cifs -o user=${username},dom=<domain>,pass=${password},vers=2.1,rsize=32768,ro, //server/share/clonezilla/restore /home/partimag "
      initrd http://${fog-ip}/ipxe/clonezilla/live/initrd.img
      boot ||
      goto MENU

      this uses a mapped smb location with user-supplied credentials to save into a folder named after the user.
      to restore images, move the image to the “restore” directory.
      i did it this way to reduce clutter from multiple people saving images, and to add a small amount of security, since most of this is done by student workers.

      posted in General Problems
    • RE: 1.6 Issues/Reporting

      just reporting problems i encountered while testing 1.6.
      i registered a new host, created a new image, and tried to upload an image of the host to the newly created image:
      the image uploaded, but it repeatedly failed to update the database to mark it as finished.
      while looking at the error, i realized that invalid parameters were sent to the computer as well:
      the newly created image had a compression value set to 0 and the compression type was not assigned. i could not update these values on the image’s profile page, nor could i import an image list with corrected values.

      I don’t have time to troubleshoot further right now, but I am interested to hear whether anyone can reproduce similar problems (or whether i just messed up my setup somehow).

      posted in Bug Reports
    • RE: Advice on specs for new setup

      @candidom i think one FOG server configured that way should be able to handle it. you might want to consider trying it with one before buying the other two. i was doing 30+ at a time with traditional drives in a RAID 5, at speeds i found quite adequate.

      posted in General
    • RE: Trying out Fog for the very first time, already stuck at this tutorial....

      @vascomorais here’s how the advanced menu is used https://wiki.fogproject.org/wiki/index.php?title=Advanced_Boot_Menu_Configuration_options

      posted in FOG Problems