FOG 1.5.9-RC2 incompatible with Windows 10 v2004 Partition Structure

  • Uh oh.

    The partition structure for a bare metal UEFI install of Windows 10 has changed dramatically with v2004.

    FOG 1.5.9-RC2 (installed clean today at 3pm ET) no likey.

    Microsoft Windows 10_v1903 64bit (10.0.18362.30)
    Partition ###  Type       Size     Offset
    -------------  ---------  -------  -------
    Partition 1    Recovery    529 MB  1024 KB
    Partition 2    System       99 MB   530 MB
    Partition 3    Reserved     16 MB   629 MB
    Partition 4    Primary    1023 GB   645 MB

    Microsoft Windows 10_v2004 64bit (10.0.19041.264) aka 20h1
    Partition ###  Type       Size     Offset
    -------------  ---------  -------  -------
    Partition 1    System      100 MB  1024 KB
    Partition 2    Reserved     16 MB   101 MB
    Partition 3    Primary    1023 GB   117 MB
    Partition 4    Recovery    505 MB  1023 GB
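The key structural change is visible in the two tables: in v1903 the Recovery partition sits at the start of the disk, while in v2004 it has moved to the very end, after the resizable Primary partition. A trivial shell sketch of that difference (the partition order is hard-coded from the tables above, purely for illustration):

```shell
# Partition order per layout, taken from the diskpart listings above.
v1903="Recovery System Reserved Primary"
v2004="System Reserved Primary Recovery"

# The last partition is what follows the resizable Primary on deploy.
echo "v1903 last partition: ${v1903##* }"   # prints: Primary
echo "v2004 last partition: ${v2004##* }"   # prints: Recovery
```

With Recovery now trailing the Primary partition, any resize logic that assumes the OS partition is last on disk will misplace or mis-size partition 4.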

    It captures something, but deployment fails because v2004’s partition structure is not resized properly.

  • My workaround, which has worked so far, is to make the source image drive small (28 GB in my case). Deployment then seems to succeed reliably as long as the target drive is bigger. The complication is that the last partition refuses to sit any closer to the beginning of the drive than it did on the source disk (I think), though it can sit farther away with no issues:

  • @Sebastian-Roth

    Not sure if this has been resolved or if this helps -

    I’ve created a Win 10 UEFI VM using the 2004 ISO on VMware® Workstation 15 Pro 15.5.6 build-16341506. All the settings were default except that I went up to 8 GB of RAM. I installed it, ran all the updates, activated it while waiting for updates, rebooted twice, then captured the image with FOG dev-branch version 1.5.9-RC2.11 running on Ubuntu 20.04 LTS.

    It deployed fine to a similar VM.

  • I’ll get back to ya on this. I’ve finished my image refresh for the new year and am digging deep into something else now.

    I built them in Hyper-V on Windows 10 v2004.

    Across more than three dozen VMs, it was a coin toss whether I’d witness the problem, but I was able to overcome it on every one.

  • Moderator

    @sudburr Is the partition layout still fine when you see this blkid: error: /dev/sda4 thing on subsequent tries?

    May I ask you to do the following: let it try to capture a few more times. Each time, save a copy of the /images/dev/aabbccddeeff/d1.partitions file (this aabb… is the MAC address of the host without colons) to another location so we can compare them afterwards. Whenever you see the blkid... error, name the copy d1.partitions_fail_X (put in numbers instead of X), and if you don’t see the error, name it d1.partitions_ok_X.
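That bookkeeping can be scripted on the FOG server. Here is a rough sketch; the attempt number, the saw_error flag, and the /tmp destination are placeholders you would set by hand for each try (the MAC-based path follows the convention described above):

```shell
# Hypothetical per-attempt helper: pick the right name for the saved copy
# of d1.partitions depending on whether the blkid error appeared.
attempt=1
saw_error=true            # set to true if "blkid: error: /dev/sda4" showed up

src="/images/dev/aabbccddeeff/d1.partitions"
if [ "$saw_error" = true ]; then
  dest="d1.partitions_fail_${attempt}"
else
  dest="d1.partitions_ok_${attempt}"
fi
# Echo the command first to sanity-check it; drop the echo to actually copy.
echo "cp $src /tmp/$dest"
```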

    What kind of VM do you use? I am wondering whether I can replicate the issue with the same setup.

  • And it appears to be totally random whether it throws that error: cold boot, warm boot, reset, whatever. If the blkid: error comes up, I just keep resetting the VM until it goes away.

  • Moderator

    @sudburr This is a mystery to me. Why would it find four partitions to begin with, but then the device node file is gone?!

  • Sigh … cold vs warm boot is not it.

    New VMs today, and just to spite me, it failed on the warm reboot test. It worked after resetting to the checkpoint prior to the initial capture attempt. So it worked on a cold boot.

    This is definitely the indicator that the capture will be bad.

    blkid: error: /dev/sda4: No such file or directory

    If I see this, I shut the VM down, revert the checkpoint and try again.
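That manual check (does /dev/sda4 actually exist before capture starts?) could be scripted as a pre-flight test. A hedged sketch follows; check_parts and the simulated device directory are invented for illustration and are not part of FOG's init:

```shell
# check_parts DIR DISK COUNT: report which partition device nodes are missing.
check_parts() {
  dir=$1; disk=$2; count=$3
  missing=""
  i=1
  while [ "$i" -le "$count" ]; do
    [ -e "$dir/$disk$i" ] || missing="$missing $disk$i"
    i=$((i + 1))
  done
  echo "missing:$missing"
}

# Simulate the failure scenario: sda1-sda3 exist, sda4's node was never created.
tmp=$(mktemp -d)
touch "$tmp/sda1" "$tmp/sda2" "$tmp/sda3"
check_parts "$tmp" sda 4    # prints: missing: sda4
rm -rf "$tmp"
```

On a real host you would call it as `check_parts /dev sda 4` and abort (or reboot, as described above) if anything is reported missing.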

  • Moderator

    @sudburr said in FOG 1.5.9-RC2 incompatible with Windows 10 v2004 Partition Structure:

    Testing further, but right now, Cold booting into the task appears to be the guilty party.

    Wow that would be a really nasty one. Please keep us posted.

  • Okay, you’re going to love this. I only have one success to go on so far, but here’s my current working theory.

    Failure scenario

    1. Cold boot VM to Quick Inventory
    2. Shutdown VM after the natural reboot after QI
    3. Create Task
    4. Cold boot VM into Task
    5. Partition 4 is not recognized as fixed.

    Success scenario

    1. Cold boot VM to Quick Inventory
    2. Pause VM after the natural reboot after QI
    3. Create Task
    4. Resume VM into Task
    5. Partition 4 is recognized properly.

    Testing further, but right now, Cold booting into the task appears to be the guilty party.


  • Not reliably.

    The VM I’m working on now refuses to capture with partition 4 as fixed. It throws this error during capture:

    blkid: error: /dev/sda4: No such file or directory


  • Moderator

    @sudburr said:

    Partition 4 is sometimes not identified as fixed size.

    Is this something you can reproduce?

  • So far today, working with all new VMs again, things are looking good.

    I made one change to the mastering process: using Disk Management (diskmgmt.msc) within Windows to shrink the OS partition (down to just 5 GB free space) before sysprep and shutdown. It’s capturing partitions properly with fixed 1:2:4 so far.

    Captured images are deploying properly to HDDs both larger and smaller than the original 64GB.

  • It’s consistently inconsistent.

    Today I created some more VMs. All with 64 GB drives. Essentially …

    Capture VM1

    cat d1.fixed_size_partitions

    Capture VM1 a second time

    cat d1.fixed_size_partitions

    Capture VM2

    cat d1.fixed_size_partitions

    None will deploy to a drive smaller than the original 64 GB, though the data is only 12 GB uncompressed. It’s just a Windows install.

    Looking at the 50 GB drive after a failed attempt to push the 64 GB image onto it: Diskpart reports a single 63 GB partition. Wha?
    GNOME Partition Editor shows the drive as 50 GB unallocated.

    Dumping one of the 1:2:4 images onto a 2TB drive now; and it’s good.

    So Partition 3 isn’t really being resized smaller when captured, and Partition 4 is sometimes not identified as fixed size.
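For context, the 1:2:4 mentioned above is the content of d1.fixed_size_partitions: a colon-separated list of the partitions FOG should copy at their original size, with everything else treated as resizable. A small illustrative sketch of that interpretation (this loop is not FOG's actual code):

```shell
# Interpret a FOG-style fixed-partition list. "1:2:4" means partitions
# 1, 2 and 4 are copied at original size; partition 3 (the OS partition
# in the v2004 layout) should be shrunk on capture and expanded on deploy.
fixed="1:2:4"
for p in 1 2 3 4; do
  case ":$fixed:" in
    *:"$p":*) echo "partition $p: fixed size" ;;
    *)        echo "partition $p: resizable"  ;;
  esac
done
```

If the file sometimes ends up without the 4 (or the resize of partition 3 never happens), that would match the symptoms described in this thread.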

  • Moderator

    @sudburr Did you manually edit the file or recapture the image or why is this changed? Just trying to make sense of this.

  • No.

    cat d1.fixed_size_partitions

  • Moderator

    @sudburr Do I get this right? It does expand sda3 and sda4.

    Is d1.fixed_size_partitions still set to 1:2:4 for this image?

  • @Sebastian-Roth Okay, I wasn’t dreaming it.

    Here’s the original on a 1024 GB drive after I’ve shrunk the partitions with GPARTED

    And here’s the result immediately after going onto a 110 GB drive.

    My original percentage math just happened to be a coincidence. But partition 4 is definitely not reproducing as intended.

  • Okay, that is peculiar. My test yesterday to a physical device resulted in the expanded partition 4. My test today to a VM did not. Partition 4 remained right-sized.

    I don’t know when I’ll be able to do another physical test, but I will check again, somewhen.

  • @Sebastian-Roth The math bears it out. Will check again.