Imaging Windows 10 v2004 with UEFI - GPT Partitions onto 31GB or smaller drive. (and failing)
-
So here’s what the Windows 10 v2004 HOME - Non-UEFI partition layout looks like in GParted:
Three partitions; it images and fits snugly onto a 28GB HDD.
Then here’s the HOME - UEFI version:
Four partitions, but through FOG it requires a 60GB+ HDD.
-
@ProfDrSir I guess two things.
- What was the golden image created on?
- IMO that /dev/sda4 is a problem, since it’s a non-resizable partition and may have a fixed location on the disk, where its start sector may be beyond the end of that 30GiB disk.
As a test, delete it on the source (golden image) server, then capture it with FOG. This will leave the “C:” drive as the last partition on the disk. That /dev/sda4 is probably the Windows recovery partition. If that works and you “need” the recovery partition, then recreate the golden image’s disk structure to place the recovery partition before the “C:” drive partition.
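If it helps, here’s a rough sketch of that deletion done from a Linux live environment (e.g. the GParted live CD) rather than from inside Windows. It assumes the golden image’s disk shows up as /dev/sda and that the recovery partition really is number 4, so verify with the print step first. If you want to keep the option of re-adding WinRE later, running reagentc /disable inside Windows beforehand should stage winre.wim back under C:\Windows\System32\Recovery before the partition goes away.
```
sgdisk -p /dev/sda          # print the GPT; the recovery partition shows up with type code 2700 (Windows RE)
sgdisk --delete=4 /dev/sda  # drop partition 4 so the "C:" partition becomes the last one on the disk
partprobe /dev/sda          # ask the kernel to re-read the partition table
```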
-
Looks familiar.
-
@george1421 Yeah, it has been problematic. That last partition also seems to be custom-sized by Win10 2004, which purposely places it at the end of the drive, almost like it’s laid out backwards from the end, and GParted doesn’t like that. I was trying to manually copy the partitions to a smaller drive, but GParted habitually leaves 1 MB of unused space after the last partition, which seems to ‘break’ this goofy tacked-on partition.
But I’ll test what you posted and see what happens.
The golden image in this case was created on VMware Workstation 15.5, but I’ve done a few test installations on a Clevo P157SM-A. The weird thing, though, is that the new Windows v2004 ISO will install v1909 on a lot of older hardware. The new v2004 has been an odd duck of an update in terms of how it installs.
For reference, I had an old AiO that wouldn’t upgrade to 2004 and had this to say about it:
The update from an older version to 2004 also has two phases; I assume one of them is there to place that partition at the end of the drive.
One thing I did notice was that the VMware virtual drive is 60GB, so like you said, that partition, if kept, will require being stuck somewhere specific on the drive… meaning the 60GB minimum exists because the golden image was built on a 60GB disk? So basically whatever image I author will always require at least that size to be able to image it properly…
I did make a test build on a virtual 30GB drive, but it failed to image to a 32GB SSD…
-
@sudburr Yeah it’s definitely all connected to the same issue pertaining specifically to Win10 v2004. I do wonder about the thinking behind this change… the “why”.
-
Well, one experiment I’ve done with Clonezilla seems to work for imaging: I restored the image onto a bigger drive, which of course leaves all the extra unallocated space after the partitions.
I then moved the last partition, telling GParted to leave 1MiB of free space after it.
Then resized sda3 to take up the remaining space:
It seems to leave a random 1MiB after sda4, but this still works sometimes and Windows loads up without issue. I think it’s because the positioning of sda4 is taken seriously by Linux, but Microsoft/Windows has always been lazy with how it uses its partitions, especially concerning location (other than the boot partition, of course), which leads me to think the positioning of this partition, as George suggested, might not be all that important. But I still have to wonder what the hell its purpose is… I have seen GParted fail when trying to move this partition to other locations though, specifically when I was trying to manually build a smaller image; the apply steps would fail when manipulating that 4th partition.
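If anyone wants to sanity-check where that 4th partition actually landed after the restore (and how much slack is left around it), here’s a quick look from the Clonezilla/GParted live shell, as a sketch only and assuming the disk is /dev/sda:
```
parted /dev/sda unit MiB print free   # partitions plus any free-space gaps, reported in MiB
sgdisk -p /dev/sda                    # GPT view: exact start/end sectors and type codes (2700 = Windows RE)
```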
I don’t know if this’ll help anyone or anything, but it gives me a path forward on the thumb drive image side of things anyway. Maybe I’ll prep as small a drive as I can get away with then use that and see where it gets me with FOG.
-
@ProfDrSir I was able to complete the MDT test build of 2004 this afternoon. I had to update my base image to use Office 365 and to include the new Edge browser, plus a few other add-on application updates (we use a fat Windows image for deployment). I’m going to capture and deploy it tomorrow AM to a VM and then a physical machine for validation. In MDT I told it not to create the restore partition, so the C: partition is the last partition on the disk, like I suggested. If I remember right my reference image disk is 60GB, so I should be able to test deploying to a 30GB VM (assuming that my fat image fits in 30GB) to confirm what you are experiencing.
-
@george1421 I’ll be excited to see your results! If we can just remove/ignore that 4th partition, it makes imaging slightly more tedious, but functionally it should be the same as before, I’d assume.
I also noticed that upgrading older versions of Windows 10 on Dells with the new BIOS sometimes breaks Windows… X-D which has required me to image the SSD in another machine before v2004 will run on the newer Dells… FYI.
-
@ProfDrSir said in Imaging Windows 10 v2004 with UEFI - GPT Partitions onto 31GB or smaller drive. (and failing):
we can just remove/ignore that 4th partition, it makes imaging slightly more tedious, but functionally it should be the same as before, I’d assume.
This is just my personal opinion, but that recovery/restore partition really adds no value in a business environment. Using the Windows recovery method, it takes an hour or more to bring a broken computer back. With FOG and a fat Windows image, I can go from bare metal to ready for the user in 15 minutes. Total tech hands-on time is about 40 seconds for imaging versus dealing with the restore process. Now, if you have irreplaceable data on the broken system, your only option is to recover, especially if BitLocker is involved.
In our case we never do an in-place upgrade; it’s always nuke and pave when going between releases. I know the upgrade process works, it’s just that we find nuke and pave faster (once the user’s profiles are backed up with USMT).
-
@george1421 Which fat standard are you using? And what other benefits come from using it?
-
@ProfDrSir said in Imaging Windows 10 v2004 with UEFI - GPT Partitions onto 31GB or smaller drive. (and failing):
Which fat standard are you using?
There are two methodologies here. The first is a thin image, in that only the Windows OS and updates are added to the reference image. All other applications, like Office, SAP, or Acrobat Reader, are added to the image on the target computer. The advantage is that when updated versions of Acrobat Reader or other applications come out, you don’t need to rebuild your reference image (because it only contains Windows + updates); you just change your post-deployment tasks (snapins, or whatever deployment tool you are using to deliver the newer application).
The second is a fat image, in that you load all applications onto your reference image before you capture it with FOG (understand that if you have applications keyed to a unique GUID, like enterprise antivirus, you must install those post-imaging anyway). All of the time spent loading these applications goes into the reference image and not into post-deployment. With MDT it takes about 1.5 hours to create and configure the reference image. When I capture it with FOG, the reference image is about 98% of the completed image. With a fat image I can take a workstation and go from bare metal to ready to move to the user’s worksite in 15 minutes. The system is fully configured and ready to roll; I don’t have to do much post-install configuration.
To recap: a thin image gives you the flexibility to modify the deployment with different applications or configurations at deployment time. A fat image puts all of your work into the reference image up front, and then you enjoy rapid deployment on the back end.
Before Win10 our methodology was the thin image approach, because we could create a standard OS reference image and use it for a year or so; basically, the applications would update versions more often than the OS. Now with Win10 we have to build a new OS image every 18 months, so the OS updates more frequently than the applications. That makes the fat image approach more beneficial.
As I mentioned a few times previously, we use MDT and the lite-touch approach to build our reference image. That way we get a consistent build each time we rebuild it. When we go between 1903 and 1909 we just copy our task sequences between builds, and we (hopefully) get the same results when we build a new base image on the next release of Windows 10 (the last OS you will ever have/need).
-
@george1421 Interesting. Your setup sounds great for supporting a large company with varied users.
-
Alrighty, so my workarounds for this issue: use a 28GB virtual drive to make the original image, or alternatively delete the 4th partition to make imaging completely compatible, though you lose the ability to do an OS reset in Windows.
If you move the partition to where it used to be in v1909, Windows can’t find it and you also lose the ability to reset, so if you have to move it, just delete it I guess…
Another alternative: if you’re using Clonezilla, you can image and then use GParted to first move the 4th partition to the end of the drive, then resize the 3rd partition. This seems to work fine without losing the option to reset; tested on two different machines, a Clevo laptop and an HP All-in-One.
Otherwise, just make sure the drive you are imaging to with FOG is larger than the source disk you imaged from and everything should be fine… generally speaking…
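If you go the delete-the-4th-partition route and land on a larger disk with unallocated space at the end, this is roughly the command-line equivalent of the GParted resize step, offered as a sketch only: it assumes the disk is /dev/sda, that partition 3 is the NTFS “C:” partition, and that it’s now the last partition on the disk (verify before running anything).
```
sgdisk -e /dev/sda                     # move the backup GPT header to the true end of the larger disk
parted -s /dev/sda resizepart 3 100%   # extend partition 3 to the end of the disk
ntfsresize --no-action /dev/sda3       # dry run: confirm the NTFS filesystem can be grown safely
ntfsresize /dev/sda3                   # grow the filesystem to fill the resized partition
```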