PC unbootable after capture fails
-
@dolf Just throwing this out there: is it possible to successfully capture a non-resizable, multiple-partition image and deploy it? This would just be a test to see whether the resizing is the issue or not.
-
Is the partition table manipulated in any way when capturing a non-resizable multiple partition image? If not, it probably works just like CloneZilla, which I’m using now. And that works.
I’ll test your idea as soon as I have time. For now, I’m trying to work around the issue to save time. Working through the night to get the image ready. 200 PCs to deploy soon…
Where can I find the exact ntfsresize command used by FOS? I looked at the fog.upload script while in FOS, and there was a call to shrinkPartition or something like that, but I couldn't find where the call to ntfsresize happens. I would like to type that exact command on a terminal and see what happens.
-
@dolf said in PC unbootable after capture fails:
Where can I find the exact ntfsresize command used by FOS?
It's in the init, but you can view the source code of the init in your trunk source. It's here, at line 196 to be exact: <Trunk Directory>/src/buildroot/package/fog/scripts/usr/share/fog/lib/funcs.sh
Also some stuff around lines 460, 490, and 508.
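If the line numbers have drifted in a newer trunk, a quick search finds the same spots; this is just a generic grep against the placeholder path copied from above:
```
# List every ntfsresize call in the FOS library script.
# Replace <Trunk Directory> with the actual checkout location.
grep -n 'ntfsresize' "<Trunk Directory>/src/buildroot/package/fog/scripts/usr/share/fog/lib/funcs.sh"
```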
-
Back at it! I tried resizing with GParted, which is known to very carefully check everything before touching the drive. I simply booted GParted Live and resized the big partition, sda2, to a minimum. Here is the log: gparted_details.htm
Maybe FOG could learn from (or even directly use) GParted in this regard.
-
@dolf I am not sure why resizing isn't working for you. I've created hundreds of images with FOG - most resizable - for Windows 7, 8, 8.1, 10, Ubuntu, CentOS, Fedora - and I've not had the problems that you've had. All my co-workers use resizable images. We have probably 30 different hardware models from various manufacturers at work, and they all work fine with FOG. Many community members here use resizable images; seldom do issues with resizing come up.
We need to troubleshoot what’s going on with your particular setup - and see what can be done.
In particular, I think something is wrong with the MBR. After deploying a resizable image (captured by FOG), you can boot to a Linux live disk and likely be able to mount the HDD, read all the files just fine, copy to and fro, and run other diagnostics. I really doubt that the resizing is breaking it; I really think it's something with the MBR.
As a sort of test, after capturing a resizable image with FOG, you can trade out the MBR FOG captured for the MBR that CloneZilla captured, set permissions, and try to deploy. See what happens.
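For what it's worth, a rough sketch of that swap from the FOG server's shell might look like this; every path and file name here (the FOG and CloneZilla image directories, d1.mbr, sda-mbr) is an assumption to check against your actual image directories:
```
# Sketch only: swap CloneZilla's captured MBR into the FOG image directory.
fog_img="/images/myimage"                    # FOG image directory (placeholder)
cz_img="/home/partimag/my-cz-image"          # CloneZilla image directory (placeholder)

cp "$fog_img/d1.mbr" "$fog_img/d1.mbr.bak"              # keep FOG's original MBR around
dd if="$cz_img/sda-mbr" of="$fog_img/d1.mbr" bs=512     # drop in CloneZilla's MBR

# Match ownership and permissions to the rest of the image files.
chown --reference="$fog_img/d1.fixed_size_partitions" "$fog_img/d1.mbr"
chmod --reference="$fog_img/d1.fixed_size_partitions" "$fog_img/d1.mbr"
```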
-
@Wayne-Workman Good to hear that it works for you. The fact that it usually works, but didn’t work for me is the definition of an edge case. And things should not break when edge cases happen.
I just realized that I unknowingly tested exactly what you suggested, and that’s probably why it worked. When I try to resize the problematic image, however, I get this: gparted_details_bad.htm
Still, GParted wins, because it safely terminates before destroying the disk. FOG should, too.
This discussion shows that most people aren’t really sure why this happens. We could use the following algorithm to work around the problem (expanding on what GParted does):
increment := "1GB or a certain percentage of the disk size"
partition = /dev/sda2
calibrate partition
target_size := check file system on partition for errors and fix them
               and get estimate of smallest supported shrunken size
if there are errors
    stop
do
    simulate resizing to target_size
    target_size += increment
while simulation fails and target_size < disk_size
if target_size < disk_size
    // this means the simulation must have succeeded for the current value of target_size
    actually resize the file system
    actually resize the partition
    // note that file systems and partitions are not the same thing, and are not
    // necessarily the same size... TODO: this is yet another edge case to consider
// if all simulations failed, we just don't resize the disk, and the capture process
// can still continue uninterrupted
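Here is a runnable sketch of that algorithm in shell, using ntfsresize's dry-run mode; the device name, the 1 GiB increment, and the message parsing are assumptions for illustration, not FOG's actual code:
```
#!/bin/bash
# Sketch: find a safe shrink target for an NTFS partition by simulating first.
part="/dev/sda2"
increment=$((1024 * 1024 * 1024))             # grow the candidate size by 1 GiB per retry

ntfsfix "$part" || exit 1                     # stop if the filesystem cannot be cleaned up
part_size=$(blockdev --getsize64 "$part")     # current partition size in bytes

# ntfsresize --info prints "You might resize at N bytes ..."; take N as the first candidate.
target=$(ntfsresize --info --force "$part" | sed -n 's/.*might resize at \([0-9]*\) bytes.*/\1/p')
[ -n "$target" ] || exit 1

# Keep simulating (--no-action = nothing is written) until a candidate size is accepted
# or the candidate grows past the current partition size.
while [ "$target" -lt "$part_size" ]; do
    if ntfsresize --no-action --size "$target" "$part"; then
        echo "Simulation succeeded at $target bytes; a real resize should be safe."
        # ntfsresize --size "$target" "$part"   # the actual resize would go here
        exit 0
    fi
    target=$((target + increment))
done

echo "No safe shrink size found; leaving the partition at its current size."
```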
-
Sorry, actually no: the image where the resize succeeded has the same MBR, but fewer files in sda2 (about 10GB less than the one that fails to resize).
The suggestion for making the capture process safer still holds, though.
I even tested it: If I resize to 70GB instead of the minimum (about 66GB), it works just fine. I suspect that it isn’t possible to know exactly what the minimum size of an NTFS partition will be without simulating. That’s probably why the authors of ntfsresize include messages like this (emphasis mine):
- Estimating smallest shrunken size supported …
- You might resize at 71189536768 bytes or 71190 MB (freeing 178764 MB).
- Please make a test run using both the -n and -s options before real resizing!
Luckily, simulation takes about 10 seconds for a 250GB drive, so it won’t be a large performance hit.
-
@dolf I agree with all of that. How good are you with shell script?
-
@dolf While I understand what you're saying, I don't think it should continue going. I agree it should not, in the least, actually resize the partition unless we know absolutely that all will continue fine down the road (which is not very practical, as I don't know of a way to "dry run" the whole FOG system before actually performing tasks to test for all these edge cases). The reason there are different image types (resize, non-resize, raw) is to allow people to use what will suit them best. If resize is going to cause issues, I think it wise to fail the upload, but not attempt altering the disk.
Can you post the contents of your image’s (broken please) d1.fixed_size_partitions file? I suspect what’s occurring is an unexpected partition is resizing, thus moving the start sector of the next partition. That I can fix, though I don’t know where to begin.
-
@Wayne-Workman I’m not great at shell scripting. I google about 5 pages for every line I write. I mostly do Python, PHP and C.
@Tom-Elliott I'll have to disappoint you.
-
Another thing comes to mind as well.
FOG does run some math to calculate the smallest size of the partition plus a little more (wiggle room, if you will). I may need to see an upload again using debug, break out at the point where it's testing (once complete), and see what is showing for the ntfsresize variable.
The output of lsblk and fdisk -l would possibly be extremely helpful as well (before AND after).
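Something along these lines would capture that in a debug session; the device and the output file paths are just placeholders:
```
# Snapshot the disk layout before the resize step (e.g. inside a FOS debug task).
lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sda > /tmp/layout_before.txt
fdisk -l /dev/sda >> /tmp/layout_before.txt

# ...perform or simulate the resize here...

# Snapshot again afterwards and diff the two captures.
lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sda > /tmp/layout_after.txt
fdisk -l /dev/sda >> /tmp/layout_after.txt
diff /tmp/layout_before.txt /tmp/layout_after.txt
```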
-
I know it’s a long thread, but here it is: https://forums.fogproject.org/topic/8059/pc-unbootable-after-capture-fails/10
-
Just to show that it does work if you make the wiggle room a tad bigger (where tad = 4GB): gparted_details_70GB.htm
That’s using the same “broken” image. Everything works perfectly on that image, so I wouldn’t really call it broken.
chkdsk agrees with me. It does, however, contain massive software packages with millions of files.
-
@dolf Was the system defragged before it was uploaded?
I ask because: … http://tuxera.com/forum/viewtopic.php?f=2&t=31012
I don’t know if this was/is the case, just may be worth a shot?
-
To add further: the part where it talks about shifting the data on the drive in a strange format suggests that the MFT segments are being moved around and may extend partial bits beyond the partition layout. Or so I believe; I don't really know, but it would offer some explanation as to why a slightly larger partition layout would work.
-
I would tend to think that those “massive” files you’re talking about just didn’t have the room to be shifted around with the target being 2GB free. Defragging before uploading could solve that - and make your image perform better too.
-
I didn't defrag, but I analyzed the fragmentation, and it reported 1% fragmented.
However, last night the hard disk of the PC I originally used to develop this image started acting up. chkdsk /R /F /V /X on reboot returned no errors or bad sectors, but the DELL Pre-boot System Assessment reports that the HDD has Error Code 2000-0142. I couldn't find what that code means, other than that the HDD has failed. I think it's probably a problem with the HDD's electronics, rather than the disk surface, because the diagnostic utility only took a minute, so it obviously didn't scan the disk surface. I'm replacing the disk now, to check.
-
And the truth comes out.
-
Or does it? I just started from scratch on a new image with a new HDD, and I’m having the same problem… MFT gets corrupted. I’ll do some more thorough tests when I have time.
-
I’m still experiencing this problem. Running trunk these days. However, I have found another workaround to upload resizable images:
- Capture a non-resizable image
- Make a resizable image, and copy the files manually from the non-resizable image
- Create these files manually (see the sketch at the end of this post):
  - d1.fixed_size_partitions: just contains ":1"
  - d1.minimum.partitions: I make the minimum size of the resizable partition quite a bit larger than the uncompressed size of d1p2.img (can check this with gzip -l d1p2.img)
  - d1.original.fstypes: /dev/sda2 ntfs
  - d1.original.swapuuids: empty
- Deploy
It works. Therefore it seems that resizing the partition before and after capturing with partclone.ntfs is not necessary, even for resizable images.
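For reference, a minimal shell sketch of the manual file creation from the list above; the image path and partition are placeholders, and d1.minimum.partitions still has to be written by hand as described:
```
# Sketch of the manually created metadata files for the workaround above.
img="/images/myimage"                                  # image directory (placeholder)

echo ":1" > "$img/d1.fixed_size_partitions"            # mark partition 1 as fixed-size
echo "/dev/sda2 ntfs" > "$img/d1.original.fstypes"     # filesystem type of the big partition
: > "$img/d1.original.swapuuids"                       # empty file; no swap UUIDs recorded

# d1.minimum.partitions must be filled in by hand: check the uncompressed size of the
# partition image and make the minimum comfortably larger than it.
gzip -l "$img/d1p2.img"
```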