MacBook Air Capture Fail
-
@sebastian-roth Sebastian, have you had a chance to look at the logs or is there any other info I can get you that may help? I’m actually going to have a man onsite today, but will not have anyone there tomorrow.
Thanks in Advance,
James
-
@SlimJim I started to compare the logs (as well as those from my successful imaging of an HFS+ partition I formatted using the Linux hfsutils) but haven’t found anything in particular yet. I am still trying to understand exactly what partclone is doing when calculating the bitmap by reading the code and the HFS+ specs. Won’t be a quick win, I am afraid.
Could you get me a dump of the volume header of that client’s disk? Boot it up into a debug capture task, plug in a USB stick and run the following commands:
mkdir /usb
mount /dev/sdb1 /usb
dd if=/dev/sda2 of=/usb/volheader_sda2 bs=4096 count=1
gzip /usb/volheader_sda2
umount /usb
Please upload that file (volheader_sda2.gz) to your google drive/dropbox/etc. and post a link here.
Note to myself: http://sysforensics.org/2016/09/mac-dfir-hfs-filesystem-volume-header/ and http://dubeiko.com/development/FileSystems/HFSPLUS/tn1150.html
-
@sebastian-roth Trying to get my onsite guy to get this info for you now, I’ll post when I have it.
-
@sebastian-roth I’m sorry for the delay, haven’t been in the office for a few days, but got the requested info and uploaded here https://drive.google.com/file/d/0B3UbxG_W0mD9UFo3TFdyMDNfRjg/view?usp=sharing
Please let me know if there is anything else you find or require.
Thanks in Advance,
James
-
@SlimJim Ok, here is what I’ve found so far. The total-blocks value in the volume header dump (29288960) doesn’t match the value in the log files (29379602). As I don’t fully understand HFS+ yet, I am not sure what that means. Maybe the header was taken from a different disk/partition?
Understanding the HFS+ header is actually not too hard with the documentation from the links I posted earlier. So I was able to more or less replicate the issue by saving the header data of my test system (using dd), changing the used-blocks value in that header image (using hexedit) and writing it back to disk. After that I got the same error message: “hfsplusclone.c: bitmap count error…”.
Looking at the numbers in your logs again, things start to make sense. The two log files I posted differ in their freeBlocks values by exactly the same amount as in their mbitmap values (24969607 - 24953979 = 15628 = 4425623 - 4409995). From my understanding, the freeBlocks value in the HFS+ header (which is used to calculate the usedBlocks/mbitmap value) looks like it is being updated properly on disk.
What does that mean? I am not sure yet, but I think partclone either finds invalid allocation information on that partition or does the calculation wrong. I am going to look into the partclone code to learn more about how the maths is done, but it will take some more time I am afraid.
Do you have this issue on just one system (your master) or is this happening on several different machines?
-
@sebastian-roth This has happened on the few machines that I’ve tested.
-
@SlimJim After reading through the specs and source code again and again, I think I might have found out what’s wrong - seems to be a simple integer overflow.
@Tom-Elliott The variable allocation_start_block is defined as UInt32 (see here). This is fine as long as the allocation file is stored somewhere not that far from the beginning of the drive, but HFS+ allows it to be anywhere on disk. In the example posted it starts at block 3867215; multiplied by the block size of 4096, that simply overflows the UInt32 and partclone finds wrong information.
Would you mind changing line 133 to UInt64 allocation_start_block; and building fresh inits that have the patched partclone included? I think that should fix the issue here.
-
Inits have been updated with the patch, and I also fixed a compile issue with partclone-0.2.89 in case anybody else is trying to build their own inits. (Sorry, I had fixed it manually once and forgotten that I had to do that. I found the appropriate fix, and it is now part of the source scripts for building the inits.)
Please give them a try.
The inits can be downloaded with:
wget -O /var/www/fog/service/ipxe/init.xz https://fogproject.org/inits/init.xz
wget -O /var/www/fog/service/ipxe/init_32.xz https://fogproject.org/inits/init_32.xz
Hopefully it fixes the issue you were seeing.
-
If this works, I’ll push a pull request against partclone so future versions shouldn’t hit this problem.
@Sebastian-Roth thanks for taking the time to look this over, and hopefully this is the solution. If I had to guess, this was just a simple oversight on Thomas Tsai’s part.
@SlimJim Thanks for the patience and understanding on this.
-
Any word?
-
I’m sorry guys, school started and things got a bit hectic, and therefore I was not able to get my onsite guys to test this yet, but I will be able to today.
James
-
@tom-elliott @Sebastian-Roth AWESOME! Looks like that worked, capture completed successfully! As always you guys have been so helpful and patient, I really appreciate you guys taking the time to assist me in my times of need!!
James
-
@SlimJim You are welcome. Thanks to you too for patiently delivering information and waiting for results!
For now please keep those init files in place while using FOG 1.4.4. The fix will be in the next release!
-
I also sent a pull request to the official partclone developer, so hopefully this will be fixed upstream as well.
EDIT: Done… https://github.com/Thomas-Tsai/partclone/commit/c0629e1a8e73dbdd165d7ac102b8bc9f6f44dac7