The future of Partclone and therefore FOG as it is
Sorry if this sounds a bit daunting; it's not meant to be. I just stumbled upon a changelog message that I would like to discuss with all the FOG users out there:
From https://clonezilla.org/downloads/stable/changelog.php (Clonezilla and Partclone are kind of partner projects, and as I simply couldn't find release notes for Partclone itself I need to reference this one!):
Clonezilla live 2.5.3-3
Partclone was updated to 0.3.8. //NOTE// New image format is used in this release. It is different from the one saved by Partclone 0.2.x.
Within FOG we currently use version 0.2.89, which seems to be the latest 0.2.x release. That's fine from my point of view, even though it's a bit dated. But while updating our whole FOS base system to the latest Buildroot version I ran into a compile error with Partclone, as it still uses the ustat() function, which was removed in up-to-date glibc versions (2.28). So we are getting to the point where we either need to patch Partclone 0.2.89 to keep using it within FOG, or come up with a way to move to version 0.3.x…
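For anyone wanting to check their own build environment, here is a minimal probe for that glibc change: it simply tries to compile a one-liner against <sys/ustat.h>, the header glibc 2.28 removed. The `${CC:-cc}` fallback and the /tmp paths are assumptions for this sketch, not part of the FOS build.

```shell
#!/bin/sh
# Probe whether the toolchain's libc still provides <sys/ustat.h>.
# glibc 2.28 dropped the ustat() interface, which is what breaks the
# Partclone 0.2.89 build against a current Buildroot.

probe_ustat() {
    command -v "${CC:-cc}" >/dev/null 2>&1 || { echo "no C compiler found"; return 0; }
    cat > /tmp/ustat_probe.c <<'EOF'
#include <sys/ustat.h>
int main(void) { return 0; }
EOF
    if "${CC:-cc}" -c /tmp/ustat_probe.c -o /tmp/ustat_probe.o 2>/dev/null; then
        echo "ustat available: partclone 0.2.89 should build"
    else
        echo "ustat removed (glibc >= 2.28): partclone 0.2.89 needs a patch"
    fi
}
probe_ustat
```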
One would think that newer versions of Partclone are able to read the old image format, but there are hints on the web that this can cause trouble: https://sourceforge.net/p/clonezilla/discussion/Clonezilla_live/thread/34023334/
Keep in mind that the Partclone 0.3.x versions are still marked as unstable on the official website. On the other hand, they have been in use in current Clonezilla releases for almost two years!
If you can't follow all the tech details above, don't worry about it too much. The basic message is that Partclone is moving on to a different image format that is not compatible with what we have been using! What that essentially means is that if we also move forward and add Partclone 0.3.x to FOG, it would break all existing images, I reckon (not tested yet).
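If you want to find out which format an existing image uses before upgrading, you can peek at the first bytes of a (decompressed) partclone image. This sketch assumes the header layout from partclone's partclone.h, a 15-byte "partclone-image" magic followed by fields that include an ASCII image-format version ("0001" for 0.2.x, "0002" for 0.3.x); verify against your partclone source before relying on it.

```shell
#!/bin/sh
# Dump the printable characters from the first 64 bytes of a partclone
# image header. Assumption (from partclone.h): the file begins with the
# magic "partclone-image" and carries an ASCII image-format version
# ("0001" = 0.2.x images, "0002" = 0.3.x) among the header fields.

inspect_partclone_header() {
    hdr=$(head -c 64 "$1" | tr -cd '[:print:]')
    case "$hdr" in
        partclone-image*) echo "partclone image, header text: $hdr" ;;
        *)                echo "no partclone magic found" ;;
    esac
}
```

FOG stores images compressed on the server, so decompress the start first (e.g. with zstdcat or zcat, depending on the compression used) and feed the result to the function.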
@Quazz Would you find the time to compile FOS builds with Partclone 0.3.x for testing?
@Sebastian-Roth I understand the reasoning for not updating it. I'm hoping to have a chance to help with testing by the end of the week.
Ok, I finally got to test the new inits as well and I think we are good to go with this. Though I have only done basic testing. We’ll see what we run into when other users start using it.
I only brought it up because we're updating zstd for this build anyway, so I figured "why not have the latest one?"
You are right on the one hand! This is all work in progress, so why not just use the most recent version while we are at it. But on the other hand, if we use 1.4.3 now we should all test again, and 1.4.4 might be just around the corner. For those reasons I tend to not go for 1.4.3 now but keep 1.4.2, and keep my fingers crossed that Buildroot catches up quickly.
@george1421 I only brought it up because we're updating zstd for this build anyway, so I figured "why not have the latest one?"
“let’s keep ZSTD at this version until Buildroot catches up”
I also agree with this quote. The only exceptions would be if a certain package had a documented error that would be fixed by an update, or if the FOG Project needed a feature in a package that is not available in the current Buildroot release.
The developers have enough on their plates keeping the project moving forward without having to think about every package and whether we are running the latest version of each one. It's roughly equivalent to deciding which OS to install: bleeding-edge Fedora, or a bit older (mature) but more stable OS like CentOS.
@Junkhacker Sebastian and I discussed ZSTD a bit and the coin kind of fell on the “let’s keep ZSTD at this version until Buildroot catches up” to keep things a bit simpler and requiring less maintenance from the dev team.
There are other tools in the toolchain that could theoretically also be upgraded, but at that point you're taking on so much maintenance for every future release that you have to question why you're using a bundled package system like Buildroot at all.
@Sebastian-Roth FYI zstd 1.4.3 is out. It's just a bug-fix upgrade, and the bug is probably not relevant to our use case, but we should probably upgrade anyway.
@george1421 @Junkhacker We had to sort through a couple of things in the pull request, and I was just able to merge it all in. It's built, and everyone can test the binaries built from current master here: https://dev.fogproject.org/blue/organizations/jenkins/fos/detail/master/95/artifacts
I will try to do my testing on the weekend. It would be great if you could find the time to run tests on your systems in the next few days as well.
@Sebastian-Roth Those should still work, yes. I also created a PR on GitHub, so people could check it out that way too, I suppose.
We only really have consumer grade hardware here, rarely anything special, so I know it will work in most scenarios, just need to know if it also works in those other scenarios!
@Quazz Sorry for staying away from this so far. I guess there is not much I can add, as I can only test on VirtualBox. I'll still do that later on and get back to you. Is it still the download links you posted 20 days ago?
It would be great if we could come to a satisfactory answer on this matter. Unfortunately I have reached the limit of what I can test on my end (all good), but obviously we want to be sure it works at least as well as the older version for everyone.
Had a kernel "can't mount rootfs" error on a device today; the normal init works fine. Ramdisk size is at 275000 currently.
Unsure why this happened.
edit: Might just be a machine issue as it seems to be acting up in other ways too (namely freezing)
@Quazz You might want to include the nvme package in your build too. Hopefully that will help us solve the issue with the NVMe disk swapping. It's not important for this new init testing, but it may be of some use in the future.
Symbol: BR2_PACKAGE_NVME [=n]
Type  : bool
Prompt: nvme
Location:
  -> Target packages
    -> Hardware handling
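Based on that symbol, enabling it is just a matter of adding the line to the defconfig and letting Buildroot expand it. The config path below is a placeholder for this sketch; point it at wherever the FOS Buildroot defconfig actually lives.

```shell
#!/bin/sh
# Append BR2_PACKAGE_NVME=y to a Buildroot defconfig fragment, idempotently.
# /tmp/fos_defconfig is a placeholder path, not the real FOS location.

enable_nvme() {
    cfg=$1
    touch "$cfg"
    grep -q '^BR2_PACKAGE_NVME=y$' "$cfg" || echo 'BR2_PACKAGE_NVME=y' >> "$cfg"
}

enable_nvme /tmp/fos_defconfig
# Afterwards, inside the Buildroot tree, fold the symbol into the full .config:
#   make olddefconfig && make
```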
“which doesn’t come from Buildroot”
Ah, so it's a FOG-supplied package. Well then, that changes the equation a bit. If it's supplied by the FOG Project and it's tested, then there is probably no reason why we shouldn't update. My concern is deviating from the norm that the Buildroot devs create.
@george1421 I agree that for vital packages waiting on Buildroot where possible is preferable, but something like Testdisk (which doesn't come from Buildroot) could surely use a bump, since that doesn't break anything :)
“You installed buildroot 2019.02.1”
2019.02.4 actually, which doesn't fundamentally change any packages, just a bunch of bugfixes!
@Quazz Great news that you got it to compile. APFS support has also been a recent topic, with people wondering if it was going to be supported, so adding it is a good idea. To make use of it, will the web GUI need to be updated for a different operating system type?
I’ll download the inits a bit later today to try them in my environment. I’m still running FOG 1.5.5 FWIW.
As for updating other packages… (understand this is just my opinion) You installed Buildroot 2019.02.1, which is the same as what the developers are using. Every time you deviate from the standard Buildroot packages you run the risk of introducing unexpected bugs. I would say unless there is a compelling reason (FOG needs a feature of the updated package, or a fix for a bug FOG is hitting), don't upgrade packages at random; let the Buildroot devs do that leg work. I also saw that pigz had an updated version, and I think I saw the incompatibility statements, so I decided not to update that in my build. We had only looked at it because pigz supports the rsyncable flag that we were looking to implement. So unless we are trying to solve a problem, I would say hold off on updating any packages beyond the Buildroot stream. I would rather have stable over new unneeded features.
Great job on getting the inits compiled and working!! I’ll let you know how it works on my stuff.
I'm also wondering if we should look into updating other packages that FOG either overrides or provides manually (e.g. testdisk is on 6.14 as opposed to 7.1, and even pigz is at 2.3.4 instead of 2.4, although pigz seems to have introduced potentially backwards-incompatible changes).
The -B128 option seems to allow it to capture, finally!
Give it a whirl if you’re interested.
The -B128 option gives issues in certain scenarios (special partitions/raw/very small ones???), so we can't reliably use that.
In order to use --rsyncable, zstd had to be updated to a minimum of 1.3.8 (I chose the latest version, 1.4.2, to include performance improvements and bug fixes).
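Since --rsyncable only exists from zstd 1.3.8 onwards, a build or capture script can guard against an older binary sneaking in. The `version_ge` helper relies on `sort -V` (GNU coreutils), and the `zstd -V` parsing is a guess at the banner format, so treat this as a sketch.

```shell
#!/bin/sh
# Guard: --rsyncable requires zstd >= 1.3.8 (the inits ship 1.4.2).

version_ge() {  # version_ge A B -> succeeds if version A >= version B
    # Relies on GNU sort's -V (version sort).
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

have=$(zstd -V 2>/dev/null | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -n1)
if [ -n "$have" ] && version_ge "$have" 1.3.8; then
    echo "zstd $have supports --rsyncable"
else
    echo "zstd missing or older than 1.3.8"
fi
```

A capture pipeline can then safely pipe partclone output into `zstd --rsyncable -T0` without worrying about the flag being rejected at runtime.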
A minor Buildroot update means the config doesn't have to be updated; it's just bugfixes.
Added APFS support so that we can offer better support for newer Macs (if they decide to PXE boot, that is). (Potentially the gptfdisk package should be updated to 1.0.4, which adds typecodes for APFS and others; I haven't tested anything in this direction!)
Currently it fails to capture msftres partitions, throwing exit code 139 (presumably zstd's error code). msftres is captured with
It's difficult to see what's going on because the screen breaks up, but it seems to not recognize any size on this type of partition on Partclone 0.3.12, which is a regression compared to 0.2.89.
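One hint on the 139: in POSIX shells, an exit status above 128 means the process was killed by a signal (status minus 128), so 139 is 128 + 11, i.e. SIGSEGV. That matches the segmentation fault found in the edits below rather than a zstd error code. A tiny helper makes this explicit:

```shell
#!/bin/sh
# Decode a shell exit status: values above 128 mean "killed by signal N",
# where N = status - 128. Partclone's 139 decodes to signal 11 (SIGSEGV).

explain_exit() {
    rc=$1
    if [ "$rc" -gt 128 ]; then
        echo "killed by signal $((rc - 128))"
    else
        echo "exited normally with code $rc"
    fi
}

explain_exit 139   # -> killed by signal 11
```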
edit: Adding a strategic debugPause tells me there is a segmentation fault on line 2041 in
Even stranger is that it is talking about line 2053, yet for some reason mentions line 2041???
Though not sure why this occurs…
edit2: Seems to be down to the fact that it writes to
Direct writing to /images works fine
edit3: Welp, it seems to come and go as it pleases, not sure what the actual cause is!
edit4: My suspicion is that the -B128 option is the culprit (initial test is promising); recompiling, will test tomorrow.
@george1421 I decided to go for 1.4.2 since it includes a couple of performance improvements and bug fixes.
Took a while longer because I forgot to include the hash file.
It’s finally compiling successfully, time for some more tests.