Upgrade from 1.5.7 to 1.5.8 issues
-
@Sebastian-Roth Maybe that is where I/we were at: taking 0.3.13 and putting it on the 1.5.7 inits. This is a bit more complicated than creating the 1.5.8mod, since for the 1.5.8mod you just include the old version of partclone in the build.
I was also going to check whether there have been any updates for partclone too. But things have been a bit crazy here.
-
@george1421 @Sebastian-Roth Thank you for all you do! I understand about being “essential”. Working from home right now.
-
@Sebastian-Roth It looks like one of those 1.5.8mods was already done in the middle of the current truth table.
-
@Chris-Whiteley said:
It looks like one of those 1.5.8mods was already done in the middle of the current truth table.
Yes, I mentioned earlier that you seem to have done this one already but I may ask you to re-do that one test as we pick this up again. Just wanna make sure we really get the tests consistent.
I will work on building a 1.5.7mod today.
-
@Chris-Whiteley It's been hectic today, so I only just got to start the build. Will get back to you within the next few hours.
Did you get to test the other two pairs listed in the truth table with speed marked as “???” yet?
-
@Sebastian-Roth I didn’t because I am also very busy. I am in IT at a district that supports 13 other districts. This has been “survival mode”, not so much “normal mode”. I will let you know when I get some time.
-
@Chris-Whiteley Just updated the truth table with the link to a 1.5.7 init file with partclone 0.3.13.
-
@Chris-Whiteley @george1421 After a lot of fiddling I got a setup up and running where it seems I can replicate the issue described. I updated the truth table a fair bit and will keep the testing going. From what I have found so far it is neither partclone nor Zstd causing the slowness. For now it looks like a difference between buildroot 2019.02.1 and 2019.02.9 is causing this. It will be a fair amount of work to figure out what exactly it is, but to me it looks like it's worth the time because in my testing it's a 10-15 % difference!
-
@Sebastian-Roth So do you think it's worth looking at 2020.02 or 2020.08 to see if one of them is comparable to 2019.02.9 vs 2019.02.1?
Just thinking about it (because most of the speed-related stuff is done in the kernel), it would have to be related to the NFS server, because that is external to the kernel. So if we still see the slowness in 2020.08, then we should investigate whether it's an NFS-related issue and do some performance tuning/testing with NFS. I don't think staying on 2019.02.1 is a long-term viable option, but a 10 % performance hit isn't really one either. If we had a solid NFS or disk performance testing tool (akin to iperf or netperf) we could test different NFS tuning parameters.
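For what it's worth, a rough baseline can be had without a dedicated tool. The sketch below assumes a FOS debug console; the server IP, export path and image file name are placeholders, not FOG defaults.
```
# Mount the images export read-only and time a large sequential read.
mkdir -p /mnt/nfs-test
mount -o ro,vers=3 192.168.1.10:/images /mnt/nfs-test   # placeholder server/export
echo 3 > /proc/sys/vm/drop_caches                       # avoid measuring the client cache
dd if=/mnt/nfs-test/someimage/d1p2.img of=/dev/null bs=1M count=2048
umount /mnt/nfs-test
```
Repeating the same read locally on the server (and running iperf for the raw network) would help separate NFS overhead from disk and network limits.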
-
@george1421 Thanks for thinking through this as well!
What I can say so far is that it's not the kernel either! I use the same one (4.19.101 from FOG 1.5.8) for all my tests (except for a single test with plain 1.5.7) and I do see a noticeable time difference just by swapping out init files.
I have tested the current FOG 1.5.9 (buildroot 2020.02.6) now as well. It’s slow too. Truth table updated.
So what I am working on now is building inits with buildroot 2019.02.2 through 2019.02.8 based on the configs we used for 2019.02.1 (FOG 1.5.7). That should give us a pretty clear answer as to which version introduced the slowness.
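Roughly, each of those per-version builds could look like the following sketch; the config file path is an assumption, not the official FOS build procedure.
```
# Fetch one buildroot point release and reuse the known-good 2019.02.1 config.
wget https://buildroot.org/downloads/buildroot-2019.02.5.tar.gz
tar xf buildroot-2019.02.5.tar.gz && cd buildroot-2019.02.5
cp /path/to/fos/configs/x64.config .config   # hypothetical config location
make olddefconfig   # carry the old config forward, accepting new defaults
make                # the resulting init (rootfs image) ends up under output/images/
```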
Tuning NFS might be a good idea on top of what I am doing right now, but I don't want to intertwine those two things. We definitely lost performance. As of now it looks like buildroot introduced it, and I want it back from buildroot instead of compensating for it by tuning NFS.
-
After some more hours of testing I have to say that I was on the wrong track with my assumption that I had replicated Chris's issue. Too bad, I still haven't.
Turns out the 10-15 % slower deploy is caused by command line parameters we added to partclone and Zstd (and other commits) for file deduplication (discussed in the forums earlier, [1] and [2]).
@Junkhacker @george1421 @Quazz I am wondering if we want to keep those for every FOG user or if we should make them optional (enabled via a kernel parameter or something else) now that I see them causing a noticeable performance decrease.
@Chris-Whiteley said on Feb 27, 2020, 11:01 PM:
After a test with the new init I am still having the issues with decreased speed. It is almost double what it used to take: my images were being pushed out in around 2:30 and now it takes 4:17.
Sorry, but I think I still have not found what is causing such a huge difference in time in your setup. Maybe the stuff mentioned above plays a role for you as well, but I would really wonder if deduplication caused such a huge delay for you. Do you still use 1.5.7 at the moment? Would you be keen to get into testing newer versions again to see if we can figure this out?
-
@Sebastian-Roth Possibly a global setting "Create dedup-friendly image files"; if that global parameter is set, it sets a kernel flag telling FOS to add the dedup command line parameters for partclone and image compression.
I don't see value in making this an image-level option. You either use dedup storage for your images or you don't. I don't see value in having image 1 configured for dedup storage and image 2 not. It should be all or nothing, IMO.
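On the FOS side, something like the simplified sketch below could handle that; the dedupimage=1 kernel argument, the variable name and the capture pipeline are hypothetical illustrations rather than the actual FOG scripts.
```
#!/bin/sh
# Add the dedup-friendly capture options only when the kernel flag is present.
dedup_opts=""
case " $(cat /proc/cmdline) " in
    *" dedupimage=1 "*) dedup_opts="-a0" ;;   # e.g. the checksum-off option mentioned below in the thread
esac
# Placeholder capture pipeline: partition, compressor settings and image path are illustrative.
partclone.ntfs -c -s /dev/sda2 $dedup_opts -o - | zstd -T0 > /images/dev/macaddress/d1p2.img
```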
-
@Sebastian-Roth I actually ended up removing the B128 option (https://github.com/FOGProject/fos/commit/e151e674b14279375884c8597e06f82272fe3f92) when I noticed some issues with it, so it’s not in the current inits.
The a0 option just disables checksum creation, so it shouldn't negatively impact speed either.
Although it's possible that the whole checksum handling is bugged, which is one of the issues Junkhacker raised upstream with partclone, and that it's somehow causing issues?
-
@Quazz Good point! Your comment made me look at this again. Now I see that I was too quick in assuming those changes were causing the slowness, because all of the added parameters are only used when creating/uploading an image. Seems like hours of digging through this had narrowed my mindset so much that I missed such an obvious thing.
The slowness I noticed yesterday must have been caused by the removal of the --ignore_crc parameter in 3e16cf58, while still using partclone 0.2.89 in this test. So testing is ongoing.
-
@Junkhacker @Quazz Looks like the --ignore_crc parameter (discussed here as well) really makes the difference. Will do a test with the latest FOS build later today, but a first run, the exact same FOS once with the parameter and once without, already shows the time difference.
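For reference, that kind of A/B run on the deploy side could be reproduced along these lines; the filesystem type, image path and target partition are placeholders.
```
# Identical restores of the same captured partition, first with CRC checking, then without.
time sh -c 'zstd -dc /images/someimage/d1p2.img | partclone.ntfs -r -s - -o /dev/sda2'
time sh -c 'zstd -dc /images/someimage/d1p2.img | partclone.ntfs -r -s - -o /dev/sda2 --ignore_crc'
```
-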
@Sebastian-Roth Can we add it as an optional parameter since it breaks compatibility with partclone 2 images?
-
@Quazz Ohhh well, how could I forget about this… Obviously there have been many other things nagging in my head this year.
After a long time digging into this, things seem to add up at least. You mentioned the CRC patch in a chat session only a good week ago, but I did not grasp it back then.
So now I have manually added the patch mentioned to our 0.3.13 build and deployed an image (captured with 0.2.89) using the patched 0.3.13 partclone with the --ignore_crc parameter. Unfortunately this does not seem to fix the issue. I suppose it's worth finding out why the patch doesn't work instead of adding --ignore_crc as an optional parameter to the code.
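In case anyone wants to reproduce the build: buildroot applies patch files placed next to a package definition automatically during the extract step. The patch file name and the location of the partclone package inside the FOS buildroot tree are assumptions here.
```
# Drop the proposed fix into the partclone package directory and rebuild only that package.
cp 0001-ignore-crc-fix.patch package/partclone/   # hypothetical patch name and location
make partclone-dirclean && make partclone         # force a clean rebuild of partclone
make                                              # regenerate the init under output/images/
```
-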
@Quazz Good news, I think I have figured it out. The bug described by @Junkhacker is not actually solved by the proposed fix, I reckon; I just updated the issue report.
Comparing in a hex editor two dd images of a partition being deployed, one with --ignore_crc and the other without, I found random bytes in the former. Those looked a bit like the CRC hash being written to disk instead of just being skipped when we tell partclone to ignore CRC.
Tests are looking pretty good so far. I will build an up-to-date FOS init with the added patches and --ignore_crc for everyone to test tomorrow.
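The same comparison can be done without a GUI hex editor; the device and dump paths below are placeholders.
```
# Dump the target partition after each of the two deploy runs.
dd if=/dev/sda2 of=/mnt/scratch/with_ignore_crc.bin bs=4M
dd if=/dev/sda2 of=/mnt/scratch/without_ignore_crc.bin bs=4M
# List the first differing byte offsets (decimal offset, octal byte values).
cmp -l /mnt/scratch/with_ignore_crc.bin /mnt/scratch/without_ignore_crc.bin | head -n 20
# Inspect one of the differing offsets in hex in both dumps.
xxd -s 0x10000 -l 64 /mnt/scratch/with_ignore_crc.bin
xxd -s 0x10000 -l 64 /mnt/scratch/without_ignore_crc.bin
```
-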
@Chris-Whiteley @george1421 @JJ-Fullmer @Quazz Would you please all test this new init build: https://fogproject.org/inits/init-1.5.9-ignore_crc-fix.xz (very close to what we released with 1.5.9, but with the --ignore_crc option and the patches mentioned below added)
-
@Sebastian-Roth
Just gave this a test
I’m on fog 1.5.9.3
I’m using bzImage 5.6.18
I set a host to use the init you shared.
I had previously imaged this machine with the init that came with 1.5.9.3/1.5.9.2 and it imaged in 4 min 33 secs; when I watched, the speed was around 10-11 GB/min. But I wasn't babysitting the speed, so it probably slowed down a bit when I wasn't looking, based on the results below.
The same image on the new init didn't appear to go much faster but did stay stable around 11-12 GB/min, and it actually finished in 2 min 38 secs. So it almost cut the time in half.
So I didn't get to see the super fast 20+ GiB/min speed again, but it did finish in about half the time.
Edit
I also noticed it now shows speed in GB/min instead of GiB/min. Not that big a scale change, but something I noticed. I also deployed the image to the machine one more time, this time without being connected to a small desktop switch, and got the exact same 2:38 deploy time.
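For scale, the difference between the two units is only about 7 %, so the displayed numbers stay roughly comparable; a quick check, assuming GB = 10^9 bytes and GiB = 2^30 bytes:
```
# 11 GB/min expressed in GiB/min
echo 'scale=2; 11 * 10^9 / 2^30' | bc   # prints 10.24
```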
Other Edit
I also noticed that I too don't have any 0.2.89 partclone images to test. Mine are all 0.3.13, I'm pretty sure.