Upgrade from 1.5.7 to 1.5.8 issues
-
@Sebastian-Roth I think I dropped the ball on this one. My company is considered essential so I’m still working through this mess.
I think where I was at was taking the partclone bits from the 1.5.7 inits and overwriting the partclone bits in the 1.5.8 inits. This will give us an idea of whether it's the 0.2.89 code or something else in the 1.5.8 inits, like zstd, gzip, or something else.
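For reference, a rough sketch of how that init swap could be done, assuming the FOG inits are xz-compressed cpio archives and partclone lives under usr/sbin inside them (paths and file names are examples, not taken from this thread):

```sh
# Unpack the 1.5.8 init (path is an example)
mkdir -p /tmp/init158 && cd /tmp/init158
xz -dc /var/www/fog/service/ipxe/init.xz | cpio -idm

# Overwrite its partclone binaries with the ones unpacked from the 1.5.7 init
cp /tmp/init157/usr/sbin/partclone.* usr/sbin/

# Repack; -C crc32 keeps the archive readable by the kernel's xz decompressor
find . | cpio -o -H newc | xz -9 -C crc32 > /var/www/fog/service/ipxe/init_158mod.xz
```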
-
@Chris-Whiteley @george1421 Thanks for the quick update. I just added two lines to the truth table. Chris, can you please test those two and report back? One of them you did a test on already but as this is crucial I may ask you to do it again as we get this going again.
-
@george1421 said:
I think where I was at was taking the partclone bits from the 1.5.7 inits and overwriting the partclone bits in the 1.5.8 inits.
Re-reading things I think we already have this part (1.5.8mod). But we seem to be missing the reverse, putting partclone 0.3.13 into the default 1.5.7 init, unless I have overlooked something in the thread. Not really sure why I already put this in the truth table, but I can't find a 1.5.7mod init anywhere in this topic. Do you?
-
@Sebastian-Roth Maybe that is where I/we were at: taking 0.3.13 and putting it into the 1.5.7 inits. This is a bit more complicated than creating the 1.5.8mod, since for the 1.5.8mod you just include the old version of partclone in the build.
I was also going to check whether there have been any updates to partclone. But things have been a bit crazy here.
-
@george1421 @Sebastian-Roth Thank you for all you do! I understand about being “essential”. Working from home right now.
-
@Sebastian-Roth It looks like one of those 1.5.8mod tests was already done in the middle of the current truth table.
-
@Chris-Whiteley said:
It looks like one of those 1.5.8mod tests was already done in the middle of the current truth table.
Yes, I mentioned earlier that you seem to have done this one already, but I may ask you to re-do that one test as we pick this up again. I just want to make sure we really keep the tests consistent.
I will work on building a 1.5.7mod today.
-
@Chris-Whiteley It's been hectic today, so I only got to start the build just now. Will get back to you in the next few hours.
Did you get to test the other two pairs listed in the truth table with speed marked as “???” yet?
-
@Sebastian-Roth I didn’t because I am also very busy. I am in IT at a district that supports 13 other districts. This has been “survival mode”, not so much “normal mode”. I will let you know when I get some time.
-
@Chris-Whiteley Just updated the truth table with the link to a 1.5.7 init file with partclone 0.3.13.
-
@Chris-Whiteley @george1421 After a lot of fiddling I got a setup up and running where it seems I can replicate the issue described. I updated the truth table a fair bit and will keep the testing going. From what I have found so far, it's neither partclone nor Zstd causing the slowness. For now it looks like a difference between buildroot 2019.02.1 and 2019.02.09 is causing this. It will be a fair amount of work to figure out what exactly it is, but to me it looks like it's worth the time because in my testing it's a 10-15 % difference!
-
@Sebastian-Roth So do you think it's worth looking at 2020.02 or 2020.08 to see if one of them is comparable to 2019.02.09 vs. 2019.02.1?
Just thinking about it: since most of the speed-related stuff is done in the kernel, it would have to be related to the NFS server, because that is external to the kernel. So if we still see the slowness in 2020.08, we should investigate whether it's an NFS-related issue and do some performance tuning/testing with NFS. I don't think staying on 2019.02.1 is a long-term viable option, but a 10 % performance hit isn't really one either. If we had a solid NFS or disk performance testing tool (akin to iperf or netperf) we could test different NFS tuning parameters.
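Even without a dedicated tool, a quick ballpark check from the client side could already show a difference between inits; a minimal sketch, where the server name, export and image path are placeholders and busybox dd may not support status=progress:

```sh
# Mount the image store read-only over NFS (options are just an example)
mount -o ro,vers=3 fogserver:/images /mnt/images

# Drop the page cache so repeat runs don't simply read from RAM
echo 3 > /proc/sys/vm/drop_caches

# Sequential read of a large image file; compare the MB/s between the two inits
dd if=/mnt/images/IMAGENAME/d1p2.img of=/dev/null bs=1M status=progress
```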
-
@george1421 Thanks for thinking through this as well!
What I can say so far is that it's not the kernel either! I use the same kernel (4.19.101 from FOG 1.5.8) for all my tests (except for a single test with plain 1.5.7) and I do see a noticeable time difference just by swapping out init files.
I have tested the current FOG 1.5.9 (buildroot 2020.02.6) now as well. It’s slow too. Truth table updated.
So what I am working on now is building inits with buildroot 2019.02.2 through 2019.02.8, based on the configs we used for 2019.02.1 (FOG 1.5.7). That should give us a pretty clear answer as to which version introduced the slowness.
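Roughly what that sweep boils down to, sketched with plain buildroot commands; the download URL pattern is buildroot's, but the config and output file names are assumptions rather than the actual FOS build script:

```sh
# Build one init per buildroot point release, reusing the FOG 1.5.7 config
for v in 2019.02.2 2019.02.3 2019.02.4 2019.02.5 2019.02.6 2019.02.7 2019.02.8; do
    wget -q "https://buildroot.org/downloads/buildroot-${v}.tar.gz"
    tar xf "buildroot-${v}.tar.gz"
    cp fog-1.5.7-x64.config "buildroot-${v}/.config"
    make -C "buildroot-${v}" olddefconfig
    make -C "buildroot-${v}" -j"$(nproc)"
    # the resulting rootfs cpio is what ends up shipped as the init
    cp "buildroot-${v}/output/images/rootfs.cpio.xz" "init-${v}.xz"
done
```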
Tuning NFS might be a good idea on top of what I am doing right now, but I don't want to intertwine those two things. We definitely lost performance. Right now it looks like buildroot introduced it, and I want to get that performance back from buildroot instead of compensating for it by tuning NFS.
-
After some more hours of testing I have to say that I was on the wrong track with my assumption that I had replicated Chris's issue. Too bad, I still haven't.
Turns out the 10-15 % slower deploy is caused by command-line parameters we added to partclone and Zstd (and other commits) for file deduplication (discussed in the forums earlier, [1] and [2]).
@Junkhacker @george1421 @Quazz I am wondering if we want to keep those for every FOG user or if we should make them optional (enabled via a kernel parameter or something else) now that I see them causing a noticeable performance decrease.
@Chris-Whiteley said on Feb 27, 2020, 11:01 PM:
After a test with the new init I am still having the issues of speed decrease. It is almost double what it used to take. My images being pushed out was around 2:30 minutes and now it is 4:17.
Sorry, but I think I still have not found what is causing such a huge difference in time in your setup. Maybe the stuff mentioned above is playing a role for you as well, but I would really wonder if deduplication is causing such a huge delay for you. Do you still use 1.5.7 at the moment? Would you be keen to get into testing newer versions again to see if we can figure this out?
-
@Sebastian-Roth Possibly a global setting, "Create dedup friendly image files". If that global parameter is set, it sets a kernel flag to tell FOS to add the dedup command-line parameters for partclone and image compression.
I don't see value in making this an image-level option. You either use dedup storage for your images or you don't. I don't see value in having image 1 configured for dedup storage and image 2 not. It should be all or nothing IMO.
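Something like this is what I picture on the FOS side; a minimal sketch, assuming a hypothetical dedupimage kernel argument, with -a0 used purely as a stand-in for whatever dedup-related options end up being kept (none of this is existing FOG code):

```sh
# Minimal sketch (not existing FOG code): a hypothetical "dedupimage" kernel
# argument toggles whether dedup-friendly capture flags get appended.
PARTCLONE_DEDUP_OPTS=""
if grep -qw 'dedupimage=1' /proc/cmdline; then
    # placeholder: whichever partclone/zstd dedup options we decide to keep
    PARTCLONE_DEDUP_OPTS="-a0"
fi
# the capture code would then append $PARTCLONE_DEDUP_OPTS to its partclone call
```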
-
@Sebastian-Roth I actually ended up removing the B128 option (https://github.com/FOGProject/fos/commit/e151e674b14279375884c8597e06f82272fe3f92) when I noticed some issues with it, so it’s not in the current inits.
The a0 option disables checksum creation, so it shouldn't negatively impact speed either.
Although it's possible that the whole checksum handling is bugged (one of the issues Junkhacker raised with partclone upstream) and is somehow causing problems?
-
@Quazz Good point! Your comment made me look at this again. Now I see that I was too quick in assuming those changes were causing the slowness, because all the parameters added are only used when creating/uploading the image. Seems like after hours of digging through this my mindset was too narrow to notice such an obvious thing.
The slowness I noticed yesterday must have been caused by the removal of the --ignore_crc parameter in 3e16cf58, while still using partclone 0.2.89 in this test. So testing is ongoing.
-
@Junkhacker @Quazz Looks like the --ignore_crc parameter (discussed here as well) really makes the difference. Will do a test with the latest FOS build later today, but a first run - exact same FOS, one with the parameter and one without - already shows the time difference.
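For anyone who wants to reproduce the comparison, this is roughly what the A/B run boils down to, assuming an uncompressed partclone image file and a scratch target partition (paths are placeholders):

```sh
# Restore once with --ignore_crc and once without, same image, same target
time partclone.ext4 -r --ignore_crc -s /images/IMAGE/d1p2.img -o /dev/sda2
time partclone.ext4 -r -s /images/IMAGE/d1p2.img -o /dev/sda2
```
-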
@Sebastian-Roth Can we add it as an optional parameter since it breaks compatibility with partclone 2 images?
-
@Quazz Ohhh well, how could I forget about this… Obviously there have been many other things nagging in my head this year.
After a long time digging into this, things seem to add up at least. You mentioned the CRC patch in a chat session only a good week ago, but I did not grasp it back then.
So now I manually added the patch mentioned to our 0.3.13 build and deployed an image (captured with 0.2.89) using the patched 0.3.13 partclone with the --ignore_crc parameter. Unfortunately this does not seem to fix the issue. I suppose it's worth finding out why the patch doesn't work instead of adding ignore_crc as an optional parameter to the code.
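For the record, applying the patch was essentially just dropping it into the buildroot package directory and rebuilding; a rough sketch, assuming partclone is built as a regular Buildroot package in our tree (patch file name and paths are placeholders):

```sh
# Buildroot applies package/<name>/*.patch automatically during extraction
cp 0001-ignore-crc.patch buildroot/package/partclone/

# Force partclone to be re-extracted, re-patched and rebuilt into the rootfs
make -C buildroot partclone-dirclean partclone-rebuild all
```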