SOLVED Error "rcu_sched self detected stall on CPU" on legacy BIOS Capture job
- OS version: CentOS 7
- FOG version: 1.5.5
We deployed FOG 1.2.0 here and had no problems at all until some UEFI machines arrived on our network and we faced the need to upgrade. The upgrade was done as a fresh install on a brand-new VM server.
The error in the title occurs when I try to capture an image from machines with legacy BIOS. It starts well, but at some point it shows the mentioned error and gets stuck. I already tried a number of bzImages, both x86 and x64 archs, to no avail.
I read about it in this topic:
… and tried to roll back the kernel to 4.15.2, but I get a “FATAL: kernel too old” message and then everything freezes on the client machine. I ran surface disk tests and memtests; all passed.
These are machines that worked very well while I was using FOG 1.2.0, so I don’t know what happened. Unfortunately I cannot upgrade these machines’ BIOS firmware because they communicate with industrial hardware that might fail if that kind of change is made.
My inventory data from these machines follows:
- System Type: Desktop
- BIOS Vendor: Phoenix Technologies, LTD
- BIOS Version: 6.00 PG
- BIOS Date: 03/31/2008
- Motherboard Manufacturer: Intel
- Motherboard Product Name: Broadwater
- Motherboard Version: Fab D
That machine is on a different subnet from my FOG server. Because I have both UEFI and legacy BIOS machines on my network, and the DHCP server is a Windows Server 2008 box I have no access to, I’m using dnsmasq to direct “undionly.kpxe” to these machines.
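For reference, the relevant dnsmasq proxyDHCP settings look roughly like the sketch below. This is a minimal example, not the poster’s actual config: the subnet, file names and paths are illustrative and assume a default FOG install with TFTP root `/tftpboot` and a dnsmasq recent enough to match on client architecture.

```conf
# /etc/dnsmasq.d/ltsp.conf -- proxyDHCP sketch; subnet/paths are examples
port=0                                  # disable DNS; act as proxyDHCP only
dhcp-range=192.168.1.0,proxy            # your client subnet here
dhcp-boot=undionly.kpxe                 # legacy BIOS default
# hand UEFI clients a different binary (client-arch 7 = x86-64 EFI)
dhcp-match=set:efi64,option:client-arch,7
dhcp-boot=tag:efi64,ipxe.efi
pxe-service=x86PC,"Boot to FOG",undionly.kpxe
enable-tftp
tftp-root=/tftpboot
```

Because the Windows DHCP server keeps handing out addresses as usual, dnsmasq only supplies the PXE boot information on top of it.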
Any help will be appreciated! Thank you all in advance.
I’m using the latest bzImage and did extensive tests on legacy BIOS systems that were presenting the “rcu_sched” warnings; so far I’ve never seen them again, nor any other hanging issues.
If I can help with any other kind of tests, please let me know.
Thanks everyone, awesome work!
@george1421 yes, I understood that, I’m downloading the inits and will test it asap. What I stated was just to confirm why these issues were happening.
Using bzImage 4.15.2 + FOG 1.5.5 init.xz gave me kernel panic “FATAL: kernel too old” messages on every single system I tried it on.
Ok, I just compiled inits that should work with kernels all the way back to 4.15.x (64 bit and 32bit). Can you guys give those a try in your environments before I make those the default?
What Sebastian is saying here is he recompiled the inits to move the minimum kernel requirement back to support the 4.15.x series of linux kernels. So (for now) you only need to manage bzImage and bzImage4152 kernels using the same inits (virtual hard drive).
Hi guys, I didn’t forget about this topic. I’m just currently dealing with some iPXE booting challenges due to the big differences between the system archs I have here. I’m almost finished, so I can try out the tests you asked for.
So far, some answers:
Are you saying that it does work “sometimes” without an issue. Is that on the same kernel version 4.19.6 that is causing the error initially posted?? Would make it even harder for us to nail this issue down.
As much as I wanted to answer that technically, the best I have is: yes, it’s kind of random. Our business model demands constant infrastructure changes as our clients’ needs evolve, so we have lots of machines that, although the same models, have slightly different CPUs and BIOS versions. That is a challenging scenario for setting up an application such as FOG as an automation tool, so at each node I have to test which FOS image is the best fit.
So far I have had the 4.19.6 bzImage + FOG 1.5.5 init.xz working on about 90% of my systems with no bugs, hangs or issues of any other nature. For the ones where I did find issues, switching to 4.15.2 as suggested by @george1421 fixed the problems, but only when I used the init.xz packed with the FOG 1.5.2 binaries.
Using bzImage 4.15.2 + FOG 1.5.5 init.xz gave me kernel panic “FATAL: kernel too old” messages on every single system I tried it on. The same happens with bzImages of every version from there up to 4.19.6 with that init.xz: they boot and start the task fine, but throw the errors reported in the title of this topic at some point during image deploy/capture tasks (it’s not always the same point, and I didn’t test other kinds of tasks).
Trying to figure out what might be causing this on your hardware I started by reading the kernel docs on this. Essentially it says that this can be caused by many different things (see a detailed list in the document linked) and we might need to turn on CONFIG_RCU_TRACE in the kernel to get an idea where things go wrong. But as a start we would need to have a clear picture of the exact error messages on screen.
Ok, I’ll reproduce the error scenario and take a picture of the screen. I’m doing this right now.
@fenix_team @george1421 @Quazz Ok, I just compiled inits that should work with kernels all the way back to 4.15.x (64 bit and 32bit). Can you guys give those a try in your environments before I make those the default?
Will test it right after the rcu_sched issue.
Tom Elliott:
@Sebastian-Roth you’re absolutely right about the init sizes. One thing I want to add, however, is that the old inits from 0.32 were 30 MB in size and the kernels were often around 10–15 MB. Our current inits are 18–20 MB and the kernels are around 7–9 MB, so I think we’re actually doing pretty well.
My original FOS image is the latest available at Kernel Update GUI page, which currently is 4.19.6 (both bzImage and bzImage32)
I just finished one of the 2 machines with systems as described in the OP. The capture job succeeded without so much as a single warning! The system with the American Megatrends BIOS also got smoothly past the point where the issue was happening.
Hey, thanks for reporting so many details about this! I started to look into this and reading all the messages posted. That one really caught my attention. Are you saying that it does work “sometimes” without an issue. Is that on the same kernel version 4.19.6 that is causing the error initially posted?? Would make it even harder for us to nail this issue down.
And a quick comment on the kernel/init versions. There is no strict rule that kernels are compiled against exactly one init version or vice versa. But looking into this more closely I just figured something out that I wasn’t aware of until now: there is an option within buildroot (the tool stack we use for the inits) that is used to optimize glibc compilation. The more recent the kernel version you choose, the less compatibility code needs to be built into glibc, and therefore the smaller the binaries. Sounds pretty straightforward, and if I had known this before (there are hundreds of buildroot options and I really don’t know exactly what they all do) I would have built with more compatibility!
I will compile a new set of inits with more compatibility now and see if it matters much in size. I guess it won’t, as the inits are fairly large (just under 20 MB) anyway. We’ll see. I will let you all know.
Ok, back to the initial posted issue: Trying to figure out what might be causing this on your hardware I started by reading the kernel docs on this. Essentially it says that this can be caused by many different things (see a detailed list in the document linked) and we might need to turn on CONFIG_RCU_TRACE in the kernel to get an idea where things go wrong. But as a start we would need to have a clear picture of the exact error messages on screen.
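Until a kernel with CONFIG_RCU_TRACE is available, one thing that may help capture a fuller stall report is raising the console log level and the stall-warning timeout via kernel boot parameters, which in FOG can be appended in the host’s Kernel Args field in the web UI. A sketch with illustrative values (60 seconds is just an example; both parameters are documented in the kernel’s RCU stall-warning docs):

```
loglevel=7 rcupdate.rcu_cpu_stall_timeout=60
```

This won’t fix the stall, but it gives the RCU subsystem more time before warning and makes sure the full message reaches the screen.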
I also noticed another change: all of these machines would sometimes get stuck in iPXE boot at 0% while loading “/default.ipxe”, forcing me to reboot many times until it randomly booted correctly. After changing the kernel and init versions, that problem vanished (I don’t know if the two are related, though).
From my point of view those two things can’t be related, as the Linux kernel is not yet running when default.ipxe is being loaded! It’s interesting that you seem to have fixed this by changing the Linux kernel and inits, though. I suspect it’s just a coincidence. Usually when things don’t load properly at that stage it’s a network driver problem within the iPXE code. That is another thing that is very hard to debug, as it is hardware specific and needs to be reproduced to find and fix. But for iPXE there might be a different solution for you. We provide a set of different binaries, which you will find in /tftpboot on your FOG server. The default for legacy BIOS machines is undionly.kkpxe. You can try:
- undionly.*pxe (UNDI network stack only)
- ipxe.*pxe (native driver stack, all included)
- intel.*pxe (native drivers, Intel NICs only)
- realtek.*pxe (native drivers, Realtek NICs only)
@george1421 The problem is for new kernels we need to update the kernel headers. Programs built against that (such as the programs in the init files) require the minimum supported kernel of those headers to run.
In practice, it will often work anyway, but sometimes the changes make it impossible, nothing that can really be done about that from our side afaik.
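To illustrate, the “FATAL: kernel too old” abort is glibc comparing the running kernel version against the minimum it was configured for at build time (its `--enable-kernel` setting). A rough shell sketch of that comparison follows; the version numbers are purely illustrative, not the actual values baked into the FOG inits.

```shell
# Sketch of glibc's startup check: if the running kernel is older than
# the minimum the C library was built for, it aborts immediately.
kernel_at_least() {
    # true when $1 (running kernel) >= $2 (required minimum), version-wise
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

running=4.15.2
required=4.16.0   # e.g. inits built against newer kernel headers
if kernel_at_least "$running" "$required"; then
    echo "init starts normally"
else
    echo "FATAL: kernel too old"   # what you see on screen
fi
```

This is why the 1.5.5 inits refuse to start on a 4.15.2 kernel while the 1.5.2 inits, built with an older minimum, run fine.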
@george1421 very well, I’ll test as you specified and report the results later.
@fenix_team OK, I didn’t think the FOG project devs tied the kernel that tightly to the inits. Please use the 1.5.2 inits then. But I need to ask you to also try the 4.19.6 kernel with the 1.5.2 inits for completeness. It’s almost clear in my head that the inits are not the issue here, but to rule them out and complete the truth table, please test if you have time.
We know 4.15.2 is good with 1.5.2 inits
We know 4.19.6 is bad with the 1.5.5 inits.
@george1421 question, should I use original init.xz packed with FOG 1.5.5 or should I downgrade it as well to the 1.5.2 one as you suggested earlier?
I ask because I already extensively tested the 4.18.x down to 4.16.x branches (for both archs) and in all cases I had the kernel panic “FATAL: kernel too old” issue.
@fenix_team Since you seem to have a fleet (smile) of impacted systems, could you help with debugging? What I’d like is for you to use the 1.5.5 inits and test kernel 4.18.3 from this site: https://fogproject.org/kernels/ What I want to see is whether it’s an issue with the 4.19.x branch. We know that 4.15.2 was a very stable kernel build, and we have had to fall back to that release a few times because of dramatic changes in the Linux kernel after the 4.15.x versions.
@george1421 My original FOS image is the latest available at Kernel Update GUI page, which currently is 4.19.6 (both bzImage and bzImage32)
I just finished one of the 2 machines with systems as described in the OP. The capture job succeeded without so much as a single warning! The system with the American Megatrends BIOS also got smoothly past the point where the issue was happening. I also noticed another change: all of these machines would sometimes get stuck in iPXE boot at 0% while loading “/default.ipxe”, forcing me to reboot many times until it randomly booted correctly. After changing the kernel and init versions, that problem vanished (I don’t know if the two are related, though).
My last machine, with a legacy Phoenix Award BIOS, did not complete the capture job, but looking at partclone.log I found a bad-block issue. I’m now trying to capture it using the DD method, which increases the time but is worth the test. I’ll report on it later.
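For anyone following along, the raw “DD” capture idea amounts to a block-level copy that keeps going past read errors (`conv=noerror`) and zero-pads failed reads to keep the image block-aligned (`conv=sync`). A minimal sketch, demonstrated on a scratch file rather than a real disk; in an actual capture the input would be the disk device, something like `if=/dev/sda of=/images/raw.img bs=4M` (paths here are hypothetical).

```shell
# Block-level copy that tolerates read errors, demoed on a throwaway file.
src=$(mktemp) && img=$(mktemp)
printf 'sector data' > "$src"
dd if="$src" of="$img" bs=512 conv=noerror,sync status=none
wc -c < "$img"   # output is padded up to a whole 512-byte block
```

Unlike partclone, this copies every sector regardless of filesystem state, which is why it is slower but can get past bad blocks.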
@fenix_team Again, thank you for the details; all of it helps in trying to deduce the issue. The one question I forgot to ask is: what FOS kernel version were you originally on? Looking at the FOS kernel list I see the current 4.19.6 released in December, 4.19.1 released in November, and 4.18.3 released in August.
I’m having this same issue with another type of legacy BIOS machine, whose BIOS and CPU details are:
- System Type: Main Server Chassis
- BIOS Vendor: American Megatrends Inc.
- BIOS Version: 2.1
- BIOS Date: 12/30/2011
- Motherboard Manufacturer: Supermicro
- Motherboard Product Name: X8DAL
- CPU Manufacturer: Intel
- CPU Version: Intel Xeon CPU E5620 @ 2.40GHz
- CPU Normal Speed: 2400 MHz
- CPU Max Speed: 2400 MHz
I’m executing the tests as I post this, including on the machine described above.
@george1421 Glad to know the details sufficed! I did the kernel rollback test before but didn’t update the init.xz file like you said. I will test this workaround right away and post the outcomes as soon as I’m finished.
As for the CPU details, they follow below:
Main Processor: Intel 2.12GHz (266x8.0)
CPU Brand Name: Intel Core2 Duo CPU
C1E BIOS Supported
Not an answer but a workaround: https://forums.fogproject.org/post/120555
In short, for these machines use the 4.15.2 kernel with the inits from the 1.5.2 binaries zip file. The developers are not sure if it’s a regression bug in the Linux kernel or something new. We may have to defer to the Linux kernel developers to look into the issue.
The details you provided in your OP are great! The more details we have, the quicker the solution will come. Can you tell me what CPU is installed in that system, since the error appears to be CPU specific (possibly missing microcode??)
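On the microcode suspicion, a quick way to see what the CPU reports is reading /proc/cpuinfo from a shell on the booted FOS client (e.g. a debug task). A small sketch; the `model name` and `microcode` fields are x86-specific, and the fallback message is just for systems that don’t expose them:

```shell
# Show the CPU model and the microcode revision the kernel sees (x86),
# de-duplicating the per-core repeats.
{ grep -E 'model name|microcode' /proc/cpuinfo \
    || echo "no matching cpuinfo fields"; } | sort -u
```

Comparing that revision against Intel’s latest for the CPU would show whether the board is running old microcode.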