Crash due to timeout in tg3 kernel module: tg3_stop_block timed out, ofs=4c00, enable_bit=2
- FOG Version: 1.4.4
- OS: Ubuntu 16.04.3 LTS
- Service Version:
- OS: Win 10 + Ubuntu 16.04.3
In order to better organize ideas and to separate unrelated issues, I am creating a new post as suggested by Sebastian Roth. You’re right, thanks. This one will focus on the tg3_stop_block timed out problem.
My first post is here (sorry, was describing the two problems in one place).
I am seeing a timeout error during the cloning process. I believe it is related to the tg3 kernel module, which is responsible for handling the Tigon3 wired Ethernet device.
The observed behavior is as follows: I start a deploy, the machine starts the deploy process, and after a while it gets stuck. Then, after a few minutes, the kernel crashes with a timeout error.
This happens with both a crossover cable and over a wired Ethernet across a switch. It is an intermittent issue. Last Friday I managed to clone about five machines with the crossover cable, plus one that failed.
Today, two failed using the crossover cable. The deploy starts, but at some point during the partition writing (after 8 minutes or so) it crashes, with an NTFS partition partially deployed. I tested only one machine at a time, due to the limitation of the crossover cable.
All tests I did through the network switch also failed, but in a somewhat different way: right after writing the GPT data, but before starting to write data inside the partition. I tested with a small group of four machines, then two, and finally with a single machine. All tests failed the same way, both with the UDPCAST method (multicast deploy) and the NFS method (unicast, if I remember correctly).
My first guess was that the crossover cable was too loose. I no longer think this is the root cause, since I replaced the cable with a new one. With the new cable I observed both successful image captures and deploys, but failed captures and deploys happened too. So I don’t think it’s the cable anymore.
Failure on the tg3 kernel module.
After some reading, I’ve found a few references.
This (old) message suggests that the problem happens in one kernel version, but not in the previous one.
This (also old) message points out that:
“When using TSO property of the TG3 driver to transmit a packet with a large header, such as over 80 bytes, an error message similar to the following appears in the Kernel log when using the TG3 3.66d version of the driver with GA3 firmware…”
It also suggests a workaround:
"Turn off the TSO functionality of the driver using the following command from Linux:
ethtool -K eth0 tso off
I started a “deploy (debug)” task and tried that once, manually. But the problem is still there in the very same way. If it had worked, I would have worked around it using a postinit script.
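For reference, such a postinit hook would look roughly like the sketch below. The file location and the interface name are assumptions based on typical FOG 1.4.x setups (scripts called from /images/dev/postinitscripts/fog.postinit), not something verified in this thread.

```shell
#!/bin/sh
# Hypothetical FOG postinit sketch: disable TCP segmentation offload
# on the wired interface before imaging starts, as a tg3 workaround
# attempt. Interface name and script location are assumptions.
IFACE="${IFACE:-eth0}"
if command -v ethtool >/dev/null 2>&1; then
    # Only log on failure so a broken ethtool call never aborts imaging
    ethtool -K "$IFACE" tso off 2>/dev/null \
        || echo "could not disable TSO on $IFACE"
else
    echo "ethtool not found, skipping TSO workaround"
fi
```

The `||` branch only logs the failure, so the imaging task continues even if the offload toggle is rejected.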
I also tried to limit bandwidth with wondershaper a few times, but could not see much difference: same error. The idea was based on a possible concurrency issue. If the tg3 problem is due to some subtle race condition in the buffer handling for the network card, slowing it down could (possibly) reduce the likelihood of the issue.
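The rate-limiting test was along these lines; wondershaper’s syntax varies between the classic Debian script and newer rewrites, so the arguments below (interface, downlink and uplink in kbit/s) assume the classic version:

```shell
# Cap the interface at roughly 20 Mbit/s in both directions
# (classic wondershaper syntax: interface, downlink kbit/s, uplink kbit/s)
wondershaper eth0 20000 20000
# ... run the deploy and watch for the tg3 timeout ...
# Remove the limit afterwards:
wondershaper clear eth0
```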
Finally, I started playing with different kernels.
With Kernel.TomElliott.18.104.22.168, it was “too old” and refused to work.
With the following kernel versions, the issue is still there.
By the way, other than showing the same issue, the last version (22.214.171.124) also complained about an APIC issue. It reads: “Firmware bug”, and also “APIC ID mismatch”. Here is the screenshot.
And that’s it: I’m getting out of ideas, other than trying the other kernels.
Any suggestion? Anything I can do under a debug deploy, even manually to workaround this? Is there a wireless option? Anything?
Thank you very much,
p.s.: tomorrow I will try this search:
linux kernel tg3 tso
And read this to see what happens.
Can you confirm this is fixing your issue? Have you used one of the official FOG kernels since then? Which versions?
Somehow I lost track of this. Luckily I came across it again and have added the patch now, as it seems it still hasn’t made it into the mainline kernel. I also added the patch information to our wiki article on kernel compiling, just in case anyone reads this thread and wonders where it all went.
@Paulo-Guedes Have you ever heard back from that broadcom guy?
Hello, just updating.
No answer so far from Broadcom. Tom, adding the patch would be good.
Added a link to this discussion in another thread. I think it’s the same problem.
Maybe they can also report on the problem.
Mentioned the patch and test results in another forum. Hope this helps the patch to enter the main kernel faster.
@Tom-Elliott Paulo told me that he’s sent a message to the guy at Broadcom to ask if the fix would be included in the mainline kernel at some point, but he hasn’t got an answer from him. So I am wondering if you are happy adding the patch to our kernel for now? Paulo has had huge trouble and the patch solved the network issues for him. Take a look here: https://firstname.lastname@example.org/msg189923/0001-tg3-Add-clock-override-support-for-5762.patch
@Paulo-Guedes Great stuff! Keep it up and I am sure we’ll have you up and running soon.
About multicast… First, please open a new thread on this topic. I don’t like to mix things up all in one thread. And then keep in mind that it’s always the slowest part of the chain that limits the speed. So if there is just one single client with a crappy hard drive, it will slow down all the other hosts. So I’d start by testing multicast in groups of maybe 3 to 5 machines each and see if those all go at the same slow pace or if some groups are faster than others.
Hello Sebastian, all,
Stable kernels 4.13.3 and 4.15 crash without the patch. Patch is not merged yet in the main branch.
Stable kernels 4.13.3 and 4.15 work great with the patch: no timeouts on tg3. Fast transfers on gigabit links and 10/100 links.
Wrote to the patch author, as Sebastian suggested, with my results and asking when it will be merged. Waiting for his answer. The patch applies with a slight offset on 4.15 (2 lines, probably due to new comments or code) but works anyway. Will keep you updated on this.
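For anyone unfamiliar with offsets: they are harmless, since patch(1) searches for the hunk’s context near the stated line and reports how far it had to move it. A tiny self-contained demo (nothing here touches the real tg3 patch):

```shell
# Demo of how patch(1) reports an offset when the target file has
# gained lines above the hunk (as between kernels 4.13.3 and 4.15).
cd "$(mktemp -d)"
printf 'ctx1\nctx2\nold line\nctx3\n' > orig.txt
printf 'ctx1\nctx2\nnew line\nctx3\n' > mod.txt
diff -u orig.txt mod.txt > fix.patch
# Simulate the newer tree: two extra lines appear above the hunk context
printf 'extra1\nextra2\nctx1\nctx2\nold line\nctx3\n' > target.txt
# patch finds the context two lines lower and still applies the hunk,
# reporting the offset in its output
patch target.txt < fix.patch
grep -n 'new line' target.txt
```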
Deploy for single machines (in parallel without multicast) is finally checked. Tested overnight with a bunch of machines and it’s ok.
If you wish, I can upload the patched 4.15 kernel tomorrow, just in case someone wants to use it.
Multicast deploy for groups of machines is working too, but much slower (about 10x slower) than my 10/100 network could transfer. Same network, same machines, no cable touched, nothing reset, and… the deploy already starts at a slow speed (between 100 and 200 MB/min). Just reporting. I will start reading about it to try to understand the problem. If anyone can point me in the right direction, please answer this message.
@Paulo-Guedes Yeah right, seems like the patch didn’t make it into the kernel yet. Probably a good idea to get in contact with the guy posting the patch. You can find his e-mail address in the patch file! Definitely send him a short message to see what the current state is and tell him that the fix is working great to fix your issue.
As far as I can tell, the patch for tg3 was not inside the release candidates for the current kernel. I’ve tested 4.15-RC8 and it was not working. Then RC9 was released (no idea about it). Two days ago a brand new stable version was released. Will try it and see what happens.
I just checked the changelog and it mentions nothing related to tg3, tigon, timeout or broadcom. I would bet this patch is not in here yet. Here it is.
I will try to run more tests today. One with 4.13.3 without the patch, to see if it breaks (and hence, whether the patch is the real fix). And another with 4.15 (with and without the patch), to see if it is fixed and, in case it’s not, whether the patch applies cleanly and works. Meanwhile, yesterday I wrote in another thread (about the same bug), asking people there to double-check our findings. Maybe they can take a look too and see what happens.
@Paulo-Guedes Oh that’s really great to hear that we have figured out this at least! Probably a real pleasure to see it image nicely now!!!
We are more than happy to add a patch to the FOG kernel but we also should look into if it will make it into the official kernel as well. Last comment on the mailing list was:
Good. We will work on required changes and upstream proper patch after
sanity test with multiple speeds.
Can anyone figure out if and where this patch made it into the upstream kernel? If not we ought to push the developers to do so.
Hello Tom, I have added a few dmesg logs in the messages below. I think it’s not related to the firmware, since the kernel builds OK but the module crashes.
Hello all, it’s a real pleasure to finally say that IT WORKED!!! Wow, it finally worked! I almost can’t believe it. Thank you so much for all your help.
Aham. The solution was found by Sebastian (thanks Sebastian!!!). Here I just describe the process.
The message thread that contains the solution and a patch. It describes precisely the failure scenario: The same NIC, boot over the network, then a 10/100 switch, then the way the tg3 kernel module breaks with a timeout.
The kernel version: 4.13.3
Basically I followed the instructions to rebuild a static image.
Download the kernel and the patch; extract the kernel; apply the patch. Build an image (mine was a 64-bit one).
Install the build inside FOG, then try to image something over Ethernet with the regular procedure: booting via PXE.
Without the patch, the deploy will fail with a timeout crash inside tg3. With the patch applied, it should work flawlessly.
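Condensed into commands, the steps above look roughly like this. The kernel version and patch file name come from this thread; the mirror URL, config step, and paths are assumptions, so adapt them to the FOG kernel-compilation wiki article:

```shell
# Fetch and unpack the kernel version discussed in this thread
wget https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.13.3.tar.xz
tar xf linux-4.13.3.tar.xz
cd linux-4.13.3
# Apply the tg3 patch (file name from the mailing-list post linked above)
patch -p1 < ../0001-tg3-Add-clock-override-support-for-5762.patch
# Use the FOG (TomElliott) 64-bit config, assumed saved as .config, then:
make -j"$(nproc)" bzImage
# The image to install into FOG ends up at arch/x86/boot/bzImage
```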
If you wish, I’ve built a 64-bit image, ready to be used inside fog. Here it is.
p.s.: I really hope nothing has changed inside the firmware repository, and the fix is not due to a new firmware. Maybe it’s worth trying the same kernel with the same firmware repository, but without the patch (to see if it breaks). Anyway, it works, and this is what matters:)
What does the command “dmesg” show? It sounds to me like we just need to add the tg3 firmware module to the build, like all the other tg3 NICs I had to do before.
- Bug still happening.
- Added more info on launchpad bug 1447664 (basically what you see in here), just to share the logs.
- Will test 2 things you suggested tomorrow.
- Thank you both!
Now the long version:
the bug is still happening. Let me explain.
I have tried what you suggested first about modifying the patch for Dell machines. Tried it with a few printk comments, to prove it was being correctly built. The patch “as is” fails the if condition (as expected), most probably because my machines are not Dell. It prints my message that proves the module is being compiled, but not the one stating that the “if body” runs.
Then I commented the “if condition” as you suggested, in order to force the body to always execute. It runs (added a printk to prove that), but the problem still happens.
I could not yet try your last suggestion (next link).
But it states that “Booting from a harddrive works fine”, which is very encouraging. It also describes precisely the scenario I see in here, with the bios code loading the kernel and ramdisk file correctly over ethernet, and with the network breaking after the boot process.
Well, I will prepare a new kernel today and will run two new tests tomorrow.
- Try the patch you suggested from the kernel list.
- Try to boot from a USB boot drive as suggested by George
Will return to you as soon as I have more information.
By the way, you’re right: “Nasty stuff and really hard to find and fix”. I stumbled upon the NIC development datasheet and it’s quite large (600+ pages: ouch!), so I gave up on this route.
Next monday I will test with “lspci -nn”.
Actually the issue only happens at slow speeds (100 Mbit/s). With a gigabit link it never happens.
Good to know about the USB boot FOS image. May I create it as described in the following link, or is it something else? Anyway, I just created a bootx64.efi as it describes. Will try it tomorrow morning, as soon as I get to work.
About the fog usb boot drive and the USB network adapter, if it works, it will be a great solution. Currently, we really don’t care much about the link speed, since we have more than 100 machines to clone. We can let them work overnight and that would be just fine, even if it takes two days.
Please read through all the posts and answers. At first it might not sound like the same issue, but that’s because he uses an older kernel at first. Later on he gets to a 4.13.3 kernel, resulting in the same “tg3_stop_block timed out” lockups. There is a patch provided in one of the posts. Please give that a try and see if it helps. Sounds promising to me.
- As for the hardware ID, you are in the right neighborhood, but the number of interest would look something like this: [8086:15D3] (totally made-up Intel number). The first group is the manufacturer, the second is the device ID. It will not help us solve this issue, but will help document which NIC you have issues with.
- Well, I was hoping that slowing down the network transfer would help with the timing issue. I know back in the day some systems didn’t negotiate GbE speeds well and would hang.
- For the USB3 network adapter route: USB3 may actually be a bit more difficult than USB2, just because of the USB3 interface. I would try it if you have one, or just use a USB2 NIC. The issue you will have, if the target system is UEFI based, is that UEFI firmware only supports known network adapters for PXE booting. It’s not like with BIOS, where you can use almost any random USB network adapter that can PXE boot. With that said, we have a USB boot FOS image that we have to use sometimes for debugging and for those systems that can’t PXE boot. As far as FOS is concerned, it will look for the first 3 network interfaces it finds in the computer and will image across the first one that can reach the FOG server.
Now once imaging is done, FOG is out of the picture and whatever your target OS is will have to deal with that tg3 adapter. But I’m pretty sure we can get you imaging using a USB network adapter and a FOG USB boot drive.
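For the busybox side of that, a minimal manual bring-up usually amounts to the following; the interface name usb0 and the presence of a reachable DHCP server are assumptions:

```shell
mdev -s                  # populate /dev from sysfs
ip link show             # look for the new interface (usb0, eth1, ...)
ip link set usb0 up      # bring it up
udhcpc -i usb0           # busybox DHCP client requests an address
ip addr show usb0        # confirm the lease
```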
The HW id: this is a Broadcom card. This came from lspci. Is that what you’re asking?
01:00.0 Ethernet controller: Broadcom Limited NetXtreme BCM5762 Gigabit Ethernet PCIe (rev 10)
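The "-nn" run planned for Monday should append the [vendor:device] pair George asked for. The Broadcom vendor ID is 14e4; the device ID below is left illustrative rather than guessed:

```shell
# -nn appends numeric [vendor:device] IDs to each entry
lspci -nn | grep -i ethernet
# expected shape (device ID illustrative, not from this machine):
# 01:00.0 Ethernet controller [0200]: Broadcom Limited NetXtreme BCM5762
#         Gigabit Ethernet PCIe [14e4:xxxx] (rev 10)
```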
I have tried that. Forcing the port to 100 Mb does not change anything when it’s connected to a 10/100 switch: the bug still happens. Actually, the bug ALWAYS happens when connected to a slower (10/100) switch. When I connect to a gigabit port (crossover cable or gigabit switch), the communication flows perfectly. It’s clearly something time-related.
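For completeness, the link can also be forced from the NIC side instead of the switch port; a debug-deploy shell would allow something like this (interface name assumed):

```shell
# Force 100 Mbit/s full duplex and disable autonegotiation on the NIC side
ethtool -s eth0 speed 100 duplex full autoneg off
# Verify what the link actually came up as
ethtool eth0 | grep -E 'Speed|Duplex|Auto'
```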
We tried to do that, but I actually… don’t know how. I mean, I can use a tethered cell phone with “Ethernet over USB” inside Ubuntu quite easily. I can write udev rules and the like. However, I never had to do that inside busybox. The kernel is clearly recognizing the device, but I don’t know how to proceed in order to set up a (virtual) network interface. Today I tried with “mdev -s”, but could not find a proper tutorial to learn how to finish it. If you can point me in the right direction, it would be great.
Sebastian, I’ve also tried the kernel patch without success. With and without the “if condition” (and with printk messages). Later I will elaborate on that a bit further.
Unfortunately we’re still stuck.
Thank you all for your ideas,
It suggests the fix may have no effect here because it’s specific to DELL machines only. So you can try modifying the patch/code and removing the “if” statement (lines 10061, 10062 and 10065), then recompile and try again (I guess without any kernel parameters as a starter).
Launchpad comment 34 says that this did fix the issue for him on HP EliteDesk 705 G3 Desktop Mini! Give it a try!
PS: Thanks for taking the time to gather those logs. The messages look pretty much the same. It’s definitely something very low level “killing” the NIC which then does wakeup the Linux watchdog to kill the tg3 driver before it locks the system. Nasty stuff and really hard to find and fix.
This is just a bit of random thinking.
- Do we know the hardware ID (vend & class) of this network adapter?
- If you use a network switch to connect to this computer, what happens if you force (configure) the port to 100Mb full duplex then try to image?
- Fallback position: use a USB3 network adapter and boot into FOG using a FOS USB drive. Just bypass the LOM network adapter altogether for imaging.
Hello, sorry for taking so long to answer this. Too many (time-sensitive) things at work. Well, I managed to find out a few new details on this bug. It seems that this AMD architecture is still not well supported.
This message mentions a few changes on the tigon3, including a workaround that is specific for my network card. I tested it, but it’s not working.
Siva Reddy Kallam (3):
tg3: Update copyright
tg3: Add workaround to restrict 5762 MRRS to 2048
tg3: Enable PHY reset in MTU change path for 5720
According to this thread, the fix still does not solve the issue. Last post: 2018-01-16.
It’s the patch for tg3, aimed at my specific Ethernet card (5762).
Meanwhile, I have downloaded and rebuilt the latest linux release candidate, which has this patch for the tg3 module.
The 4.15-rc8 is available here:
The bzImage file was created as a static TomElliott 64-bit image.
Unfortunately, my tests with this kernel showed no improvement on the timeout issue. The problem still happens. I tried a few kernel parameters, without success. This is a vanilla (+TomElliott config) kernel. Not tainted, although it has the firmware repository inside.
However, I finally got kernel logs. You can check them in the links below.
The kernel parameters were used as follows. Some were inspired by the logs (tsc), some just to… see what happens.
debug loglevel=7 acpi=off
debug loglevel=7 acpi=off tsc=unstable
debug loglevel=7 acpi=off tsc=unstable maxcpus=1
debug loglevel=7 acpi=off tsc=unstable maxcpus=1 nmi_watchdog=0
debug loglevel=7 acpi=off tsc=unstable maxcpus=1 nmi_watchdog=0 noapic nolapic
Sometimes it’s difficult to get logs as the machine hangs right after the network stops working.
Here is the MRRS patch for tg3, related to the 5762 hardware version. My test has this applied, but it still does not fix the problem.
Any ideas? Would you like logs with other parameters? Is there anything I can do to provide further information? lsusb? lspci? lscpu? anything?
p.s.: by the way, I also spotted network issues on a live Ubuntu image (17.10.1), both on wired (tg3) and wireless (iwlwifi) network cards.
@Paulo-Guedes Any news on this?