Crash due to timeout in tg3 kernel module: tg3_stop_block timed out, ofs=4c00, enable_bit=2
-
@sebastian-roth
Hello Sebastian, well, yes and no. I have a lot of information on this, but I’m still working to bring up my labs. Please allow me a few days to work this out. I have a few hundred students and teachers eagerly waiting for our labs to be ready, so it’s a big issue for us.
Currently I am working with a very small team, cloning more than 100 machines one by one. Yes, you read that right: it’s currently impossible to multicast images with this bug and our infrastructure (gigabit ethernet mixed with 10/100 switches).
We set up three FOG servers and are using them with crossover cables. I also have two external hard drives (USB 3.0) and hacked my way through with a few shell scripts and a lot of tinkering.
We are able to clone about five machines in parallel with this scheme. However, the cloning process is very unstable: roughly 30% to 50% of the cloning operations fail.
Of these, only about half are due to the tg3 problem and related to FOG. Yes, that’s right: with a pair of distinct machines and a single crossover cable between them, the “tg3 timeout” issue still happens. Both machines (in each pair) have gigabit cards, but they are different models. The bug is way less frequent, and we managed to finish many cloning operations successfully, but it’s still happening.
This means the 10/100 switch makes the bug more reproducible, but it’s not the root cause. It still happens, even without any 10/100 network interface in the middle.
The other half of the failures are due to crashes and freezes from a couple of live memory sticks running Ubuntu and pumping about 200GB over USB 3.0 (about 45 min to 1 h to finish).
I could not dig deeper into this since we need to finish the work. Hope to have it done by next Friday, maybe before that.
About iommu=soft: I also tried it a few times, without any success, both with 64-bit and 32-bit kernels, and also with the latest “vanilla + firmware repo” kernel. I also tried many other things, such as noapic, nolapic, both of them together, turning off autonegotiation, raising the log level to look for more messages, and the like. Oh, and I also updated the HP BIOS firmware, turned on traffic shaping and tried other things (isolated and combined).
Nothing solved the problem. It’s clearly a regression somewhere between the HW, the firmware and the kernel driver.
With all due respect to Broadcom, this is something they should have caught fairly easily. Since they explicitly support the kernel module, a good testbench should have exposed the problem. Most probably their test setup has only gigabit cards; otherwise the bug would have been exposed more easily.
I really believe that a testbench with FOG, a set of machines (with many distinct cards) and a set of images (with many distinct sizes, partition layouts and the like) would be great to catch this kind of thing. Oh my, I would love to help them set up something like that…
Well, let me see how things are moving. Will get back to you in a few days.
Thank you all for your support,
Paulo -
@Paulo-Guedes Oh man, it sounds very unfortunate that you need to sail through this with one broken arm and crippled eyes; let’s hope there is no storm coming up on the last leg of the trip. I really hope and keep my fingers crossed that you can get this done in time. After that I am more than happy to get into finding and fixing this with you. Maybe I can even get a piece of hardware myself to test.
So yes, finish that ugly job and let me know when you have time again. Wish you all the best!
-
@sebastian-roth Hello, you described it very well: it was a really difficult time for us to manually clone all our labs. That’s something I don’t wish for anyone.
Anyway, I should have time to check this issue again next week, I hope. Will try with the latest FOG + latest kernel, to see if the issue is still happening. I already checked some posts and it seems the problem has not been fixed yet. If anyone has more information on the problem, please let me know. I will report back when I reinstall, run and test it.
Regards, Paulo -
@Paulo-Guedes I have looked through the code and searched the web a fair bit now, and I have a feeling that we might be misled by these old 2012–2013 (kernel 2.6) posts. Not saying this is totally different, but I am not sure if it’s a regression or maybe something new.
The messages “Host status block”, “NAPI info” and partly also “tg3_stop_block timed out” don’t help us much right now, as those are all just messages telling us that the NIC has been reset. So far we have no idea why this happens. Can you schedule a debug task and, when you hit the error, break out using Ctrl+C? Then plug in a USB key and run:
mount /dev/sdb1 /mnt
dmesg > /mnt/tg3_dmesg.txt
umount /mnt
The device might be sdc1 or something else on your system; just check what you have. Please post the full text here (in a code block) or upload the file somewhere and post a link.
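If unsure which device node the key got, checking the partition list before mounting should help (assuming the debug environment provides it):
cat /proc/partitions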
-
@sebastian-roth Hello Sebastian,
I believe it’s a regression. I saw the old messages about the 2.6 kernels; this bug has happened before, maybe a few times already. Most probably the module worked for similar devices, but it breaks for this specific combination of a new device plus an old switch. If I understood it correctly, there is a watchdog which keeps running, expecting something that never happens (such as a reply/control-flow message/packet).
Anyway, I agree that solid evidence is always better than the best guess. I will schedule a debug task as you mentioned. Hopefully I will have a result in the next few days (things are a bit tricky here this week). By the way, I can add some printk messages inside the module, in case you want to see something specific in the sequence of function calls or in the variables (e.g. the state of the device, etc.).
Thank you!
Paulo -
@Paulo-Guedes Any news on this?
-
Hello, sorry for taking so long to answer. Too many (time-sensitive) things at work. Well, I managed to find out a few new details about this bug. It seems this AMD architecture is still not well supported.
This message mentions a few changes to tg3 (tigon3), including a workaround specific to my network card. I tested it, but it does not work.
https://lkml.org/lkml/2017/12/31/125
<…>
Siva Reddy Kallam (3):
tg3: Update copyright
tg3: Add workaround to restrict 5762 MRRS to 2048
tg3: Enable PHY reset in MTU change path for 5720
<…>
According to this thread, the fix still does not solve the issue. Last post: 2018-01-16.
It’s the patch for tg3 aimed at my specific ethernet card (5762).
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1447664
Meanwhile, I have downloaded and rebuilt the latest Linux release candidate, which has this patch for the tg3 module.
The 4.15-rc8 is available here:
https://git.kernel.org/torvalds/t/linux-4.15-rc8.tar.gz
The bzImage file was created as a static TomElliott 64-bit image.
https://wiki.fogproject.org/wiki/index.php?title=Build_TomElliott_Kernel
Unfortunately, my tests with this kernel showed no improvement on the timeout issue. The problem still happens. I tried a few kernel parameters, without success. This is a vanilla (+ TomElliott config) kernel. Not tainted, although it has the firmware repository inside.
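For the record, the untainted state can be confirmed with a quick check (a value of 0 means no taint flags are set):
cat /proc/sys/kernel/tainted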
However, I finally got kernel logs. You can check them in the links below.
log_01_acpi_off.txt
https://pastebin.com/FGQNiLqk
log_02_maxcpus_1.txt
https://pastebin.com/2eEJnA3Z
log_03_nmi_watchdog_off.txt
https://pastebin.com/Su44AqiX
log_04_nmi_watchdog_off.txt
https://pastebin.com/4ja0UZ0c
log_05_noapic_nolapic.txt
https://pastebin.com/fZNJbME5
The kernel parameters were used as follows. Some were inspired by the logs (tsc), some just to… see what happens.
debug loglevel=7
debug loglevel=7 acpi=off
debug loglevel=7 acpi=off tsc=unstable
debug loglevel=7 acpi=off tsc=unstable maxcpus=1
debug loglevel=7 acpi=off tsc=unstable maxcpus=1 nmi_watchdog=0
debug loglevel=7 acpi=off tsc=unstable maxcpus=1 nmi_watchdog=0 noapic nolapic
Sometimes it’s difficult to get logs as the machine hangs right after the network stops working.
Here is the MRRS patch for tg3, related to the 5762 hardware version. My test kernel has this applied, but it still does not fix the problem.
https://github.com/torvalds/linux/commit/4419bb1cedcda0272e1dc410345c5a1d1da0e367#diff-ee9b0abeec638cc316efd5b30e0e01e8
Any ideas? Would you like logs with other parameters? Is there anything I can do to provide further information? lsusb? lspci? lscpu? anything?
Regards,
Paulo
p.s.: by the way, I also spotted network issues on a live Ubuntu image (17.10.1), both on wired (tg3) and wireless (iwlwifi) network cards.
-
This is just a bit of random thinking.
- Do we know the hardware ID (vend & class) of this network adapter?
- If you use a network switch to connect to this computer, what happens if you force (configure) the port to 100Mb full duplex and then try to image? (A client-side equivalent is sketched right after this list.)
- Fallback position: use a USB3 network adapter and boot into FOG using a FOS USB drive. Just bypass the LOM network adapter altogether for imaging.
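On the second point, if the switch cannot be configured, a rough client-side equivalent (assuming the interface is eth0 and ethtool is available in the boot image) would be:
ethtool -s eth0 autoneg off speed 100 duplex full   # force 100Mb full duplex
ethtool eth0                                        # confirm the link settings took effect
Keep in mind the switch port needs matching settings, otherwise you may just trade the problem for a duplex mismatch.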
-
@Paulo-Guedes Did you see comment 29 in the launchpad bug discussion?
It says the fix only has an effect on DELL machines, because the code checks for them explicitly. So you can try modifying the patch/code and removing the if statement (lines 10061, 10062 and 10065). Then recompile and try again (I guess without any kernel parameters as a starter).
Launchpad comment 34 says that this did fix the issue for him on an HP EliteDesk 705 G3 Desktop Mini! Give it a try!
PS: Thanks for taking the time to gather those logs. The messages look pretty much the same. It’s definitely something very low level “killing” the NIC, which then wakes up the Linux watchdog to kill the tg3 driver before it locks up the system. Nasty stuff and really hard to find and fix.
-
@george1421
Hello George,
-
The HW id: this is a Broadcom card. This came from lspci. Is that what you’re asking?
01:00.0 Ethernet controller: Broadcom Limited NetXtreme BCM5762 Gigabit Ethernet PCIe (rev 10) -
I have tried that. Forcing the port to 100Mb does not change anything when it’s connected to a 10/100 switch: the bug still happens. Actually, the bug ALWAYS happens when connected to a slower switch (10/100). When I connect to a gigabit port (crossover cable or gigabit switch), the communication flows perfectly. It’s clearly something timing-related.
-
We tried to do that, but I actually… don’t know how. I mean, I can use a tethered cell phone with an “ethernet over USB” connection inside Ubuntu quite easily. I can write udev rules and the like. However, I never had to do that inside busybox. The kernel is clearly recognizing the device, but I don’t know how to proceed in order to set up a (virtual) network interface. Today I tried with “mdev -s”, but could not find a proper tutorial to learn how to finish it. If you can point me in the right direction, it would be great.
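Just to make clear what I mean, my rough guess at the manual steps inside the busybox shell would be something like this (the interface name eth1 and the availability of the ip/udhcpc applets are assumptions on my part):
mdev -s            # populate /dev after plugging in the adapter
ip link            # check which new interface appeared (eth1 here is a guess)
ip link set eth1 up
udhcpc -i eth1     # or assign a static address with “ip addr add …”
But I have no idea whether that is the intended way to do it with FOS.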
Sebastian, I’ve also tried the kernel patch without success. With and without the “if condition” (and with printk messages). Later I will elaborate on that a bit further.
Unfortunately we’re still stuck.
Thank you all for your ideas,
Paulo -
-
- As for the hardware ID, you are in the right neighborhood, but the number of interest would look something like [8086:15D3] (a totally made-up Intel number). The first group is the manufacturer, the second is the device ID. It will not help us solve this issue, but it will help document which NIC you have issues with (see the example at the end of this post).
- Well, I was hoping that slowing down the network transfer would help with the timing issue. I know back in the day some systems didn’t negotiate GbE speeds well and would hang.
- For the USB3 network adapter route: USB3 may actually be a bit more difficult than USB2, just because of the USB3 interface, but I think I would try it if you have one. Or just use a USB2 NIC. The issue you will have, if the target system is UEFI based, is that UEFI firmware only supports known network adapters for PXE booting. It’s not like with BIOS, where you can PXE boot with almost any random USB network adapter. With that said, we have a USB boot FOS image that we have to use sometimes for debugging and for those systems that can’t PXE boot. As far as FOS is concerned, it will look for the first 3 network interfaces it finds in the computer and will image across the first one that can reach the FOG server.
Now once imaging is done, FOG is out of the picture and whatever your target OS is will have to deal with that tg3 adapter. But I’m pretty sure we can get you imaging using a USB network adapter and a FOG USB boot drive.
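As an illustration of what I mean with the hardware ID (the device part is deliberately left as a placeholder here), the command and the kind of output to look for would be roughly:
lspci -nn | grep -i ethernet
# 01:00.0 Ethernet controller [0200]: Broadcom Limited NetXtreme BCM5762 Gigabit Ethernet PCIe [14e4:xxxx] (rev 10)
14e4 is Broadcom’s vendor ID; the xxxx part is the device ID we are after.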
-
@Paulo-Guedes Just found this: https://www.mail-archive.com/netdev@vger.kernel.org/msg189347.html
Please read through all the posts and answers. At first it might not sound like the same issue, but that’s because he uses an older kernel at first. Later on he gets to a 4.13.3 kernel, resulting in the same “tg3_stop_block timed out” lockups. There is a patch provided in one of the posts. Please give that a try and see if it helps. Sounds promising to me. -
Short version:
- Bug still happening.
- Added more info to launchpad bug 1447664 (basically what you see in here), just to share the logs:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1447664
- Will test the two things you suggested tomorrow.
- Thank you both!
Now the long version:
@sebastian-roth
Hello Sebastian, the bug is still happening. Let me explain.
I have tried what you suggested first, about modifying the patch for Dell machines. Tried it with a few printk messages added, to prove it was being built correctly. The patch “as is” fails the if condition (as expected), most probably because my machines are not Dell. It prints the message that proves the module is being compiled, but not the one stating that the “if body” runs.
Then I commented out the “if condition” as you suggested, in order to force the body to always execute. It runs (I added a printk to prove that), but the problem still happens.
I could not yet try your last suggestion (next link).
https://www.mail-archive.com/netdev@vger.kernel.org/msg189347.html
But it states that “Booting from a harddrive works fine”, which is very encouraging. It also describes precisely the scenario I see here, with the BIOS code loading the kernel and ramdisk correctly over ethernet, and the network breaking after the boot process.
Well, I will prepare a new kernel today and will run two new tests tomorrow.
- Try the patch you suggested from the kernel list.
- Try to boot from a USB boot drive as suggested by George
Will return to you as soon as I have more information.
You’re right: “Nasty stuff and really hard to find and fix”. By the way, I stumbled upon the NIC development datasheet and it’s quite large (600+ pages: ouch!), so I gave up on that route.
Thanks!
@george1421
Hello George,
-
Next Monday I will test with “lspci -nn”.
-
Actually the issue only happens at slow speeds (100). With a gigabit link it never happens.
-
Good to know about the USB boot FOS image. Can I create it as described in the following link, or is it something else? Anyway, I just created a bootx64.efi as it describes (my rough understanding of the process is sketched at the end of this post). Will try it tomorrow morning, as soon as I get to work.
https://wiki.fogproject.org/wiki/index.php?title=USB_Bootable_Media
About the FOG USB boot drive and the USB network adapter: if it works, it will be a great solution. Currently we really don’t care much about the link speed, since we have more than 100 machines to clone. We can let them work overnight and that would be just fine, even if it takes two days.
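The sketch I mentioned above, for the EFI part of that wiki page (the device name is a placeholder and the wiki remains the authoritative reference, so take this with a grain of salt):
mkfs.vfat -F 32 /dev/sdX1                  # the USB stick; double-check the device first!
mount /dev/sdX1 /mnt
mkdir -p /mnt/EFI/BOOT
cp bootx64.efi /mnt/EFI/BOOT/bootx64.efi   # the EFI binary created as the wiki describes
umount /mnt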
-
What does the command dmesg show? It sounds to me like we just need to add the tg3 firmware module to the build, like all the other tg3 NICs I had to do this for before.
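In case it helps to narrow that down, a simple grep should show whether the driver asked for firmware and whether the request failed (the patterns are just guesses):
dmesg | grep -iE 'tg3|firmware'
-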
@tom-elliott
Hello Tom, I have added a few dmesg logs in the messages below. I think it’s not related to the firmware, since the kernel builds OK but the module crashes.
Hello all, it’s a real pleasure to finally say that IT WORKED!!! Wow, it finally worked! I almost can’t believe it. Thank you so much for all your help.
The solution was found by Sebastian (thanks, Sebastian!!!). Here I just describe the process.
This is the message thread that contains the solution and a patch. It describes precisely the failure scenario: the same NIC, booting over the network, a 10/100 switch, and the way the tg3 kernel module breaks with a timeout.
https://www.mail-archive.com/netdev@vger.kernel.org/msg189347.html
The kernel version: 4.13.3
https://www.kernel.org/pub/linux/kernel/v4.x/
https://www.kernel.org/pub/linux/kernel/v4.x/linux-4.13.3.tar.xz
Basically I followed the instructions to rebuild a static image.
Download the kernel and the patch; extract the kernel, apply the patch. Build an image (mine was a 64 bit one).
https://wiki.fogproject.org/wiki/index.php?title=Build_TomElliott_Kernel
Install the build inside FOG, then try to image something over ethernet with the regular procedure: booting via PXE.
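In shell terms the steps look roughly like this (the patch file name and the kernel path on the FOG server are assumptions on my part; the config step is the one described in the wiki above):
wget https://www.kernel.org/pub/linux/kernel/v4.x/linux-4.13.3.tar.xz
tar xf linux-4.13.3.tar.xz
cd linux-4.13.3
patch -p1 < ../tg3-timeout.patch       # the patch from the mailing-list thread, saved locally
# configure with the TomElliott/FOG kernel config as the wiki describes, then:
make -j"$(nproc)" bzImage
cp arch/x86/boot/bzImage /var/www/fog/service/ipxe/bzImage-4.13.3-tg3   # path may differ per install
Then point the host (or the global kernel setting) at the new bzImage file in the FOG web interface.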
Without the patch, the deploy will fail with a timeout crash inside tg3. With it, it should work flawlessly.
If you wish, I’ve built a 64-bit image, ready to be used inside FOG. Here it is.
https://goo.gl/n1qBES
Regards,
Paulo
p.s.: I really hope nothing has changed inside the firmware repository, and the fix is not due to a new firmware. Maybe it’s worth trying the same kernel with the same firmware repository, but without the patch (to see if it breaks). Anyway, it works, and this is what matters:) -
@Paulo-Guedes Oh, that’s really great to hear that we have figured this out at least! Probably a real pleasure to see it imaging nicely now!!!
We are more than happy to add a patch to the FOG kernel, but we should also look into whether it will make it into the official kernel as well. The last comment on the mailing list was:
Good. We will work on required changes and upstream proper patch after sanity test with multiple speeds.
Can anyone figure out if and where this patch made it into the upstream kernel? If not, we ought to push the developers to do so.
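One way to check would be to look at the recent tg3 history in Linus’ tree, something like this (the clone depth is arbitrary):
git clone --depth 1000 https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
cd linux
git log --oneline -- drivers/net/ethernet/broadcom/tg3.c | head -20
If a commit matching the patch shows up there, it made it upstream.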
-
@sebastian-roth
As far as I can tell, the patch for tg3 was not in the release candidates for the current kernel. I’ve tested 4.15-rc8 and it was not working. Then rc9 was released (no idea about that one). Two days ago a brand new stable version was released. Will try it and see what happens.
https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.15.tar.xz
I just checked the changelog and it mentions nothing related to tg3, tigon, timeout or broadcom. I would bet this patch is not in there yet. Here it is.
https://cdn.kernel.org/pub/linux/kernel/v4.x/ChangeLog-4.14.15
I will try to run more tests today: one with 4.13.3 without the patch, to see if it breaks (and hence whether the patch is the real fix), and another with 4.15 (with and without the patch), to see if it is fixed and, in case it’s not, whether the patch applies cleanly and works. Meanwhile, yesterday I wrote in another thread (about the same bug), asking people there to double-check our findings. Maybe they can take a look too and see what happens.
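The changelog check was nothing fancy, just a grep along these lines on the downloaded file:
grep -iE 'tg3|tigon|broadcom' ChangeLog-4.14.15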
-
@Paulo-Guedes Yeah right, seems like the patch didn’t make it into the kernel yet. Probably a good idea to get in contact with the guy who posted the patch. You can find his e-mail address in the patch file! Definitely send him a short message to see what the current state is and tell him that the fix works great for your issue.
-
@sebastian-roth
Hello Sebastian, all,
-
Stable kernels 4.13.3 and 4.15 crash without the patch. Patch is not merged yet in the main branch.
-
Stable kernels 4.13.3 and 4.15 work great with the patch: no timeouts on tg3. Fast transfers on gigabit links and 10/100 links.
-
Wrote to the patch author, as Sebastian suggested, with my results and asking when it will be merged. Waiting for his answer. The patch applies with a slight offset on 4.15 (2 lines, probably new comments or code) but works anyway. Will keep you updated on this.
-
Deploying to single machines (in parallel, without multicast) is finally verified. Tested overnight with a bunch of machines and it’s OK.
-
If you wish, I can upload the patched 4.15 kernel tomorrow, just in case someone wants to use it.
-
Multicast deploy for groups of machines is working too, but much slower (about 10x) than what my 10/100 network can transfer. Same network, same machines, no cable touched, nothing reset, and… the deploy already starts at a slow speed (between 100 and 200 MB/min). Just reporting. Will start reading about it, to try to understand the problem. If anyone can point me in the right direction, please answer this message.
-
-
@Paulo-Guedes Great stuff! Keep it up and I am sure we’ll have you up and running soon.
About multicast… First, please open a new thread on this topic; I don’t like to mix things up all in one thread. And then keep in mind that it’s always the slowest part of the chain which limits the speed. So if there is just one single client with a crappy hard drive, it will slow down all the other hosts. I’d start by testing multicast in groups of maybe 3 to 5 machines each and see if those are all going at the same slow pace or if some groups are faster than others.