Dell Precision Tower 5820 - FlexBay MiniSAS PCIe NVMe SSD not recognized
-
Well I’m a bit confused now. The ‘G’ version works correctly on a 3050 as well as a 7060. There is a short 3-4 second delay after bzImage is copied before the kernel starts; initially I thought it had crashed like on your system.
In your environment, can you roll back to the C-G bzImages and confirm the older ones still boot? For the G build, I went back to FOG defaults and then enabled only hotplug. There might be other settings I had turned on that hotplug needs. Rolling back to an earlier boot kernel on your end will tell me where to look in the config files.
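If it helps the comparison on your end, the hotplug toggles are easy to spot in each build’s saved config. A quick sketch (the config path is an example; CONFIG_HOTPLUG_PCI and CONFIG_HOTPLUG_PCI_PCIE are the standard Kconfig names):
# list the PCI hotplug options a given build enabled
grep -E '^CONFIG_HOTPLUG_PCI' /path/to/buildG/.config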
-
@george1421 Same here. The 5820 BIOS has some different options regarding NVMe drives connected to the front FlexBay. These devices actually show up in the BIOS, while the same drive connected via a PCIe adapter does not appear under any BIOS menu I have located. This suggests something is very different at the BIOS level between the two connection methods.
version C
- Boots
- Error parsing PCC subspaces from PCCT
- NVMe not in lsblk
version D
- Boots
- Error parsing PCC subspaces from PCCT
- nvme nvme0: failed to set APST feature (-19)
- NVMe not in lsblk
version E
- Kernel Panic
- Error parsing PCC subspaces from PCCT
- pciehp 0000:b2:02.0:pcie004: Slot(12): Power Fault
- pciehp 0000:b2:03.0:pcie004: Slot(13): Power Fault
- acpiphp_ibm: ibm_acpiphp_init: acpi_walk_namespace failed
- nvme nvme0: failed to set APST feature (-19)
- (/sbin/init & /bin/sh) exists but couldn’t execute it (error -8)
version F
- Kernel Panic
- Error parsing PCC subspaces from PCCT
- acpiphp_ibm: ibm_acpiphp_init: acpi_walk_namespace failed
- (/sbin/init & /bin/sh) exists but couldn’t execute it (error -8)
version G
- Boots (No idea what I was doing wrong before…)
- Error parsing PCC subspaces from PCCT
- bzImage41713g.log
- bzImage41713g_lsmod.log
-
@hlalex Thank you for taking the time to test all of these kernels. I’m glad ‘G’ is working correctly. Let me take a look at the logs you provided so I can keep pushing forward. I understand why ‘F’ crashed; I simply added some settings that sounded good without researching them first. That is why I did the reset with ‘G’.
I’ll touch base again when I have a chance to digest your new logs.
-
@george1421 If you would like, I can upload the bash script I am using to collect the data. It’s very basic, but it speeds things up quite a bit (and automatically masks personal info).
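Roughly, it just does this (a minimal sketch of the idea, not the exact script):
#!/bin/bash
# collect the usual diagnostics into one log
{ dmesg; lsblk; lspci -nn; lsmod; } > /tmp/fog_report.log 2>&1
# mask anything that looks like a MAC address before sharing (simple heuristic)
sed -i -E 's/([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}/XX:XX:XX:XX:XX:XX/g' /tmp/fog_report.log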
-
@hlalex I’m in the process of rebuilding the kernel for the ‘i’ release. One thing I discovered is that I need a better change management process. I compared your latest syslog with the output from FC27, and error messages I had fixed in an earlier release are back in the syslog (mainly because I reset the configuration to a known good baseline and did not have good enough documentation of what I had changed to get to that point). It’s a bit like starting over, but with the knowledge that I fixed the issue once before.
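For anyone keeping score, one cheap way to track changes between builds is to diff the saved configs directly (a sketch; the file names follow this thread’s naming but are just examples):
# show only the options that differ between two builds
diff <(sort bzImage41713f.config) <(sort bzImage41713g.config)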
The ‘i’ build has PCIe hotplug enabled plus MTD support (I think your solution is a combination of drivers that need to be enabled). I’m interested in how the results from this build compare to the previous one. The level of detail you are providing is excellent.
As for “Error parsing PCC subspaces from PCCT”: that error is actually present in both the FOS and FC27 kernels.
-
@george1421 rev I logs:
-
Well, I found out why version E was so close but blew up with the inits. I was chatting with one of the developers, and he actually called the problem two days ago. He said the inits were mismatched with the kernel architecture (i.e., a 64-bit kernel with 32-bit inits). It turned out to be the other way around: the kernel had switched to 32-bit and you were trying to boot it with the 64-bit inits. I have no idea why the kernel settings switched with the ‘E’ version.
I found this by doing a side-by-side comparison of the kernel build config files. So in the end we were really close, but I had shot myself in the foot.
I have a bit more time today so hopefully we can get this knocked out.
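For anyone following along: error -8 is ENOEXEC (“Exec format error”), and the mismatch is quick to confirm (a sketch; the paths to the build tree and to init.xz are examples and depend on your setup):
# is the kernel build 64-bit?
grep '^CONFIG_64BIT' /path/to/build/.config
# what architecture is the init binary inside the cpio archive?
xz -dc init.xz | cpio -id --quiet '*sbin/init'
file sbin/init   # "ELF 64-bit ... x86-64" vs. "ELF 32-bit ... Intel 80386"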
-
@george1421 No worries! I run into gremlins like that all the time. I’ll keep an eye out for updates.
-
@george1421 rev J boots just fine, but I did notice an error message to the effect of “/dev … error creating epoll fd”. It was gone almost before I saw it, but that is what I was able to remember. I vaguely remember seeing this error before, but I do not know if it was with any of the other test kernels or some other project.
Here are the logs:
-
@hlalex Very nice. I’m happy with where the kernel is at the moment (it’s not currently working as you need it, but it’s close, and I’ve also been able to address a few other issues not related to yours). Here is where it stops right now:
acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
acpi PNP0A08:02: _OSC: platform does not support [PME AER]
acpi PNP0A08:02: _OSC: OS now controls [PCIeHotplug PCIeCapability]
pciehp 0000:b2:02.0:pcie004: Slot(12-1): Power fault
As soon as I get the above bits sorted out in the FOS kernel, we should have access to that NVMe drive. This is the step we were at just before the kernel got switched to 32-bit mode. Let me research these and I’ll come back with a ‘K’ release.
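Once a build boots on that box, the slot state can also be inspected from userspace (the b2:02.0 address is taken from the pciehp line above; run as root to see the full capability dump):
lspci -vv -s b2:02.0 | grep -E 'SltCap|SltSta|HotPlug'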
-
You guys are doing an awesome job here! Keep it up!
-
Version K has been posted. This one adds PCIe DMA support. We are getting very close to the config that was blowing up before, when the kernel switched to 32-bit.
Not related, but I also added USB-C support to this kernel for devices that hide behind it, like network adapters on USB-C docks.
-
@george1421 That’s a great addition, especially with all the new devices using USB-C (these 5820s have two front USB-C ports).
Here are logs from Rev K:
-
@george1421 version L logs:
-
@hlalex Well I’m down to researching this error:
[ 3.638397] nvme nvme0: failed to set APST feature (-19)
I roughly have equivalency between Fedora Core 27 (4.13.9) and FOS (4.17.13). The NVMe device is being seen by the kernel, but it can’t be mounted at the moment.
-
@george1421 Found a few references to this error:
https://bugs.archlinux.org/task/57331
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1678184
http://lists.infradead.org/pipermail/linux-nvme/2017-February/008008.html
It looks like the parameter to set is
nvme_core.default_ps_max_latency_us=<some_number_here>
I tried setting it to 0, 250, and 300 according to those posts (using the “Host Kernel Arguments” option in the host record) and nothing seems to change.
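One thing worth verifying is that the argument actually reached the booted kernel; both of these are standard kernel paths, so they should work from a debug task’s shell:
cat /proc/cmdline
cat /sys/module/nvme_core/parameters/default_ps_max_latency_us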
-
nvme nvme0: failed to set APST feature (-19)
That -19 usually means “No such device” (ENODEV). Very strange. In that Arch Linux bug report there are
CONFIG_PCIEASPM_...
kernel configs mentioned. Have you had a look at those yet, @george1421? What I was just thinking: maybe Fedora has some special NVMe patch included in their kernel that we don’t know about yet. Has anyone ever looked into the full Fedora kernel patchset?
EDIT: Not sure but that might be the one: https://git.kernel.org/pub/scm/linux/kernel/git/jwboyer/fedora.git/snapshot/fedora-kernel-4.13.9-300.fc27.tar.gz
EDIT2: Ok, sorry. This seems to be the full Fedora kernel source rather than just the patches. Anyone keen to create a diff against a vanilla kernel with that?
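For anyone keen to try, one way to generate that diff (the vanilla 4.13 tarball URL is the standard kernel.org location; the Fedora tarball is the link above; note the result also picks up the 4.13.1-4.13.9 stable patches, not just Fedora’s own):
wget https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.13.tar.xz
wget https://git.kernel.org/pub/scm/linux/kernel/git/jwboyer/fedora.git/snapshot/fedora-kernel-4.13.9-300.fc27.tar.gz
tar xf linux-4.13.tar.xz
tar xf fedora-kernel-4.13.9-300.fc27.tar.gz
diff -Nur linux-4.13 fedora-kernel-4.13.9-300.fc27 > fedora-nvme.diff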
-
@sebastian-roth I just tested both options as kernel arguments and nothing seems to have changed.
I also tried
pcie_aspm.policy=powersave
to no avail. Let me know what else to try and I can post some logs before I have to punch out.
-
@sebastian-roth The config parameter
CONFIG_PCIEASPM_POWER_SUPERSAVE
is currently not set. I’ll take a peek at the patch and see if there is anything helpful. It’s so close to working (at least dmesg-wise). I’d hate to give up now…
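For completeness, the whole ASPM policy group is easy to dump from the build config (the path is an example; these are the standard Kconfig names, and exactly one policy option should be =y):
grep '^CONFIG_PCIEASPM' /path/to/build/.config
# expect CONFIG_PCIEASPM=y plus one of:
# CONFIG_PCIEASPM_DEFAULT / _POWERSAVE / _POWER_SUPERSAVE / _PERFORMANCE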
-
@george1421 Found this in the diff… no idea if that could be related:
diff -Nur linux-4.13/drivers/nvme/host/pci.c fedora-kernel-4.13.9-300.fc27/drivers/nvme/host/pci.c
--- linux-4.13/drivers/nvme/host/pci.c  2017-09-03 22:56:17.000000000 +0200
+++ fedora-kernel-4.13.9-300.fc27/drivers/nvme/host/pci.c  2017-10-23 22:25:50.000000000 +0200
@@ -93,7 +93,7 @@
     struct mutex shutdown_lock;
     bool subsystem;
     void __iomem *cmb;
-    dma_addr_t cmb_dma_addr;
+    pci_bus_addr_t cmb_bus_addr;
     u64 cmb_size;
     u32 cmbsz;
     u32 cmbloc;
@@ -1218,7 +1218,7 @@
     if (qid && dev->cmb && use_cmb_sqes && NVME_CMB_SQS(dev->cmbsz)) {
         unsigned offset = (qid - 1) * roundup(SQ_SIZE(depth),
                               dev->ctrl.page_size);
-        nvmeq->sq_dma_addr = dev->cmb_dma_addr + offset;
+        nvmeq->sq_dma_addr = dev->cmb_bus_addr + offset;
         nvmeq->sq_cmds_io = dev->cmb + offset;
     } else {
         nvmeq->sq_cmds = dma_alloc_coherent(dev->dev, SQ_SIZE(depth),
@@ -1517,7 +1517,7 @@
     resource_size_t bar_size;
     struct pci_dev *pdev = to_pci_dev(dev->dev);
     void __iomem *cmb;
-    dma_addr_t dma_addr;
+    int bar;

     dev->cmbsz = readl(dev->bar + NVME_REG_CMBSZ);
     if (!(NVME_CMB_SZ(dev->cmbsz)))
@@ -1530,7 +1530,8 @@
     szu = (u64)1 << (12 + 4 * NVME_CMB_SZU(dev->cmbsz));
     size = szu * NVME_CMB_SZ(dev->cmbsz);
     offset = szu * NVME_CMB_OFST(dev->cmbloc);
-    bar_size = pci_resource_len(pdev, NVME_CMB_BIR(dev->cmbloc));
+    bar = NVME_CMB_BIR(dev->cmbloc);
+    bar_size = pci_resource_len(pdev, bar);

     if (offset > bar_size)
         return NULL;
@@ -1543,12 +1544,11 @@
     if (size > bar_size - offset)
         size = bar_size - offset;

-    dma_addr = pci_resource_start(pdev, NVME_CMB_BIR(dev->cmbloc)) + offset;
-    cmb = ioremap_wc(dma_addr, size);
+    cmb = ioremap_wc(pci_resource_start(pdev, bar) + offset, size);
     if (!cmb)
         return NULL;

-    dev->cmb_dma_addr = dma_addr;
+    dev->cmb_bus_addr = pci_bus_address(pdev, bar) + offset;
     dev->cmb_size = size;
     return cmb;
 }
@@ -1609,18 +1609,16 @@
     dev->host_mem_descs = NULL;
 }

-static int nvme_alloc_host_mem(struct nvme_dev *dev, u64 min, u64 preferred)
+static int __nvme_alloc_host_mem(struct nvme_dev *dev, u64 preferred,
+        u32 chunk_size)
 {
     struct nvme_host_mem_buf_desc *descs;
-    u32 chunk_size, max_entries, len;
+    u32 max_entries, len;
     dma_addr_t descs_dma;
     int i = 0;
     void **bufs;
     u64 size = 0, tmp;

-    /* start big and work our way down */
-    chunk_size = min(preferred, (u64)PAGE_SIZE << MAX_ORDER);
-retry:
     tmp = (preferred + chunk_size - 1);
     do_div(tmp, chunk_size);
     max_entries = tmp;
@@ -1647,15 +1645,9 @@
         i++;
     }

-    if (!size || (min && size < min)) {
-        dev_warn(dev->ctrl.device,
-            "failed to allocate host memory buffer.\n");
+    if (!size)
         goto out_free_bufs;
-    }

-    dev_info(dev->ctrl.device,
-        "allocated %lld MiB host memory buffer.\n",
-        size >> ilog2(SZ_1M));
     dev->nr_host_mem_descs = i;
     dev->host_mem_size = size;
     dev->host_mem_descs = descs;
@@ -1676,21 +1668,35 @@
     dma_free_coherent(dev->dev, max_entries * sizeof(*descs), descs,
             descs_dma);
 out:
-    /* try a smaller chunk size if we failed early */
-    if (chunk_size >= PAGE_SIZE * 2 && (i == 0 || size < min)) {
-        chunk_size /= 2;
-        goto retry;
-    }
     dev->host_mem_descs = NULL;
     return -ENOMEM;
 }

-static void nvme_setup_host_mem(struct nvme_dev *dev)
+static int nvme_alloc_host_mem(struct nvme_dev *dev, u64 min, u64 preferred)
+{
+    u32 chunk_size;
+
+    /* start big and work our way down */
+    for (chunk_size = min_t(u64, preferred, PAGE_SIZE * MAX_ORDER_NR_PAGES);
+         chunk_size >= PAGE_SIZE * 2;
+         chunk_size /= 2) {
+        if (!__nvme_alloc_host_mem(dev, preferred, chunk_size)) {
+            if (!min || dev->host_mem_size >= min)
+                return 0;
+            nvme_free_host_mem(dev);
+        }
+    }
+
+    return -ENOMEM;
+}
+
+static int nvme_setup_host_mem(struct nvme_dev *dev)
 {
     u64 max = (u64)max_host_mem_size_mb * SZ_1M;
     u64 preferred = (u64)dev->ctrl.hmpre * 4096;
     u64 min = (u64)dev->ctrl.hmmin * 4096;
     u32 enable_bits = NVME_HOST_MEM_ENABLE;
+    int ret = 0;

     preferred = min(preferred, max);
     if (min > max) {
@@ -1698,7 +1704,7 @@
             "min host memory (%lld MiB) above limit (%d MiB).\n",
             min >> ilog2(SZ_1M), max_host_mem_size_mb);
         nvme_free_host_mem(dev);
-        return;
+        return 0;
     }

     /*
@@ -1712,12 +1718,21 @@
     }

     if (!dev->host_mem_descs) {
-        if (nvme_alloc_host_mem(dev, min, preferred))
-            return;
+        if (nvme_alloc_host_mem(dev, min, preferred)) {
+            dev_warn(dev->ctrl.device,
+                "failed to allocate host memory buffer.\n");
+            return 0; /* controller must work without HMB */
+        }
+
+        dev_info(dev->ctrl.device,
+            "allocated %lld MiB host memory buffer.\n",
+            dev->host_mem_size >> ilog2(SZ_1M));
     }

-    if (nvme_set_host_mem(dev, enable_bits))
+    ret = nvme_set_host_mem(dev, enable_bits);
+    if (ret)
         nvme_free_host_mem(dev);
+    return ret;
 }

 static int nvme_setup_io_queues(struct nvme_dev *dev)
@@ -2161,8 +2176,11 @@
             "unable to allocate dma for dbbuf\n");
     }

-    if (dev->ctrl.hmpre)
-        nvme_setup_host_mem(dev);
+    if (dev->ctrl.hmpre) {
+        result = nvme_setup_host_mem(dev);
+        if (result < 0)
+            goto out;
+    }

     result = nvme_setup_io_queues(dev);
     if (result)