Waveshare 2x NVMe HAT not showing up (PCIe issue)

I don’t have any other HATs… Also, Armbian doesn’t offer the option to build images with the edge kernel for the Rock 2F. I think the board is not mainlined yet; it’s pretty new. I cannot check the disconnection messages right now. Is there anything else I can do?

A simple mistake is to plug the FPC cable in incompletely or on the wrong side; that is why I asked you about a different HAT to test :slight_smile: Double-check this particular thing :slight_smile:

For Armbian, you can just clone the repo and experiment yourself with different kernels and fixes. A few other things may not work, but you will probably get somewhere with PCIe.

PCIe errors can mean an unsupported device, a controller problem, or simply no PCIe device present (power or signal). You should see the HAT’s PCIe switch with lspci first, so verify that the FPC port is working, then the HAT itself :slight_smile:
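As a first sanity check (assuming the `pciutils` package is installed), the HAT’s onboard ASMedia switch should show up in lspci even with no drives fitted. A minimal sketch that greps for its vendor:device ID (1b21:1182, as seen later in this thread):

```shell
#!/bin/sh
# Check lspci output for the HAT's ASMedia PCIe switch (ID 1b21:1182).
# Reads `lspci -nn` output from stdin, so it can also be run offline
# against a saved log:  lspci -nn | sh check_hat.sh
if grep -q '1b21:1182' -; then
    echo "HAT switch detected"
else
    echo "HAT switch NOT detected"
fi
```

If the switch itself never appears, the FPC link or HAT power is the suspect, before any NVMe-level debugging.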

I’ve plugged it in correctly, as per the board instructions.

I have no idea how to build a mainline kernel for a board for which Armbian doesn’t offer the build option yet. If possible, please link some documentation. So far I have tried building an image with a kernel that has more NVMe support and more PCIe debug features enabled. I still need to flash it.

Just make sure the FPC cable is inserted correctly, all the way into the socket. This is a simple mistake that happens to everyone :slight_smile:

It’s a WIP board, so it’s a good starting point. I don’t have a 2F and cannot test anything on it, but you can, at your own risk :slight_smile:

Please believe me: when I say I plugged the FPC connector in correctly, it is plugged in correctly…

Thank you for the edge image, I’ll flash it today and see! Can you share how you built it?

Sure. It’s just a tricky connector: nothing forces you to insert it on the correct side or push it all the way in, and some dirt can get inside and make things even more difficult. If you have already double-checked that, then it’s fine. It’s just a simple mistake that happens to everyone :slight_smile: Check out the first-impressions videos about the Pi 5; many creators have done exactly that :smiley:

That was super simple. I knew that Armbian has some support for the rk35xx family, so it was as easy as adding ‘edge’ to the board’s WIP config and running the build with default values. Without the board I could not test anything more specific :slight_smile: PM me if you need any other explanation :slight_smile:
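For anyone repeating this, a rough sketch of the steps described above. The board config name `rock-2f` is an assumption; check `config/boards/` in your checkout for the actual `.wip` file:

```shell
# Clone the Armbian build framework.
git clone --depth=1 https://github.com/armbian/build.git
cd build

# Add "edge" to the KERNEL_TARGET line of the board's WIP config,
# e.g. config/boards/rock-2f.wip (name assumed -- verify in your tree).

# Build a minimal CLI image with the edge kernel and default settings.
./compile.sh BOARD=rock-2f BRANCH=edge BUILD_MINIMAL=yes KERNEL_CONFIGURE=no
```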

Thanks, sadly the edge image doesn’t boot… Only the green light, no flashing blue light. I’ll paste the dmesg output from when the PCIe device disconnects.

UART output would be much more helpful, but I’m afraid that without the board I can’t help much. Many things have changed recently for the RK3528 on the kernel git/mailing lists, so my guess is that it should soon have some initial support and at least boot. Again, Armbian is well prepared here to build something, test, and retry with patch sets.

UART will be a bit tricky because the HAT partly blocks the pins. I’ll try.

Interesting: without the cable connected, the output is similar:

[    8.147657] vcc3v3_pcie20: will resolve supply early: vin
[    8.147666] reg-fixed-voltage vcc3v3-pcie20: Looking up vin-supply from device tree
[    8.147677] vcc3v3_pcie20: supplied by vcc5v0_sys
[    8.147720] vcc3v3_pcie20: 3300 mV, enabled
[    8.147876] reg-fixed-voltage vcc3v3-pcie20: vcc3v3_pcie20 supplying 3300000uV
[    8.186497] PCI: CLS 0 bytes, default 64
[    8.736473] dw-pcie fe4f0000.pcie: invalid resource
[    8.736494] dw-pcie fe4f0000.pcie: Failed to initialize host
[    8.736501] dw-pcie: probe of fe4f0000.pcie failed with error -22
[    8.737339] rk-pcie fe4f0000.pcie: invalid prsnt-gpios property in node
[    8.737482] rk-pcie fe4f0000.pcie: Looking up vpcie3v3-supply from device tree
[    8.738002] rk-pcie fe4f0000.pcie: can't get current limit.
[    8.738531] rk-pcie fe4f0000.pcie: max MSI vector is 32
[    8.738604] rk-pcie fe4f0000.pcie: host bridge /pcie@fe4f0000 ranges:
[    8.738657] rk-pcie fe4f0000.pcie:       IO 0x00fc100000..0x00fc1fffff -> 0x00fc100000
[    8.738681] rk-pcie fe4f0000.pcie:      MEM 0x00fc200000..0x00fdffffff -> 0x00fc200000
[    8.738697] rk-pcie fe4f0000.pcie:      MEM 0x0100000000..0x013fffffff -> 0x0100000000
[    8.738974] rk-pcie fe4f0000.pcie: iATU unroll: enabled
[    8.738993] rk-pcie fe4f0000.pcie: iATU regions: 8 ob, 8 ib, align 64K, limit 8G
[    8.941205] rk-pcie fe4f0000.pcie: PCIe Linking... LTSSM is 0x0
[    8.962235] rk-pcie fe4f0000.pcie: PCIe Linking... LTSSM is 0x0
[    8.983257] rk-pcie fe4f0000.pcie: PCIe Linking... LTSSM is 0x0
[    9.004284] rk-pcie fe4f0000.pcie: PCIe Linking... LTSSM is 0x0
[    9.025317] rk-pcie fe4f0000.pcie: PCIe Linking... LTSSM is 0x0
[    9.046340] rk-pcie fe4f0000.pcie: PCIe Linking... LTSSM is 0x0
[    9.067361] rk-pcie fe4f0000.pcie: PCIe Linking... LTSSM is 0x0
[    9.088382] rk-pcie fe4f0000.pcie: PCIe Linking... LTSSM is 0x0
[    9.109403] rk-pcie fe4f0000.pcie: PCIe Linking... LTSSM is 0x0
[    9.130424] rk-pcie fe4f0000.pcie: PCIe Linking... LTSSM is 0x0
[   11.042700] rk-pcie fe4f0000.pcie: PCIe Link Fail, LTSSM is 0x0, hw_retries=0
[   11.042722] rk-pcie fe4f0000.pcie: failed to initialize host

One more question: how do I reset the Armbian kernel config to the default one? Mine won’t build.

I saw the same thing on a Rock 4SE with an M.2 slot: if nothing was connected, I got the same link error message. I found it a bit misleading. It eventually turned out that the M.2 slot (or some connection to the SoC) was faulty, but I only discovered that by comparing against a second SBC and a few devices.

Just to be sure: this can indicate a link-training problem or simply no device connected to the PCIe port.

Maybe

git reset --hard HEAD

:slight_smile:
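If only the kernel configs were touched, a narrower option than a full reset is to restore just those files. The paths assume the armbian/build repo layout, where kernel configs live under `config/kernel/`:

```shell
# Restore only the kernel config files to their committed state.
git checkout -- config/kernel/

# Or discard every local change in the whole tree (destructive),
# plus any untracked leftovers from failed builds:
git reset --hard HEAD
git clean -fd
```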

It seems like PCIe works on the fully updated Radxa b6 image (linked here: Unable to manage Overlays using rsetup), but the docs are outdated: no PCIe overlay is needed.

Why is this, @Peter.Wang?

Also, the XFCE image is a total pain to use compared to Armbian: with GPU acceleration disabled it is extremely slow.

I also successfully made a RAID 0 array of two 16 GB Optane drives, which was my objective. The write speed is about 2x each drive (240 MB/s), but the read speed is only 24 MB/s (a single drive is about 20). Again, I don’t really understand why: the read speed should be close to what the interface can do, and certainly not slower than the write speed.
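To rule out caching and read-ahead effects when chasing the slow reads, a direct-I/O read test is one option. This is a sketch only; `/dev/md0` is an assumed name for the RAID 0 array, so adjust to your setup:

```shell
# Sequential read of the array, bypassing the page cache via O_DIRECT.
sudo dd if=/dev/md0 of=/dev/null bs=1M count=1024 iflag=direct

# Read-ahead is a common culprit for slow sequential RAID reads:
# inspect the current value, then try raising it
# (the value is in 512-byte sectors, so 4096 = 2 MiB).
sudo blockdev --getra /dev/md0
sudo blockdev --setra 4096 /dev/md0
```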

The HAT power-up seems quite unstable. Only twice have I gotten it to work with both drives (after a reboot). On a few occasions I saw one drive; on most occasions none show up, but the PCI controllers do. Here is the dmesg output when neither drive shows up but the PCI controllers do:

[    5.833047] vcc3v3_pcie20: 3300 mV, enabled
[    5.833149] reg-fixed-voltage vcc3v3-pcie20: Looking up vin-supply from device tree
[    5.833156] vcc3v3_pcie20: supplied by vcc5v0_sys
[    5.833241] reg-fixed-voltage vcc3v3-pcie20: vcc3v3_pcie20 supplying 3300000uV
[    6.742913] PCI: CLS 0 bytes, default 64
[    7.215345] rk-pcie fe4f0000.pcie: invalid prsnt-gpios property in node
[    7.215360] rk-pcie fe4f0000.pcie: Looking up vpcie3v3-supply from device tree
[    7.216161] rk-pcie fe4f0000.pcie: max MSI vector is 8
[    7.216173] rk-pcie fe4f0000.pcie: Missing *config* reg space
[    7.216239] rk-pcie fe4f0000.pcie: host bridge /pcie@fe4f0000 ranges:
[    7.216278] rk-pcie fe4f0000.pcie:      err 0x00fc000000..0x00fc0fffff -> 0x00fc000000
[    7.216304] rk-pcie fe4f0000.pcie:       IO 0x00fc100000..0x00fc1fffff -> 0x00fc100000
[    7.216321] rk-pcie fe4f0000.pcie:      MEM 0x00fc200000..0x00fdffffff -> 0x00fc200000
[    7.216332] rk-pcie fe4f0000.pcie:      MEM 0x0100000000..0x013fffffff -> 0x0100000000
[    7.216373] rk-pcie fe4f0000.pcie: Missing *config* reg space
[    7.216456] rk-pcie fe4f0000.pcie: invalid resource
[    7.275009] ehci-pci: EHCI PCI platform driver
[    7.423390] rk-pcie fe4f0000.pcie: PCIe Linking... LTSSM is 0x3
[    7.505630] rk-pcie fe4f0000.pcie: PCIe Link up, LTSSM is 0x130011
[    7.505804] rk-pcie fe4f0000.pcie: PCI host bridge to bus 0000:00
[    7.505815] pci_bus 0000:00: root bus resource [bus 00-ff]
[    7.505821] pci_bus 0000:00: root bus resource [??? 0xfc000000-0xfc0fffff flags 0x0]
[    7.505827] pci_bus 0000:00: root bus resource [io  0x0000-0xfffff] (bus address [0xfc100000-0xfc1fffff])
[    7.505832] pci_bus 0000:00: root bus resource [mem 0xfc200000-0xfdffffff]
[    7.505837] pci_bus 0000:00: root bus resource [mem 0x100000000-0x13fffffff pref]
[    7.505872] pci 0000:00:00.0: [1d87:3528] type 01 class 0x060400
[    7.505948] pci 0000:00:00.0: supports D1 D2
[    7.505953] pci 0000:00:00.0: PME# supported from D0 D1 D3hot
[    7.511483] pci 0000:01:00.0: [1b21:1182] type 01 class 0x060400
[    7.511915] pci 0000:01:00.0: enabling Extended Tags
[    7.514839] pci 0000:01:00.0: PME# supported from D0 D3hot D3cold
[    7.527463] pci 0000:01:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    7.529009] pci 0000:02:03.0: [1b21:1182] type 01 class 0x060400
[    7.529569] pci 0000:02:03.0: enabling Extended Tags
[    7.530226] pci 0000:02:03.0: PME# supported from D0 D3hot D3cold
[    7.544451] pci 0000:02:07.0: [1b21:1182] type 01 class 0x060400
[    7.545045] pci 0000:02:07.0: enabling Extended Tags
[    7.547831] pci 0000:02:07.0: PME# supported from D0 D3hot D3cold
[    7.553067] pci 0000:02:03.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    7.553110] pci 0000:02:07.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    7.558639] pci_bus 0000:03: busn_res: [bus 03-ff] end is updated to 03
[    7.564187] pci_bus 0000:04: busn_res: [bus 04-ff] end is updated to 04
[    7.564218] pci_bus 0000:02: busn_res: [bus 02-ff] end is updated to 04
[    7.564261] pci 0000:02:03.0: PCI bridge to [bus 03]
[    7.564392] pci 0000:02:07.0: PCI bridge to [bus 04]
[    7.564500] pci 0000:01:00.0: PCI bridge to [bus 02-04]
[    7.564633] pci 0000:00:00.0: PCI bridge to [bus 01-ff]
[    7.565917] pcieport 0000:00:00.0: PME: Signaling with IRQ 81

lspci:

00:00.0 PCI bridge: Fuzhou Rockchip Electronics Co., Ltd Device 3528 (rev 01)
01:00.0 PCI bridge: ASMedia Technology Inc. Device 1182
02:03.0 PCI bridge: ASMedia Technology Inc. Device 1182
02:07.0 PCI bridge: ASMedia Technology Inc. Device 1182

When it works, two NVMe drives should be listed below the bridges.

When the drives do initialize at boot, this is the dmesg output:

[    5.882083] vcc3v3_pcie20: 3300 mV, enabled
[    5.882208] reg-fixed-voltage vcc3v3-pcie20: Looking up vin-supply from device tree
[    5.882216] vcc3v3_pcie20: supplied by vcc5v0_sys
[    5.882279] reg-fixed-voltage vcc3v3-pcie20: vcc3v3_pcie20 supplying 3300000uV
[    6.796952] PCI: CLS 0 bytes, default 64
[    7.272808] rk-pcie fe4f0000.pcie: invalid prsnt-gpios property in node
[    7.272824] rk-pcie fe4f0000.pcie: Looking up vpcie3v3-supply from device tree
[    7.273559] rk-pcie fe4f0000.pcie: max MSI vector is 8
[    7.273571] rk-pcie fe4f0000.pcie: Missing *config* reg space
[    7.273643] rk-pcie fe4f0000.pcie: host bridge /pcie@fe4f0000 ranges:
[    7.273674] rk-pcie fe4f0000.pcie:      err 0x00fc000000..0x00fc0fffff -> 0x00fc000000
[    7.273691] rk-pcie fe4f0000.pcie:       IO 0x00fc100000..0x00fc1fffff -> 0x00fc100000
[    7.273710] rk-pcie fe4f0000.pcie:      MEM 0x00fc200000..0x00fdffffff -> 0x00fc200000
[    7.273723] rk-pcie fe4f0000.pcie:      MEM 0x0100000000..0x013fffffff -> 0x0100000000
[    7.273762] rk-pcie fe4f0000.pcie: Missing *config* reg space
[    7.273843] rk-pcie fe4f0000.pcie: invalid resource
[    7.332758] ehci-pci: EHCI PCI platform driver
[    7.479372] rk-pcie fe4f0000.pcie: PCIe Linking... LTSSM is 0x0
[    7.561532] rk-pcie fe4f0000.pcie: PCIe Link up, LTSSM is 0x130011
[    7.561705] rk-pcie fe4f0000.pcie: PCI host bridge to bus 0000:00
[    7.561715] pci_bus 0000:00: root bus resource [bus 00-ff]
[    7.561721] pci_bus 0000:00: root bus resource [??? 0xfc000000-0xfc0fffff flags 0x0]
[    7.561728] pci_bus 0000:00: root bus resource [io  0x0000-0xfffff] (bus address [0xfc100000-0xfc1fffff])
[    7.561733] pci_bus 0000:00: root bus resource [mem 0xfc200000-0xfdffffff]
[    7.561737] pci_bus 0000:00: root bus resource [mem 0x100000000-0x13fffffff pref]
[    7.561772] pci 0000:00:00.0: [1d87:3528] type 01 class 0x060400
[    7.561849] pci 0000:00:00.0: supports D1 D2
[    7.561854] pci 0000:00:00.0: PME# supported from D0 D1 D3hot
[    7.567300] pci 0000:01:00.0: [1b21:1182] type 01 class 0x060400
[    7.567599] pci 0000:01:00.0: enabling Extended Tags
[    7.567989] pci 0000:01:00.0: PME# supported from D0 D3hot D3cold
[    7.583232] pci 0000:01:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    7.584313] pci 0000:02:03.0: [1b21:1182] type 01 class 0x060400
[    7.584605] pci 0000:02:03.0: enabling Extended Tags
[    7.585014] pci 0000:02:03.0: PME# supported from D0 D3hot D3cold
[    7.586387] pci 0000:02:07.0: [1b21:1182] type 01 class 0x060400
[    7.586688] pci 0000:02:07.0: enabling Extended Tags
[    7.587094] pci 0000:02:07.0: PME# supported from D0 D3hot D3cold
[    7.591552] pci 0000:02:03.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    7.591610] pci 0000:02:07.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    7.592187] pci 0000:03:00.0: [8086:2522] type 00 class 0x010802
[    7.592304] pci 0000:03:00.0: reg 0x10: [mem 0x00000000-0x00003fff 64bit]
[    7.592494] pci 0000:03:00.0: reg 0x20: [mem 0x00000000-0x0000ffff 64bit]
[    7.592570] pci 0000:03:00.0: enabling Extended Tags
[    7.593376] pci 0000:03:00.0: 4.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x1 link at 0000:00:00.0 (capable of 15.752 Gb/s with 8.0 GT/s PCIe x2 link)
[    7.598746] pci_bus 0000:03: busn_res: [bus 03-ff] end is updated to 03
[    7.599280] pci 0000:04:00.0: [8086:2522] type 00 class 0x010802
[    7.599402] pci 0000:04:00.0: reg 0x10: [mem 0x00000000-0x00003fff 64bit]
[    7.599544] pci 0000:04:00.0: reg 0x20: [mem 0x00000000-0x0000ffff 64bit]
[    7.599655] pci 0000:04:00.0: enabling Extended Tags
[    7.600410] pci 0000:04:00.0: 4.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x1 link at 0000:00:00.0 (capable of 15.752 Gb/s with 8.0 GT/s PCIe x2 link)
[    7.605949] pci_bus 0000:04: busn_res: [bus 04-ff] end is updated to 04
[    7.605990] pci_bus 0000:02: busn_res: [bus 02-ff] end is updated to 04
[    7.606044] pci 0000:00:00.0: BAR 8: assigned [mem 0xfc200000-0xfc3fffff]
[    7.606054] pci 0000:01:00.0: BAR 8: assigned [mem 0xfc200000-0xfc3fffff]
[    7.606061] pci 0000:02:03.0: BAR 8: assigned [mem 0xfc200000-0xfc2fffff]
[    7.606068] pci 0000:02:07.0: BAR 8: assigned [mem 0xfc300000-0xfc3fffff]
[    7.606076] pci 0000:03:00.0: BAR 4: assigned [mem 0xfc200000-0xfc20ffff 64bit]
[    7.606159] pci 0000:03:00.0: BAR 0: assigned [mem 0xfc210000-0xfc213fff 64bit]
[    7.606209] pci 0000:02:03.0: PCI bridge to [bus 03]
[    7.606238] pci 0000:02:03.0:   bridge window [mem 0xfc200000-0xfc2fffff]
[    7.606293] pci 0000:04:00.0: BAR 4: assigned [mem 0xfc300000-0xfc30ffff 64bit]
[    7.606335] pci 0000:04:00.0: BAR 0: assigned [mem 0xfc310000-0xfc313fff 64bit]
[    7.606394] pci 0000:02:07.0: PCI bridge to [bus 04]
[    7.606414] pci 0000:02:07.0:   bridge window [mem 0xfc300000-0xfc3fffff]
[    7.606455] pci 0000:01:00.0: PCI bridge to [bus 02-04]
[    7.606482] pci 0000:01:00.0:   bridge window [mem 0xfc200000-0xfc3fffff]
[    7.606540] pci 0000:00:00.0: PCI bridge to [bus 01-ff]
[    7.606546] pci 0000:00:00.0:   bridge window [mem 0xfc200000-0xfc3fffff]
[    7.607758] pcieport 0000:00:00.0: PME: Signaling with IRQ 81
[    7.608163] pcieport 0000:01:00.0: enabling device (0000 -> 0002)
[    7.608759] pcieport 0000:02:03.0: enabling device (0000 -> 0002)
[    7.609514] pcieport 0000:02:07.0: enabling device (0000 -> 0002)
[    7.610551] nvme nvme0: pci function 0000:03:00.0
[    7.611092] nvme nvme1: pci function 0000:04:00.0

@radxa, please investigate how to make them work every time. Can I use a dumb 5 V power supply, or does the Rock 2F need USB-PD?
The docs say “5V power only”.

This feels like some kind of power issue. Can you install the HAT on another device to power it from that device’s 5 V pin, while connecting the PCIe cable to the 2F, to check?

I have no other devices that take this HAT, unless any device with this GPIO header works. I’ll try it!

The drives get initialized on every 8th reboot, on average.

You can try rescanning the PCIe bus:

echo 1 > /sys/bus/pci/rescan

If the issue is power-related, then a delayed rescan will probably work.
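A minimal sketch of such a delayed rescan. The root-port address 0000:00:00.0 matches the logs in this thread; the `SYSFS_PCI` and `RESCAN_DELAY` overrides exist only so the script can be exercised outside the target hardware:

```shell
#!/bin/sh
# Delayed PCIe rescan: give the HAT's power rail time to settle,
# drop the stale root-port view, then re-enumerate the bus.
SYSFS_PCI="${SYSFS_PCI:-/sys/bus/pci}"
sleep "${RESCAN_DELAY:-10}"
# Removing the root port first forces a clean re-enumeration.
echo 1 > "$SYSFS_PCI/devices/0000:00:00.0/remove" 2>/dev/null || true
echo 1 > "$SYSFS_PCI/rescan"
```

Run it as root once boot has finished; if the drives appear after this but never at boot, that points further toward a power-up timing issue.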

Tried rescanning:
echo 1 | sudo tee /sys/bus/pci/rescan

This did not make the NVMe drives appear (sometimes just one is detected at boot).

Yes, you just need to power the HAT from another device’s 40-pin header for 5 V, while still connecting the ribbon cable to your Radxa device.