NVMe Not Detected on Rock 5T Running Debian 12 – Need Help

Hi Radxa Team / Community,

I’ve installed Debian 12 with Desktop on my Rock 5T board, and I’m running into an issue where my NVMe SSD is not being detected.

Here are the details:

  • Board : Rock 5T
  • OS : Debian 12 with Desktop
  • Storage : NVMe
  • Issue : The NVMe drive is not detected at all. It does not show up in lsblk, fdisk -l, or dmesg | grep nvme.

What I’ve tried:

  • Verified NVMe is properly seated.
  • Ran modprobe nvme (no errors, but no device appears).
  • lspci shows PCIe controllers are present and active.
  • No nvme module appears loaded via lsmod .
  • dmesg does not show anything related to NVMe or PCIe device enumeration for the SSD.
  • DTB directory (/boot/dtbs/<kernel>/rockchip/) appears to be empty or missing the expected files.
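For reference, the checks above can be re-run in one place with standard commands (a sketch; the paths and device names are the usual Linux defaults, not anything Rock 5T specific):

```shell
#!/bin/sh
# 1. Any NVMe block device nodes present at all?
ls /dev/nvme* 2>/dev/null || echo "no /dev/nvme* nodes"

# 2. Is the nvme module loaded? (no match can also mean it is built in)
grep -w nvme /proc/modules || echo "nvme not listed in /proc/modules"

# 3. Are DTBs present for the running kernel?
if [ -d /boot/dtbs/"$(uname -r)"/rockchip ]; then
    ls /boot/dtbs/"$(uname -r)"/rockchip | head -n 5
else
    echo "DTB directory missing for kernel $(uname -r)"
fi
```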

It seems like the device tree might not have the correct configuration to enable the PCIe lanes that the NVMe SSD uses. I’m using the kernel that came with the Radxa image, so I expected NVMe to work out of the box, but maybe I’m missing something?

Can you please confirm:

  • Is NVMe supported on Rock 5T in the current Radxa Debian image?
  • Is there a specific DTB file I should be using?
  • Any kernel/firmware update needed to get NVMe working?

Appreciate any help or direction on how to get this resolved.

Thanks in advance!

Not all NVMe drives are well supported: some aren’t detected at all, some can’t boot the system, and others have firmware issues (fixed by a newer firmware version, or not at all). I don’t expect support to be missing from this particular build; more likely the drive itself is somehow specific.
Try a different NVMe drive (one or more) to compare. You may also try a mainline kernel build; there have been so many changes in the Rockchip PCIe controller driver that it may bring support for something else (it cuts both ways, though: something that already works can also break).
Good luck 🙂


@dominik Thanks for the suggestion!
I did some additional testing: I connected another SSD via USB, and it was detected without any issues. However, the same NVMe drive still isn’t detected when connected through the M.2 slot on the Rock 5T.

What’s interesting is that this exact same NVMe drive was working fine for the past couple of months on my Rock 5B board running a Radxa Ubuntu image — so the drive itself seems to be OK.

This makes me wonder if there might be a difference in PCIe lane configuration, DTB setup, or kernel behavior between the Rock 5B and Rock 5T boards.

Let me know if there’s anything else I can try — I appreciate the help!

When you connect an SSD through USB, it shows up as a SCSI device; it has nothing to do with NVMe at that point.
Try another power supply?
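A quick way to see which transport each disk is attached through (a sketch, assuming util-linux’s lsblk):

```shell
# The TRAN column shows the transport: a USB-attached SSD reports "usb"
# (it goes through usb-storage/uas and appears as a SCSI disk, e.g. /dev/sda),
# while a drive detected in the M.2 slot reports "nvme" (e.g. /dev/nvme0n1).
lsblk -d -o NAME,TRAN,SIZE,MODEL
```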


What model of SSD are you using? And is it not recognized in either of the two M.2 slots?


Hi @incognito,
Thanks for your suggestion.
Power supply: I’m using a 12V DC 2A adapter.

Should I try another one?

Hi @ken,

Thanks for your help! Here’s an update:

  • SSD: The SSD I’m using is a Western Digital 256GB NVMe.
  • M.2 Slots: I’ve tried both M.2 interfaces, but unfortunately, neither of them is working with the NVMe SSD.

Let me know if you have any suggestions!

Send your kernel log with dmesg | grep pcie

Hi @jack

FYI,
sudo dmesg | grep pcie
[sudo] password for radxa:
[ 10.824048] reg-fixed-voltage vcc3v3-pcie2x1l0: Looking up vin-supply from device tree
[ 10.824054] vcc3v3_pcie2x1l0: supplied by vcc5v0_sys
[ 10.884895] vcc3v3_pcie2x1l0: 3300 mV, enabled
[ 10.884972] reg-fixed-voltage vcc3v3-pcie2x1l0: vcc3v3_pcie2x1l0 supplying 3300000uV
[ 10.885034] vcc3v3_pcie30: 3300 mV, disabled
[ 10.885082] reg-fixed-voltage vcc3v3-pcie30: Looking up vin-supply from device tree
[ 10.885088] vcc3v3_pcie30: supplied by vcc5v0_sys
[ 10.885120] reg-fixed-voltage vcc3v3-pcie30: vcc3v3_pcie30 supplying 3300000uV
[ 11.465301] rk-pcie fe150000.pcie: invalid prsnt-gpios property in node
[ 11.465388] rk-pcie fe150000.pcie: Looking up vpcie3v3-supply from device tree
[ 11.465452] rk-pcie fe160000.pcie: invalid prsnt-gpios property in node
[ 11.465531] rk-pcie fe160000.pcie: Looking up vpcie3v3-supply from device tree
[ 11.465576] rk-pcie fe170000.pcie: invalid prsnt-gpios property in node
[ 11.465678] rk-pcie fe170000.pcie: Looking up vpcie3v3-supply from device tree
[ 11.482406] rk-pcie fe170000.pcie: host bridge /pcie@fe170000 ranges:
[ 11.482424] rk-pcie fe170000.pcie: IO 0x00f2100000..0x00f21fffff -> 0x00f2100000
[ 11.482435] rk-pcie fe170000.pcie: MEM 0x00f2200000..0x00f2ffffff -> 0x00f2200000
[ 11.482443] rk-pcie fe170000.pcie: MEM 0x0980000000..0x09bfffffff -> 0x0980000000
[ 11.482463] rk-pcie fe150000.pcie: host bridge /pcie@fe150000 ranges:
[ 11.482501] rk-pcie fe170000.pcie: iATU unroll: enabled
[ 11.482505] rk-pcie fe170000.pcie: iATU regions: 8 ob, 8 ib, align 64K, limit 8G
[ 11.482517] rk-pcie fe160000.pcie: host bridge /pcie@fe160000 ranges:
[ 11.482545] rk-pcie fe150000.pcie: IO 0x00f0100000..0x00f01fffff -> 0x00f0100000
[ 11.482598] rk-pcie fe160000.pcie: IO 0x00f1100000..0x00f11fffff -> 0x00f1100000
[ 11.482601] rk-pcie fe150000.pcie: MEM 0x00f0200000..0x00f0ffffff -> 0x00f0200000
[ 11.482631] rk-pcie fe150000.pcie: MEM 0x0900000000..0x093fffffff -> 0x0900000000
[ 11.482643] rk-pcie fe160000.pcie: MEM 0x00f1200000..0x00f1ffffff -> 0x00f1200000
[ 11.482672] rk-pcie fe160000.pcie: MEM 0x0940000000..0x097fffffff -> 0x0940000000
[ 11.482707] rk-pcie fe150000.pcie: iATU unroll: enabled
[ 11.482716] rk-pcie fe150000.pcie: iATU regions: 8 ob, 8 ib, align 64K, limit 8G
[ 11.482734] rk-pcie fe160000.pcie: iATU unroll: enabled
[ 11.482744] rk-pcie fe160000.pcie: iATU regions: 8 ob, 8 ib, align 64K, limit 8G
[ 11.673314] vcc3v3_pcie2x1l2: will resolve supply early: vin
[ 11.673327] reg-fixed-voltage vcc3v3-pcie2x1l2: Looking up vin-supply from device tree
[ 11.673345] vcc3v3_pcie2x1l2: supplied by vcc_3v3_s3
[ 11.673684] vcc3v3_pcie2x1l2: 3300 mV, enabled
[ 11.673865] reg-fixed-voltage vcc3v3-pcie2x1l2: vcc3v3_pcie2x1l2 supplying 3300000uV
[ 11.673979] reg-fixed-voltage vcc3v3-pcie2x1l1: Looking up vin-supply from device tree
[ 11.673990] vcc3v3_pcie2x1l1: supplied by vcc_3v3_s3
[ 11.674328] vcc3v3_pcie2x1l1: 3300 mV, enabled
[ 11.674454] reg-fixed-voltage vcc3v3-pcie2x1l1: vcc3v3_pcie2x1l1 supplying 3300000uV
[ 11.675206] rk-pcie fe180000.pcie: invalid prsnt-gpios property in node
[ 11.675279] rk-pcie fe180000.pcie: Looking up vpcie3v3-supply from device tree
[ 11.675553] rk-pcie fe190000.pcie: invalid prsnt-gpios property in node
[ 11.675694] rk-pcie fe190000.pcie: Looking up vpcie3v3-supply from device tree
[ 11.676175] rk-pcie fe180000.pcie: host bridge /pcie@fe180000 ranges:
[ 11.676201] rk-pcie fe180000.pcie: IO 0x00f3100000..0x00f31fffff -> 0x00f3100000
[ 11.676215] rk-pcie fe180000.pcie: MEM 0x00f3200000..0x00f3ffffff -> 0x00f3200000
[ 11.676225] rk-pcie fe180000.pcie: MEM 0x09c0000000..0x09ffffffff -> 0x09c0000000
[ 11.676279] rk-pcie fe180000.pcie: iATU unroll: enabled
[ 11.676284] rk-pcie fe180000.pcie: iATU regions: 8 ob, 8 ib, align 64K, limit 8G
[ 11.676605] rk-pcie fe190000.pcie: host bridge /pcie@fe190000 ranges:
[ 11.676650] rk-pcie fe190000.pcie: IO 0x00f4100000..0x00f41fffff -> 0x00f4100000
[ 11.676694] rk-pcie fe190000.pcie: MEM 0x00f4200000..0x00f4ffffff -> 0x00f4200000
[ 11.676719] rk-pcie fe190000.pcie: MEM 0x0a00000000..0x0a3fffffff -> 0x0a00000000
[ 11.676800] rk-pcie fe190000.pcie: iATU unroll: enabled
[ 11.676811] rk-pcie fe190000.pcie: iATU regions: 8 ob, 8 ib, align 64K, limit 8G
[ 11.684316] rk-pcie fe150000.pcie: PCIe Linking... LTSSM is 0x0
[ 11.684995] rk-pcie fe160000.pcie: PCIe Linking... LTSSM is 0x0
[ 11.705445] rk-pcie fe150000.pcie: PCIe Linking... LTSSM is 0x0
[ 11.705915] rk-pcie fe160000.pcie: PCIe Linking... LTSSM is 0x0
[ 11.725963] rk-pcie fe150000.pcie: PCIe Linking... LTSSM is 0x0
[ 11.727032] rk-pcie fe160000.pcie: PCIe Linking... LTSSM is 0x0
[ 11.741983] rk-pcie fe170000.pcie: PCIe Link up, LTSSM is 0x30011
[ 11.741997] rk-pcie fe170000.pcie: PCIe Gen.1 x1 link up
[ 11.742152] rk-pcie fe170000.pcie: PCI host bridge to bus 0002:20
[ 11.747013] rk-pcie fe150000.pcie: PCIe Linking... LTSSM is 0x0
[ 11.748148] rk-pcie fe160000.pcie: PCIe Linking... LTSSM is 0x0
[ 11.764354] pcieport 0002:20:00.0: PME: Signaling with IRQ 138
[ 11.768138] rk-pcie fe150000.pcie: PCIe Linking... LTSSM is 0x1
[ 11.769268] rk-pcie fe160000.pcie: PCIe Linking... LTSSM is 0x0
[ 11.941866] rk-pcie fe190000.pcie: PCIe Link up, LTSSM is 0x130011
[ 11.941865] rk-pcie fe180000.pcie: PCIe Link up, LTSSM is 0x130011
[ 11.941892] rk-pcie fe180000.pcie: PCIe Gen.2 x1 link up
[ 11.941903] rk-pcie fe190000.pcie: PCIe Gen.2 x1 link up
[ 11.944324] rk-pcie fe190000.pcie: PCI host bridge to bus 0004:40
[ 11.944372] rk-pcie fe180000.pcie: PCI host bridge to bus 0003:30
[ 11.984617] pcieport 0003:30:00.0: PME: Signaling with IRQ 148
[ 12.003029] pcieport 0004:40:00.0: PME: Signaling with IRQ 158
[ 13.777160] rk-pcie fe160000.pcie: PCIe Link Fail, LTSSM is 0x0, hw_retries=0
[ 13.777246] rk-pcie fe160000.pcie: failed to initialize host
[ 13.780109] rk-pcie fe150000.pcie: PCIe Link Fail, LTSSM is 0x0, hw_retries=0
[ 13.780194] rk-pcie fe150000.pcie: failed to initialize host
[ 13.780917] rockchip-pm-domain fd8d8000.power-management:power-controller: Looking up pcie-supply from device tree
[ 13.780982] rockchip-pm-domain fd8d8000.power-management:power-controller: Looking up pcie-supply property in node /power-management@fd8d8000/power-controller failed
radxa@rock-5t:~$

Kindly let me know if anything looks wrong.
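In case it helps, the link-training results are easy to filter out of the log; a small sketch over a verbatim excerpt of the lines above:

```shell
# Excerpt copied from the dmesg output above. Filtering shows that
# fe170000/fe180000/fe190000 train successfully, while fe150000 and
# fe160000 (LTSSM stuck at 0x0) never link up.
cat > pcie-excerpt.log <<'EOF'
[ 11.741997] rk-pcie fe170000.pcie: PCIe Gen.1 x1 link up
[ 11.941892] rk-pcie fe180000.pcie: PCIe Gen.2 x1 link up
[ 11.941903] rk-pcie fe190000.pcie: PCIe Gen.2 x1 link up
[ 13.777160] rk-pcie fe160000.pcie: PCIe Link Fail, LTSSM is 0x0, hw_retries=0
[ 13.780109] rk-pcie fe150000.pcie: PCIe Link Fail, LTSSM is 0x0, hw_retries=0
EOF

echo "failed controllers:"
grep 'Link Fail' pcie-excerpt.log | grep -o 'fe1[0-9]0000'
```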

I’ve found the root cause of the problem. It turns out the issue was related to insufficient power. Previously, I was using a 12V 2A (24W) adapter, which wasn’t providing enough power for the NVMe SSD to initialize properly.

After switching to PoE (Power over Ethernet) for power delivery, the NVMe drive is now being detected correctly on the Rock 5T.

Thanks to everyone for your suggestions and support throughout the troubleshooting process.


Sorry for the offtopic question, but where did you buy the 5T PoE version from? I’m regularly checking the Radxa stores for 5T with the PoE header installed but I haven’t seen any so far.

@vially Thank you for your message.

The Rock 5T board typically does not come with the PoE header pre-installed; you usually need to purchase the header separately and install it yourself.
I purchased mine from EVELTA, which serves my region.

Let me know if you need any more details.
