Largest capacity NVMe SSD usable in Rock 5?

I was wondering if anyone knew whether the Rock 5 supports NVMe drives greater than 2TB in capacity on its PCIe 3.0 x4 slot.

I see some 4TB drives available with PCIe 3.0 and 4.0 interfaces (hoping the PCIe 4.0 ones would downgrade to PCIe 3.0 speeds as is standard).

I’m using a Rock Pi 4 as a compact low power file server/small app server in a small RV, and upgrading storage space along with CPU and memory would be nice.

Thanks!

NVMe has been designed from the ground up to overcome limitations of AHCI/SATA (where with LBA48 the highest addressable capacity is already 131072 TiB or 128 PiB). So only once you wanted to use NVMe drives 65,536 times larger than the 2TB one you’re looking at now would I start to think about capacity limitations.
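
For reference, the arithmetic behind that 128 PiB figure (LBA48 with the usual 512 byte logical sectors; the 65,536 factor below treats the 2TB drive as 2 TiB):

    2^48 sectors × 512 bytes = 2^57 bytes = 131072 TiB = 128 PiB
    128 PiB / 2 TiB = 2^57 / 2^41 = 2^16 = 65536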

PCIe 3.0 and 4.0 are specification revisions. The speeds are defined as Gen1, Gen2 and so on and PCIe link training is part of the PCIe specs.

As such the PCIe 3.0 controller in RK3588, allowing for Gen3 link speeds with 4 lanes max (x4), will in combination with a PCIe 4.0/Gen4 capable SSD negotiate Gen3 speed with as many lanes as available. With dusty contacts the negotiated link speed might be even lower, so it’s always a great idea to run lspci -vv after installing an NVMe SSD to see what has been negotiated.
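
To give an idea of what that looks like in practice, a minimal check (the 01:00.0 address is just an example; pick the SSD's actual address from plain lspci output first):

    # find the SSD's PCI address first
    lspci
    # then compare what the SSD is capable of (LnkCap) with what was actually
    # negotiated (LnkSta); root is needed for the full -vv output
    sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'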

BTW: on paper a Gen4 x2 SSD and a Gen3 x4 SSD should perform somewhat similarly when looking at theoretical transfer speeds. But in a slot behind a PCIe 3.0 controller that maxes out at Gen3 link speed the difference becomes obvious: the Gen4 capable SSD ends up with half the speed since this results in a Gen3 x2 link.
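
Rough numbers behind that comparison (raw rate per lane × lane count, corrected for 128b/130b encoding, divided by 8 to get bytes; protocol overhead ignored):

    Gen3 x4:  8 GT/s × 4 × (128/130) / 8 ≈ 3.94 GB/s
    Gen4 x2: 16 GT/s × 2 × (128/130) / 8 ≈ 3.94 GB/s   (the same on paper)
    Gen3 x2:  8 GT/s × 2 × (128/130) / 8 ≈ 1.97 GB/s   (what a Gen4 x2 SSD ends
                                                         up with behind RK3588)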

As such: when buying SSDs, those supporting x4 connections are the better choice if the plan is to connect them to older controllers only capable of lower link speeds.

Thanks!

Perhaps I should ask a slightly different question since certain NVMe SSDs are reported to have problems on Rock 4 SBCs:

Has anyone gotten specific NVMe drives greater than 2TB in capacity working without problems on the Rock 5, and if so which model(s)?

I’ve been wondering how to determine the link speed - thank you for the lspci -vv pointer. Do you know of a good reference describing how to interpret the results to check the speed?

I think my Rock 4 with a Samsung 970 EVO Plus 2TB may be suffering from degraded negotiated link speed (after Gen2 enabled):

            LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
                    ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
            LnkCtl: ASPM L1 Enabled; RCB 64 bytes Disabled- CommClk-
                    ExtSynch- ClockPM+ AutWidDis- BWInt- AutBWInt-
            LnkSta: Speed 5GT/s (downgraded), Width x2 (downgraded)

Rock 4 is based on RK3399 which is only capable of Gen2 x4. As such Speed 5GT/s (downgraded) is to be expected.

Translation of GT/s to PCIe ‘Gen’ speed classes: https://en.wikipedia.org/wiki/PCI_Express#History_and_revisions (with Gen3 the line coding switched from 8b/10b to the more efficient 128b/130b, so 8 GT/s with Gen3 is almost twice as fast as 5 GT/s with Gen2)
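
If parsing lspci output feels clumsy: the kernel exposes the same information as standard sysfs attributes (the 0000:01:00.0 address below is just an example, use the address lspci reports for the SSD; the exact output format depends on kernel version):

    # negotiated link (what you currently get)
    cat /sys/bus/pci/devices/0000:01:00.0/current_link_speed
    cat /sys/bus/pci/devices/0000:01:00.0/current_link_width
    # maximum link the device advertises
    cat /sys/bus/pci/devices/0000:01:00.0/max_link_speed
    cat /sys/bus/pci/devices/0000:01:00.0/max_link_width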

Missing two lanes (Width x4 -> Width x2) is not expected (by me), but the only Rock 4 device I own is an early developer sample from 2018 where even the CE logo is printed wrongly, so I can’t say much about that.

As for the reports about ‘certain NVMe SSDs having problems’ I’d better not comment, since that’s just like other storage issues in this weird SBC world: people blame technology without understanding it and copy&paste the same disinformation over and over again (as with UAS or ‘USB Attached SCSI’, which is blamed for every USB cable problem in this SBC world).

NVMe as a protocol, when used directly, has no practical capacity limitation today (unlike some crappy USB enclosures – at least that was a thing with SATA, where drives were capped at 2TB in a USB enclosure while happily showing their full capacity when used natively).

Neither Radxa nor Rockchip can change anything about this since there’s no special voodoo involved anywhere. Radxa is just slapping Rockchip SoCs on their boards and Rockchip just licenses its PCIe controller from over there: https://www.synopsys.com/designware-ip/interface-ip/pci-express.html

The TL;DR version is:

  • there’s no size limitation with NVMe SSDs
  • since Rock 5B (RK3588’s PCIe30X4(4L) controller [1]) maxes out at Gen3 x4, if you buy an SSD capable of Gen4 speeds (or better) take care that it is not just x2 since this ends up with degraded speed: Gen3 x2. Buy an x4 SSD if you’re after maximum performance (though buy quality SSDs anyway since cheap SSD garbage will always underperform regardless of interface specs)
  • if you don’t care about performance buy whatever fits as long as it’s not garbage (faking capacity or SMART values; see the quick checks below)
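
For that last point, two quick sanity checks with common tools (smartmontools, nvme-cli and f3; the /dev/nvme0 and /dev/nvme0n1 device names are just examples):

    # SMART/health data as reported by the drive (smartmontools)
    sudo smartctl -a /dev/nvme0
    # or the same via nvme-cli
    sudo nvme smart-log /dev/nvme0

    # probe for fake capacity with f3; WARNING: --destructive erases all data
    sudo f3probe --destructive --time-ops /dev/nvme0n1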

[1] RK3588 features 5 PCIe controllers providing PCIe Gen3 with 4 lanes max and Gen2 with 3 lanes max, the latter pinmuxed with SATA/USB3.

After disassembling everything, blowing out and cleaning all the contacts, and replacing the PCIe ribbon cable, my Rock Pi 4 B+ PCIe Width is back to x4 and NVMe speeds are back to normal. So maybe either dirt/dust or the EcoPi Pro HP case bending/damaging the ribbon cable caused the issue.

Thanks @tkaiser for the guidance and info!

I look forward to repeating this all with a Rock Pi 5, and I’m still interested in hearing people’s Rock 5 experiences with specific NVMe drives used successfully - especially regarding fastest Gen3 speeds, lowest power consumption, and largest capacity (I understand all capacities should work).

Symptoms point more to “and” and not “either/or”. :slight_smile:

At least you experienced PCIe link training first hand (slowing down your PCIe connection between host and SSD to 25%) and would probably now vote for a startup service that checks the following stuff (see the sketch after the list):

  • is a PCIe device (like an NVMe SSD) attached (easy to spot with lspci)?
  • if so, are there PCIe link training problems reported (easy to spot with lspci -vv | grep -E "LnkCap|LnkSta")? If so, somehow inform the user about these issues
  • in case the PCIe device is an NVMe SSD, are there dangerous settings active (easy to spot by checking whether the ASPM policy is set to powersupersave)?
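
A minimal sketch of what such a check could look like, assuming nothing more than lspci and standard sysfs paths (this is just an illustration, not something Radxa ships; the service/unit wiring is left out):

    #!/bin/sh
    # Hypothetical boot-time PCIe sanity check; run as root for full -vv output.

    # 1) is any PCIe device attached at all? (simplified: on some boards an
    #    empty root port may still show up in lspci output)
    if [ -z "$(lspci 2>/dev/null)" ]; then
        echo "No PCIe device detected"
        exit 0
    fi

    # 2) report negotiated vs. possible link; '(downgraded)' can mean dirty
    #    contacts or cabling, but also simply a Gen4 SSD behind a Gen3 host,
    #    so this only flags candidates for a closer look
    lspci -vv 2>/dev/null | grep -E 'LnkCap:|LnkSta:'
    lspci -vv 2>/dev/null | grep -q 'LnkSta:.*downgraded' \
        && echo "NOTE: at least one PCIe link reports downgraded speed or width"

    # 3) NVMe SSD present (PCI class 0108) and ASPM policy set to powersupersave?
    if [ -n "$(lspci -d ::0108 2>/dev/null)" ]; then
        policy="$(cat /sys/module/pcie_aspm/parameters/policy 2>/dev/null)"
        case "$policy" in
            *\[powersupersave\]*) echo "WARNING: ASPM policy is powersupersave" ;;
        esac
    fi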

Something I suggested to Radxa already months ago and am still hoping for them to implement, since these PCIe link issues are far more common than expected, especially once ‘extender cables’ or ‘extender boards’ (like on Khadas boards) are used.
