[Rock 5B] 980 EVO SSD SATA

This is stuff from the past. The new socket was called NGFF back then, but after standardization happened it is called M.2 now, and sockets 2/3 got proper names (those funny keyings and so on).

RK3588 has five PCIe controllers and three Combo PIPE PHYs. Of these three PHYs, the first is routed to the RTL8125BG chip, one to the key E slot and the other to USB3. So, as you experienced, if you switch just a random PCIe Gen2 lane to SATA (sata0), things break (the RTL8125BG is only PCIe capable, and if you change the PHY mode to SATA it gets disconnected).

The only two ‘slots’ where you could get SATA signals are the key E slot and one USB3-A receptacle. In both cases ‘all’ that’s needed is a DT overlay and a custom-made adapter for your B/M-keyed SATA SSD.

The M.2 key M slot has no SATA capabilities whatsoever, so simply forget about inserting a SATA SSD here. It will never work, and power is the least of the problems. The only way to bring the key M slot together with SATA is by slapping a SATA host controller into it and wiring your SSD to that (maybe on Aliexpress M.2 SATA host controllers exist that feature another M.2 slot to insert a SATA SSD).

BTW: the keying also has no direct relationship to the physical protocol layer since key B is specified to carry ‘PCIe ×2, SATA, USB 2.0 and 3.0, audio, UIM, HSIC, SSIC, I2C and SMBus’ and key M is able to carry ‘PCIe ×4, SATA and SMBus’.

And there’s a reason NVMe is mentioned nowhere, since NVMe is the storage protocol above the physical layer (which is almost always PCIe with NVMe; with SATA the storage protocol is always AHCI, though a decade ago PCIe SSDs that relied on AHCI instead of NVMe were a thing).

TL;DR: the way Radxa decided to route the available PCIe/SATA lanes, there’s no way to get any SATA SSD attached directly to the M.2 key M slot to work, since this slot cannot be turned into SATA. This is both Rock 5B and RK3588 specific, and what happens on mainboards or somewhere else is irrelevant :slight_smile:

Check the SSD for fake capacity with f3 and then use it somewhere else, unless you’re willing to build/use mechanical adapters and attach the thing to the key E slot or USB3-A.


I bought a Patriot M.2 P300 256GB and it works like a charm.
Here’s an hdparm benchmark on a running Ubuntu Focal image:

/dev/nvme0n1:
Timing cached reads: 8120 MB in 2.00 seconds = 4063.56 MB/sec
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 3584 MB in 3.00 seconds = 1194.56 MB/sec
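For reference, numbers like these come from hdparm’s built-in timing test; the invocation is something like the following (device name is an example):

```shell
# -T times cached reads, -t times buffered disk reads
sudo hdparm -tT /dev/nvme0n1
# --direct bypasses the page cache for a slightly more honest number
sudo hdparm -tT --direct /dev/nvme0n1
```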

Interesting, here it is rated at 1700 MB/s read, 1100 MB/s write, at a good price.
The only problem I see is that if I start compiling things, it will be filled up in 2 months… :smiley:
Never mind, what is important is the TBW: up to 960GB.

A while back I saw a video about an external SSD drive that was bought on a Chinese website. These guys are not dumb. They know how to work the controller firmware and put some microSD cards in it to fool everything.

About SSDs getting power by just being inserted into the M.2 slot: both M.2 NGFF (SATA) and M.2 NVMe drives draw very little power, so you do not need extra power to make them work. The only difference is that an M.2 NGFF (SATA) drive needs to be connected to a SATA port, while M.2 NVMe uses the PCIe lanes of the slot itself.

And yes, with the Rock 5B you just put the M.2 NVMe in the M.2 slot and it works. At least with my Samsung 970 Evo Plus NVMe SSD.

You are comparing benchmarks with random numbers generated by hdparm (which was a benchmark tool last century but isn’t any more today).

This is hdparm run 5 times with a cheap Kioxia consumer SSD (the junk currently mounted on my Rock 5B):

Timing buffered disk reads: 622 MB in  3.00 seconds = 207.27 MB/sec
Timing buffered disk reads: 1506 MB in  3.00 seconds = 501.95 MB/sec
Timing buffered disk reads: 2670 MB in  3.00 seconds = 889.99 MB/sec
Timing buffered disk reads: 4894 MB in  3.00 seconds = 1630.62 MB/sec
Timing buffered disk reads: 4242 MB in  3.00 seconds = 1413.98 MB/sec

First two are with cpufreq at the lowest setting, next two with cpufreq at the highest setting. Always a little core first, followed by a big one.

With Radxa’s OS images until recently (see here for details), and with Armbian still and most probably forever, the tweaks needed to ramp up CPU clockspeeds with I/O workloads aren’t applied. As such, with hdparm in ‘fire and forget’ mode you’re ‘benchmarking’ the cpufreq scheduler/governor more than the storage. And as can be seen above, this makes the difference between 210 and 1630 MB/s.

Last check above was with default settings (random behaviour based on what scheduler and cpufreq governor do). Are these 1413.98 MB/sec the ‘drive performance’?

No, of course not, since hdparm uses a block size from last century (128KB was huge back then, when Linux developers added the -t and -T switches to the tool while dealing with spinning rust attached to an IDE interface or worse). And the --direct switch was missing as well.

Today we should check with 1M or better 16M block sizes. With my junk Kioxia SSD this then looks like the following, using taskset -c 4 iozone -e -I -a -s 1000M -r 16384k -i 0 -i 1:

          kB  reclen    write  rewrite    read    reread
     1024000   16384  1046735  1047446  2572526  2582696

That’s closer to ‘drive performance’ than misusing hdparm: 1000/2600 MB/s write/read.
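For cross-checking, fio can run a roughly equivalent sequential test; the file path and job names below are just examples:

```shell
# 1 GB sequential write then read with 16 MiB blocks, O_DIRECT,
# pinned to a big core like the iozone call above
taskset -c 4 fio --name=seqwrite --filename=/mnt/nvme/fio.tmp --size=1G \
    --bs=16M --rw=write --direct=1 --ioengine=libaio --end_fsync=1
taskset -c 4 fio --name=seqread --filename=/mnt/nvme/fio.tmp --size=1G \
    --bs=16M --rw=read --direct=1 --ioengine=libaio
```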

OMG, these SBC forums…

Please stop posting such nonsense. NGFF is what M.2 used to be called. It’s a mechanical connector, nothing more, nothing less. Just think about which page this link redirects to: https://en.wikipedia.org/wiki/NGFF

Bro chillll… I am talking about something like this, which I use in combination with my M.2 NGFF (SATA) SSD. It has mSATA, M.2 NGFF (SATA) & M.2 NVMe.

I have an M3 Station board with an RK3588S processor that works with M.2 SATA by default but can also work with M.2 NVMe depending on the dtb; I tried it personally. I saw that it works with an M.2-to-SATA adapter, so it should work. I didn’t try it on the Rock 5B because it has a lower speed…

Sure, the M3 is faster because of the ‘s’ of super :rofl:

There will always be an adapter board to get SATA out of the PCIe lanes, on top and bottom, plus a dts change.

And ‘I saw this and that’ doesn’t help, and ‘on a PC this and that’ doesn’t apply to an SBC.

The M3 Station cannot work with 4 PCIe lanes because it does not have them, and so it works better with M.2 SATA.
Sure, the M3 is more expensive because of the ‘s’ of super :rofl:

I would like to share that I just received this NVMe and it is compatible.

sudo fdisk -l /dev/nvme0n1 
Disk /dev/nvme0n1: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: SSD 512GB                               
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 224CFD96-A6DD-7B4F-8AF7-ECDF49F3D1E8

Device           Start        End   Sectors   Size Type
/dev/nvme0n1p1   32768    1081343   1048576   512M EFI System
/dev/nvme0n1p2 1081344 1000215182 999133839 476.4G Linux filesystem

If it’s NVMe, of course it’s compatible. I would rather check whether it has a fake capacity with f3 and find out what this thing really is (“Goldenfir” makes neither NAND flash nor SSD controllers).

smartctl -x /dev/nvme0n1 might tell.

sudo smartctl -x /dev/nvme0n1
smartctl 7.2 2020-12-30 r5155 [aarch64-linux-5.10.110-debugx] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number:                       SSD 512GB
Serial Number:                      AA2021103905
Firmware Version:                   T1103F0L
PCI Vendor/Subsystem ID:            0x126f
IEEE OUI Identifier:                0x000001
Total NVM Capacity:                 512,110,190,592 [512 GB]
Unallocated NVM Capacity:           0
Controller ID:                      1
NVMe Version:                       1.3
Number of Namespaces:               1
Namespace 1 Size/Capacity:          512,110,190,592 [512 GB]
Namespace 1 Formatted LBA Size:     512
Namespace 1 IEEE EUI-64:            000001 0000000000
Local Time is:                      Fri Mar  3 19:45:40 2023 -03
Firmware Updates (0x12):            1 Slot, no Reset required
Optional Admin Commands (0x0007):   Security Format Frmw_DL
Optional NVM Commands (0x0014):     DS_Mngmt Sav/Sel_Feat
Log Page Attributes (0x03):         S/H_per_NS Cmd_Eff_Lg
Maximum Data Transfer Size:         64 Pages
Warning  Comp. Temp. Threshold:     83 Celsius
Critical Comp. Temp. Threshold:     85 Celsius

Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
 0 +     6.00W       -        -    0  0  0  0        0       0

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         0

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02)
Critical Warning:                   0x00
Temperature:                        40 Celsius
Available Spare:                    100%
Available Spare Threshold:          10%
Percentage Used:                    0%
Data Units Read:                    1,000,427 [512 GB]
Data Units Written:                 1,002,862 [513 GB]
Host Read Commands:                 1,958,105
Host Write Commands:                1,966,249
Controller Busy Time:               95
Power Cycles:                       5
Power On Hours:                     6
Unsafe Shutdowns:                   3
Media and Data Integrity Errors:    0
Error Information Log Entries:      0
Warning  Comp. Temperature Time:    0
Critical Comp. Temperature Time:    0

Error Information (NVMe Log 0x01, 16 of 64 entries)
No Errors Logged

f3 - v8
running f3write … 15%… 340 MB/s
running f3write … 30%… 95 ~ 340 MB/s (getting hot)
running f3write … 39.37%… 115 ~ 354 MB/s (getting hot - board 50 ºC)
running f3write … 55%… 328 MB/s (getting hot - board 50.8 ºC)
let’s wait till finish…

Listening to: https://www.youtube.com/watch?v=Nh1E6ov6wC0 while melting the NVMe…
Things are tense… almost there… switched to https://www.youtube.com/watch?v=hp1V98qZzfU

Creating file 459.h2w ... OK!                          
Creating file 460.h2w ... OK!                         
Creating file 461.h2w ... OK!                         
Creating file 462.h2w ... OK!                         
Creating file 463.h2w ... OK!                         
Creating file 464.h2w ... OK!                         
Creating file 465.h2w ... OK!                         
Creating file 466.h2w ... OK!                         
Creating file 467.h2w ... OK!                         
Creating file 468.h2w ... OK!                        
Free space: 16.00 MB
Average writing speed: 156.43 MB/s
/dev/nvme0n1p2  468G  468G     0 100% /home/rock/rockchip/nvme/nvme

Searching for this reveals that this thing claims to be based on a Silicon Motion SM2263EN controller (used by countless OEMs like Fanxiang, Kingston, Kingchuxing and whoever else). While I personally would never buy from those OEM brands slapping together 3rd-party NAND flash and 3rd-party SSD controllers (Kingston included), maybe it’s a legit drive.

Now you know that this thing gets dog slow with continuous writes even if the usual hdparm BS shows way higher numbers.

But did you execute f3read as well? That’s the verify step that ensures your flash device doesn’t show a fake capacity…
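The full f3 round trip on a mounted filesystem looks roughly like this (mountpoint is an example):

```shell
# fill the free space with test files, then read them back and verify
# checksums; 'corrupted' or 'overwritten' sectors indicate fake capacity
f3write /mnt/nvme
f3read /mnt/nvme
```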


BTW: when referring to f3 to check for fake/real drive capacity (regardless of type, be it an SSD or an SD card or whatever other flash storage), I’m usually talking about f3probe first, at a stage where the new device is not yet mounted (the check is non-destructive unless you set --destructive):

root@rock-5b:~# f3probe /dev/nvme0n1
F3 probe 7.2
Copyright (C) 2010 Digirati Internet LTDA.
This is free software; see the source for copying conditions.

WARNING: Probing normally takes from a few seconds to 15 minutes, but
         it can take longer. Please be patient.

Probe finished, recovering blocks... Done

Good news: The device `/dev/nvme0n1' is the real thing

Device geometry:
	         *Usable* size: 238.47 GB (500118192 blocks)
	        Announced size: 238.47 GB (500118192 blocks)
	                Module: 256.00 GB (2^38 Bytes)
	Approximate cache size: 0.00 Byte (0 blocks), need-reset=no
	   Physical block size: 512.00 Byte (2^9 Bytes)

Probe time: 12.70s

(the latest f3 release is 8.0 from 2020, and since then the master branch has been updated countless times, but I guess the distro version is fine, at least when running the latest Ubuntu/Debian versions)

rock@rock5b:~/rockchip/nvme/f3-8.0$ sudo ./f3probe /dev/nvme0n1
F3 probe 8.0
Copyright (C) 2010 Digirati Internet LTDA.
This is free software; see the source for copying conditions.

WARNING: Probing normally takes from a few seconds to 15 minutes, but
         it can take longer. Please be patient.

Probe finished, recovering blocks... Done

Good news: The device `/dev/nvme0n1' is the real thing

Device geometry:
	         *Usable* size: 476.94 GB (1000215216 blocks)
	        Announced size: 476.94 GB (1000215216 blocks)
	                Module: 512.00 GB (2^39 Bytes)
	Approximate cache size: 0.00 Byte (0 blocks), need-reset=no
	   Physical block size: 512.00 Byte (2^9 Bytes)

Probe time: 16.43s

Regarding f3read, I wasn’t aware it would do an error check. I then deleted the files, and later ran it with fewer files; it was about 600 MB/s.

I am building the latest BSP kernel and will redo a complete test with performance and optimization and see what i get.

This SSD was not really cheap; I have seen a video that states the Kingston NV2 1TB was almost the same price (well, not really the same as I paid for my 512GB). I tried to find this device at the claimed price, but did not find it.


I am thinking of ordering a Lexar NVMe M.2 SSD, 7500MB/s, 1TB, M.2 2280, PCIe 4.0, for $65 (low-end).
Would that work with PCIe 3 and still be fast (or is it just a waste)?

Sure, PCIe ‘link training’ is part of the specs, and two PCIe devices will always negotiate the highest link width/speed both support.

You only need to care whether this Lexar thing is a Gen4 x2 offering, since then performance on a PCIe Gen3 capable host will be severely harmed.

If it’s a Gen4 x4 drive then Gen3 x4 will be negotiated, so you ‘lose’ half the theoretical max bandwidth and a little bit of random I/O performance (which is way more important anyway, even if the whole hobbyist/SBC world only cares about those silly sequential transfer speeds).

And if this SSD isn’t already worn out in 5 years, being capable of Gen4 speeds is an advantage, since the host it will be combined with by then is most likely at least Gen4 capable.
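Whether x4 at Gen3 speed was actually negotiated can be checked from Linux; the PCI address below is only an example, yours will differ:

```shell
# LnkCap = what the endpoint supports, LnkSta = what was negotiated
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'
# or via sysfs, without lspci:
cat /sys/class/nvme/nvme0/device/current_link_speed
cat /sys/class/nvme/nvme0/device/current_link_width
```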

It claims Gen4 x4

I just received the Lexar SSD NVME M2 7500MB/s 1TB M.2 2280 PCIe 4.0.
I found out some interesting info about my Nvme journey.

Some basic info for USERs:

  • not all are equal (duh?)
    Calm down, what I mean is: some come with a screwdriver kit (two screws) to install it, and some come with nothing. Finding a screw for this is a pain.

  • I have 3 SSDs (now two), one with 1 GPIO LED, one with 2 GPIO LEDs, and the other without a GPIO LED.

  • Power consumption seems to be an issue for booting the board; I will try to explain my findings.

I think this experience (preliminary info) can help others.

I have one board with a 512GB NVMe booting from SPI and powered by a 65W PD charger; earlier I booted from SD card.

The second one, with the Lexar, powered by the same 65W PD charger and the same SD card (the one that booted with the 512GB), can’t boot. The usual boot loop.
Luckily I have a dumb 5V 4A power adapter here that works with the 1TB. I have not tested whether it is stable. I will find a way to measure consumption with my Kill A Watt (I don’t think it is precise, it is old).

Anyway, to be short, the same setup that works for one does not work for the other.

Info:

sudo smartctl -x /dev/nvme0n1
smartctl 7.2 2020-12-30 r5155 [aarch64-linux-5.10.110-rk3588] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number:                       Lexar SSD NM710 1TB
Serial Number:                      NB20002007335P2200
Firmware Version:                   8212
PCI Vendor/Subsystem ID:            0x1d97
IEEE OUI Identifier:                0xcaf25b
Total NVM Capacity:                 1,000,204,886,016 [1.00 TB]
Unallocated NVM Capacity:           0
Controller ID:                      0
NVMe Version:                       1.4
Number of Namespaces:               1
Namespace 1 Size/Capacity:          1,000,204,886,016 [1.00 TB]
Namespace 1 Utilization:            2,097,152 [2.09 MB]
Namespace 1 Formatted LBA Size:     512
Namespace 1 IEEE EUI-64:            caf25b 02a00003ee
Local Time is:                      Fri Mar 17 11:26:52 2023 -03
Firmware Updates (0x16):            3 Slots, no Reset required
Optional Admin Commands (0x0017):   Security Format Frmw_DL Self_Test
Optional NVM Commands (0x001f):     Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat
Log Page Attributes (0x02):         Cmd_Eff_Lg
Maximum Data Transfer Size:         128 Pages
Warning  Comp. Temp. Threshold:     120 Celsius
Critical Comp. Temp. Threshold:     130 Celsius

Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
 0 +     6.50W       -        -    0  0  0  0        0       0
 1 +     5.80W       -        -    1  1  1  1        0       0
 2 +     3.60W       -        -    2  2  2  2        0       0
 3 -   0.0500W       -        -    3  3  3  3     5000   10000
 4 -   0.0025W       -        -    4  4  4  4     8000   45000

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         0

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02)
Critical Warning:                   0x00
Temperature:                        41 Celsius
Available Spare:                    100%
Available Spare Threshold:          10%
Percentage Used:                    0%
Data Units Read:                    5 [2.56 MB]
Data Units Written:                 0
Host Read Commands:                 293
Host Write Commands:                0
Controller Busy Time:               0
Power Cycles:                       8
Power On Hours:                     0
Unsafe Shutdowns:                   7
Media and Data Integrity Errors:    0
Error Information Log Entries:      0
Warning  Comp. Temperature Time:    0
Critical Comp. Temperature Time:    0
Temperature Sensor 1:               41 Celsius
Temperature Sensor 2:               29 Celsius

Error Information (NVMe Log 0x01, 16 of 64 entries)
No Errors Logged

This is the NVMe, for reference:

Edit:
The kernel takes longer to probe this device, thus resetting the PD communication, apparently.

Measured with the “Kill A Watt”: 6.5 W (peak), 3.4 W (idle)

Edit2:

I tried with a 33W PD charger and guess what, it works.
So the SSD probe timing is irrelevant. I don’t have any means to measure the input voltage.

rock@rock5b:~$ sensors
gpu_thermal-virtual-0
Adapter: Virtual device
temp1:        +30.5°C  

littlecore_thermal-virtual-0
Adapter: Virtual device
temp1:        +30.5°C  

bigcore0_thermal-virtual-0
Adapter: Virtual device
temp1:        +30.5°C  

tcpm_source_psy_4_0022-i2c-4-22
Adapter: rk3x-i2c
in0:          20.00 V  (min = +20.00 V, max = +20.00 V)
curr1:         1.35 A  (max =  +1.35 A)

npu_thermal-virtual-0
Adapter: Virtual device
temp1:        +31.5°C  

center_thermal-virtual-0
Adapter: Virtual device
temp1:        +30.5°C  

bigcore1_thermal-virtual-0
Adapter: Virtual device
temp1:        +30.5°C  

soc_thermal-virtual-0
Adapter: Virtual device
temp1:        +31.5°C  (crit = +115.0°C)

f3probe:

sudo ./f3probe /dev/nvme0n1
F3 probe 8.0
Copyright (C) 2010 Digirati Internet LTDA.
This is free software; see the source for copying conditions.

WARNING: Probing normally takes from a few seconds to 15 minutes, but
         it can take longer. Please be patient.

Probe finished, recovering blocks... Done

Good news: The device `/dev/nvme0n1' is the real thing

Device geometry:
	         *Usable* size: 931.51 GB (1953525168 blocks)
	        Announced size: 931.51 GB (1953525168 blocks)
	                Module: 1.00 TB (2^40 Bytes)
	Approximate cache size: 0.00 Byte (0 blocks), need-reset=no
	   Physical block size: 512.00 Byte (2^9 Bytes)

Probe time: 15.46s