KingSpec 128GB M.2 NVMe not found on Debian + Ubuntu

Thank you for your answer. Good that it wasn't too expensive. I don't really need it; it was just for a video about different storage media. I still have USB memory sticks, SD cards, eMMC, and SATA SSDs over USB 3.

I'll advise buying the tested NVMe models in my review. Thank you.

Not sure I would advise buying the tested NVMe drives. Two of the three failed.

What brands and models have you tried? Could you list them here? It might help others.

Sorry, I was not clear. I was referring to the ones you guys tested. Two of your three failed to perform correctly.
I am interested in using NVMe on the ROCK Pi 4B.

  1. Can we boot from it?
  2. If we can't boot from it, can you provide instructions to create a boot microSD or eMMC and keep the file system on the NVMe?
  3. Is there some type of HAT or adapter board to secure it in place?

I meant the ones that were tested and found OK. But the KingSpec is still on the list.

- Known working -

  • Samsung EVO series (M key, NVMe): works well on ROCK Pi 4, fast
  • KingSpec NVMe M.2 2280 (M key, NVMe): works well
  • MaxMemory NVMe M.2 128G (B&M key, NVMe): works well

- Known not working -

  • HP EX900 (B&M key, NVMe): detection fails on ROCK Pi 4, works with a PC.

I tried the KingSpec 128GB on Armbian as well; no luck.

@TheDude Q3:
“We have made an M.2 extender board to put the M.2 SSD on top of the ROCK Pi 4. It looks like this. See picture here and here.”
https://wiki.radxa.com/Rockpi4/FAQs

Thanks. I knew I saw something like that somewhere.
Allnet China is out of stock, and the Austrian and German sites won't let me register to purchase.
I was also looking at that big heatsink, but it's also out of stock. Where else can I purchase these?

The big heatsink will start shipping after next Wednesday.


I bought a GoodRAM 120GB M.2 2280, and it is not working on the Armbian image. Is M.2/NVMe support included in Armbian's kernel?
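In case it helps, here is a quick way to check whether the kernel sees the drive at all; a minimal sketch assuming a typical Armbian/Debian layout:

lspci | grep -i nvme                                # does the controller enumerate on the PCIe bus?
dmesg | grep -iE 'nvme|pcie'                        # look for probe errors
grep CONFIG_BLK_DEV_NVME /boot/config-$(uname -r)   # is the NVMe driver built?

If lspci shows nothing, it is a PCIe detection problem (like the KingSpec above) rather than a missing kernel driver.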

Samsung EVO 970 NVMe M.2 250GB installed and it seems to be working well. $77 delivered.
Booting from microSD with the filesystem on the NVMe.
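For anyone wondering how to do the microSD-boot + NVMe-root setup (Q2 above), here is a rough sketch; the device names and the extlinux path are assumptions, so adjust for your image:

# assumes the NVMe is already partitioned as /dev/nvme0n1p1
sudo mkfs.ext4 /dev/nvme0n1p1
sudo mount /dev/nvme0n1p1 /mnt
sudo rsync -axHAX / /mnt/     # copy the running root, staying on one filesystem
# point the kernel at the new root (config path assumed)
sudo sed -i 's|root=[^ ]*|root=/dev/nvme0n1p1|' /boot/extlinux/extlinux.conf
sudo reboot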

------------------------------------------------------------------------------------------------------------------------------------
FULL WRITE PASS
------------------------------------------------------------------------------------------------------------------------------------

writefile: (g=0): rw=write, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=libaio, iodepth=200
fio-3.1
Starting 1 process
Jobs: 1 (f=0): [f(1)][100.0%][r=0KiB/s,w=1468MiB/s][r=0,w=367 IOPS][eta 00m:00s]
writefile: (groupid=0, jobs=1): err= 0: pid=2343: Sat Dec 22 16:03:24 2018
  write: IOPS=198, BW=792MiB/s (831MB/s)(10.0GiB/12923msec)
    slat (usec): min=719, max=8530, avg=2995.68, stdev=847.41
    clat (msec): min=78, max=1076, avg=970.65, stdev=140.19
     lat (msec): min=80, max=1079, avg=973.65, stdev=140.15
    clat percentiles (msec):
     |  1.00th=[  203],  5.00th=[  718], 10.00th=[  995], 20.00th=[ 1003],
     | 30.00th=[ 1003], 40.00th=[ 1003], 50.00th=[ 1003], 60.00th=[ 1003],
     | 70.00th=[ 1003], 80.00th=[ 1003], 90.00th=[ 1011], 95.00th=[ 1011],
     | 99.00th=[ 1020], 99.50th=[ 1028], 99.90th=[ 1070], 99.95th=[ 1070],
     | 99.99th=[ 1083]
   bw (  KiB/s): min=114688, max=825675, per=96.36%, avg=781855.67, stdev=142262.46, samples=24
   iops        : min=   28, max=  201, avg=190.83, stdev=34.72, samples=24
  lat (msec)   : 100=0.20%, 250=1.17%, 500=1.91%, 750=1.95%, 1000=10.82%
  lat (msec)   : 2000=83.95%
  cpu          : usr=39.53%, sys=13.87%, ctx=3292, majf=0, minf=22
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.5%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwt: total=0,2560,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=200

Run status group 0 (all jobs):
  WRITE: bw=792MiB/s (831MB/s), 792MiB/s-792MiB/s (831MB/s-831MB/s), io=10.0GiB (10.7GB), run=12923-12923msec

Disk stats (read/write):
  nvme0n1: ios=24/10340, merge=0/0, ticks=80/813176, in_queue=813520, util=90.69%
------------------------------------------------------------------------------------------------------------------------------------
RAND READ PASS
------------------------------------------------------------------------------------------------------------------------------------

benchmark: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
...
fio-3.1
Starting 4 processes
Jobs: 4 (f=4): [r(4)][100.0%][r=664MiB/s,w=0KiB/s][r=170k,w=0 IOPS][eta 00m:00s]
benchmark: (groupid=0, jobs=4): err= 0: pid=2388: Sat Dec 22 16:03:55 2018
   read: IOPS=167k, BW=653MiB/s (685MB/s)(19.1GiB/30001msec)
    slat (usec): min=6, max=28457, avg=13.17, stdev=41.48
    clat (usec): min=67, max=44178, avg=3041.52, stdev=1214.98
     lat (usec): min=82, max=44203, avg=3055.53, stdev=1216.38
    clat percentiles (usec):
     |  1.00th=[ 1975],  5.00th=[ 2073], 10.00th=[ 2147], 20.00th=[ 2278],
     | 30.00th=[ 2507], 40.00th=[ 2769], 50.00th=[ 2900], 60.00th=[ 3097],
     | 70.00th=[ 3195], 80.00th=[ 3261], 90.00th=[ 3458], 95.00th=[ 4817],
     | 99.00th=[ 8717], 99.50th=[10159], 99.90th=[12780], 99.95th=[15533],
     | 99.99th=[28181]
   bw (  KiB/s): min=97653, max=205186, per=25.15%, avg=168204.46, stdev=13527.72, samples=239
   iops        : min=24413, max=51296, avg=42050.92, stdev=3381.90, samples=239
  lat (usec)   : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=1.91%, 4=91.08%, 10=6.44%, 20=0.52%, 50=0.02%
  cpu          : usr=28.59%, sys=58.64%, ctx=238159, majf=0, minf=580
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwt: total=5017168,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
   READ: bw=653MiB/s (685MB/s), 653MiB/s-653MiB/s (685MB/s-685MB/s), io=19.1GiB (20.6GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=4993296/151, merge=0/0, ticks=7113608/20, in_queue=7271444, util=100.00%
------------------------------------------------------------------------------------------------------------------------------------
RAND WRITE PASS
------------------------------------------------------------------------------------------------------------------------------------

benchmark: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
...
fio-3.1
Starting 4 processes
Jobs: 4 (f=4): [w(4)][100.0%][r=0KiB/s,w=309MiB/s][r=0,w=78.0k IOPS][eta 00m:00s]
benchmark: (groupid=0, jobs=4): err= 0: pid=2397: Sat Dec 22 16:04:26 2018
  write: IOPS=141k, BW=550MiB/s (576MB/s)(16.1GiB/30007msec)
    slat (usec): min=7, max=29844, avg=15.78, stdev=39.77
    clat (usec): min=155, max=58346, avg=3615.78, stdev=1959.79
     lat (usec): min=200, max=58356, avg=3632.55, stdev=1963.67
    clat percentiles (usec):
     |  1.00th=[ 2212],  5.00th=[ 2212], 10.00th=[ 2245], 20.00th=[ 2245],
     | 30.00th=[ 2245], 40.00th=[ 2278], 50.00th=[ 3359], 60.00th=[ 3425],
     | 70.00th=[ 3458], 80.00th=[ 5669], 90.00th=[ 6521], 95.00th=[ 6915],
     | 99.00th=[ 7767], 99.50th=[ 8717], 99.90th=[18744], 99.95th=[32637],
     | 99.99th=[45351]
   bw (  KiB/s): min=72848, max=228256, per=25.06%, avg=141069.82, stdev=58131.44, samples=239
   iops        : min=18212, max=57064, avg=35267.34, stdev=14532.85, samples=239
  lat (usec)   : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.04%, 4=76.87%, 10=22.70%, 20=0.30%, 50=0.07%
  lat (msec)   : 100=0.01%
  cpu          : usr=29.55%, sys=56.29%, ctx=50270, majf=0, minf=73
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwt: total=0,4222617,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
  WRITE: bw=550MiB/s (576MB/s), 550MiB/s-550MiB/s (576MB/s-576MB/s), io=16.1GiB (17.3GB), run=30007-30007msec

Disk stats (read/write):
  nvme0n1: ios=49/4212455, merge=0/0, ticks=164/5898416, in_queue=6182236, util=100.00%

By default, the NVMe link runs in gen1 mode for compatibility, so the speed is limited. You can enable gen2 mode by decompiling and recompiling the device tree:

fdtdump rockpi-4b-linux.dtb  > /tmp/rockpi4.dts

Find the pcie@f8000000 section.

change from

max-link-speed = <0x00000001>;

to

max-link-speed = <0x00000002>;

Now recompile the dtb:

dtc -I dts -O dtb /tmp/rockpi4.dts -o /tmp/rockpi4.dtb

Replace the original rockpi-4b-linux.dtb (back it up first), then reboot.
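After the reboot you can confirm the negotiated link speed with lspci; 5GT/s means gen2 is active, 2.5GT/s means the link is still at gen1:

sudo lspci -vv | grep -i 'lnksta:'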


Thanks Jack

That's a pretty big increase.

------------------------------------------------------------------------------------------------------------------------------------
FULL WRITE PASS
------------------------------------------------------------------------------------------------------------------------------------

writefile: (g=0): rw=write, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=libaio, iodepth=200
fio-3.1
Starting 1 process
writefile: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [W(1)][88.9%][r=0KiB/s,w=1390MiB/s][r=0,w=347 IOPS][eta 00m:01s]
writefile: (groupid=0, jobs=1): err= 0: pid=1777: Mon Dec 24 16:43:26 2018
  write: IOPS=346, BW=1384MiB/s (1452MB/s)(10.0GiB/7397msec)
    slat (usec): min=955, max=36941, avg=1049.63, stdev=733.97
    clat (msec): min=6, max=613, avg=551.03, stdev=89.82
     lat (msec): min=7, max=614, avg=552.08, stdev=89.83
    clat percentiles (msec):
     |  1.00th=[   79],  5.00th=[  368], 10.00th=[  558], 20.00th=[  567],
     | 30.00th=[  567], 40.00th=[  567], 50.00th=[  567], 60.00th=[  575],
     | 70.00th=[  575], 80.00th=[  575], 90.00th=[  584], 95.00th=[  609],
     | 99.00th=[  617], 99.50th=[  617], 99.90th=[  617], 99.95th=[  617],
     | 99.99th=[  617]
   bw (  MiB/s): min= 1194, max= 1426, per=99.10%, avg=1371.84, stdev=64.89, samples=13
   iops        : min=  298, max=  356, avg=342.69, stdev=16.30, samples=13
  lat (msec)   : 10=0.08%, 20=0.12%, 50=0.43%, 100=0.66%, 250=2.07%
  lat (msec)   : 500=3.44%, 750=93.20%
  cpu          : usr=56.41%, sys=32.29%, ctx=2551, majf=0, minf=21
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.5%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwt: total=0,2560,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=200

Run status group 0 (all jobs):
  WRITE: bw=1384MiB/s (1452MB/s), 1384MiB/s-1384MiB/s (1452MB/s-1452MB/s), io=10.0GiB (10.7GB), run=7397-7397msec

Disk stats (read/write):
  nvme0n1: ios=0/10367, merge=0/0, ticks=0/84952, in_queue=84948, util=85.55%
------------------------------------------------------------------------------------------------------------------------------------
RAND READ PASS
------------------------------------------------------------------------------------------------------------------------------------

benchmark: (g=0): rw=randread, bs=(R) 32.0KiB-32.0KiB, (W) 32.0KiB-32.0KiB, (T) 32.0KiB-32.0KiB, ioengine=libaio, iodepth=128
...
fio-3.1
Starting 4 processes
Jobs: 4 (f=4): [r(4)][100.0%][r=967MiB/s,w=0KiB/s][r=30.9k,w=0 IOPS][eta 00m:00s]
benchmark: (groupid=0, jobs=4): err= 0: pid=1792: Mon Dec 24 16:43:57 2018
   read: IOPS=30.2k, BW=944MiB/s (990MB/s)(27.7GiB/30002msec)
    slat (usec): min=15, max=9778, avg=125.19, stdev=100.40
    clat (usec): min=301, max=37557, avg=16798.40, stdev=2704.00
     lat (usec): min=355, max=37832, avg=16924.36, stdev=2722.99
    clat percentiles (usec):
     |  1.00th=[11338],  5.00th=[12387], 10.00th=[13435], 20.00th=[14353],
     | 30.00th=[15008], 40.00th=[15795], 50.00th=[16909], 60.00th=[17957],
     | 70.00th=[18482], 80.00th=[19006], 90.00th=[20055], 95.00th=[20841],
     | 99.00th=[23725], 99.50th=[25035], 99.90th=[26870], 99.95th=[27395],
     | 99.99th=[28967]
   bw (  KiB/s): min=172544, max=304578, per=25.02%, avg=241929.63, stdev=25099.97, samples=240
   iops        : min= 5392, max= 9518, avg=7560.12, stdev=784.36, samples=240
  lat (usec)   : 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.07%, 20=90.32%, 50=9.60%
  cpu          : usr=4.86%, sys=30.53%, ctx=771047, majf=0, minf=4169
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwt: total=906608,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
   READ: bw=944MiB/s (990MB/s), 944MiB/s-944MiB/s (990MB/s-990MB/s), io=27.7GiB (29.7GB), run=30002-30002msec

Disk stats (read/write):
  nvme0n1: ios=903426/235, merge=0/0, ticks=980104/188, in_queue=990116, util=100.00%
------------------------------------------------------------------------------------------------------------------------------------
RAND WRITE PASS
------------------------------------------------------------------------------------------------------------------------------------

benchmark: (g=0): rw=randwrite, bs=(R) 32.0KiB-32.0KiB, (W) 32.0KiB-32.0KiB, (T) 32.0KiB-32.0KiB, ioengine=libaio, iodepth=128
...
fio-3.1
Starting 4 processes
Jobs: 4 (f=4): [w(4)][100.0%][r=0KiB/s,w=317MiB/s][r=0,w=10.2k IOPS][eta 00m:00s]
benchmark: (groupid=0, jobs=4): err= 0: pid=1800: Mon Dec 24 16:44:28 2018
  write: IOPS=20.6k, BW=643MiB/s (674MB/s)(18.8GiB/30005msec)
    slat (usec): min=19, max=45426, avg=186.90, stdev=394.09
    clat (msec): min=2, max=105, avg=24.69, stdev=19.49
     lat (msec): min=3, max=106, avg=24.88, stdev=19.64
    clat percentiles (usec):
     |  1.00th=[ 8291],  5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503],
     | 30.00th=[10159], 40.00th=[12125], 50.00th=[12911], 60.00th=[13960],
     | 70.00th=[41681], 80.00th=[49021], 90.00th=[55837], 95.00th=[58459],
     | 99.00th=[63701], 99.50th=[70779], 99.90th=[85459], 99.95th=[89654],
     | 99.99th=[96994]
   bw (  KiB/s): min=68369, max=447353, per=25.06%, avg=164956.83, stdev=132069.05, samples=240
   iops        : min= 2136, max=13979, avg=5154.59, stdev=4127.13, samples=240
  lat (msec)   : 4=0.01%, 10=28.40%, 20=37.59%, 50=15.06%, 100=18.96%
  lat (msec)   : 250=0.01%
  cpu          : usr=6.27%, sys=19.83%, ctx=479319, majf=0, minf=82
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwt: total=0,617292,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
  WRITE: bw=643MiB/s (674MB/s), 643MiB/s-643MiB/s (674MB/s-674MB/s), io=18.8GiB (20.2GB), run=30005-30005msec

Disk stats (read/write):
  nvme0n1: ios=0/615562, merge=0/0, ticks=0/985784, in_queue=987744, util=100.00%
------------------------------------------------------------------------------------------------------------------------------------


Hi @jack
which one do I decompile? I found four files:
/usr/lib/linux-image-4.4.154-59-rockchip-g5e70f14/rockchip/rockpi-4b-linux.dtb
/media/rock/ubt-bionic/usr/lib/linux-image-4.4.154-59-rockchip-g5e70f14/rockchip/rockpi-4b-linux.dtb
/boot/dtbs/4.4.154-59-rockchip-g5e70f14/rockchip/rockpi-4b-linux.dtb
/boot/dtbs/4.4.154-59-rockchip-g5e70f14/rockchip/rockpi-4b-linux.dtb.dpkg-tmp

Thanks
Pierre
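One way to check which .dtb the bootloader actually loads, assuming the image uses extlinux (paths may differ):

grep fdt /boot/extlinux/extlinux.conf

The file it points to is the one to modify.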

I guess this is the one: /boot/dtbs/4.4.154-59-rockchip-g5e70f14/rockchip/rockpi-4b-linux.dtb

The .dpkg-tmp file should be a backup from a previous version, created before an update.

I have created a script to do this. Maybe we can put together a series of them and create a RockPiConfig (a wannabe raspi-config). Please feel free to improve the script; it needs some love.

#!/bin/bash
#By default, the NVMe runs at gen1 mode for compatibility, so the speed is limited.
#You can enable gen2 mode by decompiling and recompiling the device tree.

sudo cp -p /boot/dtbs/4.4.154-59-rockchip-g5e70f14/rockchip/rockpi-4b-linux.dtb /boot/dtbs/4.4.154-59-rockchip-g5e70f14/rockchip/rockpi-4b-linux.dtb.bak
sudo fdtdump /boot/dtbs/4.4.154-59-rockchip-g5e70f14/rockchip/rockpi-4b-linux.dtb > /tmp/rockpi4.dts
sudo sed -i 's/max-link-speed = <0x00000001>;/max-link-speed = <0x00000002>;/g' /tmp/rockpi4.dts

#Find pcie@f8000000 section

#change from

#max-link-speed = <0x00000001>;

#to

#max-link-speed = <0x00000002>;

#Now recompile the dtb:

sudo dtc -I dts -O dtb /tmp/rockpi4.dts -o /tmp/rockpi4.dtb
sudo mv /boot/dtbs/4.4.154-59-rockchip-g5e70f14/rockchip/rockpi-4b-linux.dtb /boot/dtbs/4.4.154-59-rockchip-g5e70f14/rockchip/rockpi-4b-linux.dtb.tmp
sudo cp /tmp/rockpi4.dtb /boot/dtbs/4.4.154-59-rockchip-g5e70f14/rockchip/rockpi-4b-linux.dtb
ls -l /boot/dtbs/4.4.154-59-rockchip-g5e70f14/rockchip/rockpi-4b-linux.dtb
#The original rockpi-4b-linux.dtb has been replaced (backup saved as .bak); reboot to apply.
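A possible way to run it (the filename is hypothetical); note that the kernel version is hard-coded in the paths, so check it matches your system first:

uname -r                  # should print 4.4.154-59-rockchip-g5e70f14
chmod +x pcie-gen2.sh     # hypothetical filename
sudo ./pcie-gen2.sh
sudo reboot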


What NVMe SSDs are recommended?
I think I'm going to order the large heatsink and the SSD extender (I should have ordered them when I got my ROCK Pi 4B, but I wanted to test the SBC out a bit first).
Is there a list where I can see which ones are compatible? I don't want to order one that isn't.

I'm using a Samsung MZ-V7S250BW 970 EVO Plus 250 GB M.2 internal NVMe SSD, bought for about 85 euros, and I'm satisfied with it.

FAQs: Which M.2 SSDs are supported?


Thanks, I'll keep that in mind when I shop for one :grinning:

Update:

Now we have a hw_config entry to set PCIe gen2 mode. Update the rockpi4-dtbo package to 0.7 or later, then uncomment the line below to enable gen2 mode.

# PCIE running on GEN2 mode
intfc:dtoverlay=pcie-gen2
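For reference, on Radxa's Debian image this setting should live in /boot/hw_intfc.conf (path assumed from the wiki); a minimal sketch to uncomment it and apply:

sudo sed -i 's/^#intfc:dtoverlay=pcie-gen2/intfc:dtoverlay=pcie-gen2/' /boot/hw_intfc.conf
sudo reboot
sudo lspci -vv | grep -i 'lnksta:'   # 5GT/s after reboot means gen2 is active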

Here is my modified version of Pierre's script, working on Armbian:

#!/bin/bash
#By default, the NVMe runs at gen1 mode for compatibility, so the speed is limited.
#You can enable gen2 mode by decompiling and recompiling the device tree.

sudo cp -p /boot/dtb/rockchip/rockpi-4b-linux.dtb /boot/dtb/rockchip/rockpi-4b-linux.dtb.bak
sudo fdtdump /boot/dtb/rockchip/rockpi-4b-linux.dtb > /tmp/rockpi4.dts
sudo sed -i 's/max-link-speed = <0x00000001>;/max-link-speed = <0x00000002>;/g' /tmp/rockpi4.dts

#Find pcie@f8000000 section

#change from

#max-link-speed = <0x00000001>;

#to

#max-link-speed = <0x00000002>;

#Now recompile the dtb:

sudo dtc -I dts -O dtb /tmp/rockpi4.dts -o /tmp/rockpi4.dtb
sudo mv /boot/dtb/rockchip/rockpi-4b-linux.dtb /boot/dtb/rockchip/rockpi-4b-linux.dtb.tmp
sudo cp /tmp/rockpi4.dtb /boot/dtb/rockchip/rockpi-4b-linux.dtb
ls -l /boot/dtb/rockchip/rockpi-4b-linux.dtb
#The original rockpi-4b-linux.dtb has been replaced (backup saved as .bak); reboot to apply.

There should be a place somewhere listing tested disks: which work and which don't.

One of the M.2 drives I use is from my old Lenovo notebook; it performs no worse than a 970 EVO with my ROCK Pi 4. I would need to measure it properly, including power usage etc., but here is the disk's data. It's quite cheap if you find a used one; Lenovo shipped them in their notebooks from 2017:
P/N: MZVLW256HEHP
Model: MZ-VLW2560
RATED: DC+3.3V 2.91A

which is actually a Samsung disk:

Solid State Module (SSM)
M.2 2280
M.2/M-Key (PCIe 3.0 x4)
Read: 2800 MB/s
Write: 1100 MB/s
IOPS 4K read/write: 250k/180k
3D-NAND TLC, Samsung, 48 layer (V-NAND v3)
1.5M hours MTBF
Controller: Samsung Polaris (S4LP077X01-8030), 8 channels
Protocol: NVMe 1.2
Power usage: 6.1 W (normal usage), 0.45 W (idle)
Dimensions: 80x22x3.7 mm
Additional features: L1.2 low-power standby

It works without any issues.

EDIT1: Here are some benchmarks. This disk is roughly the equal of a Samsung 960 EVO; the benchmarks are from a stressed M.2 disk, without any overclocking or similar. Using an external, real RAID would speed it up a lot. Soft RAID wouldn't double it, but I would guess around 150%. If you use more than one disk, take care to provide enough power; USB 3 hubs are useful here, especially as you can create LVMs containing a mix of RAIDs and other empty disks, making it really, really fast. However, the current speeds are more than enough for this hardware. I got a second of these Lenovo M.2s for $26.
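If anyone wants to try the soft-RAID idea, here is a minimal mdadm RAID0 sketch (sdX/sdY and the mount point are placeholders for USB3-attached disks, which need adequate power):

sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdX /dev/sdY
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/raid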

PCIe info on the ROCK Pi:

/media/root/ubt-bionic/home/rock# lspci -vv | grep -E 'PCI bridge|LnkCap'
00:00.0 PCI bridge: Fuzhou Rockchip Electronics Co., Ltd Device 0100 (prog-if 00 [Normal decode])
                LnkCap: Port #0, Speed 2.5GT/s, Width x4, ASPM L1, Exit Latency L0s <256ns, L1 <8us
                LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L0s unlimited, L1 <64us
####################################################
GLOBAL TEST SETTINGS
####################################################
LOGFILE=benchmark-8806.log
SIZE=500m
IOSIZE=10g
IOENGINE=libaio
RUNTIMEVAR=60
TEMPFILE=fio-tempfile.dat



####################################################
Sequential READ speed with big blocks (this should be near the number you see in the specifications for your drive)
####################################################
TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.1
Starting 1 process
TEST: Laying out IO file (1 file / 500MiB)

TEST: (groupid=0, jobs=1): err= 0: pid=2031: Tue Jun  4 04:06:26 2019
   read: IOPS=757, BW=757MiB/s (794MB/s)(10.0GiB/13519msec)
    slat (usec): min=437, max=4842, avg=785.83, stdev=238.58
    clat (usec): min=5139, max=82437, avg=40094.31, stdev=8563.96
     lat (usec): min=5672, max=83086, avg=40881.74, stdev=8549.84
    clat percentiles (usec):
     |  1.00th=[21627],  5.00th=[24249], 10.00th=[26870], 20.00th=[40633],
     | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
     | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[51643],
     | 99.00th=[74974], 99.50th=[78119], 99.90th=[80217], 99.95th=[80217],
     | 99.99th=[81265]
   bw (  KiB/s): min=727040, max=833154, per=99.85%, avg=774491.15, stdev=24396.81, samples=27
   iops        : min=  710, max=  813, avg=756.30, stdev=23.76, samples=27
  lat (msec)   : 10=0.11%, 20=0.08%, 50=94.42%, 100=5.39%
  cpu          : usr=1.60%, sys=58.20%, ctx=9445, majf=0, minf=8212
  IO depths    : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=93.6%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.2%, 64=0.0%, >=64=0.0%
     issued rwt: total=10240,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=757MiB/s (794MB/s), 757MiB/s-757MiB/s (794MB/s-794MB/s), io=10.0GiB (10.7GB), run=13519-13519msec

Disk stats (read/write):
  nvme0n1: ios=200985/3, merge=0/0, ticks=7353008/20, in_queue=7366908, util=98.93%



####################################################
Sequential WRITE speed with big blocks (this should be near the number you see in the specifications for your drive)
####################################################
TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.1
Starting 1 process

TEST: (groupid=0, jobs=1): err= 0: pid=2034: Tue Jun  4 04:06:40 2019
  write: IOPS=767, BW=768MiB/s (805MB/s)(10.0GiB/13334msec)
    slat (usec): min=531, max=37143, avg=839.95, stdev=391.87
    clat (usec): min=6484, max=79592, avg=39465.07, stdev=8592.54
     lat (usec): min=7601, max=80315, avg=40307.42, stdev=8580.45
    clat percentiles (usec):
     |  1.00th=[23987],  5.00th=[25560], 10.00th=[26608], 20.00th=[36963],
     | 30.00th=[40633], 40.00th=[40633], 50.00th=[40633], 60.00th=[40633],
     | 70.00th=[40633], 80.00th=[40633], 90.00th=[40633], 95.00th=[53216],
     | 99.00th=[76022], 99.50th=[78119], 99.90th=[79168], 99.95th=[79168],
     | 99.99th=[79168]
   bw (  KiB/s): min=726444, max=845824, per=99.90%, avg=785615.88, stdev=26894.32, samples=26
   iops        : min=  709, max=  826, avg=767.08, stdev=26.21, samples=26
  lat (msec)   : 10=0.05%, 20=0.27%, 50=94.09%, 100=5.59%
  cpu          : usr=19.35%, sys=47.43%, ctx=9068, majf=0, minf=21
  IO depths    : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=93.6%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.2%, 64=0.0%, >=64=0.0%
     issued rwt: total=0,10240,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=768MiB/s (805MB/s), 768MiB/s-768MiB/s (805MB/s-805MB/s), io=10.0GiB (10.7GB), run=13334-13334msec

Disk stats (read/write):
  nvme0n1: ios=0/173203, merge=0/0, ticks=0/6129400, in_queue=6138796, util=98.57%



####################################################
Random 4K read QD1 (this is the number that really matters for real world performance unless you know better for sure)
####################################################
TEST: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.1
Starting 1 process

TEST: (groupid=0, jobs=1): err= 0: pid=2037: Tue Jun  4 04:07:40 2019
   read: IOPS=7582, BW=29.6MiB/s (31.1MB/s)(1777MiB/60001msec)
    slat (usec): min=13, max=2154, avg=28.46, stdev= 9.42
    clat (usec): min=2, max=3842, avg=93.22, stdev=23.09
     lat (usec): min=81, max=3899, avg=123.19, stdev=25.81
    clat percentiles (usec):
     |  1.00th=[   79],  5.00th=[   81], 10.00th=[   81], 20.00th=[   82],
     | 30.00th=[   83], 40.00th=[   88], 50.00th=[   92], 60.00th=[   93],
     | 70.00th=[   94], 80.00th=[  101], 90.00th=[  113], 95.00th=[  121],
     | 99.00th=[  149], 99.50th=[  163], 99.90th=[  192], 99.95th=[  281],
     | 99.99th=[  603]
   bw (  KiB/s): min=25747, max=34248, per=99.95%, avg=30315.75, stdev=2513.92, samples=119
   iops        : min= 6436, max= 8562, avg=7578.89, stdev=628.50, samples=119
  lat (usec)   : 4=0.01%, 10=0.03%, 20=0.01%, 50=0.03%, 100=78.66%
  lat (usec)   : 250=21.22%, 500=0.03%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%
  cpu          : usr=12.87%, sys=36.60%, ctx=456278, majf=0, minf=19
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=454963,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=29.6MiB/s (31.1MB/s), 29.6MiB/s-29.6MiB/s (31.1MB/s-31.1MB/s), io=1777MiB (1864MB), run=60001-60001msec

Disk stats (read/write):
  nvme0n1: ios=453928/29, merge=0/0, ticks=30584/0, in_queue=30124, util=50.27%



####################################################
Mixed random 4K read and write QD1 with sync (this is worst case number you should ever expect from your drive, usually 1-10% of the number listed in the spec sheet)
####################################################
TEST: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.1
Starting 1 process

TEST: (groupid=0, jobs=1): err= 0: pid=2040: Tue Jun  4 04:08:41 2019
   read: IOPS=223, BW=894KiB/s (916kB/s)(52.4MiB/60001msec)
    slat (usec): min=15, max=2667, avg=72.00, stdev=31.50
    clat (usec): min=7, max=2329, avg=134.48, stdev=43.62
     lat (usec): min=90, max=2680, avg=209.62, stdev=53.89
    clat percentiles (usec):
     |  1.00th=[  106],  5.00th=[  109], 10.00th=[  110], 20.00th=[  112],
     | 30.00th=[  115], 40.00th=[  122], 50.00th=[  128], 60.00th=[  137],
     | 70.00th=[  141], 80.00th=[  149], 90.00th=[  153], 95.00th=[  178],
     | 99.00th=[  285], 99.50th=[  310], 99.90th=[  383], 99.95th=[  594],
     | 99.99th=[ 1663]
   bw (  KiB/s): min=  632, max= 1125, per=99.97%, avg=893.76, stdev=96.35, samples=119
   iops        : min=  158, max=  281, avg=223.39, stdev=24.09, samples=119
  write: IOPS=223, BW=894KiB/s (916kB/s)(52.4MiB/60001msec)
    slat (usec): min=18, max=1938, avg=87.36, stdev=27.32
    clat (usec): min=7, max=3127, avg=104.43, stdev=50.98
     lat (usec): min=62, max=3218, avg=194.96, stdev=54.05
    clat percentiles (usec):
     |  1.00th=[   41],  5.00th=[   46], 10.00th=[   47], 20.00th=[  106],
     | 30.00th=[  110], 40.00th=[  111], 50.00th=[  112], 60.00th=[  113],
     | 70.00th=[  115], 80.00th=[  122], 90.00th=[  127], 95.00th=[  131],
     | 99.00th=[  176], 99.50th=[  194], 99.90th=[  355], 99.95th=[  603],
     | 99.99th=[ 3097]
   bw (  KiB/s): min=  704, max= 1106, per=100.00%, avg=894.60, stdev=76.05, samples=119
   iops        : min=  176, max=  276, avg=223.59, stdev=19.00, samples=119
  lat (usec)   : 10=0.31%, 20=0.01%, 50=8.78%, 100=0.79%, 250=88.48%
  lat (usec)   : 500=1.55%, 750=0.03%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%
  cpu          : usr=2.06%, sys=8.31%, ctx=78987, majf=0, minf=24
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=13415,13417,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=894KiB/s (916kB/s), 894KiB/s-894KiB/s (916kB/s-916kB/s), io=52.4MiB (54.9MB), run=60001-60001msec
  WRITE: bw=894KiB/s (916kB/s), 894KiB/s-894KiB/s (916kB/s-916kB/s), io=52.4MiB (54.0MB), run=60001-60001msec

Disk stats (read/write):
  nvme0n1: ios=13372/76145, merge=0/0, ticks=1936/47704, in_queue=49444, util=81.21%

For comparison, my desktop uses 2x 960 EVO in RAID0 (soft RAID; I will try the mainboard's fake-RAID mode to compare, which should be faster than dmraid). Here are the same benchmark tests. I must admit they were run on a quite heavily loaded system, and the RAID0 is my root partition, which makes this test a little inaccurate, whereas the test on the ROCK Pi was performed after booting from an SD card.

####################################################
GLOBAL TEST SETTINGS
####################################################
LOGFILE=benchmark-28650.log
SIZE=500m
IOSIZE=10g
IOENGINE=libaio
RUNTIMEVAR=60
TEMPFILE=fio-tempfile.dat



####################################################
Sequential READ speed with big blocks (this should be near the number you see in the specifications for your drive)
####################################################
TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.1
Starting 1 process
TEST: Laying out IO file (1 file / 500MiB)

TEST: (groupid=0, jobs=1): err= 0: pid=2005: Tue Jun  4 06:04:45 2019
   read: IOPS=3497, BW=3497MiB/s (3667MB/s)(10.0GiB/2928msec)
    slat (usec): min=34, max=529, avg=79.96, stdev=33.73
    clat (usec): min=1172, max=17610, avg=8930.91, stdev=1829.89
     lat (usec): min=1213, max=17663, avg=9011.43, stdev=1825.17
    clat percentiles (usec):
     |  1.00th=[ 3064],  5.00th=[ 4948], 10.00th=[ 8848], 20.00th=[ 8979],
     | 30.00th=[ 8979], 40.00th=[ 8979], 50.00th=[ 8979], 60.00th=[ 9110],
     | 70.00th=[ 9110], 80.00th=[ 9110], 90.00th=[ 9110], 95.00th=[10814],
     | 99.00th=[16319], 99.50th=[16909], 99.90th=[17433], 99.95th=[17695],
     | 99.99th=[17695]
   bw (  MiB/s): min= 3458, max= 3548, per=100.00%, avg=3498.18, stdev=45.21, samples=5
   iops        : min= 3458, max= 3548, avg=3498.00, stdev=45.41, samples=5
  lat (msec)   : 2=0.28%, 4=3.41%, 10=90.73%, 20=5.58%
  cpu          : usr=1.33%, sys=30.17%, ctx=9934, majf=0, minf=8204
  IO depths    : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=93.6%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.2%, 64=0.0%, >=64=0.0%
     issued rwt: total=10240,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=3497MiB/s (3667MB/s), 3497MiB/s-3497MiB/s (3667MB/s-3667MB/s), io=10.0GiB (10.7GB), run=2928-2928msec

Disk stats (read/write):
    md0: ios=19902/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=10240/0, aggrmerge=0/0, aggrticks=47064/0, aggrin_queue=45022, aggrutil=94.69%
  nvme0n1: ios=10240/0, merge=0/0, ticks=6008/0, in_queue=4100, util=10.29%
  nvme1n1: ios=10240/0, merge=0/0, ticks=88120/0, in_queue=85944, util=94.69%



####################################################
Sequential WRITE speed with big blocks (this should be near the number you see in the specifications for your drive)
####################################################
TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.1
Starting 1 process

TEST: (groupid=0, jobs=1): err= 0: pid=2034: Tue Jun  4 06:04:49 2019
  write: IOPS=2838, BW=2838MiB/s (2976MB/s)(10.0GiB/3608msec)
    slat (usec): min=39, max=5212, avg=111.56, stdev=61.20
    clat (usec): min=2158, max=22716, avg=10984.86, stdev=2369.00
     lat (usec): min=2228, max=22810, avg=11097.01, stdev=2363.87
    clat percentiles (usec):
     |  1.00th=[ 3916],  5.00th=[ 5604], 10.00th=[10290], 20.00th=[10683],
     | 30.00th=[10683], 40.00th=[10814], 50.00th=[10814], 60.00th=[11076],
     | 70.00th=[11469], 80.00th=[11731], 90.00th=[11863], 95.00th=[14222],
     | 99.00th=[20055], 99.50th=[21103], 99.90th=[21890], 99.95th=[22152],
     | 99.99th=[22414]
   bw (  MiB/s): min= 2822, max= 2854, per=100.00%, avg=2839.71, stdev=12.78, samples=7
   iops        : min= 2822, max= 2854, avg=2839.71, stdev=12.78, samples=7
  lat (msec)   : 4=1.25%, 10=8.39%, 20=89.31%, 50=1.05%
  cpu          : usr=11.48%, sys=22.59%, ctx=9845, majf=0, minf=12
  IO depths    : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=93.6%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.2%, 64=0.0%, >=64=0.0%
     issued rwt: total=0,10240,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=2838MiB/s (2976MB/s), 2838MiB/s-2838MiB/s (2976MB/s-2976MB/s), io=10.0GiB (10.7GB), run=3608-3608msec

Disk stats (read/write):
    md0: ios=0/19247, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/10246, aggrmerge=0/15, aggrticks=0/83292, aggrin_queue=80996, aggrutil=94.34%
  nvme0n1: ios=0/10247, merge=0/28, ticks=0/58344, in_queue=56364, util=90.13%
  nvme1n1: ios=0/10246, merge=0/3, ticks=0/108240, in_queue=105628, util=94.34%



####################################################
Random 4K read QD1 (this is the number that really matters for real world performance unless you know better for sure)
####################################################
TEST: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.1
Starting 1 process

TEST: (groupid=0, jobs=1): err= 0: pid=2083: Tue Jun  4 06:05:49 2019
   read: IOPS=13.4k, BW=52.5MiB/s (55.0MB/s)(3148MiB/60001msec)
    slat (usec): min=2, max=491, avg= 6.85, stdev= 2.92
    clat (nsec): min=1370, max=3663.7k, avg=64609.81, stdev=14305.57
     lat (usec): min=16, max=3685, avg=72.04, stdev=14.94
    clat percentiles (usec):
     |  1.00th=[   56],  5.00th=[   57], 10.00th=[   58], 20.00th=[   59],
     | 30.00th=[   60], 40.00th=[   60], 50.00th=[   61], 60.00th=[   61],
     | 70.00th=[   62], 80.00th=[   75], 90.00th=[   80], 95.00th=[   91],
     | 99.00th=[   95], 99.50th=[   97], 99.90th=[  116], 99.95th=[  147],
     | 99.99th=[  474]
   bw (  KiB/s): min=51680, max=54728, per=100.00%, avg=53738.96, stdev=567.43, samples=119
   iops        : min=12920, max=13682, avg=13434.70, stdev=141.86, samples=119
  lat (usec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
  lat (usec)   : 100=99.71%, 250=0.25%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%
  cpu          : usr=6.40%, sys=13.74%, ctx=806131, majf=0, minf=11
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=805867,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=52.5MiB/s (55.0MB/s), 52.5MiB/s-52.5MiB/s (55.0MB/s-55.0MB/s), io=3148MiB (3301MB), run=60001-60001msec

Disk stats (read/write):
    md0: ios=804431/796, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=402935/162, aggrmerge=0/256, aggrticks=24922/38, aggrin_queue=30, aggrutil=0.05%
  nvme0n1: ios=402964/156, merge=0/244, ticks=24904/32, in_queue=16, util=0.03%
  nvme1n1: ios=402907/168, merge=0/268, ticks=24940/44, in_queue=44, util=0.05%



####################################################
Mixed random 4K read and write QD1 with sync (this is worst case number you should ever expect from your drive, usually 1-10% of the number listed in the spec sheet)
####################################################
TEST: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.1
Starting 1 process

TEST: (groupid=0, jobs=1): err= 0: pid=2231: Tue Jun  4 06:06:50 2019
   read: IOPS=285, BW=1144KiB/s (1171kB/s)(67.0MiB/60001msec)
    slat (usec): min=3, max=225, avg=17.64, stdev= 7.39
    clat (usec): min=2, max=1587, avg=74.61, stdev=19.38
     lat (usec): min=58, max=1604, avg=93.03, stdev=21.45
    clat percentiles (usec):
     |  1.00th=[   60],  5.00th=[   62], 10.00th=[   63], 20.00th=[   64],
     | 30.00th=[   65], 40.00th=[   67], 50.00th=[   69], 60.00th=[   73],
     | 70.00th=[   83], 80.00th=[   87], 90.00th=[   96], 95.00th=[   99],
     | 99.00th=[  104], 99.50th=[  109], 99.90th=[  159], 99.95th=[  198],
     | 99.99th=[ 1029]
   bw (  KiB/s): min=  857, max= 1400, per=100.00%, avg=1143.71, stdev=116.20, samples=120
   iops        : min=  214, max=  350, avg=285.88, stdev=29.06, samples=120
  write: IOPS=285, BW=1144KiB/s (1171kB/s)(67.0MiB/60001msec)
    slat (usec): min=4, max=353, avg=19.42, stdev= 7.80
    clat (nsec): min=1500, max=1807.8k, avg=21123.60, stdev=14328.23
     lat (usec): min=17, max=1823, avg=41.33, stdev=17.05
    clat percentiles (usec):
     |  1.00th=[   15],  5.00th=[   17], 10.00th=[   18], 20.00th=[   19],
     | 30.00th=[   20], 40.00th=[   20], 50.00th=[   21], 60.00th=[   22],
     | 70.00th=[   23], 80.00th=[   24], 90.00th=[   26], 95.00th=[   27],
     | 99.00th=[   32], 99.50th=[   35], 99.90th=[   47], 99.95th=[   97],
     | 99.99th=[  210]
   bw (  KiB/s): min=  904, max= 1336, per=100.00%, avg=1144.13, stdev=81.04, samples=120
   iops        : min=  226, max=  334, avg=285.98, stdev=20.27, samples=120
  lat (usec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=21.15%, 50=28.79%
  lat (usec)   : 100=48.01%, 250=2.00%, 500=0.01%
  lat (msec)   : 2=0.01%
  cpu          : usr=0.61%, sys=1.83%, ctx=81391, majf=0, minf=12
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=17153,17159,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=1144KiB/s (1171kB/s), 1144KiB/s-1144KiB/s (1171kB/s-1171kB/s), io=67.0MiB (70.3MB), run=60001-60001msec
  WRITE: bw=1144KiB/s (1171kB/s), 1144KiB/s-1144KiB/s (1171kB/s-1171kB/s), io=67.0MiB (70.3MB), run=60001-60001msec

Disk stats (read/write):
    md0: ios=17125/96322, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=8577/64976, aggrmerge=0/11280, aggrticks=798/30242, aggrin_queue=96, aggrutil=0.14%
  nvme0n1: ios=8579/65042, merge=0/11293, ticks=824/30304, in_queue=108, util=0.12%
  nvme1n1: ios=8576/64911, merge=0/11268, ticks=772/30180, in_queue=84, util=0.14%

Here is the script I used:

#!/bin/bash
# for the manual, type: man fio
# ref and example scripts from: https://askubuntu.com/a/991311
LOGFILE=benchmark-$RANDOM.log && touch $LOGFILE
SIZE=500m
IOSIZE=10g
IOENGINE=libaio
RUNTIMEVAR=60
TEMPFILE=fio-tempfile.dat

echo "####################################################" >> $LOGFILE
echo "GLOBAL TEST SETTINGS" >> $LOGFILE
echo "####################################################" >> $LOGFILE
echo "LOGFILE=$LOGFILE" >> $LOGFILE
echo "SIZE=$SIZE" >> $LOGFILE
echo "IOSIZE=$IOSIZE" >> $LOGFILE
echo "IOENGINE=$IOENGINE" >> $LOGFILE
echo "RUNTIMEVAR=$RUNTIMEVAR" >> $LOGFILE
echo "TEMPFILE=$TEMPFILE" >> $LOGFILE
echo ""  >> $LOGFILE
echo ""  >> $LOGFILE
echo ""  >> $LOGFILE
echo "####################################################" >> $LOGFILE
echo "Sequential READ speed with big blocks (this should be near the number you see in the specifications for your drive)" >> $LOGFILE
echo "####################################################" >> $LOGFILE
fio --name TEST --eta-newline=5s --filename=$TEMPFILE --rw=read --size=$SIZE --io_size=$IOSIZE --blocksize=1024k --ioengine=$IOENGINE --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=$RUNTIMEVAR --group_reporting >> $LOGFILE
echo ""  >> $LOGFILE
echo ""  >> $LOGFILE
echo ""  >> $LOGFILE
echo "####################################################" >> $LOGFILE
echo "Sequential WRITE speed with big blocks (this should be near the number you see in the specifications for your drive)" >> $LOGFILE
echo "####################################################" >> $LOGFILE
fio --name TEST --eta-newline=5s --filename=$TEMPFILE --rw=write --size=$SIZE --io_size=$IOSIZE --blocksize=1024k --ioengine=$IOENGINE --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=$RUNTIMEVAR --group_reporting >> $LOGFILE
echo ""  >> $LOGFILE
echo ""  >> $LOGFILE
echo ""  >> $LOGFILE
echo "####################################################" >> $LOGFILE
echo "Random 4K read QD1 (this is the number that really matters for real world performance unless you know better for sure)" >> $LOGFILE
echo "####################################################" >> $LOGFILE
fio --name TEST --eta-newline=5s --filename=$TEMPFILE --rw=randread --size=$SIZE --io_size=$IOSIZE --blocksize=4k --ioengine=$IOENGINE --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=$RUNTIMEVAR --group_reporting >> $LOGFILE
echo ""  >> $LOGFILE
echo ""  >> $LOGFILE
echo ""  >> $LOGFILE
echo "####################################################" >> $LOGFILE
echo "Mixed random 4K read and write QD1 with sync (this is worst case number you should ever expect from your drive, usually 1-10% of the number listed in the spec sheet)" >> $LOGFILE
echo "####################################################" >> $LOGFILE
fio --name TEST --eta-newline=5s --filename=$TEMPFILE --rw=randrw --size=$SIZE --io_size=$IOSIZE --blocksize=4k --ioengine=$IOENGINE --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=$RUNTIMEVAR --group_reporting >> $LOGFILE

chown rock $LOGFILE

# show results
cat $LOGFILE

# cleanup
rm -f $TEMPFILE
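Note that fio creates $TEMPFILE in the current directory, so run the script from the filesystem you want to measure (the filename is hypothetical):

cd /path/to/nvme/mount
sudo bash nvme-bench.sh    # results land in benchmark-<N>.log in the same directory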

EDIT2: I have this script under /usr/bin. I hope we will end up with unified testing scripts for all the different benchmarks. The script I wrote is just an example that should be elaborated by the devs into the tests we want as a full suite. As an addition, the test might ask the user whether they want to report the result, which could automate the HDD compatibility list without requiring the devs to waste time maintaining it.
