How to connect an M.2 Extension board to a Rock 5A?

I bought a Rock 5A and an M.2 Extension board, which is advertised as compatible with the 5A.
However, I am at a loss trying to connect it to the SBC… I can't find a suitable connector on the Rock 5A to plug in the flat cable that arrived together with the extension board…
Can somebody advise, please? Maybe with a photo?
Thanks

Fred

Maybe this guide will help?


Oohhhh…I see… Now I do understand what that little interface board was meant for…
Thanks indeed. You saved my day :grin:

Fred

@Freddy did you get the NVMe HAT working? What read/write speeds are you getting? Mine are dismal, at about 25 MB/s for writes.

@cnogradi yes, it is working OK. How did you measure the speed? I could use exactly your method, so we could compare the results…

Fred

@Freddy It might be good to use the same commands as here to compare to rock 5b m.2 m connector: Rock 5b nvme speed test

I tried, but that test makes use of this command:

```
fio --name=write --ioengine=libaio --iodepth=4 --rw=write --bs=1M --direct=1 --size=2G --numjobs=30 --runtime=60 --group_reporting --filename=/mnt/tmp/test
```

and the answer from the Debian 11 installed on my Rock 5A is:

bash: fio: command not found

Fred

Yeah, I had to install it: `sudo apt-get install fio`

OK, I installed fio.
The output of running it was:

```
fred@rock-5a:~/Downloads$ sudo fio --name=write --ioengine=libaio --iodepth=4 --rw=write --bs=1M --direct=1 --size=2G --numjobs=30 --runtime=60 --group_reporting --filename=/mnt/tmp/test
write: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=4
...
fio-3.25
Starting 30 processes
write: Laying out IO file (1 file / 2048MiB)
Jobs: 30 (f=30): [W(30)][100.0%][w=396MiB/s][w=396 IOPS][eta 00m:00s]
write: (groupid=0, jobs=30): err= 0: pid=26790: Wed Aug  9 00:16:22 2023
  write: IOPS=398, BW=398MiB/s (418MB/s)(23.5GiB/60299msec); 0 zone resets
    slat (usec): min=139, max=298550, avg=3122.64, stdev=23617.93
    clat (msec): min=12, max=633, avg=297.46, stdev=56.13
     lat (msec): min=13, max=633, avg=300.58, stdev=51.18
    clat percentiles (msec):
     |  1.00th=[   92],  5.00th=[  239], 10.00th=[  271], 20.00th=[  275],
     | 30.00th=[  275], 40.00th=[  279], 50.00th=[  284], 60.00th=[  292],
     | 70.00th=[  313], 80.00th=[  334], 90.00th=[  368], 95.00th=[  388],
     | 99.00th=[  468], 99.50th=[  498], 99.90th=[  558], 99.95th=[  584],
     | 99.99th=[  609]
   bw (  KiB/s): min=212737, max=618477, per=100.00%, avg=408207.59, stdev=2608.97, samples=3600
   iops        : min=  193, max=  602, avg=393.07, stdev= 2.57, samples=3600
  lat (msec)   : 20=0.03%, 50=0.21%, 100=0.89%, 250=4.57%, 500=93.82%
  lat (msec)   : 750=0.48%
  cpu          : usr=0.15%, sys=0.31%, ctx=24425, majf=0, minf=434
  IO depths    : 1=0.1%, 2=0.2%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,24026,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
  WRITE: bw=398MiB/s (418MB/s), 398MiB/s-398MiB/s (418MB/s-418MB/s), io=23.5GiB (25.2GB), run=60299-60299msec

Disk stats (read/write):
  nvme0n1: ios=16/96022, merge=0/176, ticks=369/27600379, in_queue=27602813, util=99.98%
```

Fred
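(For anyone comparing numbers between posts: fio mixes binary MiB/s and decimal MB/s units, and its headline figure can be cross-checked against its own io/runtime totals. A quick sketch, using the values from the run above:)

```python
# Two quick sanity checks on the fio summary above.
# (Numbers are copied from the run in this thread.)

# 1) fio reports binary MiB/s, with the decimal MB/s in parentheses:
reported_mib_s = 398
print(round(reported_mib_s * 1024**2 / 1e6, 1))  # 417.3 -> fio rounds this to "418MB/s"

# 2) The headline bandwidth should match total io / runtime:
io_gib, runtime_s = 23.5, 60.299   # from io=23.5GiB, run=60299msec
bw_mib_s = io_gib * 1024 / runtime_s
print(round(bw_mib_s, 1))          # ~399.1, in line with the reported 398 MiB/s (io is rounded)
```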

[Edited - messed up read speed originally] Thanks @Freddy. I figured out what I was doing wrong and got a write speed comparable to yours below. I also tested read speed and it seems the same. I did not have the Radxa hat but used this one: https://www.amazon.com/Sintech-NGFF-NVME-WiFi-Cable/dp/B07DZF1W55/ref=sr_1_5?crid=3BO6JWA4PWEBL&keywords=sintech+m.2+ngff&qid=1691539738&sprefix=sintech+m.2+ngff%2Caps%2C117&sr=8-5

The NVMe drive is an SK hynix Platinum P41 2TB.

```
CMD: fio --name=write --ioengine=libaio --iodepth=4 --rw=write --bs=1M --direct=1 --size=2G --numjobs=30 --runtime=60 --group_reporting --filename=/mnt/nvme/test
write: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=4
...
fio-3.25
Starting 30 processes
write: Laying out IO file (1 file / 2048MiB)
Jobs: 30 (f=30): [W(30)][100.0%][w=402MiB/s][w=402 IOPS][eta 00m:00s]
write: (groupid=0, jobs=30): err= 0: pid=4131: Tue Aug  8 23:38:26 2023
  write: IOPS=401, BW=402MiB/s (421MB/s)(23.7GiB/60317msec); 0 zone resets
    slat (usec): min=75, max=302914, avg=3447.93, stdev=26606.33
    clat (msec): min=2, max=3459, avg=294.82, stdev=113.89
     lat (msec): min=5, max=3459, avg=298.27, stdev=111.48
    clat percentiles (msec):
     |  1.00th=[   62],  5.00th=[  161], 10.00th=[  243], 20.00th=[  288],
     | 30.00th=[  296], 40.00th=[  296], 50.00th=[  300], 60.00th=[  300],
     | 70.00th=[  300], 80.00th=[  300], 90.00th=[  313], 95.00th=[  388],
     | 99.00th=[  510], 99.50th=[  558], 99.90th=[ 2165], 99.95th=[ 2769],
     | 99.99th=[ 3406]
   bw (  KiB/s): min=149460, max=846094, per=100.00%, avg=416246.25, stdev=3132.75, samples=3558
   iops        : min=  142, max=  825, avg=402.06, stdev= 3.09, samples=3558
  lat (msec)   : 4=0.01%, 10=0.01%, 20=0.05%, 50=0.71%, 100=1.49%
  lat (msec)   : 250=9.27%, 500=87.30%, 750=0.89%, 1000=0.04%, 2000=0.12%
  lat (msec)   : >=2000=0.12%
  cpu          : usr=0.19%, sys=0.34%, ctx=24597, majf=0, minf=495
  IO depths    : 1=0.1%, 2=0.2%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,24220,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
  WRITE: bw=402MiB/s (421MB/s), 402MiB/s-402MiB/s (421MB/s-421MB/s), io=23.7GiB (25.4GB), run=60317-60317msec

Disk stats (read/write):
  nvme0n1: ios=0/101217, merge=0/92, ticks=0/28001941, in_queue=28001952, util=99.92%
```
 
```
CMD: fio --name=read --ioengine=libaio --iodepth=4 --rw=read --bs=1M --direct=1 --size=2G --numjobs=30 --runtime=60 --group_reporting --filename=/mnt/nvme/test
read: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=4
...
fio-3.25
Starting 30 processes
Jobs: 30 (f=30): [R(30)][100.0%][r=404MiB/s][r=404 IOPS][eta 00m:00s]
read: (groupid=0, jobs=30): err= 0: pid=5405: Wed Aug  9 00:19:00 2023
  read: IOPS=398, BW=399MiB/s (418MB/s)(23.5GiB/60284msec)
    slat (usec): min=51, max=2481, avg=198.07, stdev=120.15
    clat (msec): min=11, max=565, avg=300.08, stdev=17.74
     lat (msec): min=12, max=565, avg=300.28, stdev=17.69
    clat percentiles (msec):
     |  1.00th=[  268],  5.00th=[  275], 10.00th=[  284], 20.00th=[  292],
     | 30.00th=[  296], 40.00th=[  300], 50.00th=[  300], 60.00th=[  305],
     | 70.00th=[  309], 80.00th=[  309], 90.00th=[  313], 95.00th=[  317],
     | 99.00th=[  326], 99.50th=[  330], 99.90th=[  418], 99.95th=[  456],
     | 99.99th=[  502]
   bw (  KiB/s): min=255767, max=492224, per=100.00%, avg=408737.62, stdev=1944.87, samples=3600
   iops        : min=  237, max=  480, avg=394.73, stdev= 1.94, samples=3600
  lat (msec)   : 20=0.02%, 50=0.05%, 100=0.08%, 250=0.27%, 500=99.56%
  lat (msec)   : 750=0.01%
  cpu          : usr=0.03%, sys=0.32%, ctx=24067, majf=0, minf=31218
  IO depths    : 1=0.1%, 2=0.2%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=24053,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: bw=399MiB/s (418MB/s), 399MiB/s-399MiB/s (418MB/s-418MB/s), io=23.5GiB (25.2GB), run=60284-60284msec

Disk stats (read/write):
  nvme0n1: ios=95868/3, merge=0/1, ticks=27444806/42, in_queue=27444888, util=99.92%
```
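
If you run this on several adapters and want to compare results without eyeballing the whole log, here is a quick sketch that pulls the headline bandwidth out of fio's "Run status group" summary lines (the regex and function name are my own, not part of fio):

```python
import re

# Matches summary lines like:
#   WRITE: bw=398MiB/s (418MB/s), ...
FIO_BW = re.compile(r"^\s*(READ|WRITE):\s+bw=(\d+(?:\.\d+)?)MiB/s", re.M)

def summary_bandwidth(fio_output: str) -> dict:
    """Extract per-direction bandwidth (MiB/s) from fio's run-status summary."""
    return {direction: float(bw) for direction, bw in FIO_BW.findall(fio_output)}

sample = """
Run status group 0 (all jobs):
  WRITE: bw=398MiB/s (418MB/s), 398MiB/s-398MiB/s (418MB/s-418MB/s), io=23.5GiB (25.2GB), run=60299-60299msec
"""
print(summary_bandwidth(sample))  # {'WRITE': 398.0}
```

fio can also emit machine-readable results directly with `--output-format=json`, which may be less fragile than scraping the text output.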