Rock5 as Samba file share

Since we finally have PCIe 3.0 x4 lanes (and one PCIe 2.1 x1 lane), I have decided to move my RockPi 4 storage over to the Rock 5, though I'm still waiting for a few parts.

Here are the first tests of network & storage performance. At first I measured speed with fio, but its results were always lower than diskspd or real-world performance, so I chose to use diskspd instead. Both U.2 NVMe drives are in an mdadm RAID 0.

The graphs:
Reading:


Writing:

The raw data:
smb tests.zip (47.2 KB)

The scheme right now

The NIC is connected via an M.2 E-key (PCIe 2.1 x1) to x8 PCIe adapter.

iperf3 between Windows & Rock5

root@rock-5b:/home/rock# iperf3 -s

Server listening on 5201

Accepted connection from 192.168.1.43, port 50063
[ 5] local 192.168.1.45 port 5201 connected to 192.168.1.43 port 50064
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 376 MBytes 3.16 Gbits/sec 0 4.01 MBytes
[ 5] 1.00-2.00 sec 368 MBytes 3.08 Gbits/sec 0 4.01 MBytes
[ 5] 2.00-3.00 sec 368 MBytes 3.08 Gbits/sec 0 4.01 MBytes
[ 5] 3.00-4.00 sec 368 MBytes 3.08 Gbits/sec 0 4.01 MBytes
[ 5] 4.00-5.00 sec 368 MBytes 3.08 Gbits/sec 0 4.01 MBytes
[ 5] 5.00-6.00 sec 368 MBytes 3.08 Gbits/sec 0 4.01 MBytes
[ 5] 6.00-7.00 sec 368 MBytes 3.08 Gbits/sec 0 4.01 MBytes
[ 5] 7.00-8.00 sec 366 MBytes 3.07 Gbits/sec 0 4.01 MBytes
[ 5] 8.00-9.00 sec 368 MBytes 3.08 Gbits/sec 0 4.01 MBytes
[ 5] 9.00-10.00 sec 368 MBytes 3.08 Gbits/sec 0 4.01 MBytes
[ 5] 10.00-10.02 sec 6.25 MBytes 3.02 Gbits/sec 0 4.01 MBytes


[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.02 sec 3.60 GBytes 3.09 Gbits/sec 0 sender

root@rock-5b:/home/rock# iperf3 -c 192.168.1.43
Connecting to host 192.168.1.43, port 5201
[ 5] local 192.168.1.45 port 40424 connected to 192.168.1.43 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 357 MBytes 2.99 Gbits/sec 816 1.06 MBytes
[ 5] 1.00-2.00 sec 358 MBytes 3.00 Gbits/sec 0 1.45 MBytes
[ 5] 2.00-3.00 sec 362 MBytes 3.04 Gbits/sec 15 1.23 MBytes
[ 5] 3.00-4.00 sec 360 MBytes 3.02 Gbits/sec 1 970 KBytes
[ 5] 4.00-5.00 sec 362 MBytes 3.04 Gbits/sec 0 1.37 MBytes
[ 5] 5.00-6.00 sec 361 MBytes 3.03 Gbits/sec 1 1.13 MBytes
[ 5] 6.00-7.00 sec 361 MBytes 3.03 Gbits/sec 10 851 KBytes
[ 5] 7.00-8.00 sec 362 MBytes 3.04 Gbits/sec 0 1.30 MBytes
[ 5] 8.00-9.00 sec 361 MBytes 3.03 Gbits/sec 3 1.05 MBytes
[ 5] 9.00-10.00 sec 362 MBytes 3.04 Gbits/sec 0 1.45 MBytes


[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 3.52 GBytes 3.03 Gbits/sec 846 sender
[ 5] 0.00-10.00 sec 3.51 GBytes 3.01 Gbits/sec receiver

iperf Done.

The target scheme

Also thanks @tkaiser for advice regarding HELIOS lantest. Here is also results from it:

image

The M.2-to-SATA adapter is working with the switch. The power consumption is much lower than I expected.


Because the SSDs, the PCIe card, and the SBC all have their grounds connected together, you should use only one power supply to power everything.

Easy, since ‘real performance’ (at least since Windows 7, when copying with Windows Explorer) is characterized by

  • auto-tuning block sizes
  • using parallel streams

https://www.helios.de/web/EN/support/TI/157.html

For your iperf tests… can you repeat them with server on the Windows PC and then

  • taskset -c 5 iperf3 -c 192.168.1.43
  • taskset -c 5 iperf3 -R -c 192.168.1.43 (reverse direction)

Also, do you see interrupts being bottlenecked by cpu0? (Check /proc/interrupts afterwards, and in a separate run watch with atop what’s going on while benchmarking.)
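To make the cpu0 check concrete, here is a minimal sketch of how one could parse /proc/interrupts and flag IRQs handled almost entirely by cpu0. It assumes the standard column layout; the embedded sample text (IRQ numbers, the eth0 device name, the counts) is made up for illustration, and on the board itself you would read open("/proc/interrupts") instead:

```python
# Sketch: flag IRQs whose handling is concentrated on CPU0.
# SAMPLE is fabricated example data; replace it with the real file contents:
#   text = open("/proc/interrupts").read()
SAMPLE = """\
           CPU0       CPU1       CPU2       CPU3
 25:    1843210          0          0          0   GICv3  eth0
 26:        120       3401          5          2   GICv3  mmc0
"""

def per_cpu_counts(text):
    lines = text.splitlines()
    cpus = lines[0].split()                       # ['CPU0', 'CPU1', ...]
    result = {}
    for line in lines[1:]:
        fields = line.split()
        irq = fields[0].rstrip(":")
        counts = [int(x) for x in fields[1:1 + len(cpus)]]
        name = " ".join(fields[1 + len(cpus):])   # controller + device name
        result[irq] = (name, dict(zip(cpus, counts)))
    return result

irqs = per_cpu_counts(SAMPLE)
for irq, (name, counts) in irqs.items():
    total = sum(counts.values())
    if total and counts["CPU0"] / total > 0.9:    # >90% on cpu0: likely bottleneck
        print(f"IRQ {irq} ({name}) is pinned to cpu0: {counts}")
```

If the NIC's IRQ shows up here, moving it with /proc/irq/&lt;n&gt;/smp_affinity (or pinning iperf3 to another core with taskset, as above) usually tells you whether cpu0 is the limit.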

Of course testing with Helios LanTest would also be welcomed :slight_smile:

That would explain fio vs. Explorer, but not fio vs. diskspd. There is an average ~7% difference based on 20 tests.

Just a quick reminder: I’m bottlenecked by the PCIe 2.1 x1 lane, so ~3.0 Gbit/s sounds about right.
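A back-of-the-envelope check of why ~3 Gbit/s is plausible for a PCIe 2.1 x1 link: the 5.0 GT/s line rate and 8b/10b encoding are standard PCIe 2.x figures, while the ~20% allowance for TLP/DLLP protocol overhead is my own rough assumption:

```python
# PCIe 2.x per-lane throughput estimate (protocol overhead is an assumption).
line_rate_gtps = 5.0                  # PCIe 2.x line rate per lane, GT/s
encoding = 8 / 10                     # 8b/10b: 8 payload bits per 10 line bits
payload_gbps = line_rate_gtps * encoding              # 4.0 Gbit/s raw payload
protocol_overhead = 0.20              # rough guess for TLP headers, ACKs, etc.
usable_gbps = payload_gbps * (1 - protocol_overhead)  # ~3.2 Gbit/s usable

print(f"raw payload: {payload_gbps:.1f} Gbit/s, usable: ~{usable_gbps:.1f} Gbit/s")
```

So the measured 3.0–3.1 Gbit/s from iperf3 is close to what the link can physically deliver.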

Erm, no? The ground is already shared through the power cables.
And it’s not a high-speed system where I’d need to sync everything to reach maximum performance.

Especially the first three numbers (the non-sequential transfer speeds) are super low, and the variation in results (the orange triangles) is extremely high. You might want to compare with the 10GbE SMB numbers here: https://github.com/openmediavault/openmediavault/issues/101#issuecomment-468270197 (made with macOS, but that shouldn’t matter, since Microsoft should still be able to fabricate a better SMB client than Apple).

As for diskspd performance and iperf3 numbers: I was neither aware that your NIC sits in the Key E slot nor that Explorer and diskspd numbers differed. Anyway, without knowing the CLI parameters (the count of parallel/asynchronous threads, for example) it’s a bit pointless to compare, and I’m still interested in /proc/interrupts output to get a better understanding of RK’s 5.10 BSP kernel and PCIe.

Well, the connections are fully shown in the scheme:
image
And as for the diskspd CLI, it’s all here:


And as for that: the first run takes a few hundred ms, but later runs climb to ~5 s.