Rock boards as NAS solution

In this article, I share my experience with M.2 SATA controllers on three different Radxa boards. Since last year, my data has lived on a new home-made x86 NAS with one or two spare disks for regular backups, and this year I would like to add a secondary NAS to replicate the data.

Radxa Rock models are interesting SBCs for this purpose, at least on paper, as some of them offer enough PCIe bandwidth for hard drives in RAID configurations. Incidentally, some low-end to mid-range NAS solutions from Synology or QNAP also rely on ARM for their 2-bay and even 4-bay offerings. In late September 2022, I only owned a Rock Pi 4B+ board and the SATA hats were unavailable, but four months later the Penta SATA hat was back in stock and I decided to first evaluate a secondary NAS based on the Rock 3A and this Penta SATA hat, with low power usage and software RAID 5 on three regular hard drives in mind, for reliability and cost.

Today, six months after my first tests, three Radxa boards and two M.2 SATA extension boards have been roughly benchmarked, and I have also gathered some hardware to finalize the NAS case.

Here is the SBC hardware (excluding power supplies, case, cables, additional heatsinks…):

  • Rock 3A 8 GB with 32 GB eMMC (RK3568, 4x A55 @ 2.0 GHz, M.2 M key with PCIe 3.0 x2)
  • Rock 4B+ 4 GB with 32 GB eMMC (RK3399 OP1, 2x A72 @ 2.0 GHz and 4x A53 @ 1.5 GHz, M.2 M key with PCIe 2.1 x4)
  • Rock 5B 16 GB with 64 GB eMMC (RK3588, 4x A76 @ 2.4 GHz and 4x A55 @ 1.8 GHz, M.2 M key with PCIe 3.0 x4)
  • Radxa Penta SATA hat with ribbon cable, compatible with Rock 3A and Rock 4 (JMB585 chip - SATA 3.0 x4 + eSATA, optional Molex ATX power and 12 V DC)
  • QNINE M.2 NVMe 2280 SATA x6 module (ASM1166 chip - SATA 3.0 x6), directly compatible with Rock 5B and usable on Rock 3A / 4 with the M.2 extension board
  • Radxa M.2 extension board/hat (to use the NVMe module on Rock 3A and Rock 4B)

The drives for performance measurements are:

  • Seagate 4TB HDD (ST4000VN006) x 3
  • Samsung SSD 870 EVO 2TB

Tests are run in “open” conditions and aim to measure RAID5 and RAID0 performance on 3 HDDs. EXT4 is used for all volumes. Read/write tests are run between the RAID array and an SSD on a 40 GB set of large files to limit memory buffering effects.
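A minimal sketch of this kind of measurement could look like the following. Paths and sizes are placeholders, scaled down from the 40 GB set used in the tests so the snippet runs anywhere; on a real run you would copy between the actual RAID and SSD mount points and flush the page cache first as root (`echo 3 > /proc/sys/vm/drop_caches`):

```shell
#!/bin/sh
# Hedged sketch: copy a large file between two directories and derive MB/s
# from the elapsed time. SRC/DST are temporary stand-ins for the RAID array
# and SSD mount points used in the article.
SRC=$(mktemp -d)
DST=$(mktemp -d)
SIZE_MB=64   # the real test set was ~40 GB of large files
dd if=/dev/zero of="$SRC/sample.bin" bs=1M count="$SIZE_MB" status=none
sync
START=$(date +%s)
cp "$SRC/sample.bin" "$DST/sample.bin"
sync                                  # include flush time in the measurement
END=$(date +%s)
ELAPSED=$((END - START))
[ "$ELAPSED" -lt 1 ] && ELAPSED=1     # avoid division by zero on fast copies
echo "throughput: $((SIZE_MB / ELAPSED)) MB/s"
rm -rf "$SRC" "$DST"
```

With second-level timing this is only meaningful on large sets; tools like iozone (used later in this thread) give finer-grained numbers.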


Rock 3A experience

The initial test covers the Penta SATA hat on the Rock 3A board, very similar to what is used in Synology and QNAP 2- or 4-bay solutions based on A55 cores (Realtek RTD1296 and RTD1619B chips). This board is passively cooled, but since the tiny heatsink is covered by the SATA hat, keeping the chip cool under load is not possible. To limit issues, I also placed heatsinks on the memory chip and on the JMB controller of the SATA hat.

The system is Armbian 23.02 CLI for Rock 3A, based on Debian Bullseye with kernel 6.1.11. The Penta hat is directly controlled through AHCI, without any additional package. RAID volumes are created and managed in OMV, installed by means of the developer script for Debian.

Idle temperature of the SoC was about 48°C in static open conditions, at a room temperature of 23°C. A quick test with the sbc-bench script led to temperatures above 70°C and the system finally crashed. I verified that a simple forced airflow in the right orientation (a table fan at 80 cm) was enough to stabilize the temperature under full load at about 48°C (28°C when idle), meaning a mini-ITX case with a single fan at the back would suffice for decent cooling if the board is cleverly placed.

As the network is limited to GbE, I used an SSD to benchmark the RAID performance. The SSD was placed directly on the hat (not shown in the last picture) and powered by a Molex connector from the flex ATX power supply of the case holding the three HDDs. The SATA cables are specific here (male-female) and quite difficult to find; mine had low-quality connectors whose plastic could easily be broken.

RAID5 (3xHDD) → SSD: 110 MB/s

SSD → RAID5 (3xHDD): 93 MB/s

RAID0 (3xHDD) → SSD: 168 MB/s

SSD → RAID0 (3xHDD): 136 MB/s

In these tests, the performance is always CPU-bound and, as expected, RAID0 offers better throughput. In a NAS context, however, RAID5 is the only suitable option, and its performance is just enough to fill the GbE bandwidth.

Note: I experienced some read/write I/O errors causing resets of the SATA links, which slightly affected the measurements; I was unable to eliminate them all, even after replacing some of the SATA cables.
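These link resets show up in the kernel log as libata messages. A small sketch of how one might filter for them; a canned sample line stands in for real `dmesg` output so the snippet is self-contained (on a live system, pipe `dmesg` in instead):

```shell
#!/bin/sh
# Hedged sketch: count kernel log lines matching the usual libata failure
# patterns (exceptions, failed commands, link resets). The sample line below
# is canned, representative of what such errors look like.
sample='[ 1234.567890] ata3.00: exception Emask 0x10 SAct 0x0 SErr 0x4050000 action 0xe frozen'
matches=$(printf '%s\n' "$sample" \
  | grep -cE 'ata[0-9]+(\.[0-9]+)?: (exception|failed command|hard resetting link)')
echo "suspicious lines: $matches"
```

On a real system, `dmesg | grep -E '...'` with the same pattern (or `journalctl -k`) would show which ata port, and hence which cable or drive, is misbehaving.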

The exact same operating system is used with the M.2 extension hat and the ASM1166 2280 NVMe module. The RAID array defined earlier is recognized out of the box after reboot. Performance is summarized below: a similar CPU bottleneck, with globally lower throughput. Here, no I/O errors were detected. It seems that the JMB585 is a better choice than the ASM1166, although I cannot say whether this is due to better software support or a hardware advantage.

RAID5 (3xHDD) → SSD: 95 MB/s

SSD → RAID5 (3xHDD): 87 MB/s

RAID0 (3xHDD) → SSD: 136 MB/s

SSD → RAID0 (3xHDD): 123 MB/s


Rock 4B+ experience

This board hosts an older chip, is more powerful, but supports PCIe 2.1 only. Even if only 2 of the 4 available lanes can be used by the SATA controller, we still have plenty of bandwidth for our tests. Again, it is possible to use both the ASM1166 and JMB585 controllers on the Rock 4B+, the same way as with the Rock 3A. Passive cooling is facilitated here by the large heatsink on the opposite side from the hat, but this also makes assembly in a case a bit more difficult, as clearance is required on both sides.

In the idle state, the temperature is quite low, about 36°C at 23°C room temperature. Under stress, the maximum temperature never reached critical levels (topping out at about 60°C in cpuminer under sbc-bench). The large heatsink offers a lot of thermal inertia and dissipation surface despite the short fins, so cooling is quite good even without forced airflow, maybe also helped by the Arctic Silver 5 thermal compound.

As we already have numbers with the Penta SATA hat on the Rock 3A, I decided to only use the ASM1166 module on the NVMe hat here. The operating system is Armbian 23.8.3 CLI with kernel 6.1.50, based on Debian 12 Bookworm, by mistake: it is not compatible with the OMV installation script. It was then the opportunity for me to use mdadm directly for RAID management. Fortunately, I found a nice and clear recipe here to help, and it is perfectly enough for our performance tests.
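For reference, a hedged sketch of the mdadm steps such a recipe typically boils down to. Device names are placeholders, and the script only prints the commands rather than executing them, since creating an array wipes the member disks:

```shell
#!/bin/sh
# Hedged sketch of a 3-disk RAID5 setup with mdadm. /dev/sda..sdc and
# /dev/md0 are placeholder names; set DRY_RUN=0 only on a real system
# after double-checking the device names.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# Create the array, format it EXT4, and persist the config.
run mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
run mkfs.ext4 /dev/md0
run mdadm --detail --scan        # append this output to /etc/mdadm/mdadm.conf
run update-initramfs -u          # so the array assembles at boot
cat /proc/mdstat 2>/dev/null || true   # watch initial sync progress here
```

The initial RAID5 sync on 4 TB drives takes many hours, so benchmark numbers taken before it completes would be misleading.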

This Armbian version was configured with the “Radxa Rock Pi 4B” DTB file, not the “plus” version, but this should have no impact on PCIe performance.

Here are the performance results with ASM1166 on the same set of files:

RAID5 (3xHDD) → SSD: 171 MB/s

SSD → RAID5 (3xHDD): 125 MB/s

RAID0 (3xHDD) → SSD: 211 MB/s

SSD → RAID0 (3xHDD): 167 MB/s

Again, I noticed some read/write errors in the kernel messages, but they were not too critical. All drives showed a green SMART status.
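The SMART health check can be scripted with smartmontools (`smartctl -H /dev/sdX` per drive). A canned output line stands in for a real smartctl call here, so the sketch is self-contained:

```shell
#!/bin/sh
# Hedged sketch of the "green SMART status" check. On a live system,
# replace the canned sample with: smartctl -H /dev/sda
sample='SMART overall-health self-assessment test result: PASSED'
if printf '%s\n' "$sample" | grep -q 'PASSED$'; then
  status=healthy
else
  status=check-needed
fi
echo "drive status: $status"
```

A loop over `/dev/sd[a-c]` with `smartctl -H` covers all three array members; a FAILED verdict there warrants replacing the drive before mdadm kicks it out.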

CPU usage was high in the RAID0 configuration, and at about 70-80% core usage in RAID5. Not excellent, not bad either. RAM is only 4 GB here, but in a dual-channel configuration, compared to the 8 GB of the Rock 3A. Performance is always above the GbE bandwidth, which is exactly what we want in RAID5 mode.


Rock 5B experience

The A76 cores of the RK3588 perform 2 to 3 times better than the older A72 cores of the RK3399, and the RK3588 also embeds the same A55 efficiency cores as the RK3568. So the Rock 5B is a much more powerful board that offers larger PCIe bandwidth and a more capable NPU compared to the Rock 3A. In short, we have much more compute power and the maximum bandwidth for our two SATA controllers.

Recently, QNAP released its 6-bay surveillance NAS TS-AI642 based on the RK3588, which is exactly the hardware we have in hand here, although I have seen no information yet on the SATA controller or the RAID levels supported by this dedicated model.

Here, it is not possible to firmly fix the Penta hat (or its ribbon cable) to the board, while the ASM1166-based module can be screwed directly to the back of the board, so I logically decided to only benchmark the latter. Nevertheless, hosting the Rock 5B in a case is not that easy, since both sides of the board require clearance: the top side for the heatsink, the bottom side for the SATA cables!

The system is the official Radxa Debian 11 CLI image for Rock 5B with kernel 5.10. The ASM1166 NVMe module was recognized flawlessly out of the box, and RAID volumes are created and managed in OMV, installed by means of the script for this Debian version.

Tests are run with the SSD and the three HDDs connected on the SATA board:

RAID5 (3xHDD) → SSD: 283 MB/s (with some I/O errors)

SSD → RAID5 (3xHDD): 260 MB/s

RAID0 (3xHDD) → SSD: 208 MB/s (abnormal - too many I/O errors)

SSD → RAID0 (3xHDD): 480 MB/s

Once again, I noticed many kernel I/O errors in transfers from RAID0 to SSD, ruining any chance of representative numbers. The same issue appeared in RAID5 (initially 168 MB/s), so I decided to change some cables and the results improved significantly. In the other direction, write performance to RAID5 (260 MB/s) is almost the same as on my main i3-12100 x86 solution (275 MB/s) using the same three HDDs, and in RAID0 we almost reach the SATA 6 Gbps limit, which is more than satisfying.

The CPU is the bottleneck in my RAID write tests, but not in the read tests.


(Rough) conclusions

My tests were not rigorous: different Armbian/Ubuntu versions, different kernels, unexplained I/O errors, insufficient checks on drivers or optimizations. It would be wrong to claim I have covered the whole story, but at least we can get an overall picture of these three different boards.

The model-number hierarchy is confirmed, and my rough conclusions today are:

  • Rock 3A is a bit weak for decent performance in software RAID, especially RAID5. It seems OK as a secondary NAS if bandwidth is not critical; its GbE is then not the limiting factor, but the two PCIe 3.0 lanes on this board are wasted. Knowing this, the 2 and 4 GB models (respectively $45 and $65) are better options in this context, although they do not include the eMMC module. Cooling is also poor on this board and further degraded by the SATA hat, meaning forced airflow is strictly required in a warm environment. I really expected more from this board, but in the end it is very disappointing for a price close to the older but better-performing Rock 4B+ solution.
  • Rock 4B+ performance is just nice: not stellar, but convincing. One can even regret that only a single GbE port is present. The cooling performance offered by the large heatsink ($8) is good. As a pure NAS solution, the 4 GB model with 32 GB eMMC is fine ($85), but the 2 GB flavor ($75) would probably suffice if 16 GB of eMMC is enough. This board offers a nice balance as a simple and affordable NAS solution. With the NVMe hat and a cheap SATA 2280 module, the price is below $150, excluding the case, power supply and disks.
  • Rock 5B performance is strong, roughly at the same level as an x86 solution with regular HDDs, but the board is not cheap, so the 16 GB board ($189) is not advised unless it is used for additional tasks, such as a Minecraft server, video streaming or a surveillance station. The 2.5 GbE port is welcome here; it simply offers the right bandwidth for my solution, provided the network switch can take advantage of it. For pure NAS usage, 4 GB ($129) is probably sufficient; otherwise the 8 GB flavor at $149 aligns with the QNAP offering. The eMMC module and the cooling heatsink are not included in the prices above, but are affordable.
  • The Radxa Penta SATA hat at $49 cannot be advised for 3.5″ HDDs, because it requires specific cables and is not cheaper than the M.2 extension hat ($10) with a 2280 M.2 M-key module ($25 or more). Also, the Rock 5B cannot be used with the Penta hat. However, in my tests on the Rock 3A, the JMB585 seemed to offer better performance than the ASM1166, if you have the choice.
  • All the system images I used allowed me to use both SATA controllers out of the box, without any hacking or additional package. This is good news, as the tested chips are cheap and easy to find.

Cheap 4-bay NAS solutions from QNAP or Synology start at about $300 (excluding disks). It is then difficult to compete with a Rock 4B+ board, its hat and the NVMe SATA controller once we add the case and power supply to the bill: about $250 in total, a bit less with refurbished accessories. You might end up with a slightly better performer, but without the ecosystem offered by these specialized companies. On the other hand, replacing parts is quite easy on a home-made solution, while after 3 to 5 years these cheap Synology/QNAP systems usually have to be replaced, as repairing or upgrading them is simply not possible or affordable anymore.

Also, some Rockchip-based motherboards exist, like the Firefly ITX-3588J, with dedicated SATA ports and multiple Ethernet ports. With a case and power supply, we would have a nice Rock 5B alternative, but starting at $600 it is too expensive, and only a bit cheaper than the QNAP TS-AI642 at about $800.

What’s next?

My next concern will be to find the right enclosure and power supply, as the case I used for the three HDDs in these tests cannot host the Rock 5B board… and I think that SBC will be my final choice, running OMV and Shinobi as a NAS and surveillance station, possibly with an attempt to use the NPU. More on that sooner or later in another article…


Thanks for your hard work documenting all of this. As you know, I did not hit any errors on my side with SATA cables, but like many others here I’ve got some issues with UAS mode on the Quad SATA kit. The Penta SATA kit here has no issues at all with those, but there is also no hardware RAID option.

I’m not sure about the ASM1166 card: will it use 4 PCIe 2.0 lanes on Rock 4 and 2 PCIe 3.0 lanes on Rock 3 (as well as on Rock 5)? I wonder how it connects to the system and what bandwidth it can offer.

QNAP is still much more expensive, but you will get hardware and software for that price. A NAS server today is not only storage; it may serve many other purposes. ARM still offers better performance, especially the RK3588 on the Rock 5.

Thanks for documenting all these tests ydeletrain :slight_smile:

I have a Rock 5B with the NVMe hat and an ASM1166 connected to the M.2 E-key port, and I didn’t observe any I/O errors in dmesg so far (it runs 24/7 as an auxiliary server with a Samba docker + urbackup client, but I will probably move to it as my main NAS in the near future). It seems to me that, unfortunately, you may have a somewhat faulty M.2 board hosting the ASM1166, or maybe a newer kernel provides some improvements (I run Armbian with 6.5-rc1).
I will try to do some stress testing on it to see if it is maybe related to higher load.

@dominik The ASM1166 has 2 PCIe 3.0 lanes, so it should work at full speed in the M-key slot on the Rock 3A, and it will downgrade to 2x PCIe 2.0 on the Rock 4, but as long as you don’t want to saturate all SATA ports at once this should not affect you, as it still offers ~1 GB/s (about 2x SATA 3.0) of throughput.

EDIT: I have put my ASM1166 under stress (a 40 GB sample using iozone, writing to a connected SSD) and I don’t see any I/O errors in the console/dmesg/journalctl.
Run began: Thu Oct 19 07:04:59 2023

Include fsync in write timing
O_DIRECT feature enabled
Auto Mode
File size set to 41943040 kB
Record Size 1024 kB
Record Size 16384 kB
Command line used: iozone -e -I -a -s 40G -r 1024k -r 16384k -i 0 -i 1
Output is in kBytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 kBytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
                                                          random    random     bkwd    record    stride                                    
          kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
    41943040    1024   303861   305899   298944   304878                                                                                  
    41943040   16384   346715   345014   385905   395923

Ok, thanks for the clarification;
then it will use half of the Rock 5B and Rock 4 PCIe lanes. Maybe there is another variant of such a card with 4x 2.1? The speed is almost the same.
For now, on the Rock 5B, the seni PCB board can be used to split the lanes, if anybody wants to add a second 6x SATA card or just another Ethernet card :slight_smile:

Two PCIe lanes are a chip limitation, not a card limitation, so there will be no M.2 card electrically using an x4 connection with the ASM1166. From what I can see, all cheap controllers are limited to x1/x2, either Gen2 or Gen3 depending on the exact model (JMicron JMB585/582, Marvell 88SE92xx/88SE91xx, ASMedia ASM1xxx), so unless someone connects one through a PLX switch that can do 2.0 x4 <-> 3.0 x2 conversion (which would not be cheap), your best bet would be to look at used server-grade stuff, but that will come in PCIe card form.
There is also the Marvell 88SE9345, but I don’t see any cards with it for sale and I don’t know how its software support would look.

Yep, this is clear, but maybe there is something more than the ASM1166, with different wiring :slight_smile:

Still, you only have a 2.5 GbE NIC, so even if you had a chip with more lanes there is not much headroom, as you are near the Ethernet maximum anyway.
Those ASM1166 boards for 6 drives are pretty reasonable, especially for slower, larger-capacity disks.

I know this, I'm just looking for something that may link better with Rock 4 boards, where it’s 4x 2.1. Of course it’s true that those adapters are great for larger and slower drives, but I’m rather interested in smaller SSDs :slight_smile:

Probably the fastest and most compact way, but likely a bad idea, as they will probably just bake like that.

But the same again: if not used locally, it is still bottlenecked by whatever NIC is used, and in gadget mode it only works over USB 2.0.