My first attempt to use the Rock 5 as a NAS

Hello,

Before anything else, I apologize for any grammar errors; English is not my first language. Also, I’m no expert, so please forgive any technical mistakes on my part; any suggestions are more than welcome.

I’ve been waiting for months to test the Rock 5B as a NAS, and I finally got all the hardware. My current hardware is the following:

  • A PLX8747-based card I got from AliExpress
  • A couple of P1600X Optanes
  • 2x M.2 to PCIe x4 adapters
  • An M.2 to PCIe x16 adapter with an ATX PSU 24-pin connector
  • A Mellanox ConnectX-3 InfiniBand card
  • An LSI SAS2008 card
  • 6x 2TB SAS hard drives

I had most of these things lying around (except the PLX and Mellanox cards) and decided to give it a try and see what the hardware is capable of.

The PLX switch worked right out of the box; I even tested it with 4 NVMe drives and all of them were recognized right away, so things were looking good.

The problems started when I tried connecting the SAS card. I first tried an HP H240; that one somehow got recognized as an Ethernet adapter. I found the right driver for it in the kernel, but the device was showing up with the wrong vendor and device IDs, and no matter what I couldn’t get the driver to load. After a bunch of troubleshooting I gave up on that one.

Then I tried the LSI one. At first it didn’t even show up when running lspci. It turns out that, apparently, the firmware on that card takes longer to initialize than the Rock 5B takes to boot, so all I had to do was rescan the PCI bus after some time and:
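(In case it helps anyone else: as far as I know the rescan is just the standard sysfs write, so something like this once the card’s firmware has had time to come up:)

    # ask the kernel to re-enumerate the PCI bus after the HBA firmware has finished initializing
    echo 1 | sudo tee /sys/bus/pci/rescan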

After recompiling the kernel and adding the correct drivers, all disks were up and running:
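(About the “correct drivers”: for the SAS2008 that should be the mainline mpt3sas driver, so what I had to enable was roughly this, using the kernel’s scripts/config helper from the source tree:)

    # enable the LSI SAS2008 driver as a module before rebuilding (mainline Kconfig name)
    scripts/config --module CONFIG_SCSI_MPT3SAS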

So far things were looking good. Then I tried the Mellanox card, and to my surprise it just worked (I only had to add the kernel drivers and set a PCIe clock delay on the M.2 to x16 adapter). But then I messed up: my desktop didn’t want to recognize the other Mellanox card, so I had to update the firmware of the cards. Now my desktop detects it, but when trying to load the drivers I get a BAR error on the Rock 5B. I believe it can probably be fixed by looking into the dtb files, but honestly I have no idea how to do so. And dumb me didn’t make a backup of the original firmware…
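(About “adding the kernel drivers”: the ConnectX-3 is covered by the mlx4 driver in mainline, so the options to enable should be roughly these:)

    # ConnectX-3 Ethernet/InfiniBand support; the last one also needs CONFIG_INFINIBAND enabled
    scripts/config --module CONFIG_MLX4_CORE
    scripts/config --module CONFIG_MLX4_EN
    scripts/config --module CONFIG_MLX4_INFINIBAND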


(pci 05:00.0)
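(If anyone wants to dig into this: the BARs the card requests, versus what the kernel actually managed to assign, can be checked with something like this:)

    # compare the memory regions the card asks for with what got assigned (or failed to)
    sudo lspci -vv -s 05:00.0 | grep -i region
    sudo dmesg | grep -iE 'BAR|05:00'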

I also tried an HP NC523 adapter, but it behaved the same as the H240 SAS card.

The saddest part of all this is that even if I got the hardware to work, apparently it would be really difficult to run ZFS on this kernel; I tried, but had no luck with that either.

Although I haven’t gotten it all to work yet, I’m really surprised by what the hardware seems capable of. Sure, the software side is still lacking, but hey, this board has only been out for a short time, so I’m sure things will improve over time, and I’m really excited about that.

Lastly, here is a photo of my current setup. Sorry it’s messy; I’m just doing testing for now, and maybe later I’ll build a nice enclosure for it.


ZFS can be hacked in, but using it in production with this kernel is highly discouraged. Until this board has working PCIe and a few other essential functions in the mainline kernel, forget about serious NAS functionality. This hardware is too recent. If you are not a developer, you should buy one of the previous models based on the RK3568 (Rock 3). I have 8x SATA drives attached to one of those (sadly it’s not SAS but a SATA port multiplier, which means performance-wise it’s only good for spinning rust). Kernel 6.1.y, and ZFS works pitch-perfect. The good point is a lot fewer hardware parts…
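(Just to illustrate what “works” means here — nothing exotic, the usual pool commands; the device names and raidz2 layout below are only an example, not my exact setup:)

    # illustrative only: an 8-disk raidz2 pool; substitute your own device names
    sudo zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
    sudo zpool status tank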

Thanks for sharing it with us,
Can you please tell us more about the PCIe x8 adapter? How is it working? Is it limiting the speed, or is part of the extended card not working? I was wondering about such an adapter because it’s much harder to find a PCIe x4 card than a PCIe x8 one.
Of course, please share more photos of this setup; they may help others.

The card is running at PCIe x4.

What’s the best and most modern SATA adapter for M.2/PCIe boards like this? I found adapters based on the ASM1166 chip; it looks like that’s the latest controller for our needs, right?

The ASM1166 is PCIe 3.0 x2, while the M.2 slot is PCIe 3.0 x4. That means you will only be using half of the M.2 slot’s bandwidth.

Btw, I’m really interested: what kind of error exactly is happening with ZFS?

What is it? Deduplication with compression and LVM+VDO? SMB and NFS are working just fine under the current kernel.

Try HP NC552SFP

PCIe x2 is plenty for SATA port multiplication, especially considering that the bottleneck is the 2.5G NIC in any case…
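Rough numbers, assuming roughly 985 MB/s of usable bandwidth per PCIe 3.0 lane and 8 bits per byte on the NIC:

    # approximate usable bandwidth: PCIe 3.0 x2 vs. a 2.5GbE NIC
    echo "PCIe 3.0 x2: $((985 * 2)) MB/s"   # ~1970 MB/s
    echo "2.5GbE NIC:  $((2500 / 8)) MB/s"  # ~312 MB/s

So even at x2 the link has roughly six times the headroom of the NIC.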

Nothing wrong with ZFS itself. I am not interested in finding out, but I am looking forward to your findings. I believe in luck, so… The quality of this kernel is just not good enough to consider such scenarios. It’s certainly good for YouTubers to show off their new DIY NAS, but no more than that. If you trust the Rockchip engineers, who only care that their hardware works, and believe that the only problem is the bug that prevents the DKMS compilation, then dig in and fix it. That should not be too hard. But I seriously doubt this is the only problem there is.

All I wanted to say is that this particular adapter uses only half of the M.2 PCIe lanes of the bottom Rock 5 slot, so you could split that into two PCIe 3.0 x2 links and add another ASM1166 (if you have more disks), or something like this:
https://www.innodisk.com/en/products/embedded-peripheral/communication/egpl-t101


or just another 2.5G M.2 E-key card.


manuelbaez, from your picture, what is the purpose of the boards I added red arrows pointing at? Also, what are they called?
Neat work, by the way.

Hey all, thanks for answering.

Here are some more details:

It’s just an M.2 to x16 slot (x4 electrically) with a connector for an ATX PSU. The PLX card connected to it is therefore running at x4, and the other cards are connected to the PLX, so everything except the SAS card should be running at PCIe 3.0 x4. I got the adapter from Amazon (https://www.amazon.com/dp/B07XYZ89J7).

Well, right now I’m just not able to build the module with DKMS.
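(If it helps: the actual compiler error usually ends up in the DKMS build log, so I’ve been checking something like this:)

    # see which modules DKMS knows about and whether the zfs build failed
    dkms status
    # the real error is usually in the module's build log
    cat /var/lib/dkms/zfs/*/build/make.log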

Sadly I don’t have one of those.

The one on the right is the one I just mentioned above; the other two are just a pair of simpler M.2 M-key to PCIe x4 adapters used to connect the Mellanox and LSI cards to the PLX board (I got those from AliExpress: https://www.aliexpress.com/item/4000105187896.html).

Thanks!

I took these other photos; hopefully it’s clear how everything is connected together:



On the back of the PLX board are the two Optanes I mentioned. In theory, with 4 of those M.2 to x4 adapters I could connect 4 PCIe cards; one of them could even be another PLX with the NVMe devices on it. I’m not sure that would work, but I might try it out in the future just out of curiosity. First I’ll try to get the Mellanox card working; I’ll probably open another post later asking if anyone knows how to increase the BAR size.
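(If the photos aren’t clear enough, the same layout can be seen from the software side with the tree view of lspci — the root port, then the PLX switch, then each card behind it:)

    # print the PCIe topology as a tree: root port -> PLX switch -> NVMe/SAS/Mellanox
    sudo lspci -tv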

This is the PLX board: https://www.aliexpress.com/item/1005002019008079.html. I posted it here because it wouldn’t let me add more than two links.

Also, these are the bus speeds of the Mellanox and LSI cards, respectively:
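(These come from the link capability/status fields that lspci reports; to check on your own setup, replace 05:00.0 with the card’s address:)

    # negotiated link speed/width (LnkSta) vs. what the card supports (LnkCap)
    sudo lspci -vv -s 05:00.0 | grep -E 'LnkCap|LnkSta'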

Hi, I have an LSI 2308 and get the same issue you mentioned, but which part of the kernel did you compile?

I get a green LED light on my LSI card, but after waiting a looooong time, lspci shows nothing about this card, and my hard drives attached to it did not power on.

Here is my Armbian boot log:

Thanks for sharing that one, I’ve been looking for such a thing for a very long time! I guess a few machines will experience an upgrade soon…


What is your network or write speed to ZFS on an Odroid M1? Do you achieve Gigabit? If yes, I assume the ZFS pools run without native encryption?
I have seen that CPU usage is very high on lower-end CPUs even without encryption, and I could barely get Gigabit speed over the network to a NAS.
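(In case it helps to compare numbers, this is roughly how I’d check whether encryption is actually on and what the raw local write speed looks like — the pool/dataset names are placeholders:)

    # is native encryption enabled on the dataset? (pool/dataset names are placeholders)
    zfs get encryption tank/share
    # rough local write test (~4 GB); note that zeroes compress away if compression is on
    dd if=/dev/zero of=/tank/share/testfile bs=1M count=4096 conv=fdatasync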