Ubuntu 20.04 zfs network storage server


I configured the Quad SATA Kit for Raspberry Pi 4 (8 GB) as an Ubuntu 20.04 arm64 network storage server with ZFS.

First off, I am aware of the concerns regarding the use of ZFS on USB-attached drives, but I’m mostly using this as a media server, so write speed shouldn’t be a major bottleneck.

The zpool is configured as RAID 1+0.
zpool create -o ashift=12 -f pool mirror sda sdc mirror sdb sdd
which I presume should give better performance, since the writes for each vdev are split across two USB buses.
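To confirm the striped-mirror layout after creation, zpool itself can show the vdev tree. A sketch, using the pool name from the command above (these are admin commands run on the live system, so no output is shown here):

```shell
# Show the vdev tree: expect two mirror-N groups, each with two disks,
# one disk per USB bus in each mirror.
zpool status pool

# Per-vdev capacity and health summary.
zpool list -v pool

# Confirm the 4 KiB alignment (ashift=12) actually applied.
zpool get ashift pool
```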

The kernel and root filesystem are on microSD; the zpool is purely for use as a NAS.

Setting up ZFS was pretty straightforward once the SATA HAT was configured correctly. However, since the individual HDDs are only brought up some time after rockpi-sata.service starts, the default systemd boot sequence always fails to import the configured zpool: the drives are not ready when ZFS starts up.

Getting the system to reboot reliably has been a challenge, due to the need for the rockpi-sata drivers to be operational before zfs is initialized. I tried various combinations of systemd dependencies, but the biggest issue is that there is no rockpi-sata.target available to indicate that the SATA array has been brought up successfully.

In the end, I had to configure it using two separate dependencies:

  1. /etc/systemd/system/rockpi-sata.service.d/local.conf
    [Service]
    ExecStartPost=/bin/sleep 10
  2. /etc/systemd/system/zfs-load-module.service.d/local.conf
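The contents of the second drop-in aren’t shown above. A plausible sketch, assuming the intent is to order zfs-load-module after the SATA service (these exact directives are my assumption, not confirmed by the post):

```ini
# /etc/systemd/system/zfs-load-module.service.d/local.conf (assumed contents)
[Unit]
Requires=rockpi-sata.service
After=rockpi-sata.service
```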

The system is able to defer zfs-import-cache until the SATA array has stabilized. systemd still complains that zfs-load-module failed to start due to unmet dependencies, but the rest of the ZFS dependency chain somehow works itself out.

I think it would be better if a rockpi-sata.target were created after the service has started successfully, so that zfs-load-module.service can depend on it (and the 10-second sleep in the rockpi-sata ExecStartPost can be dropped). However, I’m not familiar enough with systemd configuration, so I hope the rockpi-sata drivers will implement this capability.
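A hypothetical sketch of what such a target could look like. Nothing like this ships with the rockpi-sata package today, and the unit names are my assumption:

```ini
# /etc/systemd/system/rockpi-sata.target (hypothetical)
[Unit]
Description=Rock Pi SATA array is up
Requires=rockpi-sata.service
After=rockpi-sata.service

[Install]
WantedBy=multi-user.target
```

zfs-load-module.service could then declare After=rockpi-sata.target instead of relying on a fixed sleep.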

I don’t know if that would be sufficient to solve the zfs-load-module failed-to-start error (which was not fatal), but getting rid of that message would make the system configuration look a lot more professional.


I have exactly the same problem. At first I thought the hat had some hardware issue, as the pool was suddenly gone after a reboot.

Mine worked for a while, reboots and all, then stopped. I poked around and I agree that a more elegant solution is needed, as out-of-the-box as possible. While hoping for a proper systemd fix, I took a different approach and used cron, because NFS was failing when ZFS didn’t import the pool, even though ZFS itself was otherwise starting OK.

sudo crontab -e
@reboot sleep 5 && sudo /sbin/zpool import NAME_OF_POOL && sudo systemctl restart nfs-server.service

So: after a reboot, wait 5 seconds, import the zpool, then restart NFS so that it sees the pool.
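If the fixed 5-second sleep proves fragile, the cron entry can call a small retry helper instead of a one-shot sleep. A sketch, assuming POSIX sh; the helper name and the tries/delay values are my own, and NAME_OF_POOL is the placeholder from the crontab line above:

```shell
#!/bin/sh
# retry CMD...: run CMD up to MAX_TRIES times, sleeping DELAY seconds
# between attempts; returns 0 on the first success, 1 if every attempt fails.
MAX_TRIES=12
DELAY=5

retry() {
    i=0
    while [ "$i" -lt "$MAX_TRIES" ]; do
        "$@" && return 0
        i=$((i + 1))
        sleep "$DELAY"
    done
    return 1
}

# Example use from an @reboot cron job: keep trying the import until the
# USB disks have appeared, then restart NFS.
# retry /sbin/zpool import NAME_OF_POOL && systemctl restart nfs-server.service
```

This keeps booting fast when the disks come up quickly, while still tolerating a slow SATA HAT bring-up.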

Two years down the track, on Ubuntu on an rpi4, I just want to say that I found this recipe “worked better” if I tied the systemd dependency to the zfs-import-cache service, not the zfs-load-module service.

Module loading is just how the kernel gains the ability to handle ZFS at all.

Cache import is when it actually scans for the disks. I found that putting the After=/Requires= stuff above into /etc/systemd/system/zfs-import-cache.service.d/local.conf instead worked better for me.
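In other words, the ordering drop-in moves to the zfs-import-cache service. A sketch of what that drop-in could contain, assuming the same Requires=/After= ordering on rockpi-sata.service as in the earlier posts (the directive choice is my assumption):

```ini
# /etc/systemd/system/zfs-import-cache.service.d/local.conf (assumed contents)
[Unit]
Requires=rockpi-sata.service
After=rockpi-sata.service
```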

Thanks. I’ll try it out to see if it makes any difference.
The more recent 20.04.3 kernel and systemd packages seem to be more stable.

Previously I had to manually fix all the circular-dependency ‘break’ issues that systemd throws up during boot. Nowadays the dependency breaks seem benign enough not to affect rockpi-sata.service loading.

At one point I was also getting USB bus driver faults, but those seem to have become rare with recent kernel releases.

magicpudding% cat /etc/issue
Ubuntu 21.10 \n \l


I’m rather “fresh” to the whole SATA hat + Ubuntu + ZFS setup …

I have 2 questions for the more experienced …

  1. How do I get the OLED display to show stats about the pool? Digging through the Python scripts I found several hardcoded references to /sys/block/bla … which obviously don’t work, since the ZFS pool isn’t a block device, to my understanding …
    So any hints on changing misc.py would be appreciated … I have to admit I haven’t got around to learning Python yet.

  2. Does anybody know a “suitable” replacement for the fan under the top hat? It’s rather loud, and at the same time the airflow feels “sub-optimal”: the reported temperature is often above 50°C, and that’s without spinning hard drives.
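On question 1: one approach is to query zpool directly instead of /sys/block. A sketch of a helper that misc.py could call; the function names and field choices are my own, not from the Radxa scripts:

```python
import subprocess

def parse_zpool_list(raw):
    """Parse one line of `zpool list -Hp -o name,size,alloc,free` output.

    -H drops the header row and -p prints exact byte counts, so the
    fields come back tab-separated: name, total size, allocated, free.
    """
    name, size, alloc, free = raw.strip().split("\t")
    return {
        "name": name,
        "size": int(size),
        "alloc": int(alloc),
        "free": int(free),
    }

def pool_stats(pool="pool"):
    """Return stats for `pool` by shelling out to zpool (needs zfsutils)."""
    raw = subprocess.check_output(
        ["zpool", "list", "-Hp", "-o", "name,size,alloc,free", pool],
        text=True,
    )
    return parse_zpool_list(raw)
```

The OLED code could then format `pool_stats()["free"]` into a human-readable string instead of reading per-disk /sys/block entries.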

For the problem reported above, I went with the solution of @9999 and built a cron job. But I use Samba, since my main systems are Windows machines, and I found that I don’t have to restart the smb service the way 9999 does for the nfs service … so I just import the pool after a short sleep delay.