Ubuntu 20.04 zfs network storage server


I configured the QUAD SATA KIT for Raspberry Pi 4 (8 GB) as an Ubuntu 20.04 arm64 network storage server with zfs.

First off, I am aware of the concerns regarding the use of zfs on USB-based drives, but I’m mostly using it as a media server, so the write speed shouldn’t be a major bottleneck.

The zpool is configured as RAID 1+0.
zpool create -o ashift=12 -f pool mirror sda sdc mirror sdb sdd
which I presume gives better performance, since writes are split across two USB buses for each vdev.
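For reference, the resulting layout can be checked with zpool status. Assuming the create command above succeeded, the vdev tree should look roughly like this (a sketch; device names will vary):

  pool
    mirror-0
      sda
      sdc
    mirror-1
      sdb
      sdd

Data is striped across the two mirrors, which is where the RAID 1+0 read/write benefit comes from.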

The kernel and root are on a microSD card; the zpool is purely for use as a NAS.

Setting up zfs was pretty straightforward once the SATA Hat was configured correctly. However, since the individual HDDs only come online some time after rockpi-sata.service starts, the default systemd boot sequence will always fail to import the configured zpool: the drives are not ready when zfs starts up.

Getting the system to reboot reliably has been a challenge, due to the need for the rockpi-sata drivers to be operational before zfs is initialized. I tried various combinations of systemd dependencies, but the biggest issue is that there is no rockpi-sata.target available to indicate that the SATA array has been brought up successfully.

In the end, I had to configure it using two separate dependencies:

  1. /etc/systemd/system/rockpi-sata.service.d/local.conf
    ExecStartPost=/bin/sleep 10
  2. /etc/systemd/system/zfs-load-module.service.d/local.conf

With this in place, the system defers zfs-import-cache until the SATA array has stabilized. systemd still complains that zfs-load-module failed to start due to unmet dependencies, but the rest of the zfs dependency chain somehow works itself out.
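For anyone trying to reproduce this, here is a minimal sketch of the two drop-in files. The contents of the second file were not shown above, so the ordering directives there are an educated guess, not my exact configuration:

  # /etc/systemd/system/rockpi-sata.service.d/local.conf
  [Service]
  ExecStartPost=/bin/sleep 10

  # /etc/systemd/system/zfs-load-module.service.d/local.conf
  # (assumed contents: order zfs-load-module after the SATA hat service)
  [Unit]
  After=rockpi-sata.service
  Requires=rockpi-sata.service

After creating or editing drop-ins, run sudo systemctl daemon-reload so systemd picks them up.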

I think it would be better if there were a rockpi-sata.target reached only after the service has started successfully, so that zfs-load-module.service could depend on it (and the 10-second sleep in the rockpi-sata ExecStartPost could be avoided). However, I’m not familiar enough with systemd configuration, so I hope the rockpi-sata drivers will implement this capability.
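As a sketch of what that could look like (hypothetical unit, not something the rockpi-sata package ships today), the driver could install a target bound to the service:

  # /etc/systemd/system/rockpi-sata.target (hypothetical)
  [Unit]
  Description=Rock Pi SATA array ready
  Requires=rockpi-sata.service
  After=rockpi-sata.service

zfs-load-module.service could then simply declare After=rockpi-sata.target in a drop-in, instead of relying on a fixed sleep.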

I don’t know whether that would be sufficient to fix the zfs-load-module failed-to-start error (which was not fatal), but getting rid of that message would make the system configuration look a lot more professional.


I have exactly the same problem. At first I thought the hat had a hardware issue, as the pool was suddenly gone after a reboot.

Mine worked for a while, reboots and all, then stopped. I poked around and agree that a more elegant solution is needed, ideally one that works out of the box. In hopes of proper systemd support being added, I took a different approach and used cron, because NFS was failing when ZFS didn’t import the pool, even though everything else started OK.

sudo crontab -e
@reboot sleep 5 && sudo /sbin/zpool import NAME_OF_POOL && sudo systemctl restart nfs-server.service

So, after a reboot, wait 5 seconds, import your zpool, restart NFS so that it sees the zpool.