This is the issue with the Quad SATA HAT and Raspbian

I figured it out, I think.
I keep getting emergency mode, a missing RAID, etc.
The issue is that the drives are not visible until the fan/LED service is loaded.
The drives are being mounted at boot before they are visible, so the system drops into emergency mode.
I just set up OMV with UnionFS and hit the same thing.
I found this when I restarted the service to change the temperature unit and flip the display:
the HDDs dropped out and gave errors.

Reboot is not an issue; it's when you shut down and power back up: always emergency mode and missing drives.

My first attempt was an mdadm RAID level 5 with 4 drives. Everything went fine until I shut down to put the top case back on. The service was stopped before the RAID was unmounted, and the drives were listed as failed and removed.

I gave up on RAID 5 with mdadm and set up UnionFS.

Same thing: everything was great until I shut down.
At reboot, the drives were not visible when the system tried to mount them, because the service for the LCD was not loaded yet. Emergency mode…

I had it fixed by mounting the drives after boot using the fix posted here with rc.local and a file, but that doesn't fix the shutdown issue, as the service is stopped before the drives are unmounted.

Starting to regret my purchase.



I’m sorry for the bad experience.

I would optimize the service or add scripts to hopefully solve this problem.

Same problem; each shutdown carries a high risk of bricking my current install :frowning:

I'm working to find a solution to avoid this issue. It seems OMV's filesystem management doesn't play well with the RPi here.

I had this same problem. The issue is that rockpi-sata.service must be running before you can mount the RAID. I couldn't figure out how to accomplish this while still using fstab to mount the drive, but I've figured out how to mount the RAID using systemd instead. systemd allows you to configure dependencies and boot order. Here is an article that helped me figure it out: Mounting Linux Volumes with systemd vs fstab. And here is my nas.mount file:



# Make 'systemctl enable nas.mount' work:
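Only that last comment of the unit file survived above; here is a minimal sketch of what such a nas.mount could look like. The device /dev/md0, the mount point /nas, and the ext4 filesystem are all assumptions, so adjust them to your setup:

```ini
# /etc/systemd/system/nas.mount (sketch: device, mount point and
# filesystem type are assumptions, adjust to your setup)
[Unit]
Description=Mount the NAS RAID array
# Wait for the SATA HAT service so the drives are visible first
Requires=rockpi-sata.service
After=rockpi-sata.service

[Mount]
What=/dev/md0
Where=/nas
Type=ext4
Options=defaults

# Make 'systemctl enable nas.mount' work:
[Install]
WantedBy=multi-user.target
```

The Requires=/After= pair is what fstab cannot express here: the mount is ordered strictly after the HAT service, so the drives exist when the mount is attempted.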

Very efficient setup, thank you. I will also add it to the services file of OMV (to be more secure).

I got the same problem today and then I found this post.

It has been 12 days since setq said they would update the script. Why haven't they done it yet?

Is there any way to save my installation and avoid redoing the whole thing again?

Please advise.

I used Ext2Fsd to mount my root filesystem on the SD card from Windows and commented out the mount commands in fstab. I then boot normally, having moved the mount commands to rc.local so they run after the HAT service starts.
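For reference, the rc.local approach could look like this sketch; the device names and mount points are assumptions, and per the post above rc.local runs late enough in boot that the HAT service has already started:

```shell
#!/bin/sh -e
# /etc/rc.local (sketch: devices and mount points are assumptions)
# This runs at the end of boot, after the SATA HAT service has
# brought the drives up, so these mounts no longer fail.
mount /dev/sda1 /mnt/disk1
mount /dev/sdb1 /mnt/disk2
exit 0
```

Note the trade-off discussed in this thread: this fixes boot, but does nothing for the shutdown ordering.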

Is it better to add them in rc.local or in systemd?

Yes, check this link: How i fixed the hang at boot and got my sata hat working with unionfs

I will prepare a wiki section on OMV + SATA HAT + disk management soon, once my system is stable :slight_smile:

Hi, I managed to make it work using this solution with my RAID 10 and the nas.mount file.

The problem is that after I rebooted, the RAID started resyncing again from 0.0%, and I have to wait 12 hours.

Did this happen because I created a new filesystem? Will it happen next time I reboot?

I can’t reboot till syncing is finished to test it again.

Did you use the UUID of the disk, or the label?

I have 4 x 2 TB disks, and I created a RAID 10 with mdadm (md0).

I created a filesystem via OMV named dm-0. Then I created a nas.mount file under /etc/systemd/system with the following configuration:




and I removed the entry from the fstab file.

I shut down the Pi, and after reboot the filesystem was automounted in OMV (this is what I was expecting), but the RAID started syncing again from 0.0%. I don't know what caused this.

I didn’t use the other solution with unionfs and rc.local file.

I'm not an expert in mdadm, but did you use UUIDs when you made the NAS with mdadm?
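For what it's worth, a full resync after every reboot can happen when the array isn't recorded in /etc/mdadm/mdadm.conf and gets re-assembled from scratch at boot. A hedged sketch of how to pin it by UUID (the device name /dev/md0 is an assumption):

```shell
# Append the array definition (including its UUID) to mdadm.conf
# so it assembles consistently at boot; /dev/md0 is an assumption.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
# Rebuild the initramfs so the updated config is seen early in boot:
sudo update-initramfs -u
```

This is a sketch, not a confirmed fix for the resync seen above, but it is the usual first thing to check.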


I would have created the RAID in OMV, but even though the disks were showing up, it couldn't find them under RAID management. Weird!

Hi. I tried to work around this issue with systemd as you proposed, but I get the following message in dmesg:

/etc/systemd/system/nas.mount:8: Where= path is not absolute, ignoring: UUID="f1bfba39-9471-420a-ae4f-8b317b609b2d"

Any idea why?

OK, after some reading I found out that the configuration file's name must match the mount point's path. This is my configuration, which works fine and is loaded by the system properly:




If your mount point is /home/nas, then the configuration file name should be home-nas.mount.
Beware of cases like mine where the mount point contains '-' characters: these must be written as \x2d in the filename.

So for my mount point /srv/dev-disk-by-label-storage, the filename is srv-dev\x2ddisk\x2dby\x2dlabel\x2dstorage.mount
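If it helps, systemd ships a helper that computes this escaped name for you, so you don't have to do the \x2d substitution by hand:

```shell
# systemd-escape turns a mount path into the matching unit name:
# '/' separators become '-', and literal '-' becomes \x2d.
systemd-escape -p --suffix=mount /srv/dev-disk-by-label-storage
# prints: srv-dev\x2ddisk\x2dby\x2dlabel\x2dstorage.mount
```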

Afterwards you need to run systemctl start, status, and enable for the mount file. For cases like mine with '-' in the filename, wrap the filename in single quotes so the shell doesn't eat the backslashes. For example:

systemctl start 'srv-dev\x2ddisk\x2dby\x2dlabel\x2dstorage.mount'
systemctl status 'srv-dev\x2ddisk\x2dby\x2dlabel\x2dstorage.mount'
systemctl enable 'srv-dev\x2ddisk\x2dby\x2dlabel\x2dstorage.mount'


Thanks for this tip, it works like a charm. The only thing I'm wondering now: does it properly unmount when you stop the rockpi-sata service? Or does it still just "pull the plug" on the disks?

Also may I suggest a:

sudo systemctl daemon-reload

after adding/editing the .mount files

EDIT: It seems the disks do get unmounted when stopping the rockpi-sata service, but they don't remount automatically when restarting the service. So I need to look into some kind of systemd "post-start" hook, if something like that exists.
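systemd does have such a hook: an ExecStartPost= line in a drop-in for the HAT service can start the mount again whenever the service (re)starts. A sketch, assuming the mount unit is called nas.mount (an assumption; substitute your own unit name):

```ini
# /etc/systemd/system/rockpi-sata.service.d/remount.conf (sketch)
# The unit name nas.mount is an assumption; substitute your own.
[Service]
# After the SATA HAT service (re)starts, bring the mount back up.
ExecStartPost=/bin/systemctl start nas.mount
```

After creating the drop-in, run sudo systemctl daemon-reload so it takes effect.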

Just in case this helps someone else, using the above:

# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.

# Make 'systemctl enable mnt-raidx.mount' work:

It mounts to /mnt/raidx; per the enable command above, the file name is mnt-raidx.mount.
Hi, on reboot it didn't auto-load, annoyingly.

How do I get that to happen?

When trying to enable it I get:
Failed to enable unit: File /mnt/raidx: Invalid argument

Your Where= is an invalid mount point.

Also, use md0's UUID in the What= section, to prevent inconsistencies between reboots.
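For illustration, referencing the array by UUID in a .mount unit could look like this sketch. The UUID shown is a placeholder (find yours with sudo blkid /dev/md0), and the mount point and filesystem type are assumptions:

```ini
# Sketch: refer to the array through /dev/disk/by-uuid so the mount
# survives the device name changing between boots.
# The UUID below is a placeholder; all values are assumptions.
[Mount]
What=/dev/disk/by-uuid/f1bfba39-9471-420a-ae4f-8b317b609b2d
Where=/mnt/raidx
Type=ext4
```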