Quad SATA HAT RAID 5 - Drive gets expelled from array

Upon reboot, one of the array members is removed from the array.

The array consists of four 1 TB SSDs. Should transferring files to the array cause the CPU to spike to 100%? All other services running on the Pi 4 (Portainer, Pi-hole, etc.) become unresponsive while files are being written to the RAID array. Is that normal?
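To see whether the spike comes from the md driver itself (on a degraded RAID 5 every write needs parity reconstruction, so the `md0_raid5` kernel thread can legitimately use a lot of CPU) or from a userspace service, a quick process snapshot during a transfer helps. This is only a generic sketch, not specific to this setup:

```shell
# Snapshot the top CPU consumers while a transfer is running.
# If md0_raid5 dominates, the cost is parity math on the degraded array;
# if a PHP/Nextcloud process dominates, the bottleneck is the application.
ps -eo pid,comm,%cpu --sort=-%cpu | head -n 6
```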

$ sudo mdadm --query --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Oct 15 21:11:45 2020
        Raid Level : raid5
        Array Size : 2929890816 (2794.16 GiB 3000.21 GB)
     Used Dev Size : 976630272 (931.39 GiB 1000.07 GB)
      Raid Devices : 4
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Oct 17 13:07:07 2020
             State : clean, degraded
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : docker:0  (local to host docker)
              UUID : f01cbb7d:b011c241:813faca4:e4de038d
            Events : 10440

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       16        1      active sync   /dev/sdb
       2       8       32        2      active sync   /dev/sdc
       4       8       48        3      active sync   /dev/sdd
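Putting the listings together: `blkid` still sees `/dev/sda` as a `linux_raid_member`, but mdadm shows slot 0 as removed, so `/dev/sda` is the kicked disk. A sketch of how to confirm the empty slot from the detail output, and (assuming the disk itself is healthy, which should be verified first) how a re-add would look:

```shell
# Pull the RaidDevice number of the removed slot from the table above.
detail='   -       0        0        0      removed
   1       8       16        1      active sync   /dev/sdb
   2       8       32        2      active sync   /dev/sdc
   4       8       48        3      active sync   /dev/sdd'
removed_slot=$(printf '%s\n' "$detail" | awk '/removed/ {print $4}')
echo "removed slot: $removed_slot"    # prints: removed slot: 0

# blkid still reports /dev/sda as a raid member, so it can be re-added;
# the internal write-intent bitmap should keep the resync short:
# sudo mdadm --manage /dev/md0 --re-add /dev/sda
```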

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd[4] sdc[2] sdb[1]
      2929890816 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [_UUU]
      bitmap: 6/8 pages [24KB], 65536KB chunk

unused devices: <none>
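The `[4/3] [_UUU]` fields are what a monitoring script would watch: 3 of 4 devices are up, and the underscore marks the missing slot. A minimal cron-able degraded-array check might look like this (the sample line is pasted from above; on the Pi you would read `/proc/mdstat` directly):

```shell
# Alert if any md array has a down member ("_" inside the [UUUU]-style
# status field). Illustrative sample data instead of /proc/mdstat:
mdstat='md0 : active raid5 sdd[4] sdc[2] sdb[1]
      2929890816 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [_UUU]'
if printf '%s\n' "$mdstat" | grep -q '\[[U_]*_[U_]*\]'; then
  echo "degraded array detected"
fi
```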

$ sudo blkid
/dev/mmcblk0p1: LABEL_FATBOOT="boot" LABEL="boot" UUID="592B-C92C" TYPE="vfat" PARTUUID="002973c0-01"
/dev/mmcblk0p2: LABEL="rootfs" UUID="706944a6-7d0f-4a45-9f8c-7fb07375e9f7" TYPE="ext4" PARTUUID="002973c0-02"
/dev/sdb: UUID="f01cbb7d-b011-c241-813f-aca4e4de038d" UUID_SUB="1f5e2936-fa9b-fd4b-e614-45d3e4908147" LABEL="docker:0" TYPE="linux_raid_member"
/dev/md0: UUID="5498f93b-c861-4d9d-8172-5c5e31407dec" TYPE="ext4"
/dev/sda: UUID="f01cbb7d-b011-c241-813f-aca4e4de038d" UUID_SUB="abca515c-11a1-0903-aa66-6b0b69609fae" LABEL="docker:0" TYPE="linux_raid_member"
/dev/sdd: UUID="f01cbb7d-b011-c241-813f-aca4e4de038d" UUID_SUB="0f5d98f5-46e4-2f65-1ed5-828a35683dc3" LABEL="docker:0" TYPE="linux_raid_member"
/dev/sdc: UUID="f01cbb7d-b011-c241-813f-aca4e4de038d" UUID_SUB="c1766858-e767-6215-c91f-dd1adf255803" LABEL="docker:0" TYPE="linux_raid_member"

If I test read/write speed at the command line, I get decent speeds, and CPU usage hovers around 50 to 65%.

Write Test:

root@docker:/mnt/md0# sync; dd if=/dev/zero of=/mnt/md0/tmpfile bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 7.87435 s, 136 MB/s

Read Test:

root@docker:/mnt/md0# dd if=tmpfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.7768 s, 604 MB/s
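One caveat on the numbers above: `dd` without any sync or direct-I/O flags measures the page cache as much as the disks, and a 1 GiB file that was just written may be read back largely from RAM, which would inflate the 604 MB/s figure. A fairer pair of tests (paths are examples; the flags are standard GNU dd) would be:

```shell
# Path is an example; point target at the RAID mount.
target=/mnt/md0/tmpfile

# Write test that includes the final flush in the timing (conv=fdatasync),
# so the page cache doesn't flatter the write number:
dd if=/dev/zero of="$target" bs=1M count=1024 conv=fdatasync

# Drop caches before the read test so data really comes off the disks:
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null
dd if="$target" of=/dev/null bs=1M count=1024
```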

I'm thinking the problem may be with Nextcloud; I'll investigate further. Regardless, I need to figure out why a random RAID member gets expelled during reboots.

> I need to figure out why a random raid member gets expelled during reboots

What power adapter are you using?

https://www.amazon.de/-/en/gp/product/B07JMRXCXC/ref=ppx_yo_dt_b_asin_image_o00_s00?ie=UTF8&psc=1
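Under-voltage is a common reason for drives dropping off a Pi, and the firmware records it: on Raspberry Pi OS, `vcgencmd get_throttled` returns a bitmask. A small sketch to decode the two under-voltage bits (the sample value here is made up for illustration):

```shell
# Decode the firmware throttle bitmask. Bit 0 = under-voltage right now,
# bit 16 = under-voltage has occurred since boot.
throttled=0x50005   # sample; on the Pi use: $(vcgencmd get_throttled | cut -d= -f2)
val=$((throttled))
[ $((val & 0x1)) -ne 0 ]     && echo "under-voltage now"
[ $((val & 0x10000)) -ne 0 ] && echo "under-voltage has occurred since boot"
```

If either bit is set, the power adapter (or the USB cable feeding the HAT) is the first suspect.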

I have exactly the same issue. Sometimes the RAID does not even start, but mostly it comes up with only 3 of the 4 HDDs.

How did you solve it?