SATA RAID Marvell 4-port 9235 benchmarks

OK, got some disks: 4x Integral P5 120 GB SSDs, which should mean the card is the bottleneck.
Couldn't afford anything more really, as x4 of anything gets a bit expensive.

Starting with Debian stretch 4.4-latest and mdadm, going to try a RAID10 first.
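
The create command for this one didn't make it into the paste; it would have been along these lines (matching the 4-device, 2-near-copies layout shown in /proc/mdstat below):

        # 4-disk RAID10 with the default near-2 layout, then ext4 on top
        sudo mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
        sudo mkfs.ext4 /dev/md0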

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    20355    28533    43513    44568    22556    28612
          102400      16    60835    71891   111520   107540    66074    71640
          102400     512   149988   129385   253123   263113   211684   131649
          102400    1024   161360   164943   274007   275765   253893   165764
          102400   16384   181646   182851   338294   347395   342601   176768

rock@rockpi4:~$ cat /proc/mdstat
Personalities : [raid0] [raid10]
md0 : active raid10 sdd[3] sdc[2] sdb[1] sda[0]
      234309632 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      [===========>.........]  resync = 55.4% (129931520/234309632) finish=8.5min speed=202624K/sec
      bitmap: 2/2 pages [8KB], 65536KB chunk
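
Running iozone while that initial resync is still going will drag the numbers down; if you want clean runs, mdadm can block until it finishes:

        # Wait for any resync/recovery on md0 to complete, then verify
        sudo mdadm --wait /dev/md0
        cat /proc/mdstat
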
Jul  4 13:46:45 rockpi4 kernel: [   75.172062] md0: detected capacity change from 479866126336 to 0
Jul  4 13:46:45 rockpi4 kernel: [   75.172628] md: md0 stopped.
Jul  4 13:46:45 rockpi4 kernel: [   75.173397] md: unbind<sda>
Jul  4 13:46:45 rockpi4 kernel: [   75.190852] md: export_rdev(sda)
Jul  4 13:46:45 rockpi4 kernel: [   75.191282] md: unbind<sdd>
Jul  4 13:46:45 rockpi4 kernel: [   75.206849] md: export_rdev(sdd)
Jul  4 13:46:45 rockpi4 kernel: [   75.207325] md: unbind<sdb>
Jul  4 13:46:45 rockpi4 udisksd[565]: Unable to resolve /sys/devices/virtual/block/md0/md/dev-sdb/block symlink
Jul  4 13:46:45 rockpi4 kernel: [   75.239056] md: export_rdev(sdb)
Jul  4 13:46:45 rockpi4 kernel: [   75.239439] md: unbind<sdc>
Jul  4 13:46:45 rockpi4 kernel: [   75.254837] md: export_rdev(sdc)
Jul  4 13:47:12 rockpi4 kernel: [  102.258308]  sdc: sdc1 sdc2
Jul  4 13:47:12 rockpi4 kernel: [  102.288150]  sdc: sdc1 sdc2
Jul  4 13:48:09 rockpi4 kernel: [  159.300017] md: bind<sda>
Jul  4 13:48:09 rockpi4 kernel: [  159.308923] md: bind<sdb>
Jul  4 13:48:09 rockpi4 kernel: [  159.319055] md: bind<sdc>
Jul  4 13:48:09 rockpi4 kernel: [  159.320188] md: bind<sdd>
Jul  4 13:48:09 rockpi4 kernel: [  159.326830] md/raid0:md0: md_size is 937238528 sectors.
Jul  4 13:48:09 rockpi4 kernel: [  159.327314] md: RAID0 configuration for md0 - 1 zone
Jul  4 13:48:09 rockpi4 kernel: [  159.327759] md: zone0=[sda/sdb/sdc/sdd]
Jul  4 13:48:09 rockpi4 kernel: [  159.328165]       zone-offset=         0KB, device-offset=         0KB, size= 468619264KB
Jul  4 13:48:09 rockpi4 kernel: [  159.328937]  sdc: sdc1 sdc2
Jul  4 13:48:09 rockpi4 kernel: [  159.329369] 
Jul  4 13:48:09 rockpi4 kernel: [  159.330145] md0: detected capacity change from 0 to 479866126336
Jul  4 13:48:09 rockpi4 udisksd[565]: Error creating watch for file /sys/devices/virtual/block/md0/md/sync_action: No such file or directory (g-file-error-quark, 4)
Jul  4 13:48:09 rockpi4 udisksd[565]: Error creating watch for file /sys/devices/virtual/block/md0/md/degraded: No such file or directory (g-file-error-quark, 4)
Jul  4 13:49:40 rockpi4 kernel: [  250.355809] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
Jul  4 13:55:31 rockpi4 kernel: [  601.335494] panel disable
Jul  4 14:02:26 rockpi4 anacron[1047]: Anacron 2.3 started on 2019-07-04
Jul  4 14:02:26 rockpi4 anacron[1047]: Normal exit (0 jobs run)
Jul  4 14:02:59 rockpi4 kernel: [ 1049.309314] md0: detected capacity change from 479866126336 to 0
Jul  4 14:02:59 rockpi4 kernel: [ 1049.309886] md: md0 stopped.
Jul  4 14:02:59 rockpi4 kernel: [ 1049.310176] md: unbind<sdd>
Jul  4 14:02:59 rockpi4 kernel: [ 1049.327147] md: export_rdev(sdd)
Jul  4 14:02:59 rockpi4 kernel: [ 1049.327821] md: unbind<sdc>
Jul  4 14:02:59 rockpi4 kernel: [ 1049.350959] md: export_rdev(sdc)
Jul  4 14:02:59 rockpi4 kernel: [ 1049.351512] md: unbind<sdb>
Jul  4 14:02:59 rockpi4 udisksd[565]: Unable to resolve /sys/devices/virtual/block/md0/md/dev-sdb/block symlink
Jul  4 14:02:59 rockpi4 kernel: [ 1049.366971] md: export_rdev(sdb)
Jul  4 14:02:59 rockpi4 kernel: [ 1049.367513] md: unbind<sda>
Jul  4 14:02:59 rockpi4 kernel: [ 1049.383124] md: export_rdev(sda)
Jul  4 14:03:21 rockpi4 kernel: [ 1071.066678]  sdc: sdc1 sdc2
Jul  4 14:03:21 rockpi4 kernel: [ 1071.092394]  sdc: sdc1 sdc2
Jul  4 14:05:23 rockpi4 kernel: [ 1193.551804] md: bind<sda>
Jul  4 14:05:23 rockpi4 kernel: [ 1193.552267]  sdc: sdc1 sdc2
Jul  4 14:05:23 rockpi4 kernel: [ 1193.552547] md: bind<sdb>
Jul  4 14:05:23 rockpi4 kernel: [ 1193.553780] md: bind<sdc>
Jul  4 14:05:23 rockpi4 kernel: [ 1193.554266] md: bind<sdd>
Jul  4 14:05:23 rockpi4 kernel: [ 1193.570556] md: raid10 personality registered for level 10
Jul  4 14:05:23 rockpi4 kernel: [ 1193.573138] md/raid10:md0: not clean -- starting background reconstruction
Jul  4 14:05:23 rockpi4 kernel: [ 1193.573765] md/raid10:md0: active with 4 out of 4 devices
Jul  4 14:05:23 rockpi4 kernel: [ 1193.575635] created bitmap (2 pages) for device md0
Jul  4 14:05:23 rockpi4 kernel: [ 1193.578102] md0: bitmap initialized from disk: read 1 pages, set 3576 of 3576 bits
Jul  4 14:05:23 rockpi4 kernel: [ 1193.581797] md0: detected capacity change from 0 to 239933063168
Jul  4 14:05:23 rockpi4 kernel: [ 1193.583297] md: md0 switched to read-write mode.
Jul  4 14:05:23 rockpi4 kernel: [ 1193.588652] md: resync of RAID array md0
Jul  4 14:05:23 rockpi4 kernel: [ 1193.589019] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
Jul  4 14:05:23 rockpi4 kernel: [ 1193.589541] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
Jul  4 14:05:23 rockpi4 kernel: [ 1193.590381] md: using 128k window, over a total of 234309632k.
Jul  4 14:25:02 rockpi4 kernel: [ 2372.292473] md: md0: resync done.
Jul  4 14:25:02 rockpi4 kernel: [ 2372.452970] RAID10 conf printout:
Jul  4 14:25:02 rockpi4 kernel: [ 2372.452989]  --- wd:4 rd:4
Jul  4 14:25:02 rockpi4 kernel: [ 2372.452998]  disk 0, wo:0, o:1, dev:sda
Jul  4 14:25:02 rockpi4 kernel: [ 2372.453005]  disk 1, wo:0, o:1, dev:sdb
Jul  4 14:25:02 rockpi4 kernel: [ 2372.453012]  disk 2, wo:0, o:1, dev:sdc
Jul  4 14:25:02 rockpi4 kernel: [ 2372.453019]  disk 3, wo:0, o:1, dev:sdd
Jul  4 14:30:45 rockpi4 kernel: [ 2715.470782] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)

RAID5

rock@rockpi4:~$ sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: /dev/sdc appears to be part of a raid array:
       level=raid0 devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sdc but will be lost or
       meaningless after creating array
mdadm: size set to 117154816K
mdadm: automatically enabling write-intent bitmap on large array
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
rock@rockpi4:~$ cat /proc/mdstat
Personalities : [raid10] [raid6] [raid5] [raid4]
md0 : active raid5 sdd[4] sdc[2] sdb[1] sda[0]
      351464448 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      [>....................]  recovery =  1.6% (1898560/117154816) finish=19.2min speed=99924K/sec
      bitmap: 0/1 pages [0KB], 65536KB chunk
Jul  4 14:49:52 rockpi4 kernel: [  491.913061] md: bind<sda>
Jul  4 14:49:52 rockpi4 kernel: [  491.913784] md: bind<sdb>
Jul  4 14:49:52 rockpi4 kernel: [  491.914381] md: bind<sdc>
Jul  4 14:49:52 rockpi4 kernel: [  491.914971] md: bind<sdd>
Jul  4 14:49:52 rockpi4 kernel: [  491.920396]  sdc: sdc1 sdc2
Jul  4 14:49:52 rockpi4 kernel: [  491.929530] async_tx: api initialized (async)
Jul  4 14:49:52 rockpi4 kernel: [  491.952339] md: raid6 personality registered for level 6
Jul  4 14:49:52 rockpi4 kernel: [  491.952833] md: raid5 personality registered for level 5
Jul  4 14:49:52 rockpi4 kernel: [  491.953316] md: raid4 personality registered for level 4
Jul  4 14:49:52 rockpi4 kernel: [  491.959926] md/raid:md0: device sdc operational as raid disk 2
Jul  4 14:49:52 rockpi4 kernel: [  491.960484] md/raid:md0: device sdb operational as raid disk 1
Jul  4 14:49:52 rockpi4 kernel: [  491.961025] md/raid:md0: device sda operational as raid disk 0
Jul  4 14:49:52 rockpi4 kernel: [  491.962943] md/raid:md0: allocated 4384kB
Jul  4 14:49:52 rockpi4 kernel: [  491.964488] md/raid:md0: raid level 5 active with 3 out of 4 devices, algorithm 2
Jul  4 14:49:52 rockpi4 kernel: [  491.965161] RAID conf printout:
Jul  4 14:49:52 rockpi4 kernel: [  491.965169]  --- level:5 rd:4 wd:3
Jul  4 14:49:52 rockpi4 kernel: [  491.965177]  disk 0, o:1, dev:sda
Jul  4 14:49:52 rockpi4 kernel: [  491.965183]  disk 1, o:1, dev:sdb
Jul  4 14:49:52 rockpi4 kernel: [  491.965188]  disk 2, o:1, dev:sdc
Jul  4 14:49:52 rockpi4 kernel: [  491.965603] created bitmap (1 pages) for device md0
Jul  4 14:49:52 rockpi4 kernel: [  491.966746] md0: bitmap initialized from disk: read 1 pages, set 1788 of 1788 bits
Jul  4 14:49:52 rockpi4 kernel: [  491.968765] md0: detected capacity change from 0 to 359899594752
Jul  4 14:49:52 rockpi4 kernel: [  491.969465] md: md0 switched to read-write mode.
Jul  4 14:49:52 rockpi4 kernel: [  491.969930] RAID conf printout:
Jul  4 14:49:52 rockpi4 kernel: [  491.969951]  --- level:5 rd:4 wd:3
Jul  4 14:49:52 rockpi4 kernel: [  491.969968]  disk 0, o:1, dev:sda
Jul  4 14:49:52 rockpi4 kernel: [  491.969984]  disk 1, o:1, dev:sdb
Jul  4 14:49:52 rockpi4 kernel: [  491.969997]  disk 2, o:1, dev:sdc
Jul  4 14:49:52 rockpi4 kernel: [  491.970009]  disk 3, o:1, dev:sdd
Jul  4 14:49:52 rockpi4 kernel: [  491.980149] md: recovery of RAID array md0
Jul  4 14:49:52 rockpi4 kernel: [  491.980523] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
Jul  4 14:49:52 rockpi4 kernel: [  491.981044] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
Jul  4 14:49:52 rockpi4 kernel: [  491.981894] md: using 128k window, over a total of 117154816k.
Jul  4 14:51:41 rockpi4 kernel: [  601.050246] panel disable
Jul  4 15:00:30 rockpi4 anacron[1052]: Anacron 2.3 started on 2019-07-04
Jul  4 15:00:30 rockpi4 anacron[1052]: Normal exit (0 jobs run)
Jul  4 15:05:53 rockpi4 kernel: [ 1453.287257] md: md0: recovery done.
Jul  4 15:05:53 rockpi4 kernel: [ 1453.567652] RAID conf printout:
Jul  4 15:05:53 rockpi4 kernel: [ 1453.567661]  --- level:5 rd:4 wd:4
Jul  4 15:05:53 rockpi4 kernel: [ 1453.567666]  disk 0, o:1, dev:sda
Jul  4 15:05:53 rockpi4 kernel: [ 1453.567670]  disk 1, o:1, dev:sdb
Jul  4 15:05:53 rockpi4 kernel: [ 1453.567674]  disk 2, o:1, dev:sdc
Jul  4 15:05:53 rockpi4 kernel: [ 1453.567677]  disk 3, o:1, dev:sdd
Jul  4 15:07:07 rockpi4 kernel: [ 1527.108599] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4     8159     8947    43789    42643    24543    10212
          102400      16    33078    40985    98244    98407    70763    41851
          102400     512    52870    53418   212184   202157   203772    50657
          102400    1024    66426    69555   250660   250200   249607    69539
          102400   16384   108537   112300   326090   324173   320777   106363

RAID1

rock@rockpi4:~$ sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 117155264K
mdadm: automatically enabling write-intent bitmap on large array
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
rock@rockpi4:~$ cat /proc/mdstat
Personalities : [raid10] [raid6] [raid5] [raid4] [raid1]
md0 : active raid1 sdb[1] sda[0]
      117155264 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  2.3% (2801408/117155264) finish=8.8min speed=215492K/sec
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
Jul  4 15:20:25 rockpi4 kernel: [ 2324.757953] md: bind<sda>
Jul  4 15:20:25 rockpi4 kernel: [ 2324.759742] md: bind<sdb>
Jul  4 15:20:25 rockpi4 kernel: [ 2324.772561] md: raid1 personality registered for level 1
Jul  4 15:20:25 rockpi4 kernel: [ 2324.783910] md/raid1:md0: not clean -- starting background reconstruction
Jul  4 15:20:25 rockpi4 kernel: [ 2324.784534] md/raid1:md0: active with 2 out of 2 mirrors
Jul  4 15:20:25 rockpi4 kernel: [ 2324.785261] created bitmap (1 pages) for device md0
Jul  4 15:20:25 rockpi4 kernel: [ 2324.787956] md0: bitmap initialized from disk: read 1 pages, set 1788 of 1788 bits
Jul  4 15:20:25 rockpi4 kernel: [ 2324.790798] md0: detected capacity change from 0 to 119966990336
Jul  4 15:20:25 rockpi4 kernel: [ 2324.791556] md: md0 switched to read-write mode.
Jul  4 15:20:25 rockpi4 kernel: [ 2324.794162] md: resync of RAID array md0
Jul  4 15:20:25 rockpi4 kernel: [ 2324.794546] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
Jul  4 15:20:25 rockpi4 kernel: [ 2324.795124] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
Jul  4 15:20:25 rockpi4 kernel: [ 2324.795964] md: using 128k window, over a total of 117155264k.
Jul  4 15:30:14 rockpi4 kernel: [ 2913.737079] md: md0: resync done.
Jul  4 15:30:14 rockpi4 kernel: [ 2913.745998] RAID1 conf printout:
Jul  4 15:30:14 rockpi4 kernel: [ 2913.746016]  --- wd:2 rd:2
Jul  4 15:30:14 rockpi4 kernel: [ 2913.746027]  disk 0, wo:0, o:1, dev:sda
Jul  4 15:30:14 rockpi4 kernel: [ 2913.746035]  disk 1, wo:0, o:1, dev:sdb
Jul  4 15:31:19 rockpi4 kernel: [ 2978.675630] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    24759    31559    39765    41196    25476    30710
          102400      16    62662    73245   124756   125744    62209    72778
          102400     512   139397   160038   260433   261606   218154   147652
          102400    1024   165815   155189   258119   261744   232643   164702
          102400   16384   172905   186702   318211   322998   321997   170680

RAID0

rock@rockpi4:~$ sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=4 /dev/sda /dev/sdb  /dev/sdc /dev/sdd
mdadm: chunk size defaults to 512K
mdadm: /dev/sdc appears to be part of a raid array:
       level=raid0 devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sdc but will be lost or
       meaningless after creating array
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
rock@rockpi4:~$ cat /proc/mdstat
Personalities : [raid10] [raid6] [raid5] [raid4] [raid1] [raid0]
md0 : active raid0 sdd[3] sdc[2] sdb[1] sda[0]
      468619264 blocks super 1.2 512k chunks

unused devices: <none>
Jul  4 15:38:35 rockpi4 kernel: [ 3415.084442] md: bind<sda>
Jul  4 15:38:35 rockpi4 kernel: [ 3415.085523] md: bind<sdb>
Jul  4 15:38:35 rockpi4 kernel: [ 3415.086511] md: bind<sdc>
Jul  4 15:38:35 rockpi4 kernel: [ 3415.087930] md: bind<sdd>
Jul  4 15:38:35 rockpi4 kernel: [ 3415.101830] md: raid0 personality registered for level 0
Jul  4 15:38:35 rockpi4 kernel: [ 3415.101836]  sdc: sdc1 sdc2
Jul  4 15:38:35 rockpi4 kernel: [ 3415.107953] md/raid0:md0: md_size is 937238528 sectors.
Jul  4 15:38:35 rockpi4 kernel: [ 3415.108427] md: RAID0 configuration for md0 - 1 zone
Jul  4 15:38:35 rockpi4 kernel: [ 3415.108866] md: zone0=[sda/sdb/sdc/sdd]
Jul  4 15:38:35 rockpi4 kernel: [ 3415.109261]       zone-offset=         0KB, device-offset=         0KB, size= 468619264KB
Jul  4 15:38:35 rockpi4 kernel: [ 3415.109973] 
Jul  4 15:38:35 rockpi4 kernel: [ 3415.110235] md0: detected capacity change from 0 to 479866126336
Jul  4 15:38:35 rockpi4 udisksd[572]: Error creating watch for file /sys/devices/virtual/block/md0/md/sync_action: No such file or directory (g-file-error-quark, 4)
Jul  4 15:38:35 rockpi4 udisksd[572]: Error creating watch for file /sys/devices/virtual/block/md0/md/degraded: No such file or directory (g-file-error-quark, 4)
Jul  4 15:41:08 rockpi4 kernel: [ 3568.278677] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    31874    42784    44859    48796    26191    42465
          102400      16    89104   112188   110570   114486    77652   111816
          102400     512   248787   259180   258800   270097   227197   229707
          102400    1024   309271   324243   293455   293122   268819   286143
          102400   16384   373574   382208   324869   326204   326070   380622

Concurrent single disks

        Command line used: iozone -l 4 -u 4 -r 16k -s 512M -F /home/rock/sda/tmp1 /home/rock/sdb/tmp2 /home/rock/sdc/tmp3 /home/rock/sdd/tmp4
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        Min process = 4
        Max process = 4
        Throughput test with 4 processes
        Each process writes a 524288 kByte file in 16 kByte records

        Children see throughput for  4 initial writers  =  468982.85 kB/sec
        Parent sees throughput for  4 initial writers   =  391562.16 kB/sec
        Min throughput per process                      =  115979.48 kB/sec
        Max throughput per process                      =  118095.79 kB/sec
        Avg throughput per process                      =  117245.71 kB/sec
        Min xfer                                        =  513488.00 kB

        Children see throughput for  4 rewriters        =  448753.70 kB/sec
        Parent sees throughput for  4 rewriters         =  378103.46 kB/sec
        Min throughput per process                      =  108174.91 kB/sec
        Max throughput per process                      =  119841.15 kB/sec
        Avg throughput per process                      =  112188.42 kB/sec
        Min xfer                                        =  472992.00 kB

        Children see throughput for  4 readers          =  319857.60 kB/sec
        Parent sees throughput for  4 readers           =  319587.93 kB/sec
        Min throughput per process                      =   78386.40 kB/sec
        Max throughput per process                      =   81170.33 kB/sec
        Avg throughput per process                      =   79964.40 kB/sec
        Min xfer                                        =  506336.00 kB

        Children see throughput for 4 re-readers        =  331737.53 kB/sec
        Parent sees throughput for 4 re-readers         =  331539.26 kB/sec
        Min throughput per process                      =   74617.11 kB/sec
        Max throughput per process                      =   90278.13 kB/sec
        Avg throughput per process                      =   82934.38 kB/sec
        Min xfer                                        =  433360.00 kB

        Children see throughput for 4 reverse readers   =  769042.86 kB/sec
        Parent sees throughput for 4 reverse readers    =  768023.53 kB/sec
        Min throughput per process                      =   43320.77 kB/sec
        Max throughput per process                      =  262961.66 kB/sec
        Avg throughput per process                      =  192260.72 kB/sec
        Min xfer                                        =   86384.00 kB

        Children see throughput for 4 stride readers    = 1795856.09 kB/sec
        Parent sees throughput for 4 stride readers     = 1781767.61 kB/sec
        Min throughput per process                      =   65569.88 kB/sec
        Max throughput per process                      =  920383.50 kB/sec
        Avg throughput per process                      =  448964.02 kB/sec
        Min xfer                                        =   37360.00 kB

        Children see throughput for 4 random readers    = 1971409.70 kB/sec
        Parent sees throughput for 4 random readers     = 1958188.18 kB/sec
        Min throughput per process                      =   69869.92 kB/sec
        Max throughput per process                      =  861175.75 kB/sec
        Avg throughput per process                      =  492852.43 kB/sec
        Min xfer                                        =   41904.00 kB

        Children see throughput for 4 mixed workload    = 1176863.17 kB/sec
        Parent sees throughput for 4 mixed workload     =  275991.88 kB/sec
        Min throughput per process                      =   98414.23 kB/sec
        Max throughput per process                      =  606498.81 kB/sec
        Avg throughput per process                      =  294215.79 kB/sec
        Min xfer                                        =   84304.00 kB

        Children see throughput for 4 random writers    =  428459.84 kB/sec
        Parent sees throughput for 4 random writers     =  318774.34 kB/sec
        Min throughput per process                      =   96696.56 kB/sec
        Max throughput per process                      =  118440.29 kB/sec
        Avg throughput per process                      =  107114.96 kB/sec
        Min xfer                                        =  428352.00 kB

        Children see throughput for 4 pwrite writers    =  467800.79 kB/sec
        Parent sees throughput for 4 pwrite writers     =  381736.33 kB/sec
        Min throughput per process                      =  111798.68 kB/sec
        Max throughput per process                      =  120814.23 kB/sec
        Avg throughput per process                      =  116950.20 kB/sec
        Min xfer                                        =  485168.00 kB

        Children see throughput for 4 pread readers     =  309714.87 kB/sec
        Parent sees throughput for 4 pread readers      =  309501.91 kB/sec
        Min throughput per process                      =   76447.56 kB/sec
        Max throughput per process                      =   79120.13 kB/sec
        Avg throughput per process                      =   77428.72 kB/sec
        Min xfer                                        =  506592.00 kB

        Children see throughput for  4 fwriters         =  442763.85 kB/sec
        Parent sees throughput for  4 fwriters          =  373418.60 kB/sec
        Min throughput per process                      =  107828.45 kB/sec
        Max throughput per process                      =  114495.70 kB/sec
        Avg throughput per process                      =  110690.96 kB/sec
        Min xfer                                        =  524288.00 kB

        Children see throughput for  4 freaders         =  331765.48 kB/sec
        Parent sees throughput for  4 freaders          =  325459.39 kB/sec
        Min throughput per process                      =   81387.83 kB/sec
        Max throughput per process                      =   86099.32 kB/sec
        Avg throughput per process                      =   82941.37 kB/sec
        Min xfer                                        =  524288.00 kB

single disk sda

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    36038    45031    52457    52672    27342    44553
          102400      16    93224   115531   124822   114115    79868   115219
          102400     512   249415   223799   267595   273488   227651   258480
          102400    1024   259449   236700   268852   273148   242803   266988
          102400   16384   313281   317096   324922   325600   319687   267843

single disk sdb

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    33918    45021    52628    52655    27404    44621
          102400      16   100152   106531   127148   115452    76579   113503
          102400     512   251035   259812   272338   273634   227332   225607
          102400    1024   260791   268019   273578   276074   241042   268323
          102400   16384   267448   316877   323467   324679   319983   316710

single disk sdc

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    36074    44819    52358    52592    23334    44073
          102400      16    92510   114568   127346   126830    72293   112819
          102400     512   220032   260191   271136   274745   225818   258574
          102400    1024   258895   228236   270047   271946   239184   267370
          102400   16384   312151   316425   318919   323689   317570   268308

single disk sdd

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    36100    44939    52756    52768    27569    42697
          102400      16   100207   111073   127120   118992    76555   105342
          102400     512   248869   259052   271718   272745   227450   258252
          102400    1024   226653   266979   262772   265104   236617   266018
          102400   16384   314211   269062   322937   325634   320150   315470

Also, if you are a plonker and forget to edit /boot/hw_intfc.conf, uncommenting #intfc:dtoverlay=pcie-gen2 to intfc:dtoverlay=pcie-gen2, you will be running at PCIe gen1 speed.
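
For reference, the edit itself is just uncommenting that line, then rebooting; something like this on the stock Rock Pi 4 image (path as above):

        # Uncomment the pcie-gen2 overlay so the link trains at gen2 after reboot
        sudo sed -i 's/^#intfc:dtoverlay=pcie-gen2/intfc:dtoverlay=pcie-gen2/' /boot/hw_intfc.conf
        sudo reboot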

RAID 10

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    11719    15447    55220    53720    25421    12773
          102400      16    39410    54840   139482   145128    81258    43792
          102400     512   228002   220126   334104   339660   265930   225507
          102400    1024   244376   243730   451377   462467   397566   258481
          102400   16384   270088   304411   597462   610057   615669   297855

RAID 5

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4     6133     6251    47505    46013    25046     8190
          102400      16    17103    17134   113272   133606    79753    20420
          102400     512    61418    50852   241860   246467   244030    58031
          102400    1024    79325    73325   363343   359830   361882    83655
          102400   16384   127548   124702   625256   642094   650407   136680

RAID 1

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    23713    29698    45608    45983    23657    30381
          102400      16    79205    82546   138060   144557    82126    93921
          102400     512   212859   221943   307613   304036   259783   179355
          102400    1024   235985   243783   366101   369935   317354   198861
          102400   16384   289036   290279   410520   398875   399868   295329

RAID 0

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    33519    47927    52701    51023    26700    46382
          102400      16   105763   132604   138080   155514    87026   135111
          102400     512   276220   320320   311343   294629   267624   335363
          102400    1024   493565   522038   463105   470833   398584   522560
          102400   16384   687516   701200   625733   623531   555318   681535

4 individual disks concurrent

        Command line used: iozone -l 4 -u 4 -r 16k -s 512M -F /srv/dev-disk-by-label-sda/tmp1 /srv/dev-disk-by-label-sdb/tmp2 /srv/dev-disk-by-label-sdc/tmp3 /srv/dev-disk-by-label-sdd/tmp4
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        Min process = 4
        Max process = 4
        Throughput test with 4 processes
        Each process writes a 524288 kByte file in 16 kByte records

        Children see throughput for  4 initial writers  =  884590.91 kB/sec
        Parent sees throughput for  4 initial writers   =  701620.17 kB/sec
        Min throughput per process                      =  195561.27 kB/sec
        Max throughput per process                      =  234457.59 kB/sec
        Avg throughput per process                      =  221147.73 kB/sec
        Min xfer                                        =  437344.00 kB

        Children see throughput for  4 rewriters        =  822771.77 kB/sec
        Parent sees throughput for  4 rewriters         =  701488.29 kB/sec
        Min throughput per process                      =  180381.25 kB/sec
        Max throughput per process                      =  232223.50 kB/sec
        Avg throughput per process                      =  205692.94 kB/sec
        Min xfer                                        =  408720.00 kB

        Children see throughput for  4 readers          =  755252.30 kB/sec
        Parent sees throughput for  4 readers           =  753357.02 kB/sec
        Min throughput per process                      =  169105.11 kB/sec
        Max throughput per process                      =  198976.81 kB/sec
        Avg throughput per process                      =  188813.07 kB/sec
        Min xfer                                        =  445664.00 kB

        Children see throughput for 4 re-readers        =  753492.39 kB/sec
        Parent sees throughput for 4 re-readers         =  750353.64 kB/sec
        Min throughput per process                      =  160626.64 kB/sec
        Max throughput per process                      =  201223.11 kB/sec
        Avg throughput per process                      =  188373.10 kB/sec
        Min xfer                                        =  418528.00 kB

        Children see throughput for 4 reverse readers   =  780261.86 kB/sec
        Parent sees throughput for 4 reverse readers    =  778761.55 kB/sec
        Min throughput per process                      =   58371.02 kB/sec
        Max throughput per process                      =  254657.08 kB/sec
        Avg throughput per process                      =  195065.47 kB/sec
        Min xfer                                        =  120192.00 kB

        Children see throughput for 4 stride readers    =  317923.62 kB/sec
        Parent sees throughput for 4 stride readers     =  316905.36 kB/sec
        Min throughput per process                      =   63171.63 kB/sec
        Max throughput per process                      =   98114.27 kB/sec
        Avg throughput per process                      =   79480.91 kB/sec
        Min xfer                                        =  337600.00 kB

        Children see throughput for 4 random readers    =  798898.78 kB/sec
        Parent sees throughput for 4 random readers     =  794905.95 kB/sec
        Min throughput per process                      =   57059.89 kB/sec
        Max throughput per process                      =  391248.59 kB/sec
        Avg throughput per process                      =  199724.70 kB/sec
        Min xfer                                        =   76480.00 kB

        Children see throughput for 4 mixed workload    =  647158.06 kB/sec
        Parent sees throughput for 4 mixed workload     =  491223.65 kB/sec
        Min throughput per process                      =   28319.04 kB/sec
        Max throughput per process                      =  305288.75 kB/sec
        Avg throughput per process                      =  161789.51 kB/sec
        Min xfer                                        =   48720.00 kB

        Children see throughput for 4 random writers    =  734947.98 kB/sec
        Parent sees throughput for 4 random writers     =  544531.66 kB/sec
        Min throughput per process                      =  167241.00 kB/sec
        Max throughput per process                      =  207134.38 kB/sec
        Avg throughput per process                      =  183737.00 kB/sec
        Min xfer                                        =  424704.00 kB

        Children see throughput for 4 pwrite writers    =  879712.72 kB/sec
        Parent sees throughput for 4 pwrite writers     =  686621.58 kB/sec
        Min throughput per process                      =  186624.69 kB/sec
        Max throughput per process                      =  236047.30 kB/sec
        Avg throughput per process                      =  219928.18 kB/sec
        Min xfer                                        =  415856.00 kB

        Children see throughput for 4 pread readers     =  777243.34 kB/sec
        Parent sees throughput for 4 pread readers      =  773302.81 kB/sec
        Min throughput per process                      =  184983.08 kB/sec
        Max throughput per process                      =  203392.77 kB/sec
        Avg throughput per process                      =  194310.84 kB/sec
        Min xfer                                        =  476896.00 kB

        Children see throughput for  4 fwriters         =  820877.50 kB/sec
        Parent sees throughput for  4 fwriters          =  693823.17 kB/sec
        Min throughput per process                      =  194228.28 kB/sec
        Max throughput per process                      =  217311.28 kB/sec
        Avg throughput per process                      =  205219.38 kB/sec
        Min xfer                                        =  524288.00 kB

        Children see throughput for  4 freaders         = 1924029.62 kB/sec
        Parent sees throughput for  4 freaders          = 1071393.99 kB/sec
        Min throughput per process                      =  268087.50 kB/sec
        Max throughput per process                      =  970331.94 kB/sec
        Avg throughput per process                      =  481007.41 kB/sec
        Min xfer                                        =  524288.00 kB

Single disk sda reference

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    35191    45728    56689    53307    27889    48508
          102400      16   104379   122405   154385   157484    88670   113964
          102400     512   315788   347042   351932   348604   271399   288430
          102400    1024   358399   366194   388893   379453   338470   369888
          102400   16384   353154   443256   425396   422384   410580   444530

It's a datum, that's all, but doh: for the first set I forgot to edit /boot/hw_intfc.conf, as this is a fresh copy of OMV stretch.
Interested to see what the RAID0 will be.

Unlike USB RAID, which is a "forget it", this seems extremely stable and strong.

Yeah :slight_smile: the PCIe gen1 speed surprisingly still outpaces 1 Gb Ethernet, but yeah, editing the config and enabling gen2 helps a lot :slight_smile:
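
If you want to confirm which generation the link actually negotiated, lspci shows it (the 01:00.0 address here is illustrative; find the card's real address with plain lspci first):

        # Locate the SATA controller, then check the negotiated link speed
        lspci | grep -i sata
        sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
        # LnkSta "Speed 5GT/s" = PCIe gen2; "Speed 2.5GT/s" = gen1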

USB is great, but with complex arrangements such as RAID 10 or 5 you will notice errors on sync (see the sketch below).
Something to do with the latency or how USB works; the timing doesn't seem to play well with mdadm RAID.
It's not dependable, and it really doesn't like small blocks; the SATA card above copes with them, but USB really, really doesn't.
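
One knob that's sometimes worth trying for flaky resyncs is capping md's resync bandwidth; whether it actually helps with these USB timing issues is untested here (values and filename are illustrative):

        # Cap resync at ~50 MB/s per device for the current boot
        sudo sysctl -w dev.raid.speed_limit_max=50000
        # Or make it persistent
        echo 'dev.raid.speed_limit_max = 50000' | sudo tee /etc/sysctl.d/90-md-resync.conf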

Suppose you don't know of an alternative to mdadm for USB RAID?

PS: strange you mentioned having 2x PCIe, as I have been thinking the same.
Most devices only take up 2 lanes, like the card above, and the same goes for many SSDs.
The RK3399 has a single PCIe 2.1 root complex that, with a PCIe packet switch, can supply 1x 4-lane, 2x 2-lane or 4x 1-lane at full speed.

It depends on the device you are using, but 2x 2-lane PCIe 2.1 is very possible.

No, OMV has a module for it, but in terms of speed the difference is probably minimal; LVM is just maybe easier to administer.

I will give it a try. Did you check the RAID 0 mdadm results I just posted?

100 GB LVM stripe across 4 disks
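
Roughly how the stripe was built; a sketch assuming the volume group is the volg used in the lvcreate attempt further down, with the default 64 KiB stripe size (the LV name is illustrative):

        # Turn the four disks into PVs, pool them, then stripe an LV across all four
        sudo pvcreate /dev/sda /dev/sdb /dev/sdc /dev/sdd
        sudo vgcreate volg /dev/sda /dev/sdb /dev/sdc /dev/sdd
        sudo lvcreate -i 4 -I 64 -L 100G -n stripe100 volg
        sudo mkfs.ext4 /dev/volg/stripe100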

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    35658    47489    49756    52995    26121    45593
          102400      16    94734   126921   150461   151137    83124   127600
          102400     512   464453   505973   468821   471177   418778   519892
          102400    1024   513178   540603   509022   519001   459355   540782
          102400   16384   653031   490313   591320   608481   601552   669271

Slightly slower?!

Also, creating a raid5 is not going well, but I'm not that sure of LVM on this 4.4 kernel.

sudo lvcreate --type raid5 -i 4 -L 20G -n raid5_vol volg
Using default stripesize 64.00 KiB.
Insufficient suitable allocatable extents for logical volume raid5_vol: 5124 more required
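
That error is likely down to -i: for lvcreate --type raid5, -i counts data stripes only and LVM adds one parity device on top, so -i 4 asks for five PVs. With four disks, something like this should allocate (untested here):

        # 3 data stripes + 1 parity = 4 devices total
        sudo lvcreate --type raid5 -i 3 -L 20G -n raid5_vol volg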

Just to add: Raspberry Pi 4 using USB 3.0. I haven't added the new VLI firmware that stops the USB controller overheating yet, so this is pre-patch and probably faster than it will be afterwards.
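
When I get round to it, the VL805 (USB) firmware update on Raspbian goes through the rpi-eeprom tooling; roughly this, as I understand it (check rpi-eeprom-update -h first):

        # Check current vs latest bootloader/VL805 firmware, then stage the update
        sudo apt update && sudo apt install rpi-eeprom
        sudo rpi-eeprom-update        # status only
        sudo rpi-eeprom-update -a     # stage the update, applied at next reboot
        sudo reboot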

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    19730    23646    22413    20084    13203    15832
          102400      16    41233    52083    52350    52084    42446    51576
          102400     512   166590   187084   170468   171390   171175   187099
          102400    1024   179506   199145   179024   179845   179802   199135
          102400   16384   241009   203976   189705   190575   190688   210859

2 concurrent USB 3.0 disks

        Command line used: iozone -l 2 -u 2 -r 16k -s 512M -F /home/pi/sda/tmp1 /home/pi/sdb/tmp2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        Min process = 2
        Max process = 2
        Throughput test with 2 processes
        Each process writes a 524288 kByte file in 16 kByte records

        Children see throughput for  2 initial writers  =  327101.17 kB/sec
        Parent sees throughput for  2 initial writers   =  286339.91 kB/sec
        Min throughput per process                      =  160675.55 kB/sec
        Max throughput per process                      =  166425.62 kB/sec
        Avg throughput per process                      =  163550.59 kB/sec
        Min xfer                                        =  506176.00 kB

        Children see throughput for  2 rewriters        =  321104.42 kB/sec
        Parent sees throughput for  2 rewriters         =  285591.65 kB/sec
        Min throughput per process                      =  154710.28 kB/sec
        Max throughput per process                      =  166394.14 kB/sec
        Avg throughput per process                      =  160552.21 kB/sec
        Min xfer                                        =  495136.00 kB

        Children see throughput for  2 readers          = 1754070.94 kB/sec
        Parent sees throughput for  2 readers           = 1737363.73 kB/sec
        Min throughput per process                      =  803817.81 kB/sec
        Max throughput per process                      =  950253.12 kB/sec
        Avg throughput per process                      =  877035.47 kB/sec
        Min xfer                                        =  443488.00 kB

        Children see throughput for 2 re-readers        = 1746217.81 kB/sec
        Parent sees throughput for 2 re-readers         = 1735644.68 kB/sec
        Min throughput per process                      =  865502.62 kB/sec
        Max throughput per process                      =  880715.19 kB/sec
        Avg throughput per process                      =  873108.91 kB/sec
        Min xfer                                        =  515200.00 kB

        Children see throughput for 2 reverse readers   = 1812079.44 kB/sec
        Parent sees throughput for 2 reverse readers    = 1568699.48 kB/sec
        Min throughput per process                      =  888710.50 kB/sec
        Max throughput per process                      =  923368.94 kB/sec
        Avg throughput per process                      =  906039.72 kB/sec
        Min xfer                                        =  504592.00 kB

        Children see throughput for 2 stride readers    = 1860447.00 kB/sec
        Parent sees throughput for 2 stride readers     = 1855034.10 kB/sec
        Min throughput per process                      =  908043.06 kB/sec
        Max throughput per process                      =  952403.94 kB/sec
        Avg throughput per process                      =  930223.50 kB/sec
        Min xfer                                        =  499904.00 kB

        Children see throughput for 2 random readers    = 1783428.75 kB/sec
        Parent sees throughput for 2 random readers     = 1778619.73 kB/sec
        Min throughput per process                      =  891017.56 kB/sec
        Max throughput per process                      =  892411.19 kB/sec
        Avg throughput per process                      =  891714.38 kB/sec
        Min xfer                                        =  523472.00 kB

        Children see throughput for 2 mixed workload    = 1143513.59 kB/sec
        Parent sees throughput for 2 mixed workload     =  316544.38 kB/sec
        Min throughput per process                      =  319036.97 kB/sec
        Max throughput per process                      =  824476.62 kB/sec
        Avg throughput per process                      =  571756.80 kB/sec
        Min xfer                                        =  206192.00 kB

        Children see throughput for 2 random writers    =  284256.08 kB/sec
        Parent sees throughput for 2 random writers     =  122613.08 kB/sec
        Min throughput per process                      =   95420.17 kB/sec
        Max throughput per process                      =  188835.91 kB/sec
        Avg throughput per process                      =  142128.04 kB/sec
        Min xfer                                        =  265616.00 kB

        Children see throughput for 2 pwrite writers    =  330696.61 kB/sec
        Parent sees throughput for 2 pwrite writers     =  284339.24 kB/sec
        Min throughput per process                      =  160255.59 kB/sec
        Max throughput per process                      =  170441.02 kB/sec
        Avg throughput per process                      =  165348.30 kB/sec
        Min xfer                                        =  493568.00 kB

        Children see throughput for 2 pread readers     = 1882564.19 kB/sec
        Parent sees throughput for 2 pread readers      = 1869604.09 kB/sec
        Min throughput per process                      =  915361.00 kB/sec
        Max throughput per process                      =  967203.19 kB/sec
        Avg throughput per process                      =  941282.09 kB/sec
        Min xfer                                        =  496144.00 kB

        Children see throughput for  2 fwriters         =  335876.62 kB/sec
        Parent sees throughput for  2 fwriters          =  294503.95 kB/sec
        Min throughput per process                      =  164153.69 kB/sec
        Max throughput per process                      =  171722.94 kB/sec
        Avg throughput per process                      =  167938.31 kB/sec
        Min xfer                                        =  524288.00 kB

        Children see throughput for  2 freaders         = 1733385.69 kB/sec
        Parent sees throughput for  2 freaders          = 1726750.31 kB/sec
        Min throughput per process                      =  865120.38 kB/sec
        Max throughput per process                      =  868265.31 kB/sec
        Avg throughput per process                      =  866692.84 kB/sec
        Min xfer                             

USB RAID 1

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    15469    18184    19542    16347    12336    18523
          102400      16    47840    43417    48496    48733    39534    54546
          102400     512   130787   111892   162961   163952   165146   119545
          102400    1024   123425   131900   167899   168281   171155   126468
          102400   16384   143742   149125   168974   180467   184600   143088

USB RAID 0
Probably not a good idea, as it crashed before iozone finished.

Jul  5 04:50:07 raspberrypi systemd-udevd[3338]: Spawned process '/bin/sh -c '/sbin/mdadm --examine --export /dev/sdb | /bin/sed s/^MD_/UDISKS_MD_MEMBER_/g'' [3345] is taking longer than 59s to complete
Jul  5 04:50:07 raspberrypi systemd-udevd[144]: md0: Worker [3330] processing SEQNUM=1618 is taking a long time
Jul  5 04:50:07 raspberrypi systemd-udevd[144]: sdb: Worker [3338] processing SEQNUM=1620 is taking a long time
Jul  5 04:50:08 raspberrypi kernel: [ 3880.892493] scsi host1: uas_eh_device_reset_handler start
Jul  5 04:50:08 raspberrypi kernel: [ 3880.894427] sd 1:0:0:0: [sdb] tag#16 uas_zap_pending 0 uas-tag 1 inflight: CMD 
Jul  5 04:50:08 raspberrypi kernel: [ 3880.894444] sd 1:0:0:0: [sdb] tag#16 CDB: opcode=0x28 28 00 00 00 00 08 00 00 08 00
Jul  5 04:50:08 raspberrypi kernel: [ 3880.894461] sd 1:0:0:0: [sdb] tag#17 uas_zap_pending 0 uas-tag 2 inflight: CMD 
Jul  5 04:50:08 raspberrypi kernel: [ 3880.894473] sd 1:0:0:0: [sdb] tag#17 CDB: opcode=0x28 28 00 0d f9 45 28 00 00 30 00
Jul  5 04:50:08 raspberrypi kernel: [ 3880.894488] sd 1:0:0:0: [sdb] tag#18 uas_zap_pending 0 uas-tag 3 inflight: CMD 
Jul  5 04:50:08 raspberrypi kernel: [ 3880.894501] sd 1:0:0:0: [sdb] tag#18 CDB: opcode=0x28 28 00 0d f9 45 60 00 00 50 00
Jul  5 04:50:08 raspberrypi kernel: [ 3880.894515] sd 1:0:0:0: [sdb] tag#19 uas_zap_pending 0 uas-tag 4 inflight: CMD 
Jul  5 04:50:08 raspberrypi kernel: [ 3880.894527] sd 1:0:0:0: [sdb] tag#19 CDB: opcode=0x28 28 00 0d f9 44 78 00 00 a8 00
Jul  5 04:50:08 raspberrypi kernel: [ 3880.894541] sd 1:0:0:0: [sdb] tag#20 uas_zap_pending 0 uas-tag 5 inflight: CMD 
Jul  5 04:50:08 raspberrypi kernel: [ 3880.894552] sd 1:0:0:0: [sdb] tag#20 CDB: opcode=0x28 28 00 0d f9 44 28 00 00 08 00
Jul  5 04:50:08 raspberrypi kernel: [ 3880.894566] sd 1:0:0:0: [sdb] tag#21 uas_zap_pending 0 uas-tag 6 inflight: CMD 
Jul  5 04:50:08 raspberrypi kernel: [ 3880.894577] sd 1:0:0:0: [sdb] tag#21 CDB: opcode=0x28 28 00 0d f9 44 38 00 00 10 00
Jul  5 04:50:08 raspberrypi kernel: [ 3880.894591] sd 1:0:0:0: [sdb] tag#22 uas_zap_pending 0 uas-tag 7 inflight: CMD 
Jul  5 04:50:08 raspberrypi kernel: [ 3880.894603] sd 1:0:0:0: [sdb] tag#22 CDB: opcode=0x28 28 00 0d f9 44 50 00 00 20 00
Jul  5 04:50:08 raspberrypi kernel: [ 3880.894617] sd 1:0:0:0: [sdb] tag#23 uas_zap_pending 0 uas-tag 8 inflight: CMD 
Jul  5 04:50:08 raspberrypi kernel: [ 3880.894628] sd 1:0:0:0: [sdb] tag#23 CDB: opcode=0x28 28 00 0d f9 45 b8 00 00 48 00
Jul  5 04:50:08 raspberrypi kernel: [ 3881.043392] usb 2-2: reset SuperSpeed Gen 1 USB device number 3 using xhci_hcd
Jul  5 04:50:08 raspberrypi kernel: [ 3881.078971] scsi host1: uas_eh_device_reset_handler success
Jul  5 04:50:38 raspberrypi kernel: [ 3911.612930] scsi host1: uas_eh_device_reset_handler start
Jul  5 04:50:38 raspberrypi kernel: [ 3911.614565] sd 1:0:0:0: [sdb] tag#23 uas_zap_pending 0 uas-tag 1 inflight: CMD 
Jul  5 04:50:38 raspberrypi kernel: [ 3911.614581] sd 1:0:0:0: [sdb] tag#23 CDB: opcode=0x28 28 00 0d f9 45 b8 00 00 48 00
Jul  5 04:50:38 raspberrypi kernel: [ 3911.614598] sd 1:0:0:0: [sdb] tag#24 uas_zap_pending 0 uas-tag 2 inflight: CMD 
Jul  5 04:50:38 raspberrypi kernel: [ 3911.614611] sd 1:0:0:0: [sdb] tag#24 CDB: opcode=0x28 28 00 0d f9 44 50 00 00 20 00
Jul  5 04:50:38 raspberrypi kernel: [ 3911.614626] sd 1:0:0:0: [sdb] tag#25 uas_zap_pending 0 uas-tag 3 inflight: CMD 
Jul  5 04:50:38 raspberrypi kernel: [ 3911.614638] sd 1:0:0:0: [sdb] tag#25 CDB: opcode=0x28 28 00 0d f9 44 38 00 00 10 00
Jul  5 04:50:38 raspberrypi kernel: [ 3911.614652] sd 1:0:0:0: [sdb] tag#26 uas_zap_pending 0 uas-tag 4 inflight: CMD 
Jul  5 04:50:38 raspberrypi kernel: [ 3911.614665] sd 1:0:0:0: [sdb] tag#26 CDB: opcode=0x28 28 00 0d f9 44 28 00 00 08 00
Jul  5 04:50:38 raspberrypi kernel: [ 3911.614678] sd 1:0:0:0: [sdb] tag#27 uas_zap_pending 0 uas-tag 5 inflight: CMD 
Jul  5 04:50:38 raspberrypi kernel: [ 3911.614690] sd 1:0:0:0: [sdb] tag#27 CDB: opcode=0x28 28 00 0d f9 44 78 00 00 a8 00
Jul  5 04:50:38 raspberrypi kernel: [ 3911.614704] sd 1:0:0:0: [sdb] tag#0 uas_zap_pending 0 uas-tag 6 inflight: CMD 
Jul  5 04:50:38 raspberrypi kernel: [ 3911.614716] sd 1:0:0:0: [sdb] tag#0 CDB: opcode=0x28 28 00 0d f9 45 60 00 00 50 00
Jul  5 04:50:38 raspberrypi kernel: [ 3911.614730] sd 1:0:0:0: [sdb] tag#1 uas_zap_pending 0 uas-tag 7 inflight: CMD 
Jul  5 04:50:38 raspberrypi kernel: [ 3911.614741] sd 1:0:0:0: [sdb] tag#1 CDB: opcode=0x28 28 00 0d f9 45 28 00 00 30 00
Jul  5 04:50:38 raspberrypi kernel: [ 3911.614755] sd 1:0:0:0: [sdb] tag#2 uas_zap_pending 0 uas-tag 8 inflight: CMD 
Jul  5 04:50:38 raspberrypi kernel: [ 3911.614767] sd 1:0:0:0: [sdb] tag#2 CDB: opcode=0x28 28 00 00 00 00 08 00 00 08 00
Jul  5 04:50:39 raspberrypi kernel: [ 3911.763829] usb 2-2: reset SuperSpeed Gen 1 USB device number 3 using xhci_hcd
Jul  5 04:50:39 raspberrypi kernel: [ 3911.799395] scsi host1: uas_eh_device_reset_handler success
Jul  5 04:51:09 raspberrypi kernel: [ 3942.333368] scsi host1: uas_eh_device_reset_handler start
Jul  5 04:51:09 raspberrypi kernel: [ 3942.334228] sd 1:0:0:0: [sdb] tag#3 uas_zap_pending 0 uas-tag 5 inflight: CMD 
Jul  5 04:51:09 raspberrypi kernel: [ 3942.334244] sd 1:0:0:0: [sdb] tag#3 CDB: opcode=0x28 28 00 0d f9 45 b8 00 00 48 00
Jul  5 04:51:09 raspberrypi kernel: [ 3942.334261] sd 1:0:0:0: [sdb] tag#4 uas_zap_pending 0 uas-tag 6 inflight: CMD 
Jul  5 04:51:09 raspberrypi kernel: [ 3942.334273] sd 1:0:0:0: [sdb] tag#4 CDB: opcode=0x28 28 00 00 00 00 08 00 00 08 00
Jul  5 04:51:09 raspberrypi kernel: [ 3942.334288] sd 1:0:0:0: [sdb] tag#5 uas_zap_pending 0 uas-tag 7 inflight: CMD 
Jul  5 04:51:09 raspberrypi kernel: [ 3942.334301] sd 1:0:0:0: [sdb] tag#5 CDB: opcode=0x28 28 00 0d f9 45 28 00 00 30 00
Jul  5 04:51:09 raspberrypi kernel: [ 3942.334315] sd 1:0:0:0: [sdb] tag#6 uas_zap_pending 0 uas-tag 8 inflight: CMD 
Jul  5 04:51:09 raspberrypi kernel: [ 3942.334328] sd 1:0:0:0: [sdb] tag#6 CDB: opcode=0x28 28 00 0d f9 45 60 00 00 50 00
Jul  5 04:51:09 raspberrypi kernel: [ 3942.484233] usb 2-2: reset SuperSpeed Gen 1 USB device number 3 using xhci_hcd
Jul  5 04:51:09 raspberrypi kernel: [ 3942.519869] scsi host1: uas_eh_device_reset_handler success
Jul  5 04:51:10 raspberrypi udisksd[364]: The function 'bd_md_examine' called, but not implemented!
Jul  5 04:51:34 raspberrypi kernel: [ 3966.825734] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
Jul  5 04:53:22 raspberrypi kernel: [ 4074.795287] sd 1:0:0:0: [sdb] tag#22 uas_eh_abort_handler 0 uas-tag 15 inflight: CMD IN 
Jul  5 04:53:22 raspberrypi kernel: [ 4074.795306] sd 1:0:0:0: [sdb] tag#22 CDB: opcode=0x28 28 00 00 04 84 00 00 04 00 00
Jul  5 04:53:22 raspberrypi kernel: [ 4074.795805] sd 1:0:0:0: [sdb] tag#21 uas_eh_abort_handler 0 uas-tag 14 inflight: CMD IN 
Jul  5 04:53:22 raspberrypi kernel: [ 4074.795818] sd 1:0:0:0: [sdb] tag#21 CDB: opcode=0x28 28 00 00 04 80 00 00 04 00 00
Jul  5 04:53:22 raspberrypi kernel: [ 4074.796433] sd 1:0:0:0: [sdb] tag#20 uas_eh_abort_handler 0 uas-tag 13 inflight: CMD IN 
Jul  5 04:53:22 raspberrypi kernel: [ 4074.796445] sd 1:0:0:0: [sdb] tag#20 CDB: opcode=0x28 28 00 00 04 7c 00 00 04 00 00
Jul  5 04:53:22 raspberrypi kernel: [ 4074.797055] sd 1:0:0:0: [sdb] tag#19 uas_eh_abort_handler 0 uas-tag 12 inflight: CMD IN 
Jul  5 04:53:22 raspberrypi kernel: [ 4074.797067] sd 1:0:0:0: [sdb] tag#19 CDB: opcode=0x28 28 00 00 04 78 00 00 04 00 00
Jul  5 04:53:22 raspberrypi kernel: [ 4074.797678] sd 1:0:0:0: [sdb] tag#18 uas_eh_abort_handler 0 uas-tag 11 inflight: CMD IN 
Jul  5 04:53:22 raspberrypi kernel: [ 4074.797690] sd 1:0:0:0: [sdb] tag#18 CDB: opcode=0x28 28 00 00 04 74 00 00 04 00 00
Jul  5 04:53:22 raspberrypi kernel: [ 4074.798306] sd 1:0:0:0: [sdb] tag#17 uas_eh_abort_handler 0 uas-tag 10 inflight: CMD IN 
Jul  5 04:53:22 raspberrypi kernel: [ 4074.798317] sd 1:0:0:0: [sdb] tag#17 CDB: opcode=0x28 28 00 00 04 70 00 00 04 00 00
Jul  5 04:53:22 raspberrypi kernel: [ 4074.798936] sd 1:0:0:0: [sdb] tag#16 uas_eh_abort_handler 0 uas-tag 9 inflight: CMD IN 
Jul  5 04:53:22 raspberrypi kernel: [ 4074.798947] sd 1:0:0:0: [sdb] tag#16 CDB: opcode=0x28 28 00 00 04 6c 00 00 04 00 00
Jul  5 04:53:22 raspberrypi kernel: [ 4074.799566] sd 1:0:0:0: [sdb] tag#15 uas_eh_abort_handler 0 uas-tag 8 inflight: CMD IN 
Jul  5 04:53:22 raspberrypi kernel: [ 4074.799578] sd 1:0:0:0: [sdb] tag#15 CDB: opcode=0x28 28 00 00 04 68 00 00 04 00 00
Jul  5 04:53:22 raspberrypi kernel: [ 4074.800194] sd 1:0:0:0: [sdb] tag#14 uas_eh_abort_handler 0 uas-tag 7 inflight: CMD IN 
Jul  5 04:53:22 raspberrypi kernel: [ 4074.800205] sd 1:0:0:0: [sdb] tag#14 CDB: opcode=0x28 28 00 00 04 64 00 00 04 00 00
Jul  5 04:53:22 raspberrypi kernel: [ 4074.800820] sd 1:0:0:0: [sdb] tag#13 uas_eh_abort_handler 0 uas-tag 6 inflight: CMD IN 
Jul  5 04:53:22 raspberrypi kernel: [ 4074.800831] sd 1:0:0:0: [sdb] tag#13 CDB: opcode=0x28 28 00 00 04 60 00 00 04 00 00
Jul  5 04:53:22 raspberrypi kernel: [ 4074.801448] sd 1:0:0:0: [sdb] tag#12 uas_eh_abort_handler 0 uas-tag 4 inflight: CMD IN 
Jul  5 04:53:22 raspberrypi kernel: [ 4074.801460] sd 1:0:0:0: [sdb] tag#12 CDB: opcode=0x28 28 00 00 04 54 00 00 04 00 00
Jul  5 04:53:22 raspberrypi kernel: [ 4074.802090] sd 1:0:0:0: [sdb] tag#11 uas_eh_abort_handler 0 uas-tag 2 inflight: CMD IN 
Jul  5 04:53:22 raspberrypi kernel: [ 4074.802101] sd 1:0:0:0: [sdb] tag#11 CDB: opcode=0x28 28 00 00 04 50 00 00 04 00 00
Jul  5 04:53:22 raspberrypi kernel: [ 4074.802728] sd 1:0:0:0: [sdb] tag#10 uas_eh_abort_handler 0 uas-tag 1 inflight: CMD IN 
Jul  5 04:53:22 raspberrypi kernel: [ 4074.802739] sd 1:0:0:0: [sdb] tag#10 CDB: opcode=0x28 28 00 00 04 58 00 00 04 00 00
Jul  5 04:53:22 raspberrypi kernel: [ 4074.803366] sd 1:0:0:0: [sdb] tag#9 uas_eh_abort_handler 0 uas-tag 5 inflight: CMD IN 
Jul  5 04:53:22 raspberrypi kernel: [ 4074.803378] sd 1:0:0:0: [sdb] tag#9 CDB: opcode=0x28 28 00 00 04 5c 00 00 04 00 00
Jul  5 04:53:22 raspberrypi kernel: [ 4074.804002] sd 1:0:0:0: [sdb] tag#8 uas_eh_abort_handler 0 uas-tag 3 inflight: CMD IN 
Jul  5 04:53:22 raspberrypi kernel: [ 4074.804014] sd 1:0:0:0: [sdb] tag#8 CDB: opcode=0x28 28 00 00 04 4c 00 00 04 00 00
Jul  5 04:53:22 raspberrypi kernel: [ 4074.865298] sd 1:0:0:0: [sdb] tag#24 uas_eh_abort_handler 0 uas-tag 17 inflight: CMD OUT 
Jul  5 04:53:22 raspberrypi kernel: [ 4074.865314] sd 1:0:0:0: [sdb] tag#24 CDB: opcode=0x2a 2a 00 05 62 0c 00 00 04 00 00
Jul  5 04:53:22 raspberrypi kernel: [ 4074.865622] sd 1:0:0:0: [sdb] tag#23 uas_eh_abort_handler 0 uas-tag 16 inflight: CMD OUT 
Jul  5 04:53:22 raspberrypi kernel: [ 4074.865634] sd 1:0:0:0: [sdb] tag#23 CDB: opcode=0x2a 2a 00 05 62 08 00 00 04 00 00
Jul  5 04:53:22 raspberrypi kernel: [ 4074.915232] scsi host1: uas_eh_device_reset_handler start
Jul  5 04:53:22 raspberrypi kernel: [ 4075.065881] usb 2-2: reset SuperSpeed Gen 1 USB device number 3 using xhci_hcd
Jul  5 04:53:22 raspberrypi kernel: [ 4075.100300] scsi host1: uas_eh_device_reset_handler success
Jul  5 04:53:52 raspberrypi kernel: [ 4105.515673] sd 1:0:0:0: [sdb] tag#21 uas_eh_abort_handler 0 uas-tag 1 inflight: CMD OUT 
Jul  5 04:53:52 raspberrypi kernel: [ 4105.515691] sd 1:0:0:0: [sdb] tag#21 CDB: opcode=0x2a 2a 00 06 e4 0a 38 00 00 18 00
Jul  5 04:53:52 raspberrypi kernel: [ 4105.565694] scsi host1: uas_eh_device_reset_handler start
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574255] sd 1:0:0:0: [sdb] tag#7 uas_zap_pending 0 uas-tag 4 inflight: CMD 
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574271] sd 1:0:0:0: [sdb] tag#7 CDB: opcode=0x28 28 00 00 04 5c 00 00 04 00 00
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574288] sd 1:0:0:0: [sdb] tag#8 uas_zap_pending 0 uas-tag 5 inflight: CMD 
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574300] sd 1:0:0:0: [sdb] tag#8 CDB: opcode=0x28 28 00 00 04 58 00 00 04 00 00
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574316] sd 1:0:0:0: [sdb] tag#11 uas_zap_pending 0 uas-tag 8 inflight: CMD 
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574328] sd 1:0:0:0: [sdb] tag#11 CDB: opcode=0x28 28 00 00 04 60 00 00 04 00 00
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574343] sd 1:0:0:0: [sdb] tag#12 uas_zap_pending 0 uas-tag 9 inflight: CMD 
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574355] sd 1:0:0:0: [sdb] tag#12 CDB: opcode=0x28 28 00 00 04 64 00 00 04 00 00
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574369] sd 1:0:0:0: [sdb] tag#13 uas_zap_pending 0 uas-tag 10 inflight: CMD 
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574381] sd 1:0:0:0: [sdb] tag#13 CDB: opcode=0x28 28 00 00 04 68 00 00 04 00 00
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574395] sd 1:0:0:0: [sdb] tag#14 uas_zap_pending 0 uas-tag 11 inflight: CMD 
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574407] sd 1:0:0:0: [sdb] tag#14 CDB: opcode=0x28 28 00 00 04 6c 00 00 04 00 00
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574421] sd 1:0:0:0: [sdb] tag#15 uas_zap_pending 0 uas-tag 12 inflight: CMD 
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574432] sd 1:0:0:0: [sdb] tag#15 CDB: opcode=0x28 28 00 00 04 70 00 00 04 00 00
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574446] sd 1:0:0:0: [sdb] tag#16 uas_zap_pending 0 uas-tag 13 inflight: CMD 
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574458] sd 1:0:0:0: [sdb] tag#16 CDB: opcode=0x28 28 00 00 04 74 00 00 04 00 00
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574472] sd 1:0:0:0: [sdb] tag#17 uas_zap_pending 0 uas-tag 14 inflight: CMD 
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574483] sd 1:0:0:0: [sdb] tag#17 CDB: opcode=0x28 28 00 00 04 78 00 00 04 00 00
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574497] sd 1:0:0:0: [sdb] tag#18 uas_zap_pending 0 uas-tag 15 inflight: CMD 
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574509] sd 1:0:0:0: [sdb] tag#18 CDB: opcode=0x28 28 00 00 04 7c 00 00 04 00 00
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574523] sd 1:0:0:0: [sdb] tag#19 uas_zap_pending 0 uas-tag 16 inflight: CMD 
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574534] sd 1:0:0:0: [sdb] tag#19 CDB: opcode=0x28 28 00 00 04 80 00 00 04 00 00
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574548] sd 1:0:0:0: [sdb] tag#20 uas_zap_pending 0 uas-tag 17 inflight: CMD 
Jul  5 04:53:52 raspberrypi kernel: [ 4105.574559] sd 1:0:0:0: [sdb] tag#20 CDB: opcode=0x28 28 00 00 04 84 00 00 04 00 00
Jul  5 04:53:53 raspberrypi kernel: [ 4105.726604] usb 2-2: reset SuperSpeed Gen 1 USB device number 3 using xhci_hcd
Jul  5 04:53:53 raspberrypi kernel: [ 4105.762236] scsi host1: uas_eh_device_reset_handler success
Jul  5 04:54:23 raspberrypi kernel: [ 4136.235955] sd 1:0:0:0: [sdb] tag#28 uas_eh_abort_handler 0 uas-tag 15 inflight: CMD OUT 
Jul  5 04:54:23 raspberrypi kernel: [ 4136.235973] sd 1:0:0:0: [sdb] tag#28 CDB: opcode=0x2a 2a 00 05 62 14 00 00 04 00 00
Jul  5 04:54:23 raspberrypi kernel: [ 4136.236297] sd 1:0:0:0: [sdb] tag#20 uas_eh_abort_handler 0 uas-tag 14 inflight: CMD OUT 
Jul  5 04:54:23 raspberrypi kernel: [ 4136.236311] sd 1:0:0:0: [sdb] tag#20 CDB: opcode=0x2a 2a 00 05 62 10 00 00 04 00 00
Jul  5 04:55:03 raspberrypi kernel: [ 4175.896398] INFO: task iozone:3409 blocked for more than 120 seconds.
Jul  5 04:55:03 raspberrypi kernel: [ 4175.896412]       Tainted: G        WC        4.19.50-v7l+ #895
Jul  5 04:55:03 raspberrypi kernel: [ 4175.896421] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul  5 04:55:03 raspberrypi kernel: [ 4175.896432] iozone          D    0  3409   3404 0x00000000
Jul  5 04:55:03 raspberrypi kernel: [ 4175.896476] [<c0990864>] (__schedule) from [<c0990ed4>] (schedule+0x50/0xa8)
Jul  5 04:55:03 raspberrypi kernel: [ 4175.896498] [<c0990ed4>] (schedule) from [<c0252078>] (io_schedule+0x20/0x40)
Jul  5 04:55:03 raspberrypi kernel: [ 4175.896521] [<c0252078>] (io_schedule) from [<c03f6698>] (__blockdev_direct_IO+0x2710/0x3bcc)
Jul  5 04:55:03 raspberrypi kernel: [ 4175.896542] [<c03f6698>] (__blockdev_direct_IO) from [<c0470894>] (ext4_direct_IO+0x394/0x790)
Jul  5 04:55:03 raspberrypi kernel: [ 4175.896564] [<c0470894>] (ext4_direct_IO) from [<c033ad0c>] (generic_file_read_iter+0x100/0xa78)
Jul  5 04:55:03 raspberrypi kernel: [ 4175.896587] [<c033ad0c>] (generic_file_read_iter) from [<c045a410>] (ext4_file_read_iter+0x44/0x5c)
Jul  5 04:55:03 raspberrypi kernel: [ 4175.896607] [<c045a410>] (ext4_file_read_iter) from [<c03af638>] (__vfs_read+0x10c/0x16c)
Jul  5 04:55:03 raspberrypi kernel: [ 4175.896624] [<c03af638>] (__vfs_read) from [<c03af734>] (vfs_read+0x9c/0x168)
Jul  5 04:55:03 raspberrypi kernel: [ 4175.896640] [<c03af734>] (vfs_read) from [<c03afd7c>] (ksys_read+0x74/0xe8)
Jul  5 04:55:03 raspberrypi kernel: [ 4175.896656] [<c03afd7c>] (ksys_read) from [<c03afe08>] (sys_read+0x18/0x1c)
Jul  5 04:55:03 raspberrypi kernel: [ 4175.896673] [<c03afe08>] (sys_read) from [<c0201000>] (ret_fast_syscall+0x0/0x28)
Jul  5 04:55:03 raspberrypi kernel: [ 4175.896684] Exception stack(0xccf19fa8 to 0xccf19ff0)
Jul  5 04:55:03 raspberrypi kernel: [ 4175.896698] 9fa0:                   00000020 0025f718 00000003 b5c00000 01000000 00000000
Jul  5 04:55:03 raspberrypi kernel: [ 4175.896712] 9fc0: 00000020 0025f718 01000000 00000003 00259cd0 001266d0 00080000 00000000
Jul  5 04:55:03 raspberrypi kernel: [ 4175.896724] 9fe0: 0006d030 bede9a38 00025ba4 b6f4a238

Mirrors are probably OK; I'm going to wait and see which way Raspberry Pi swing on the USB firmware update before applying anything.
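For reference, and not something I have applied here: a common workaround for these uas_eh_device_reset_handler resets is to push the bridge chip off the uas driver and back to plain usb-storage via a kernel quirk. The aaaa:bbbb id below is a placeholder; read the real vendor:product pair from lsusb.

        # identify the USB-SATA bridge's vendor:product id
        lsusb
        # then append to the kernel command line (/boot/cmdline.txt on a Pi, all one line)
        usb-storage.quirks=aaaa:bbbb:u
        # the 'u' flag makes the kernel ignore UAS for that device; reboot to apply

You trade some speed for not having the device reset mid-write.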

You can try RAID off a USB hub and the like, but my opinion, and what should be the general advice, is just don't bother, as it sucks.
JBOD and linear aggregation are OK, but don't mix USB and striped RAID.
OMV and other NAS products don't support USB RAID: over time it caused so many problems that they opted for an easier ride and dropped support.
You can take your pick on that, but mine, as said, is that JBOD / linear aggregation / RAID1 are probably OK; just don't mix USB with a striping array such as 0, 5, 6 or 10 (see the sketch below).
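To make that concrete, a minimal mdadm sketch of the two arrangements I would consider over USB; the device names /dev/sda and /dev/sdb are placeholders, so check lsblk first:

        # two-disk mirror (RAID1): one member dropping off the bus only degrades the array
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
        # linear concatenation (JBOD-style, no striping across members)
        mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sda /dev/sdb

With a stripe (0, 5, 6, 10), the same USB dropout breaks data interleaved across every disk.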

Dunno, I haven't tested the JMS585 as it's PCIe Gen3 x2 and I haven't seen anyone confirm whether it works.
It could be quite good though, as you could run a small SSD cache in front of a 4x RAID5/10 array, or a 5x RAID5/6.

The ASM1061 chipset works, as I have one here.

It's all about opinion, but I'm sticking to my guns on this one.
Don't use USB or hub combinations for striping RAID mechanisms, AKA 0, 5, 6 & 10.
For linear RAID, JBOD or mirrors (RAID1) it's OK.

There are reasons for this: check out the OMV or Armbian forums, or any article by TKaiser on the subject, and it's your choice whether to ignore their advice or not.


PS: I gave up on trying to install ZFS.

RAID 6

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    10940    12843    49342    49430    26391     9521
          102400      16    19715    23239   138314   136809    84627    18516
          102400     512    30892    28781   257714   260578   260894    28554
          102400    1024    45750    46988   383654   391368   391954    46640
          102400   16384    51026    49988   642723   647465   650003    50726

It's not likely RAID6 would be used on 4 disks, but hey.
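For the capacity side of that: with N disks of size d, RAID6 spends two disks' worth on parity, so

        $C_{\mathrm{usable}} = (N-2)\,d = (4-2) \times 120\,\mathrm{GB} = 240\,\mathrm{GB}$

which is the same usable space as the RAID10 above, only with the parity-write penalty you can see in the write columns.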

Ordered 2x port multipliers,
so I can see what the fanout is like from 2x 9235 ports to 2x JMB575.
It supposedly supports NCQ & FIS-based switching, so we'll just have to see.
With 2x SSDs the drop in performance to SATA2 should on average be minimal to none, so in that arrangement any hit should be indicative of the switching.

With spindles, 150 MB/s apiece still leaves headroom, and maybe even x16 is possible, but x8 is still quite substantial on 4 ports via 4 port multipliers.
I can do some benches that will be indicative via 2 ports and 2 port multipliers.
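The rough maths behind that, assuming about 600 MB/s per SATA3 host port and about 150 MB/s per spindle:

        $600 / 150 = 4$ spindles per port, so $4 \times 4 = 16$ across the card's four ports

SSDs at 400 to 500 MB/s saturate a port with one or two devices each, so any loss in the SSD benches should be down to the multiplier's FIS switching rather than raw bandwidth.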

While my order is on the way, thanks for checking this, because at first I wanted to create a RAID5 with 4 HDDs from the SATA adapter and 2 from USB; guess that's not gonna work out as well as I hoped. But is it possible to create a RAID1 or RAID0 from USB?


Sorry, but I'm interested in his errors, because I have a feeling I will meet them too.

Mirrors are fine with USB, but stripes often don't go well.

If you are going to do RAID (real RAID) you are not going to use USB. No one does.
If you are going to use alternatives, such as media stores with things like SnapRAID, they are set up for more asynchronous timing of stripe/volume sync.
SnapRAID just has separate parity, and that works, as do many others, but nah, sod USB for RAID.
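For anyone unfamiliar: SnapRAID computes parity when you ask it to (snapraid sync) rather than on every write, so a flaky USB link only risks the changes since the last sync. A minimal sketch of /etc/snapraid.conf; the mount points are placeholders:

        # one dedicated parity disk
        parity /mnt/parity1/snapraid.parity
        # content files hold the array metadata; keep more than one copy
        content /var/snapraid/snapraid.content
        content /mnt/disk1/snapraid.content
        # the data disks being protected
        data d1 /mnt/disk1/
        data d2 /mnt/disk2/

Then snapraid sync after changes and snapraid scrub periodically, typically from cron.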

Why anyone would do RAID0 on USB is just a bemusement to me, and I am not going to bother benchmarking it.
I will leave @anon77784915 to benchmark RAID0 over USB if he can be bothered, but why? Just JBOD it or go linear, not striping RAID.
USB transfers in huge packet clusters that just don't suit a stripe, and often just don't work well.

Unless you prove it works well and reliably by providing a benchmark, I think we will have to surmise that it doesn't.

Until someone does USB RAID successfully, the answer from me is that it does not.

With USB, 'unraid'-style methods are a much better implementation.

Unless you are going to provide some sort of evidence, your claims are hollow, and there is nothing rude about saying that.
You are wasting our time; until someone proves otherwise, USB RAID doesn't work well, and there are better methods for USB storage.


You just don't get it, do you cevap :)

Stop wasting our time.


I am not trolling; this argument has been done a million times in the Armbian forums.

It's nothing to do with ego, lols; you want to reread your posts, mate :)

This argument has been done to death; go to the Armbian forums and argue there.
You have made a stripe with near enough no advantage and all the disadvantages: it is easily broken, and then all data is lost.

USB RAID is not a good idea; if it were, its adoption would be wide-scale.
Think about it, and go argue with someone else on this one.
You have 2 USB ports and 2 devices; you have posted the best result you will get, and it's all downhill from here.

I am not going to argue, but just say: go and read why USB RAID is not a good idea.

@anon77784915 OMV does not allow you to add USB drives to a RAID.
They say it causes too many problems, and that is good enough for me.

You can write all you want about the mighty cevap; I knew it would just get to this pointless argument.
When vastly knowledgeable sources from OMV to Synology fervently say don't use USB, that is good enough for me.

These discussions have already been had, because Armbian supports OMV images, and they have already gone through and won the USB argument with the likes of you on several extremely long-winded occasions.

Yes, you are being ignored, as I am sticking to my principles about other people's data security, and I am saying: don't use USB for striping RAID.

lols, whatever mate.

Also, it doesn't, as your small-record-length random file transfers suck.


I am not really replying to you; ignore what I say, it's no problem, and please don't bother to reply.

RAID0 is a complete relic from spindle days, when we were stuck with cumbersome disks that had no fast alternative.
RAID0 has no redundancy or parity; each additional device you add lowers the MTBF by the reciprocal of the number of devices (worked through below).
That is just RAID0; I only added the RAID0 benchmark for reference, and I am certainly not suggesting anyone use it, as the benefits are pointless and the impact of total data failure via any single device is huge.
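To spell the reciprocal claim out: if members fail independently, a stripe is dead as soon as any one member dies, so roughly

        $\mathrm{MTBF}_{\mathrm{array}} \approx \mathrm{MTBF}_{\mathrm{disk}} / N$

e.g. two 1,000,000-hour drives in RAID0 give about 500,000 hours, and every member you add makes it worse while a single failure still loses the whole stripe.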

Why would anyone wish to use RAID mechanisms and USB to emulate what they can do with a single NVMe drive, which is now very similar in price to its slower SATA M.2 cousins?

If you check cevap's benchmarks on small 4k file transfers, you will see how USB's large-packet burst technology is not good for small files and concurrency, even when using the latest and greatest in SATA SSDs.
Setting up 2 drives, doing a single benchmark and then announcing to this forum 'USB RAID0 is GREAT' is doing the forum's community a great disservice.
It completely ignores that other members are saying this topic has been broached many times, and that the IT community in general sees USB RAID as bad, but more importantly, as totally unnecessary.

You have 2 choices: you can, like cevap for some reason, purchase 2 SATA SSDs plus USB 3.0 adapters and attach them in a RAID0.
That puts them on 2 individual USB roots, and it's the best performance you will get, as any further additions just multiplex the USB roots.
If you look at the benchmark provided, the top speeds do not rival NVMe, but what is much worse is that the small-record-length file benchmarks are absolutely stinky.
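The root-multiplexing point is just line-rate arithmetic, assuming USB 3.0 Gen 1 links:

        $5\,\mathrm{Gbit/s} \times 8/10 = 4\,\mathrm{Gbit/s} = 500\,\mathrm{MB/s}$ per root, less protocol overhead in practice

Two devices on two separate roots each get that full budget; add a third and members start sharing a root, so past two drives a USB stripe's per-member bandwidth only shrinks.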

It does make a fast large-file transfer device, but without any form of redundancy, and the MTBF (lowered by the reciprocal of the number of devices) is in this case halved.
In terms of usage this drive arrangement has zero advantages, and the huge implication of a single drive failure is total data loss.

Before you head the way of a whole, and quite large, community that tried to use USB RAID and failed due to strange events and random daily resyncs, just ask the question: why do I need it, and is it any good? Because the answer is no.
Then head to the OpenMediaVault forum, a superb open-source NAS software offering, and read the plethora of resync and failure problems that caused OMV to stop supporting USB RAID.
Then ignore, if you like, the fact that industry in general sees USB RAID as a poor implementation.

You have the benchmarks, and for common server-based local activities, say databases or web servers, the general pattern is small-record-length random reads and writes, and USB RAID, irrespective of drive, is relatively stinky at that, as kindly demonstrated by cevap.
It's also pretty stinky when it comes to multiple concurrent reads & writes, but unfortunately we don't have benchmarks for that.
All we have is a single 2-drive arrangement, a single benchmark, no long-term quality testing, and a huge back catalogue of problems being ignored.
All I can say is WTF, Batman; it's times like these I am happy not to live in Gotham City, unlike some others who seem to be impersonating The Joker.

USB RAID as an application store or high-concurrency store is stinky, but that doesn't mean USB storage is.
USB for single, low-concurrency, large-file transfers is blooming great, and the applications where you would use USB have zero reason to bring in the associated problems of RAID, as they often outweigh any benefit it provides.

I did some initial benchmarks purely as a reference of each RAID level against the others; just because I included one, please do not take that as advocacy of its use.
There is zero application benefit to RAID0; it is purely a legacy of slow spindles and has no place in modern technology, as its disadvantages greatly outweigh its advantages.
That is before we even get to USB!


It has nothing to do with being unhappy; it's that your claims are hollow and potentially dangerous and problematic.

Benchmark all you want, but don't claim that USB RAID0 is a valid method and advise the community so.

Your results were better this time, but still stinky on low-record-length random file transfers.

	Command line used: iozone -e -I -a -s 10M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
	Output is in kBytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 kBytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride                                    
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
           10240       4    20585    21836    22263    26672    15512    25338                                                          
           10240      16    45832    80887    87728    76571    51763    59171                                                          
           10240     512   220291   233844   187527   223619   214364   284010                                                          
           10240    1024   341766   378936   320099   329376   346906   318994          

What's worse are your claims that it's faster and better this way.
Someone else's NVMe bench, not mine (an Evo 250GB):

	Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
	Output is in kBytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 kBytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride                                    
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    70395    70811    92238    92886    46631    70865                                                          
          102400      16   141961   200984   236280   237984    147557   204966                                                          
          102400     512   500244   536614   466642   479526   467919   536983                                                          
          102400    1024   529906   542864   483609   489514   483614   542280                                                          
          102400   16384   652449   670618   628992   654369   644341   665665  

But the main point, reliability, is huge; I have no problem with your benchmarks, but in all honesty I have to say to the community: don't believe this guy's claims of reliability or suitability.
That is before we get to the fact that it's pointless and there are better methods, dependent on application.