"Penta SATA HAT" supported for "Rock 3a"?

Hello.

  1. Is “Penta SATA HAT” supported for “Rock 3a”?

  2. Does the device work immediately, or do you need to install an additional driver?

  3. If I plan to use only one 2.5″ SSD, do I need additional power?

Yes, the Penta SATA HAT supports ROCK 3A and runs as fast as on ROCK Pi 4.

For 3A support, we need to update the firmware of the Penta SATA HAT. Newly produced Penta SATA HATs are already updated. Do check with the distributor.

If you use only one 2.5″ HDD, you can power from the USB-C port on the ROCK 3A; make sure your PD power adapter is 24 W or more. You can also power from the 12 V DC jack on the Penta HAT.


**ROCK 3A with Penta SATA HAT: all 4 drives are operational in an Ubuntu Server environment plus Samba, but it's getting hot as the fan and display are not yet active :-1:**

I also feel that some kind of hiccup is going on, as the ROCK 3A seems to reset itself once in a while, but I'm still trying to track that down.
Question: might Debian be more stable?

Got this from the Cockpit system log:

Service: rockpi-penta.service (Rockpi SATA Hat)
Status: Failed to start
Start: automatic
Path: /lib/systemd/system/rockpi-penta.service
Requires: system.slice, -.mount, sysinit.target
Required by: multi-user.target
Conflicts: shutdown.target
Before: shutdown.target, multi-user.target
After: system.slice, systemd-journald.socket, -.mount, basic.target, sysinit.target

Service log, 23 December 2021 (newest first):
11:04 systemd: Failed to start Rockpi SATA Hat.
11:04 systemd: rockpi-penta.service: Failed with result 'exit-code'.
11:04 systemd: rockpi-penta.service: Start request repeated too quickly.
11:04 systemd: Stopped Rockpi SATA Hat.
11:04 systemd: rockpi-penta.service: Scheduled restart job, restart counter is at 5.
11:04 systemd: rockpi-penta.service: Failed with result 'exit-code'.
11:04 systemd: rockpi-penta.service: Main process exited, code=exited, status=1/FAILURE
11:04 python3: ModuleNotFoundError: No module named 'mraa'
11:04 python3: import mraa  # pylint: disable=import-error
11:04 python3: File "/usr/bin/rockpi-penta/fan.py", line 3, in <module>
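The root cause in this log is the missing Python module `mraa`, which the fan script imports. A quick way to confirm whether it is importable on the board (a hedged sketch; the reinstall command at the end is the one quoted later in this thread):

```shell
# Check whether the 'mraa' module needed by fan.py is importable.
if python3 -c 'import mraa' 2>/dev/null; then
    echo "mraa available"
else
    echo "mraa missing"
fi
# If it is missing, re-running the official setup script normally pulls
# the dependency back in:
#   curl -sL https://rock.sh/get-rockpi-penta | sudo -E bash -
```

Once the import succeeds, `sudo systemctl restart rockpi-penta.service` should bring the fan/display service back up.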

Reworked… Still no response from Radxa :frowning: Changed to Debian, no improvement here, and neither of the two serviced 3A distributions seems willing to cooperate with OMV 6. :frowning:

URGENT QUESTION: So what is that uncooled Penta HAT for, if the only way to use it as a NAS is to install a rather old Ubuntu Server version on an attractive hardware solution that might die of heat problems within a short time???

Can I update the firmware myself if there is an outdated version on sale?
Does the firmware update take place in the installed state or through a UART connection?

When did you buy your Penta SATA Hat? Maybe @setq has some tips on how to upgrade on ROCK 3A.

I haven’t bought it yet, I’m calculating all the options in advance.
I’ll clarify just in case.

If you buy it now, the firmware is updated already. Just plug in and it works with 3A.


Addendum: Just gave the install another try as there seemed to be an update, but…
#######################
2023-06-13T22:00:00Z
Anyone else experiencing transfers becoming bumpy above approx. 1.8 GB in a short sequence (2 min or so) and stopping/freezing completely until reboot above ~2 GB?
##########################
The current install looks fine now, see below… Any comments?
root@rock3a:/home/rock# curl -sL https://rock.sh/get-rockpi-penta | sudo -E bash -

*** Penta SATA Hat Install for ROCK Pi 4 / ROCK 3

*** Tested distributions:
*** - ROCK Pi 4
*** Armbian 20.05.4 focal
*** Armbian 20.05.3 buster
*** Debian 9 Desktop (radxa official image)
*** Ubuntu Server 18.04 (radxa official image)
*** - ROCK 3
*** Debian 10 Desktop (radxa official image)
*** Ubuntu Server 20.04 (radxa official image)

*** Please report problems to setq@radxa.com and we will try to fix.

deb http://apt.radxa.com/focal-testing/ focal main
OK
Hit:1 http://ports.ubuntu.com/ubuntu-ports focal InRelease
Get:2 http://ports.ubuntu.com/ubuntu-ports focal-security InRelease [114 kB]
Get:3 http://ports.ubuntu.com/ubuntu-ports focal-updates InRelease [114 kB]
Get:4 http://ports.ubuntu.com/ubuntu-ports focal-backports InRelease [108 kB]
Get:5 http://apt.radxa.com/focal-testing focal InRelease [2338 B]
Get:6 http://ports.ubuntu.com/ubuntu-ports focal-security/main arm64 Packages [1621 kB]
Get:7 http://ports.ubuntu.com/ubuntu-ports focal-security/main Translation-en [358 kB]
Get:8 http://ports.ubuntu.com/ubuntu-ports focal-security/restricted Translation-en [257 kB]
Get:9 https://repo.45drives.com/debian focal InRelease [6858 B]
Get:10 http://ports.ubuntu.com/ubuntu-ports focal-security/universe arm64 Packages [764 kB]
Get:11 http://ports.ubuntu.com/ubuntu-ports focal-security/universe Translation-en [174 kB]
Get:12 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 Packages [1929 kB]
Get:13 http://ports.ubuntu.com/ubuntu-ports focal-updates/main Translation-en [440 kB]
Get:14 http://ports.ubuntu.com/ubuntu-ports focal-updates/restricted Translation-en [272 kB]
Get:15 http://ports.ubuntu.com/ubuntu-ports focal-updates/universe arm64 Packages [993 kB]
Get:16 http://ports.ubuntu.com/ubuntu-ports focal-updates/universe Translation-en [255 kB]
Get:17 https://repo.45drives.com/debian focal/main amd64 Packages [51.3 kB]
Fetched 7460 kB in 5s (1556 kB/s)
Reading package lists… Done
running install
running bdist_egg
running egg_info
creating Adafruit_SSD1306.egg-info
writing Adafruit_SSD1306.egg-info/PKG-INFO
writing dependency_links to Adafruit_SSD1306.egg-info/dependency_links.txt
writing requirements to Adafruit_SSD1306.egg-info/requires.txt
writing top-level names to Adafruit_SSD1306.egg-info/top_level.txt
writing manifest file 'Adafruit_SSD1306.egg-info/SOURCES.txt'
reading manifest file 'Adafruit_SSD1306.egg-info/SOURCES.txt'
writing manifest file 'Adafruit_SSD1306.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-aarch64/egg
running install_lib
running build_py
creating build
creating build/lib
creating build/lib/Adafruit_SSD1306
copying Adafruit_SSD1306/__init__.py -> build/lib/Adafruit_SSD1306
copying Adafruit_SSD1306/SSD1306.py -> build/lib/Adafruit_SSD1306
creating build/bdist.linux-aarch64
creating build/bdist.linux-aarch64/egg
creating build/bdist.linux-aarch64/egg/Adafruit_SSD1306
copying build/lib/Adafruit_SSD1306/__init__.py -> build/bdist.linux-aarch64/egg/Adafruit_SSD1306
copying build/lib/Adafruit_SSD1306/SSD1306.py -> build/bdist.linux-aarch64/egg/Adafruit_SSD1306
byte-compiling build/bdist.linux-aarch64/egg/Adafruit_SSD1306/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.linux-aarch64/egg/Adafruit_SSD1306/SSD1306.py to SSD1306.cpython-38.pyc
creating build/bdist.linux-aarch64/egg/EGG-INFO
copying Adafruit_SSD1306.egg-info/PKG-INFO -> build/bdist.linux-aarch64/egg/EGG-INFO
copying Adafruit_SSD1306.egg-info/SOURCES.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
copying Adafruit_SSD1306.egg-info/dependency_links.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
copying Adafruit_SSD1306.egg-info/requires.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
copying Adafruit_SSD1306.egg-info/top_level.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
zip_safe flag not set; analyzing archive contents…
creating dist
creating 'dist/Adafruit_SSD1306-1.6.2-py3.8.egg' and adding 'build/bdist.linux-aarch64/egg' to it
removing 'build/bdist.linux-aarch64/egg' (and everything under it)
Processing Adafruit_SSD1306-1.6.2-py3.8.egg
Removing /usr/local/lib/python3.8/dist-packages/Adafruit_SSD1306-1.6.2-py3.8.egg
Copying Adafruit_SSD1306-1.6.2-py3.8.egg to /usr/local/lib/python3.8/dist-packages
Adafruit-SSD1306 1.6.2 is already the active version in easy-install.pth

Installed /usr/local/lib/python3.8/dist-packages/Adafruit_SSD1306-1.6.2-py3.8.egg
Processing dependencies for Adafruit-SSD1306==1.6.2
Searching for Adafruit-GPIO==1.0.6
Best match: Adafruit-GPIO 1.0.6
Processing Adafruit_GPIO-1.0.6-py3.8.egg
Adafruit-GPIO 1.0.6 is already the active version in easy-install.pth

Using /usr/local/lib/python3.8/dist-packages/Adafruit_GPIO-1.0.6-py3.8.egg
Searching for spidev==3.5
Best match: spidev 3.5
Processing spidev-3.5-py3.8-linux-aarch64.egg
spidev 3.5 is already the active version in easy-install.pth

Using /usr/local/lib/python3.8/dist-packages/spidev-3.5-py3.8-linux-aarch64.egg
Searching for Adafruit-PureIO==1.1.5
Best match: Adafruit-PureIO 1.1.5
Processing Adafruit_PureIO-1.1.5-py3.8.egg
Adafruit-PureIO 1.1.5 is already the active version in easy-install.pth

Using /usr/local/lib/python3.8/dist-packages/Adafruit_PureIO-1.1.5-py3.8.egg
Finished processing dependencies for Adafruit-SSD1306==1.6.2
/home/karl
(Reading database … 166168 files and directories currently installed.)
Preparing to unpack /tmp/tmp.Qw6FBu5K89 …
Removed /etc/systemd/system/multi-user.target.wants/rockpi-penta.service.
Unpacking rockpi-penta (0.10) over (0.10) …
Setting up rockpi-penta (0.10) …
Created symlink /etc/systemd/system/multi-user.target.wants/rockpi-penta.service → /lib/systemd/system/rockpi-penta.service.

Try to create a new topic

setq@radxa.com
NO FIX AVAILABLE YET?
Sorry, but I bought this crap because I had confidence that the third release of a ROCK 3A plus YOUR Penta SATA HAT would be somehow less buggy. It is a pain to maintain, as whenever you want to experiment with it you have to rip the whole thing apart just to install a new version of the software.
With the HAT mounted safely on top there is no access to the eMMC, and even changing the SD card is a tricky thing to do!
This very much reduces the fun factor of experimenting with such a nifty combination of SBC power and the capability for somewhat reliable data management.
**'til now I'm just disappointed :-(**
What a waste of time and energy to have this stuff sent around half the world, just to see that it is a bunch of maybe somehow WORKING things, thrown together just to feed a market that seems to have money to feed your "copy and paste" culture.
Shame on you. YOU can do better, I know! So get up and act! … at least start communicating in a trustworthy way!

CAN YOU PLEASE GIVE US A STATUS REPORT?

I just bought this same hardware and will try this solution as well. I hope I can get a stable setup, although this is a test, as I built an OMV NAS solution in late 2022 based on an x86-64 Intel µATX board.
Indeed, support from Radxa is low, very low. I will post my results, hoping I will have a better experience.
If the test ends up as a stable alternative with decent performance, I may switch to it and convert the x86-64 machine to a low-power workstation, only if…

ROCK 3A + Penta SATA HAT?
I also considered that, but it seems the ROCK 3A has 2x PCIe 3.0 lanes while the ROCK 4 has 4x PCIe 2.1 lanes, and I don't know what the HAT supports, so I don't know what speed the whole set is capable of.
Good luck, post your results.

I received the Penta SATA board yesterday and tried to assemble it on a new ROCK 3A with 8 GB RAM. I got some strange behavior with an unexpected reboot loop, so today I removed the SATA board, first reinstalled Debian to get rid of the problems, assembled it again, and tonight, aside from HDMI problems preventing use of the screen, it is correctly running in test mode with a temporary 2.5″ WD Black drive on top of the Penta HAT: yes, it is recognized and usable! I am still using a 65 W PD adapter connected to the USB-C of the ROCK SBC for the moment (it powers the HAT through the GPIO interface).


A quick test proves it basically works: I have created an ext4 partition and was able to read/write data to the disk.
I will have to wait for a few days since I have the drives for RAID 5 but not the case for the final setup, and the Flex ATX power supply cannot be used directly. The next steps will be final assembly, power supply setup, removal of GUI packages and installation of OpenMediaVault before the interesting part: performance tests…
I started to carefully note the different steps/problems/fixes to hopefully propose a detailed experience.
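For reference, a minimal read/write sanity check of the kind described above can be done with `dd` against a scratch file. This is only a sketch: the path and size below are arbitrary, and on the real array you would write to a mount point on the RAID volume instead:

```shell
# Write 64 MiB of zeros with an fsync, then read it back; a crude but
# quick way to confirm a filesystem is writable and to eyeball throughput
# (drop status=none to see the MB/s figures dd reports).
dd if=/dev/zero of=/tmp/penta_test.bin bs=1M count=64 conv=fsync status=none
dd if=/tmp/penta_test.bin of=/dev/null bs=1M status=none
stat -c '%s bytes written and read back' /tmp/penta_test.bin
rm -f /tmp/penta_test.bin
```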

Thanks for the report. With 4 drives you will need much more power. Also, stability is an issue; I just replaced the power adapter with a 12 V / 5 A one to check if this was the core of my problems on the Quad SATA HAT.

I'm waiting for speed tests as well as link information. ROCK 4 should have 4x PCIe 2.0 lanes and ROCK 3 has 2x 3.0, so the overall M.2 bandwidth is the same on those two, but the link may be limited to 2x 2.0. Looking forward to your results :slight_smile:

I think the SATA HAT is correctly recognized, but I was unable to check the PCIe link; I might dig on that side later.
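One way to check the negotiated PCIe link on the board is `sudo lspci -vv | grep 'LnkSta:'`. Since the real output is hardware-dependent, the snippet below demonstrates the parsing on a sample line (the values shown are illustrative; 8 GT/s x2 would be the expected PCIe 3.0 x2 link for a JMB585):

```shell
# Parse a sample LnkSta line; on the live board, replace the sample with
# the real output of: sudo lspci -vv | grep 'LnkSta:'
sample='LnkSta: Speed 8GT/s (ok), Width x2 (ok)'
echo "$sample" | grep -o 'Speed [^,]*'
echo "$sample" | grep -o 'Width [^ ]*'
```

A downtrained link (e.g. 5 GT/s or x1) would show up directly in those two fields.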
Well, I received all the hardware last Tuesday, but had to buy additional specific SATA male cables for the 3.5″ drives (Amazon saved me time on this). Unfortunately, after a simple temporary hardware setup and a long RAID 5 volume initialization (20 h), I noticed that one of the SATA drives disconnected when writing to it, destroying my first performance tests.
The SMART status, green at the very beginning, turned yellow: one of the new drives is faulty. I changed the port and then the cable, but the drive still disconnects, and this heavily degrades data transfers.
I do not have spare drives, as these three drives are already spares for my active NAS (I had one spare and bought two new identical drives), so I will ask for a replacement…
In the meantime, I planned to try simpler tests with direct disk accesses and then RAID 0 and RAID 1, but another quick test on the USB 3 port with an external pocket drive highlighted a bottleneck on the ROCK board itself that does not seem CPU-bound… maybe I should analyze the system installation first… So I think the road will be long, very long, before I can really finalize this little NAS project :frowning:


@tkaiser's excellent tool sbc-bench has a switch that shows link speeds. This gives a comfortable view of attached devices and information about degraded links.
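Besides sbc-bench, the negotiated link can also be read directly from sysfs. The device address below is a placeholder; substitute the JMB585's address as reported by `lspci`:

```shell
# Read the current PCIe link speed/width from sysfs, if present.
dev="/sys/bus/pci/devices/0000:01:00.0"   # placeholder PCI address
for f in current_link_speed current_link_width; do
    if [ -r "$dev/$f" ]; then
        echo "$f: $(cat "$dev/$f")"
    else
        echo "$f: not readable on this system"
    fi
done
```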

I also plan to use this device only as a backup for a bigger NAS, since right now it is not reliable enough to put any primary data on :confused:
There is also some new firmware available for the JMS chips, and I hope this will fix some errors.

Some progress in my preliminary tests of a RAID NAS server based on the Penta SATA HAT on ROCK 3A. The system is a Debian release adapted to OMV 6 by removing the GUI packages, booting from eMMC. The hardware is still in test mode, running in an open environment, but power is supplied by a Flex ATX PSU only through the Molex plug on the SATA HAT, and the HDDs are housed in a hot-swap enclosure.
Unfortunately, the status is currently a failure, as explained below.

Aside from a defect on one of the three SATA disks, I was able to test performance on the degraded RAID 5 array (running on 2 disks), with a transfer rate in read and write of about 100~110 MB/s for a large archive (12 GB), apparently CPU-bound (the cp and md0_raid5 processes entirely filling one A55 core), but at least it ran flawlessly. The test is similar from/to the eMMC module and a modern USB 3 external drive. My conclusion so far is that read and write speeds are almost identical, and using either a fast USB device or the onboard eMMC module as source (for write) or target (for read) does not matter.

I then switched the array to striping (RAID 0) to check if a higher transfer rate was possible, hoping this mode is less CPU-bound. This is where trouble quickly occurred: the copy blocks with recurrent kernel errors:

[ 313.679532] ata3.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x6 frozen
[ 313.679643] ata3.00: failed command: WRITE FPDMA QUEUED
[ 313.679696] ata3.00: cmd 61/00:00:00:18:30/08:00:00:00:00/40 tag 0 ncq dma 1048576 out
              res 40/00:01:04:03:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[ 313.679763] ata3.00: status: { DRDY }

and it happens on both SATA ports in use. I changed the plugged ports, checked the cables and replaced one of them, but it does not help; the same errors occur. I also replugged the FPC cable provided with the Penta SATA HAT, no change.
Finally, I plugged a single 2.5″ drive directly into one of the SATA ports to avoid the intermediate cables, as in my very first short test, but the same error also occurs after less than a minute… it seems my first test was too short (smaller files).
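To see how often the timeouts occur, counting the `WRITE FPDMA QUEUED` failures in the kernel log is a quick metric. The snippet below runs the count on a saved excerpt (taken from the errors quoted above); on the live board you would grep `dmesg` directly:

```shell
# Save a kernel log excerpt and count the NCQ write timeouts in it.
# On the board, use:  dmesg | grep -c 'WRITE FPDMA QUEUED'
cat > /tmp/ata_sample.log <<'EOF'
[  313.679532] ata3.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x6 frozen
[  313.679643] ata3.00: failed command: WRITE FPDMA QUEUED
[  313.679763] ata3.00: status: { DRDY }
EOF
grep -c 'WRITE FPDMA QUEUED' /tmp/ata_sample.log
```

A count that grows only under sustained writes points at the link or controller under load rather than a single bad sector.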

Testing the USB 3.0 port only, between eMMC and the external drive, i.e. independently of the SATA HAT, seems very stable: I was able to transfer a dozen GB of files several times in both directions without any error.
Performance is also bounded here, with read and write speeds of respectively about 90 and 60 MB/s (the USB drive's max speed is 100~110 MB/s), so again probably CPU-bound.

Since I checked the cables (they are not tightly plugged, but it seems safe enough), tested several of the SATA ports, and tried a single drive directly plugged in without any cable except the provided FPC cable, I think the problem can only come from one of these potential explanations:

  • the FPC cable or its connection to the Rock3 is not reliable
  • the SATA hat firmware/driver is buggy or my OS install is not correct
  • the SATA hat has a hardware problem

Any advice?

I don't think the problem lies in the SATA or FPC cables; they are rather OK. It has to be something about the system (kernel) and the JMB chipset/firmware. On the JMB561 (Quad SATA HAT) you can't get everything to work: either SMART or UAS will cause problems and drive disconnects. I've seen different problems on different kernels and firmwares, so I think you always have to double-check all of them.

JMB chipsets have their own utility for hardware RAID; maybe it's worth checking that. I would look at the speed parameters: maybe there is something like UAS on USB that can be disabled for tests? You may get different results with different kernels; if you have a good test case, then try an older 4.x as well as 5.x and maybe some mainline attempts.

Of course, there is still some firmware inside the JMB585: take a look at that, maybe there is a newer version that includes fixes. Even if you have the latest version, perform an upgrade (with the backup option, then compare the bin files). There are also some upgrades to disk firmwares that sometimes help; always worth checking too.

As I mentioned earlier, I'm looking to upgrade from the Quad to the Penta SATA kit, hoping the JMB585 is way better than a pair of JMB561s connected via USB. Hopefully you will manage to get it to work, and that should behave the same as some M.2 cards with the same JMB chip. I think I have an ASM card, but I need to verify that; if it's JMB, then all the issues will be the same as yours.

@dominik

Yes, the cables are probably OK. The problem obviously comes from stress on the transfer and is most probably linked to drivers/firmware. The JMB585 chip has no hardware RAID support and is reported by JMicron to use PCIe 3.0 x2 (https://www.jmicron.com/file/download/945/JMB585.pdf), which is exactly what the ROCK 3A offers, so theoretically it is just fine…

Unfortunately, my knowledge of kernels/drivers/firmware is quite limited. All I can say is that I installed the Radxa Debian 11 release and used the script provided by Radxa to set up the SATA HAT. The kernel version is 4.19, and I guess I would have to compile a 5.x flavor from the config file used for 4.19 if I want to test an alternate kernel without having to play with too many options.
I will also have to dig a little to get all the details on how the Penta SATA HAT is identified and which drivers exactly are in use (AHCI, but I don't know how this stuff works…). Any help on that is welcome :wink:

As I have received the replacement hard drive, the RAID 5 set of 3 drives is now rebuilt and, surprise, just as when running in degraded mode (2 disks), not a single kernel error anymore! It is quite slow but seemingly stable and steady, with transfer speeds a little above 100 MB/s in both read and write between the RAID array and the onboard eMMC. Again, clearly CPU-bound.
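For anyone following along, array health after a rebuild can be checked via `/proc/mdstat` (a hedged sketch; `/proc/mdstat` only exists when the md driver is loaded, and the `/dev/md0` name in the comment is an assumption):

```shell
# Show md RAID status if the md driver is present; on the board,
# 'mdadm --detail /dev/md0' gives a fuller per-array report.
if [ -r /proc/mdstat ]; then
    cat /proc/mdstat
else
    echo "no md arrays on this system"
fi
```

A healthy array shows `[UUU]` for all members; an underscore marks a missing or failed disk.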

Like you, I am testing this solution as a spare-time project, and there is no need for a perfect answer since I already have a good working solution based on Intel x86 12th gen. If possible, I plan to test RAID 0 and 5 and try to get the best stable performance. A child's playground only, so far :wink:

So yes, there is some room for investigation on the kernel/driver/firmware side, and I will have a few hours next week to dig into the subject. If you have another contact channel for more direct exchanges on the topic, please let me know.

Thanks again

@ydeletrain, try a 5.x kernel.
I bought an SSD drive in June 2022 and had problems with it.
After switching to a 5.x kernel, the problems disappeared.
SMART on the SSD showed CRC problems, as if it were a SATA cable issue or something.
Switch to a 5.x kernel right away.
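A quick way to confirm which kernel line you are actually running after the switch:

```shell
# Print the running kernel release and its major version; after moving
# from the 4.19 vendor kernel to a 5.x build, 'major' should report 5.
uname -r
major=$(uname -r | cut -d. -f1)
echo "major version: $major"
```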