Rock 5B screen turns green - freezes

Hey there, last week my 5B finally arrived. I explored it a bit using the provided Debian image, and it worked fine for the time being. Yesterday I installed the Armbian image (because I need ffmpeg with hardware decoding, and would love encoding as well), which did work at first, but then I noticed that about 5 minutes after booting it would stop and become unresponsive. When attaching a screen and keyboard to it, I wouldn't get any output. After rebooting, it would work for another 5-10 minutes, then silently die again.
Now I started it up with a screen attached from the beginning, just to realize it boots but keeps complaining from the r8152 driver: get_registers: -71. Those messages pop up every couple of seconds on the terminal; I guess that's something to do with the networking, but it still works. However, after the 5-10 minute timeframe has passed, the Rock 5B just displays a dark green screen and stops responding. I rebooted to reproduce it and kept watching; there is no error message or anything. Where could the issue be? Could it be a power issue?
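As an aside, the -71 in that r8152 message is a negated Linux errno code; on Linux, 71 is EPROTO ("Protocol error"), which the USB stack reports for low-level transfer failures, often a flaky link rather than a driver bug. Easy to check from Python (on a Linux box, since errno numbers differ across platforms):

```python
import errno
import os

# Kernel messages like "get_registers: -71" carry a negated errno value.
code = 71
print(errno.errorcode[code])   # symbolic name of errno 71 (EPROTO on Linux)
print(os.strerror(code))       # human-readable description
```

On Linux this prints EPROTO / "Protocol error"; the USB core uses -EPROTO for transfers that failed at the protocol level.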
As for the IO:
I am powering the board with a USB Type-C docking station/hub, connected to the 65W USB PD power supply from my Asus ultrabook (it won't work with the Asus power supply plugged in directly); the dock is rated for 60W (it's a Renkforce one). I have two external SSDs connected via USB: one in a USB 3 enclosure connected to USB 3, the other in a USB 3 enclosure but only connected with a USB 2.0 cable, to limit the maximum power draw (if it can't read or write at full speed, it can't use maximum power, right?). Apart from that, it's only connected to a 1 GBit Ethernet switch.

Any ideas / suggestions where I could find information on how to get this working?

Btw, I looked at the logs in /var/log after rebooting; whatever happens, happens without warning and quickly enough to prevent the kernel/syslog from writing anything to the logs before freezing.
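When a box dies before syslog can flush to disk, two common tricks are persistent journald storage and netconsole, which streams kernel printk output over UDP to a second machine as it happens. A rough sketch (the IP addresses are placeholders and the interface name enP4p65s0 is taken from the log above; adjust both to your network):

```shell
# Make journald keep logs across reboots instead of the volatile default:
sudo mkdir -p /var/log/journal
sudo sed -i 's/^#\?Storage=.*/Storage=persistent/' /etc/systemd/journald.conf
sudo systemctl restart systemd-journald

# Stream kernel messages live to another machine via netconsole.
# Parameter format: <src-port>@<src-ip>/<iface>,<dst-port>@<dst-ip>/<dst-mac>
# (mac may be left empty to broadcast; IPs below are placeholders)
sudo modprobe netconsole netconsole=6665@192.168.1.50/enP4p65s0,6666@192.168.1.10/

# On the receiving machine, listen for the messages:
#   nc -u -l 6666

# After the next freeze and reboot, read the previous boot's kernel log:
journalctl -b -1 -k
```

Netconsole in particular tends to catch the last few seconds of a panic that never make it to disk, though it obviously depends on the NIC still working at that point.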

Okay, I've got a little more info now, so I can provide a kernel log (this time the screen didn't go green, though). This runs until the SSH connection to the Rock 5 failed; after that there shouldn't be much more except for a kernel core dump (which is still partially visible on screen). The last kernel message on screen is the kernel panic at 118.45 s, so I'm still missing about 8 seconds of log:

[   22.666413] IPv6: ADDRCONF(NETDEV_CHANGE): enP4p65s0: link becomes ready
[  105.564971] rga_job: rga_request_wait timeout
[  105.564989] rga_job: reset core[4] by request abort
[  105.568883] rga_job: request[4] abort! finished 0 failed 0 running_abort 1 todo_abort 0
[  105.575637] rga_job: rga request commit failed!
[  105.579020] rga: request[4] submit failed!
[  106.658499] rga_job: rga_request_wait timeout
[  106.658520] rga_job: reset core[4] by request abort
[  106.664206] rga_job: request[5] abort! finished 0 failed 0 running_abort 1 todo_abort 0
[  106.674524] rga_job: rga request commit failed!
[  106.679650] rga: request[5] submit failed!
[  107.458449] rkvdec2_ccu_link_timeout_work:1564: task timeout
[  107.462678] rkvdec2_ccu_link_timeout_work:1564: task timeout
[  107.464701] mpp_rkvdec2 fdc48100.rkvdec-core: resetting...
[  107.464714] rkvdec2_soft_ccu_reset:1742: soft reset fail, int 00000020
[  107.466838] mpp_rkvdec2 fdc48100.rkvdec-core: reset done
[  107.466841] mpp_rkvdec2 fdc38100.rkvdec-core: resetting...
[  107.466934] mpp_rkvdec2 fdc38100.rkvdec-core: reset done
[  107.991736] rkvdec2_ccu_link_timeout_work:1564: task timeout
[  107.995272] rkvdec2_ccu_link_timeout_work:1564: task timeout
[  107.998696] mpp_rkvdec2 fdc48100.rkvdec-core: resetting...
[  107.998713] rkvdec2_soft_ccu_reset:1742: soft reset fail, int 00000020
[  108.002088] rkvdec2_soft_ccu_reset:1748: bus busy
[  108.005498] mpp_rkvdec2 fdc48100.rkvdec-core: reset done
[  108.005504] mpp_rkvdec2 fdc38100.rkvdec-core: resetting...
[  108.005568] mpp_rkvdec2 fdc38100.rkvdec-core: reset done
[  108.471609] rga_job: rga_request_wait timeout
[  108.471631] rga_job: reset core[4] by request abort
[  108.475157] rga_job: request[41] abort! finished 0 failed 0 running_abort 1 todo_abort 0
[  108.481325] rga_job: rga request commit failed!
[  108.484411] rga: request[41] submit failed!
[  108.525101] rkvdec2_ccu_link_timeout_work:1564: task timeout
[  108.530158] rkvdec2_ccu_link_timeout_work:1564: task timeout
[  108.535029] mpp_rkvdec2 fdc48100.rkvdec-core: resetting...
[  108.535055] rkvdec2_soft_ccu_reset:1742: soft reset fail, int 00000020
[  108.539898] rkvdec2_soft_ccu_reset:1748: bus busy
[  108.544797] mpp_rkvdec2 fdc48100.rkvdec-core: reset done
[  108.544808] mpp_rkvdec2 fdc38100.rkvdec-core: resetting...
[  108.544940] mpp_rkvdec2 fdc38100.rkvdec-core: reset done
[  109.094996] rkvdec2_ccu_link_timeout_work:1564: task timeout
[  109.100548] mpp_rkvdec2 fdc48100.rkvdec-core: resetting...
[  109.100570] rkvdec2_soft_ccu_reset:1742: soft reset fail, int 00000020
[  109.105742] rkvdec2_soft_ccu_reset:1748: bus busy
[  109.111213] mpp_rkvdec2 fdc48100.rkvdec-core: reset done
[  109.111222] mpp_rkvdec2 fdc38100.rkvdec-core: resetting...
[  109.111296] mpp_rkvdec2 fdc38100.rkvdec-core: reset done
[  109.591560] rga_job: rga_request_wait timeout
[  109.591572] rga_job: reset core[4] by request abort
[  109.592692] rga_job: request[42] abort! finished 0 failed 0 running_abort 1 todo_abort 0
[  109.594460] rga_job: rga request commit failed!
[  109.595379] rga: request[42] submit failed!
[  110.684871] rga_job: rga_request_wait timeout
[  110.684881] rga_job: reset core[4] by request abort
[  110.686038] rga_job: request[43] abort! finished 0 failed 0 running_abort 1 todo_abort 0
[  110.687862] rga_job: rga request commit failed!
[  110.688803] rga: request[43] submit failed!

Interestingly enough, this looks like a failure of the hardware video decoder, which shouldn't even be doing anything at this moment (I did not run anything, and I stopped the Jellyfin services right after boot).
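One way to confirm whether anything is actually touching the decoder would be to check who has the mpp device node open. A hedged sketch, assuming the BSP kernel exposes Rockchip's media process platform driver as /dev/mpp_service (the path is an assumption; check ls /dev for your kernel):

```shell
# /dev/mpp_service is the usual node for Rockchip's mpp driver on vendor
# kernels (an assumption; adjust the path if your node differs).
node=/dev/mpp_service
if [ -e "$node" ]; then
    # List the PIDs currently holding the decoder open:
    sudo fuser -v "$node"
else
    echo "no mpp device node found at $node"
fi
```

If fuser shows a PID even with Jellyfin stopped, something (a compositor, a thumbnailer, a leftover ffmpeg process) is still driving the decoder.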

any ideas?

Can you check the output of “sensors” to see if PD negotiation is successful?

Looks ok to me:

root@rock-5b:~# sensors
gpu_thermal-virtual-0
Adapter: Virtual device
temp1: +60.1 C
littlecore_thermal-virtual-0
Adapter: Virtual device
temp1: +61.9 C
bigcore0_thermal-virtual-0
Adapter: Virtual device
temp1: +61.0 C
tcpm_source_psy_4_0022-i2c-4-22
Adapter: rk3x-i2c
in0: 20.00 V (min = +20.00 V, max = +20.00 V)
curr1: 2.88 A (max = +2.88 A)
npu_thermal-virtual-0
Adapter: Virtual device
temp1: +61.0 C
center_thermal-virtual-0
Adapter: Virtual device
temp1: +60.1 C
bigcore1_thermal-virtual-0
Adapter: Virtual device
temp1: +61.0 C
soc_thermal-virtual-0
Adapter: Virtual device
temp1: +61.9 C (crit = +115.0 C)

OK, it’s not a PD negotiation failure then.

Seems unlikely, but not all USB / USB-C cables are the same. Sometimes, no matter how good the power supply is, things can still go wrong if a cable is too long or of poor quality.

Well, as a quick update: I have in the meantime reverted the image back to the Debian server image from Radxa, and it's been stable since. It's a shame that I have to give up on ffmpeg for now (I'm currently looking into building it myself), but at least the board is stable. I have also ordered a second one to fiddle around with (despite all the current issues I love the platform, and I'm confident that once mainline kernel support is there a lot of things will improve). I will keep this updated once the new 5B comes in and I get more opportunity to debug the issue.


Yeah, those error messages look like a kernel/driver problem rather than a power issue.

Tiny update: the PD readings from the sensors output are correct; I measured them with one of those man-in-the-middle USB PD current meters I have lying around. Obviously it doesn't draw the 2.88 A it lists for curr1 (it's somewhere in the range of 500 mA), but sensors probably just reports the maximum that was negotiated with the PD supply. The voltage is correct though, measured as 20.04 V.
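That reading makes sense: for the tcpm_source_psy entry, the curr1 "max" is the current ceiling of the negotiated PD contract, not a live measurement. A quick sanity check of the numbers (hard-coded from the sensors output and the meter readings above) shows the contract is close to the dock's 60 W rating while the actual draw is an order of magnitude lower:

```python
# Negotiated PD contract, from the tcpm_source_psy_4_0022 entry above:
negotiated_volts = 20.00
negotiated_amps = 2.88
contract_watts = negotiated_volts * negotiated_amps
print(f"negotiated contract: {contract_watts:.1f} W")  # 57.6 W

# Actual draw, from the inline USB PD meter ("somewhere around 500 mA"):
measured_volts = 20.04
measured_amps = 0.5
print(f"measured draw: {measured_volts * measured_amps:.1f} W")  # 10.0 W
```

So the board had roughly 57.6 W available and was nowhere near it, which fits the conclusion that this was a kernel/driver problem rather than a power one.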