ROCK 5B Debug Party Invitation

root@rock-5b:/home/rock# find /usr -name mali_csffw.bin
root@rock-5b:/home/rock# find /usr -name libmali.so
root@rock-5b:/home/rock# find /usr -name librga.so
/usr/lib/aarch64-linux-gnu/librga.so
root@rock-5b:/home/rock# ldd /usr/lib/aarch64-linux-gnu/librga.so
	linux-vdso.so.1 (0x0000007f971bc000)
	libdrm.so.2 => /lib/aarch64-linux-gnu/libdrm.so.2 (0x0000007f9713e000)
	libstdc++.so.6 => /lib/aarch64-linux-gnu/libstdc++.so.6 (0x0000007f96f59000)
	libm.so.6 => /lib/aarch64-linux-gnu/libm.so.6 (0x0000007f96eae000)
	libgcc_s.so.1 => /lib/aarch64-linux-gnu/libgcc_s.so.1 (0x0000007f96e8a000)
	libpthread.so.0 => /lib/aarch64-linux-gnu/libpthread.so.0 (0x0000007f96e59000)
	libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000007f96ce6000)
	/lib/ld-linux-aarch64.so.1 (0x0000007f9718c000)

I have not read anything about a Mali blob for the G610, and Mesa work on Valhall slowed down once the focus shifted to the Apple M1 GPU.
I think some G57 updates went into the last kernel release (5.18), but I’m not sure whether that will help the G610 at all even though it’s the same architecture family, and I don’t know what the state of play with the G57 is, as I haven’t got one of the MediaTek MT8192 devices that Alyssa at Collabora is doing testing on.
Mesa was romping ahead with Mali, then Apple Arm silicon arrived; it hasn’t stopped, but it’s definitely progressing at a slower pace than before.

The blobs seem to be:

libmali-valhall-g610-g6p0-gbm.so (GBM)
libmali-valhall-g610-g6p0-wayland.so (WAYLAND)
libmali-valhall-g610-g6p0-x11.so (X11)

Haven’t seen anything out in the wild yet, or a Linux benchmark; it all seems to be Android, but maybe.
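
If these blobs do surface for Linux, one common way to wire one up would be something like the sketch below (variant choice, install path and the libmali.so symlink scheme are assumptions on my part):

# pick the variant matching the display stack in use (GBM assumed here)
sudo cp libmali-valhall-g610-g6p0-gbm.so /usr/lib/aarch64-linux-gnu/
sudo ln -sf libmali-valhall-g610-g6p0-gbm.so /usr/lib/aarch64-linux-gnu/libmali.so
sudo ldconfig    # refresh the dynamic linker cache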

@stuartiannaylor
Take a look: https://youtu.be/XrEmmXMXzXU?t=302


Let’s be honest, I’m sure it’s certainly not easy. Yes, it would be nice, but if it’s not possible, so be it. Maybe an arrangement at 45 degrees like on the R6S above could help get the SoC closer to the center, if that is possible at all. I’m noting that they have less distance between the SoC and the DRAM chips; I don’t know if it’s a matter of the number of copper layers, different pinouts or routing constraints. And the fact that the SoC heats up very little makes the arrangement even less critical here.

BTW @hipboi I tried a number of different heat sinks, including various ones from north bridges and south bridges for Intel and AMD, but couldn’t find one that matches this exact hole spacing (some were 5mm too long). I found some where the mounting holes were reversed, i.e. top-left and bottom-right. I thought that maybe such variations could be enough to solve the problem, if the spacing doesn’t correspond to an easily findable heat sink model and is not that important in the end.


Firmware for the GPU (which goes in /lib/firmware) is found at the Rockchip libmali repository, and blob drivers are there as well.
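
A rough sketch of fetching and installing that firmware (repository URL, branch and file path are assumptions; check the repository itself for the authoritative layout):

# clone the libmali mirror (URL/branch assumed) and copy the CSF firmware blob
git clone --depth 1 -b libmali https://github.com/JeffyCN/mirrors.git libmali
sudo install -m 0644 libmali/firmware/g610/mali_csffw.bin /lib/firmware/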

Support in Mesa is in development, and because I don’t have a board yet I would be interested in seeing what results people get from trying my test program.


When talking about heat dissipation we shouldn’t forget the NVMe SSD many users will use as the OS drive. These things heat up like crazy, so a thermal concept for an enclosed board needs to take care of this too.

As such I repeat it again: I’m hoping for a nicely designed metal enclosure whose milled top side makes direct contact with the SoC via a thin thermal pad (or a copper shim for enthusiasts), and which allows slapping a thermal pad between the SSD and the metal bottom to also dissipate the SSD’s heat out of the enclosure.

Cramming this whole thing into a plastic box, neither a heatsink nor heatsink + fan will really do the job, especially when a fast NVMe SSD is also heating things up from below.


Out of unhealthy curiosity:
cat /sys/kernel/debug/clk/clk_summary
cat /sys/kernel/debug/dynamic_debug/control
or similar, please.
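
(Both live in debugfs, so root access is needed and debugfs has to be mounted; a quick sketch in case it isn’t already:)

sudo mount -t debugfs none /sys/kernel/debug 2>/dev/null   # usually mounted already
sudo cat /sys/kernel/debug/clk/clk_summary
sudo cat /sys/kernel/debug/dynamic_debug/control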

…which needed confirmation, though without also having an SSD in the box.

While running an endless 7z b benchmark loop, board + fansink crammed into a tiny plastic enclosure perform almost as badly as in the open without any cooling. At least (almost) no throttling happened since the 85°C thermal threshold is still ~2.5°C away:

11:32:27: 2400/1800MHz  7.49  87%   2%  85%   0%   0%   0%  80.4°C
11:32:32: 2400/1800MHz  7.69  97%   1%  96%   0%   0%   0%  80.4°C
11:32:37: 2400/1800MHz  7.71  96%   1%  95%   0%   0%   0%  81.3°C
11:32:42: 2400/1800MHz  7.90  74%   0%  73%   0%   0%   0%  81.3°C
11:32:47: 2400/1800MHz  7.90  95%   0%  94%   0%   0%   0%  82.2°C
11:32:52: 2400/1800MHz  7.59  72%   0%  71%   0%   0%   0%  81.3°C
11:32:57: 2208/1608MHz  7.62  97%   0%  96%   0%   0%   0%  81.3°C
11:33:02: 2400/1800MHz  7.33  66%   0%  65%   0%   0%   0%  80.4°C
11:33:07: 2400/1800MHz  7.31  94%   1%  92%   0%   0%   0%  81.3°C
11:33:12: 2400/1800MHz  7.36  91%   0%  91%   0%   0%   0%  79.5°C
11:33:17: 2400/1800MHz  7.65  68%   1%  66%   0%   0%   0%  81.3°C
11:33:22: 2400/1800MHz  7.76  98%   1%  96%   0%   0%   0%  80.4°C
11:33:28: 2400/1800MHz  7.78  90%   1%  89%   0%   0%   0%  81.3°C
11:33:33: 2400/1800MHz  7.48  76%   0%  75%   0%   0%   0%  80.4°C
11:33:38: 2208/1800MHz  7.52  97%   0%  96%   0%   0%   0%  82.2°C
11:33:43: 2400/1800MHz  7.24  77%   0%  77%   0%   0%   0%  79.5°C
11:33:48: 2400/1800MHz  7.30  91%   1%  90%   0%   0%   0%  82.2°C
11:33:53: 2400/1800MHz  7.36  81%   0%  80%   0%   0%   0%  79.5°C
11:33:58: 2400/1800MHz  7.49  85%   2%  83%   0%   0%   0%  81.3°C
11:34:03: 2208/1608MHz  7.53  96%   0%  95%   0%   0%   0%  82.2°C
11:34:08: 2400/1800MHz  7.25  59%   1%  57%   0%   0%   0%  80.4°C
11:34:13: 2400/1800MHz  7.31  98%   1%  96%   0%   0%   0%  81.3°C
11:34:18: 2400/1608MHz  7.44  96%   1%  94%   0%   0%   0%  81.3°C
11:34:23: 2400/1800MHz  7.49  84%   0%  84%   0%   0%   0%  79.5°C
11:34:28: 2400/1800MHz  7.61  88%   1%  87%   0%   0%   0%  81.3°C
11:34:33: 2400/1800MHz  7.64  93%   0%  93%   0%   0%   0%  79.5°C
11:34:38: 2400/1800MHz  7.67  74%   1%  73%   0%   0%   0%  81.3°C
11:34:43: 2400/1800MHz  7.70  96%   0%  96%   0%   0%   0%  79.5°C
11:34:48: 2400/1800MHz  7.16  70%   1%  69%   0%   0%   0%  80.4°C
11:34:53: 2400/1608MHz  6.75  91%   0%  90%   0%   0%   0%  81.3°C

Please note that this little box (the one the board was shipped in) dissipates heat a little better than your typical plastic SBC enclosure.
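
For anyone wanting to reproduce such a run, a rough sketch of the load plus a simplified monitoring loop (the table above comes from a monitoring tool; cpu0/cpu4 as little/big cluster and the thermal zone index are assumptions for RK3588):

# terminal 1: endless 7z benchmark loop keeping all cores busy
while true; do 7z b >/dev/null; done

# terminal 2: log big/little clockspeeds and SoC temperature every 5 seconds
while true; do
    big=$(( $(cat /sys/devices/system/cpu/cpu4/cpufreq/scaling_cur_freq) / 1000 ))
    little=$(( $(cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq) / 1000 ))
    temp=$(awk '{printf "%.1f", $1/1000}' /sys/class/thermal/thermal_zone0/temp)
    echo "$(date +%T): ${big}/${little}MHz  ${temp}°C"
    sleep 5
done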


http://ix.io/41Qp


Yeah I agree, and over time I’ve become a big fan (no pun intended) of fanless metal enclosures as well.


Looking forward to some NVMe test results.

For consumption numbers? Or how different tools throw out different numbers?

Or do you simply not trust the Radxa guys when they wrote “M.2 M key PCIe 3.0 x4 with read speed > 2700MB/s” in the 1st post of this thread? Do you assume they’re too stupid to route PCIe Gen3 signals to a slot and it’s just Gen2? What else?

If even the RK3568 with its 4 little cores is able to saturate a PCIe Gen3 x2 link, what do you expect from the RK3588 with 4 additional big cores and Gen3 x4?

My expectation (with a super fast SSD that is not itself the bottleneck and proper settings – checking/adjusting PCIe power management and cpufreq governor parameters as outlined above) is ~3000 MB/s in each direction measured with fio and 4 parallel jobs. And whatever random IO performance the SSD in question allows with Gen3 x4…
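
A sketch of what such a measurement could look like (device node, sysfs paths and fio parameters are assumptions; adjust for the SSD actually in the slot):

# verify the negotiated link first (expect "Speed 8GT/s, Width x4" for Gen3 x4)
sudo lspci -vv | grep -E 'LnkCap:|LnkSta:'

# disable PCIe link power management and pin all cpufreq governors to performance
echo performance | sudo tee /sys/module/pcie_aspm/parameters/policy
for p in /sys/devices/system/cpu/cpufreq/policy*; do
    echo performance | sudo tee "$p/scaling_governor"
done

# sequential read: 4 parallel jobs, 1 MiB blocks, direct I/O against the raw device
# (read-only; a write test against the raw device would destroy data)
sudo fio --name=seqread --filename=/dev/nvme0n1 --rw=read --bs=1M \
    --ioengine=libaio --iodepth=32 --numjobs=4 --direct=1 \
    --runtime=30 --time_based --group_reporting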

While in reality the average Rock 5B user will slap the cheapest NVMe SSD possible into the slot, with low random IO by design and slowing down severely after around a GB of sustained writes once the caches are full…

Frankly, what’s being reported are just details. After having tested it, I would definitely buy it in its exact current state without a doubt. Tkaiser and I have been insisting on the fact that we’re just transparently reporting our observations, so please don’t see any form of criticism there; these are just small pieces of feedback that may or may not be used to improve next versions. But for sure not everything will be adjusted, and I can even be wrong on some points. Thus it’s important not to take such comments as judgements to decide whether or not the board is ready. Think about it: we have not yet faced a single show-stopper! This in itself is already a sign that it’s probably ready! Could it be improved? Surely, as always. But I have faced no meaningful problem yet, which gives me great confidence in the work that was done before reaching this state. Really.


Once I receive the M.2 to PCIe adapter I ordered, I’ll try to plug one of our 25GbE NICs into it. They’re dual-port PCIe 3.0 x8, but I guess they’ll be happy with a single port on x4 since the ~32 Gbit/s of PCIe bandwidth will be sufficient. This can give us an estimate of how far it can go. Network is always more expensive than block I/O, but with GSO/GRO it can compensate a bit.
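
If the NIC enumerates, a simple first check of how far the network path goes could be a few parallel TCP streams with iperf3 (peer address and stream count are just placeholders):

# on the peer machine
iperf3 -s

# on the Rock 5B: 4 parallel streams for 30 seconds towards the peer
iperf3 -c 192.168.100.1 -P 4 -t 30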


I do believe that the hardware is capable. I am asking about the current state of software support for NVMe. Previously, it took almost a year for PCI Express to start working in Gen2 mode, and we had issues like: if you use eMMC you can’t use NVMe and vice versa.

I want to report two issues:
1. I have two 65W PD chargers, one a Lenovo thinkplus, the other a OnePlus charger. With one HDMI screen connected I plug in the USB-C power, and the board then loses power while the kernel is booting and powers on again; I never get into the system. This happens on the Armbian image I built with a kernel dated 2022-07-07. The system on eMMC doesn’t have this issue, but its kernel is several days older. When I change to a 20W PD charger, I can boot into the system with both the older and the newer kernel.
2. With the 20W PD charger I can light up a 1080p screen, but a 4K screen does not light up. Here is the dmesg log when connected to the 4K screen: https://gist.github.com/amazingfate/17af25d7d543d253c9d608d1d90ff2c0

The question we ask ourselves: is it possible to fit an M.2 module without it being affected by the mounting of the CPU heat sink?
