ROCK 5B Debug Party Invitation

Haven’t seen anything out in the wild yet, nor any Linux benchmarks; it all seems to be Android, but maybe.

@stuartiannaylor
Take a look: https://youtu.be/XrEmmXMXzXU?t=302


Let’s be honest, I’m sure it’s certainly not easy. Yes, it would be nice, but if it’s not possible, so be it. Maybe an arrangement at 45 degrees like on the R6S above could help get the SoC closer to the center, if that’s possible at all. I note that they have less distance between the SoC and the DRAM chips; I don’t know if that’s a matter of the number of copper layers, different pinouts, or routing constraints. And the fact that the SoC heats up very little makes the arrangement even less critical here.

BTW @hipboi I tried a number of different heat sinks, including various ones meant for Intel and AMD north and south bridges, but couldn’t find one that matches this exact hole spacing (some were 5 mm too long). I found some where the mounting posts were reversed, i.e. top-left and bottom-right. I figured such variations might be enough to solve the problem, given that the spacing doesn’t seem to correspond to an easily findable heat sink model and may not be that important in the end.


Firmware for the GPU (which goes in /lib/firmware) can be found in the Rockchip libmali repository, and the blob drivers are there as well.
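In case anyone wants to drop it in place manually, a rough sketch (the mirror URL/branch and the g610 firmware path are my assumptions based on the usual libmali layout, so double-check against the actual repo):

# clone the libmali mirror and install the Mali CSF firmware blob
# (URL, branch and path are assumptions; verify against the repository)
git clone --depth 1 -b libmali https://github.com/JeffyCN/mirrors.git
sudo install -D -m 0644 mirrors/firmware/g610/mali_csffw.bin /lib/firmware/mali_csffw.bin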

Support in Mesa is in development, and since I don’t have a board yet, I would be interested in seeing what results people get from trying my test program.


When talking about heat dissipation we shouldn’t forget the NVMe SSD many users will use as the OS drive. These things heat up like crazy, so a thermal concept for an enclosed board needs to take care of this too.

As such, I’ll repeat it: I’m hoping for a nicely designed metal enclosure whose milled top side makes direct contact with the SoC via a thin thermal pad (or a copper shim for enthusiasts), and which allows slapping a thermal pad between the SSD and the metal bottom to dissipate the SSD’s heat out of the enclosure as well.

Cram this whole thing into a plastic box and neither a heatsink nor heatsink + fan will really do the job, especially when a fast NVMe SSD is also heating things up from below.
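For reference, it’s easy to watch how hot such an SSD actually gets under load, e.g. with nvme-cli (the device name is an assumption):

# query the drive’s SMART data and pull out the temperature line
sudo nvme smart-log /dev/nvme0 | grep -i temperature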


Out of unhealthy curiosity:
cat /sys/kernel/debug/clk/clk_summary
cat /sys/kernel/debug/dynamic_debug/control
or similar, please.
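If debugfs isn’t mounted by default on the image (just a guess, it may well be), the usual incantation first:

# mount debugfs if needed, then dump the clock tree (root required)
sudo mount -t debugfs none /sys/kernel/debug
sudo cat /sys/kernel/debug/clk/clk_summary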

…which needed confirmation, though without having an SSD in the box as well.

While running an endless 7z b benchmark loop, board + fansink crammed into a tiny plastic enclosure perform almost as badly as in the open without any cooling. At least (almost) no throttling happened, since the 85°C thermal threshold is still 2.5°C away:

11:32:27: 2400/1800MHz  7.49  87%   2%  85%   0%   0%   0%  80.4°C
11:32:32: 2400/1800MHz  7.69  97%   1%  96%   0%   0%   0%  80.4°C
11:32:37: 2400/1800MHz  7.71  96%   1%  95%   0%   0%   0%  81.3°C
11:32:42: 2400/1800MHz  7.90  74%   0%  73%   0%   0%   0%  81.3°C
11:32:47: 2400/1800MHz  7.90  95%   0%  94%   0%   0%   0%  82.2°C
11:32:52: 2400/1800MHz  7.59  72%   0%  71%   0%   0%   0%  81.3°C
11:32:57: 2208/1608MHz  7.62  97%   0%  96%   0%   0%   0%  81.3°C
11:33:02: 2400/1800MHz  7.33  66%   0%  65%   0%   0%   0%  80.4°C
11:33:07: 2400/1800MHz  7.31  94%   1%  92%   0%   0%   0%  81.3°C
11:33:12: 2400/1800MHz  7.36  91%   0%  91%   0%   0%   0%  79.5°C
11:33:17: 2400/1800MHz  7.65  68%   1%  66%   0%   0%   0%  81.3°C
11:33:22: 2400/1800MHz  7.76  98%   1%  96%   0%   0%   0%  80.4°C
11:33:28: 2400/1800MHz  7.78  90%   1%  89%   0%   0%   0%  81.3°C
11:33:33: 2400/1800MHz  7.48  76%   0%  75%   0%   0%   0%  80.4°C
11:33:38: 2208/1800MHz  7.52  97%   0%  96%   0%   0%   0%  82.2°C
11:33:43: 2400/1800MHz  7.24  77%   0%  77%   0%   0%   0%  79.5°C
11:33:48: 2400/1800MHz  7.30  91%   1%  90%   0%   0%   0%  82.2°C
11:33:53: 2400/1800MHz  7.36  81%   0%  80%   0%   0%   0%  79.5°C
11:33:58: 2400/1800MHz  7.49  85%   2%  83%   0%   0%   0%  81.3°C
11:34:03: 2208/1608MHz  7.53  96%   0%  95%   0%   0%   0%  82.2°C
11:34:08: 2400/1800MHz  7.25  59%   1%  57%   0%   0%   0%  80.4°C
11:34:13: 2400/1800MHz  7.31  98%   1%  96%   0%   0%   0%  81.3°C
11:34:18: 2400/1608MHz  7.44  96%   1%  94%   0%   0%   0%  81.3°C
11:34:23: 2400/1800MHz  7.49  84%   0%  84%   0%   0%   0%  79.5°C
11:34:28: 2400/1800MHz  7.61  88%   1%  87%   0%   0%   0%  81.3°C
11:34:33: 2400/1800MHz  7.64  93%   0%  93%   0%   0%   0%  79.5°C
11:34:38: 2400/1800MHz  7.67  74%   1%  73%   0%   0%   0%  81.3°C
11:34:43: 2400/1800MHz  7.70  96%   0%  96%   0%   0%   0%  79.5°C
11:34:48: 2400/1800MHz  7.16  70%   1%  69%   0%   0%   0%  80.4°C
11:34:53: 2400/1608MHz  6.75  91%   0%  90%   0%   0%   0%  81.3°C

Please note that this little box (the one the board was shipped in) dissipates heat a little better than your typical plastic SBC enclosure.
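For anyone who wants to reproduce this, a rough sketch of the test setup (the monitoring call is an assumption based on the output format above; any clockspeed/temperature monitor will do):

# endless 7-Zip benchmark loop in the background
while true; do 7z b >/dev/null; done &
# watch clockspeeds, load and SoC temperature in a second terminal
sudo armbianmonitor -m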


http://ix.io/41Qp


Yeah I agree, and over time I’ve become a big fan (no pun intended) of fanless metal enclosures as well.


Looking forward to some NVMe test results.

For consumption numbers? Or how different tools throw out different numbers?

Or do you simply not trust the Radxa guys when they wrote “M.2 M key PCIe 3.0 x4 with read speed > 2700MB/s” in the 1st post of this thread? Do you assume they’re too stupid to route PCIe Gen3 signals to a slot and that it’s just Gen2? What else?

If even RK3568 with its 4 little cores is able to saturate a PCIe Gen3 x2 link, what do you expect from RK3588 with 4 additional big cores and Gen3 x4?
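For what it’s worth, the negotiated link can be checked directly in sysfs instead of guessing (the nvme0 name is an assumption; 8.0 GT/s means Gen3):

# check negotiated PCIe link speed and width of the NVMe device
cat /sys/class/nvme/nvme0/device/current_link_speed
cat /sys/class/nvme/nvme0/device/current_link_width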

My expectation (with a super fast SSD that is not itself the bottleneck, and proper settings, i.e. checking/adjusting PCIe power management and cpufreq governor parametrization as outlined above) is ~3000 MB/s in each direction, measured with fio and 4 parallel threads. And whatever random IO performance the SSD in question allows with Gen3 x4…
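A sketch of what such an fio run could look like (device name, block size and runtime are my assumptions, not the exact parameters used):

# pin cpufreq to performance and relax PCIe power management first
echo performance | sudo tee /sys/devices/system/cpu/cpufreq/policy*/scaling_governor
echo performance | sudo tee /sys/module/pcie_aspm/parameters/policy
# sequential reads: 4 parallel jobs, direct IO to bypass the page cache
sudo fio --name=seqread --filename=/dev/nvme0n1 --rw=read --bs=1M \
  --direct=1 --numjobs=4 --iodepth=8 --ioengine=libaio \
  --runtime=60 --time_based --group_reporting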

While in reality the average Rock 5B user will slap the cheapest NVMe SSD possible into the slot, one with low random IO by design that slows down severely after around a GB of sustained writes once its caches are full…

Frankly, what’s being reported are just details. Having tested it, I would definitely buy it in its exact current state without a doubt. Tkaiser and I have been insisting on the fact that we’re just transparently reporting our observations, so please don’t see any form of criticism there; these are just small bits of feedback that may or may not be used to improve the next versions. For sure not everything will be adjusted, and I may even be wrong on some points. Thus it’s important not to take such comments as judgements on whether or not the board is ready. Think about it: we haven’t faced a single show-stopper yet! That in itself is already a sign that it’s probably ready! Could it be improved? Surely, as always. But I’ve faced no meaningful problem yet, which gives me great confidence in the work that was done before reaching this state. Really.


Once I receive the M.2 to PCIe adapter I ordered, I’ll try to plug one of our 25GbE NICs into it. They’re dual-port PCIe 3.0 x8, but I guess they’ll be happy with a single port on x4, since ~32 Gbps of PCIe bandwidth will be sufficient. This can give us an estimate of how far it can go. Network is always more expensive than block I/O, but with GSO/GRO it can compensate a bit.
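The test itself would probably look like this (iperf3 plus the interface name and peer IP are my assumptions):

# on the ROCK 5B: check that the offloads are active, then start the server
ethtool -k enp1s0 | grep -E 'tcp-segmentation-offload|generic-receive-offload'
iperf3 -s
# on the peer machine: 4 parallel streams toward the board (IP hypothetical)
iperf3 -c 192.168.1.50 -P 4 -t 30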


I do believe the hardware is capable. I am asking about the current software support status of NVMe. Previously, it took almost a year for PCI Express to start working in Gen2 mode, and we had issues like: if you use eMMC you can’t use NVMe, and vice versa.
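A quick way to check the status on any given image (device names assumed) would be:

# does the kernel bring up the PCIe link and see NVMe and eMMC at the same time?
lsblk -d -o NAME,MODEL,SIZE
sudo dmesg | grep -iE 'nvme|pcie'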

I want to report two issues:
1. I have two 65W PD chargers, one a Lenovo Thinkplus, the other a OnePlus charger. With one HDMI screen connected, when I plug in the USB-C power the board loses power while the kernel is booting and then powers on again; I never get into the system. This happens on the Armbian I built with a kernel dated 20220707. The system on eMMC doesn’t have this issue, but its kernel is several days earlier. When I change to a 20W PD charger, I can boot into the system with both the older and the newer kernel.
2. With the 20W PD charger I can light up a 1080p screen, but a 4K screen does not light up. Here is the dmesg log when connected to the 4K screen: https://gist.github.com/amazingfate/17af25d7d543d253c9d608d1d90ff2c0

The question we ask ourselves: is it possible to fit an M.2 module without it being affected by the mounting of the CPU heat sink?


It is also my conviction, but at the time I was a little annoyed with the resellers the company proposes: they pile on the costs, with outrageous delivery charges, strongly recommended shipping insurance, and tracked delivery on top of that. Taxes have to be paid at customs, which adds yet another processing fee. In short, the whole thing comes to 250 Euro, and I’m taking two of them. For a SoC with RAM, no eMMC, no heat sink, no PSU, no software support, just a motherboard with an RK3588 and, to remain polite, oddly placed heat sink holes, they basically took me for an American! :slight_smile: To be honest I’ve picked up a few coupons since last time, so at worst, if this madness continues, I’ll only have lost $10.
I will be glad to read your network card test; it was planned on my roadmap, but out of weakness I’m thinking of installing an SFF-8643 to connect external adapters if necessary.

No, you won’t lose the $10. You can just get the $10 refunded.


I mentioned above that I observed the same power-cut issue at boot as you; I suspect it happens when a USB driver is loaded, though I could be wrong. For me it fails with the Ubuntu SD image, but the Debian on the eMMC doesn’t have this problem.