The perfect rk3588 SBC would be

I noticed a few days ago that there’s an updated ROCK 5 ITX+ board available and immediately wondered what they changed, as I have one of the originals and can think of a few things I wish were different about it. It turns out the biggest change is removing the 4 SATA ports and adding another m.2 gen3 x2 socket (which can take 110mm cards), and hyping it up as a way to get 6 SATA ports (via two hexa-SATA m.2 adapters). Considering all of the trouble so far with keeping the SATA ports working between kernel releases, I don’t know if this is a good move or not. But it did make me decide to write down what I’d personally like out of a future ITX board based on the rk35xx platform…

I’d like to use this board to collapse a few functions down to one machine and save some power: in particular my home router, NAS, and media servers, as well as some containers for simple services.

What’s missing now to do this? The biggest thing I would prefer is 10gbe via an SFP+ cage. My suggestion would be using 2 lanes of gen3 to accomplish this. I don’t have a preference for which controller to integrate, but a Mellanox ConnectX-3 might make sense as it’s well supported, PCIe gen3, and available as a single port. Or, just plumb 2 of these lanes to a PCIe slot (x2 electrical but open-ended mechanically to support longer cards) and let the user pick.

With a 10g port plugged into my switch and some VLANs, I’d drop one or both 2.5gbe interfaces and expose some of the native 1g interfaces that are available via the SoC for use as WAN ports. This would free up some of the gen2 PCIe lanes, which could then be used for storage. I’d then pull the SATA controller over from the gen3 lanes to gen2 and deal with the perf hit. Or again, expose 2 lanes of gen2 as another open-ended PCIe slot so the user can choose what to do with it.
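To make the VLAN idea concrete, here’s a rough sketch of what the trunk setup could look like on the Linux side. The interface name and VLAN ID are hypothetical, just for illustration; the real ones depend on the board and the switch config:

```shell
# Assume eth0 is the 10G uplink and the switch trunks VLAN 20 as the WAN side
# (both the name "eth0" and the ID 20 are made up for this sketch).
ip link add link eth0 name eth0.20 type vlan id 20
ip link set eth0.20 up
# The untagged side stays as the LAN; eth0.20 becomes the interface the
# router's WAN config (DHCP client, PPPoE, etc.) attaches to.
```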

Leaving 2 lanes of gen3 for an m.2 slot would be fine, but depending upon how many PCIe slots were exposed this could be yet another. Or expose all 4 gen3 lanes as one slot.

Is there a way to expose the debug header as an RS232 interface at a slower speed? If so, do that for a console port. Bonus points if it’s RJ45 rather than DB9, but make it something that I can plug into my terminal server. Failing that, the datasheet shows there’s a bunch of TTL UARTs available that could be plumbed out to plugs. I get that we wouldn’t see anything prior to the kernel entrypoint, but I could still run a getty on it or attach a modem (my Dreamcast has to get online somehow!).
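For the TTL UART case, attaching a login prompt is basically a one-liner on a systemd distro. A config sketch, with the caveat that the device node below is a guess (check dmesg for the real one on a given board):

```shell
# Hypothetical device node; extra UARTs usually enumerate as ttySN.
systemctl enable --now serial-getty@ttyS4.service
# Or for a one-off test without touching systemd units:
#   agetty ttyS4 115200
```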

The m.2 gen2 x1 slot that’s there now for wireless? I’d much rather have that for another (slower) SSD for the root filesystem. This thing is gonna be very well connected via cables and I have dedicated access points, so there’s no need for wireless, and if I ever do need it, USB is an option.

Find a way to fit the 40-pin GPIO block in there.

Do people actually use the Roobi installer on the eMMC and leave it there? I dropped that thing fast and put my own install onto it. Increase the size of the eMMC to at least 16GB, as 8GB isn’t much in 2025 for a general-purpose Linux install, even without a windowing environment.

Yeah, that’d be a tight board.

For me it’s just a cosmetic change. If you plan to use it with 4x SATA drives, the ROCK 5 ITX is better for sure, because you already have power connectors and more secure SATA ports. If you need something different, or some flexibility, then the 2x m.2 connectors (one filled with an ASM1166, like the hexa-SATA m.2 adapter) will be great, but you need to take care of powering all the drives, and an m.2 card is not that durable.
A real change would be to just put in a regular PCIe slot with all four PCIe lanes.

If you are asking for the perfect ROCK 5 ITX++ board, then it’s one with a PLX switch with a PCIe 3.0 x4 upstream link, something like this one. That would raise the board cost significantly, but there should be room for two m.2 slots, some SATA, and two 10Gbit adapters, all working at up to almost full PCIe bandwidth.

SFP+ is not that popular in homes, rather in pro offices, but for now there aren’t many 10G accessories for personal use. On the other hand, there is a ton of old, crappy 10G cards, most not very power efficient. Still nothing on m.2 with SFP+, but there are already AQC107 10G Ethernet adapters, as well as 10G USB3/TB.

For now the Orion is the way to go. Faster, with many more PCIe lanes (but no bifurcation :()

Roobi is a nice idea; it may help some users, while others will just delete it :slight_smile:

Replying to this post title:

In a nutshell, the best RK3588 setup is a Radxa CM5 + Waveshare CM4-Nano carrier.

A nice compact one, but far from a NAS appliance :smiley:

tbh for a NAS… you don’t need an RK3588… I mean Radxa sells a proper E52C with a nice shell and two Ethernet ports… might be best suited? idk… but for all the options… a CM5 on a nano board is Dooooope!

Depends on your use case. I fitted my rock5itx with 10GbE and the whole device is well suited for that (SATA barely gets above that with my SSDs, the CPUs are working a bit but delivering, etc). It’s quite well balanced.

I’m not sure how I missed that board considering it’s the first thing on Radxa’s website, but I did. It’s really nice! But not perfect…

I must sound like a broken record here, but I do not understand the inclusion of dual multigig ethernet when they should just include 10gbe! I don’t have anything to plug it into which will link up above 1gbe, and I have a lot of networking gear. Don’t get me wrong, 5gbe is probably fast enough for my needs - it’s the compatibility of it that mystifies me. 5gbe switches are rare and not cheap.

No SATA at all unless you sacrifice the x8 PCIe slot or the m.2. And they’re both gen4 so it’s a bit of a waste - and considering I would still want 10gbe I hate to burn a nice port on that. Looking around I suppose one could install a SATA controller into the m.2 E key port for a couple of disks for bulk storage which wouldn’t be too bad all else considered.

Still no serial port but I do see the GPIO block so I could probably get to a UART off that and wire it to a TTL to RS232 adapter in a PCI bracket. That doesn’t do anything about the debug port but better than nothing.

I read (and enjoyed!) your blog post on this system; it was one of the things that led me to buy one. When I looked at the cost of the 10gbe m.2 NIC that you bought, it was not cheap, however. Though it is nice that it’s gen3, so the x2 lanes were enough to saturate 10gbe in one direction at least (though for a router you really want more bandwidth, since each packet you RX is likely one you need to forward and TX; but like you said, it depends upon your uses).

So yeah, you can already get 10gbe in it today, but it’s not cheap (and 10GBASE-T runs hot too), and it means you’re not really utilizing the 2.5gbe NICs that it includes. I’d still rather drop those and use the lanes for something else.

Not sure what you mean here by “in one direction at least”. PCIe is bidirectional. Even if there’s some overhead when using a NIC due to the need to fetch descriptors, the data flows in the two directions at the same time, and this NIC on x2 has no issue filling the 10G in both directions at once like below with HTTP/1 traffic, which requires less than one little core:

$ if_rate -l -i eth0 1
#   time   eth0(ikb ipk okb opk)
1738812012 9914873.8 866366.6 9855651.3 852548.8
1738812013 9891613.2 864452.2 9820095.8 847567.7
1738812014 9926927.4 867317.7 9859568.7 849391.1
1738812015 9895335.3 865265.5 9857640.6 848268.8
1738812016 9929206.4 867825.5 9896613.3 852657.7
1738812017 9911157.4 866072.2 9868587.6 851546.6
1738812018 9891256.6 865072.2 9858122.5 848044.4
1738812019 9899583.3 865056.6 9858404.8 848227.7
1738812020 9911627.9 866211.1 9857582.0 852777.7

Regardless, it’s true that in this case I’m not using the 2.5G NICs anymore. But I was initially using them, and the vast majority of users would. Many would even prefer to have two 2.5G NICs rather than a single 10G, and in this particular case the two ports use Gen2, which would not suffice for 10G and would require sacrificing Gen3 lanes that others would prefer to keep on their M2 for an extra SSD… Also, you mentioned SFP+, but similarly the vast majority of users prefer a pure RJ45 and not having to plug in an RJ45 SFP+ module that costs an extra $25 and heats like crazy (admittedly, 10GbE over RJ45 always heats like crazy).
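The encoding-level arithmetic backs this up. A quick sketch using the standard Gen3 line rate and encoding (ignoring TLP/descriptor overhead, which shaves a bit more off):

```python
# Back-of-envelope PCIe Gen3 x2 throughput, per direction.
# PCIe is full duplex: each direction gets the whole link rate.
GT_S = 8.0            # Gen3 signaling rate per lane, in GT/s
ENCODING = 128 / 130  # Gen3 uses 128b/130b line encoding
LANES = 2

per_direction = GT_S * ENCODING * LANES
print(f"Gen3 x2: {per_direction:.2f} Gbit/s each way")
# ~15.75 Gbit/s each way, so a 10GbE NIC can run full duplex with
# headroom left over for descriptor fetches and TLP framing.
```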

The main problem with most Arm SoCs is the small number of PCIe lanes, which forces the board designer to make a compromise between on-board devices that consume lanes and placing extension ports. A few low-end x86 boards manage to share lanes between multiple ports so that the bifurcation depends on what’s plugged where, but I think that’s super difficult and probably requires some particular signal multiplexing. I think M2 is a really nice form factor that allows multiple extension cards on a small board without taking the room of a PCIe slot, and that, after all, placing multiple M2 slots with equally distributed lanes on a motherboard does make quite some sense, even though it costs quite a bit to acquire the devices to place there.

Thanks for pointing out my error here. I’ve been thinking for years that each 10gbe port required 4 lanes but you’re right.

No argument from me here - except that I think the compromises made were not the best (but I still really like this board!). You can disagree and that’s ok.

You mention the “vast majority of users” a few times and I don’t have any data to refute that or back it up. But I want to make sure that Radxa at least hears what I want so I’m doing that here. Are there changes you’d like to see?

I’ve been thinking as well about a board that allows plugging in multiple devices, but that’s what the rock5b+ added. The only thing is that M2 on the bottom makes it more complicated to design an enclosure when you start to use M2 cards thicker than an SSD (typically a NIC with its heat sink, or a SATA controller with connectors). But on the other hand, the other side is supposed to be fitted with a heat sink, so maybe one needs to think in terms of flipping the board. I’m really not demanding more than what is currently provided (ah, yes, for the rock5itx, I was really irritated by the too-limited 8GB eMMC). And yes, Radxa listens to users (which is why people exchange here, I guess). Several of us here made comments that were directly reflected in new iterations of various devices, whether it’s component/port locations, options, available devices by default, etc.

Yes, me too. Sometimes one is used for command/control and the other is squirting packets to a GPU running CUDA for additional signal processing. It also allows for daisy-chaining boards. I looked at the new 5T and that one looks like a good fit for us. It has dual Ethernet.

No hardware is perfect :slight_smile:
This is the first Cix chip, and they managed to deliver quite impressive hardware for the price. Still, it’s a development board and something of a proof of concept. We will see Rockchip’s answer sooner or later, and hopefully we will get at least some updated specs. Also, Radxa came back to Allwinner and is experimenting with some others, so we will see interesting products in the future.
The RK3588 is a few years old by now. Still a great SoC, but the market has changed for sure.

I also planned to upgrade from 1G to 2.5G and then to 10G.
But 10G requires new cabling. It also heats up significantly, and in the last year we’ve seen a massive flood of multi-gig equipment with consumer-grade SFP+. This makes at least several things much easier, because all you need to do is connect the right SFP+ transceiver, and those often include 5G speeds. They also don’t heat up as much and may link over shorter in-home runs where 10G will not.
So it’s just another option :slight_smile:

10G Ethernet isn’t getting cheaper :smiley:
but SFP+ is surprisingly cheap, and you can get transceivers with 10G/5G/2.5G/1G Ethernet. DAC cables also make it easy to connect devices to each other. This is the way to go for me; I could live without 10G Ethernet cabling if there’s an option to use SFP+ wherever possible. It doesn’t heat up as much and uses less energy.

It’s always a matter of finding the right cards and configuration. You still have more PCIe in the other slots, so you could simply insert a 10G NIC into the m.2 slot (AQC107) and something like an LSI 9400-16i SATA/SAS HBA (16 SATA/SAS channels). Neither will max out the performance of those slots.

I think we are going to switch to fiber sooner or later. It has much more potential for upgrades; a single fiber can carry way more than copper Ethernet and doesn’t heat up as much.

The Rock 5 series could work with the RTL8126, and those chips are getting cheaper and more popular now. Maybe the market will be flooded with such devices. It should reach 4Gbit on the same PCIe 2.0 lane. Still better than 2.5G :slight_smile:
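The 4Gbit figure falls straight out of the Gen2 encoding math; a quick sketch of my arithmetic, not a measured number:

```python
# PCIe Gen2: 5 GT/s per lane with 8b/10b encoding,
# so only 8 of every 10 bits on the wire carry payload.
usable = 5.0 * (8 / 10) * 1   # one lane, Gbit/s per direction
print(f"Gen2 x1: {usable:.1f} Gbit/s each way")
# 4.0 Gbit/s ceiling, which is why a 5GbE RTL8126 would top out
# around 4G behind a single Gen2 lane.
```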

Unfortunately it’s true. Right now I don’t see a bright future for such heaters in small SBCs :frowning:

Just use any of the m.2 extenders/adapters:

or the one for full PCIe:

Those are available in popular lengths and with the needed angles (90°, 180°, 270°) and sizes (PCIe x1-x16).


Add 16 high-resolution hardware ePWM channels: not kernel-driven, but true dedicated hardware devices that have shadow registers like the TI C2000 parts. This allows the PWM settings to be changed on the fly without deadband. That could be 8 degrees of freedom with full encoder feedback. Now that is a robot controller with plenty of power.