Void Linux on Rock Pi - is it possible?

Hi Everyone,

I'm fairly new to SBCs, so please bear with me if I'm asking some basic questions. I will look around to see if others have covered them elsewhere first :wink:

My Rock Pi is arriving today: the Ethernet version with 4 GB RAM, an aluminium heatsink, the M.2 extension board, and an SSD.
From reading around, it seems the most workable OS for my use case is Debian Stretch, so I will likely give that a go first and keep it as a backup to roll back to when/if needed.

My use case: the Rock Pi will be a local, always-on (or sleeping with WoL) remote access point to my network and servers from outside the home.
It will sit in my DIY server rack once that is completed, but it will have a monitor connected, running a dashboard of sorts with htop, temperature readings, etc.
So I don't need media playback or 3D capability; YouTube playback and sound are also not required.

I would, however, like to play around with getting Void Linux set up on the Rock (if that is even possible?). The ARM architecture is supported by Void Linux, so I thought it would be fun to try.
Sort of a messy project maybe, but it's for fun and to learn at the same time :slight_smile:

– Void Linux install –

Goal: i3-gaps WM, with sleep enabled and WoL.

1. 16 GB uSD with the Void basic live install —> boot to/run from RAM.
2. Replace the uSD with another empty 16 GB card to serve as the boot loader.
3. Run "void-installer" and partition root on the NVMe and /boot on the SD card.
4. Cross-compile the required packages from the Radxa repo for the Rock Pi using xbps-src (see the sketch after this list)?? (https://github.com/void-linux/void-packages#cross-compiling-packages-for-a-target-architecture )
5. Success?
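
Something like this is what I have in mind for step 4, a minimal sketch based on the void-packages README; the package name (i3-gaps) is just an example of what I would build, not a tested recipe:

git clone https://github.com/void-linux/void-packages
cd void-packages
./xbps-src binary-bootstrap          # set up the build bootstrap
./xbps-src -a aarch64 pkg i3-gaps    # cross-compile a package for the aarch64 target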

Any feedback is appreciated. Have you tried something similar? Is it doomed from the start? :smiley:
If yes, do please let me know and I will stick with Debian and not waste my time haha.

cheers everyone


Thank you for the suggestions, links and the welcome :wink:
I don't have eMMC, but I have found the uSD A2 cards to be incredibly fast so far :slight_smile:
Void is a separate Linux distribution from Debian, but it has its own compile tools for ARM, so I figured it might work.

Right now I am trying the ARM64 image that was just released and I'm quite impressed.
Have also ordered extra uSD cards so I can mess around with other distros.
cheers

The SanDisk Extreme Pro 32 GB A1
https://www.google.com/search?q=sandisk+extreme+pro+32gb+a1

Haven't done any benchmarks, but it's cheap at £10 for a card with free delivery if you shop around, and I have generally had really good results with that card.

I haven't done any benchmarks with the Rock Pi 4B, but I think it has the same reader as the Pi 3, and strangely A2 cards often showed up slower on the Pi than A1 cards.
The A2 needs a newer reader to reach those quoted speeds.

This is the first article I found; I haven't really read it, but I presume it says the same.

Actually, I thought the A1 was pretty good for apps compared to what I had been using, but the many comments on the Raspbian forum stopped me purchasing an A2, as it didn't seem worth the cost and would likely be no faster.
I think unless it's a microSD Express reader you will get no benefit from an A2 over an A1, and it could even be slower.
There are so many SD specs now that it's quite confusing; the only way to know is probably to test.

PS @LucidScrubJay, how did KDE go? It still looks the same, but maybe it's the oomph of the RK3399, because it feels much lighter; it took a while, but I actually started to like it.

I got the M.2 extender board too, so I only really need the A2 SD cards for the boot loader :wink:
Yeah, once I get a minute I want to try the benchmark for A1 vs A2 cards.
So far the Rock, using the ARM64 image, boots to desktop completely from the A2 SD card before my monitor even wakes up to the HDMI signal, so no complaints regarding speed so far :wink:


Sorry, I have not had a chance to test KDE yet, as I was knackered after work yesterday. :confused: Will give feedback as soon as I test it though :slight_smile:

The A2-class cards hold a huge advantage over the A1 cards in terms of minimum guaranteed IOPS at 4K, hence the speed advantage over an A1 card that might have a faster peak transfer rate (or that is my understanding of it anyway; I need to run some performance tests to verify it). So far I am very happy with the A2 card. The 64 GB A2 cards are something like $17 on Amazon, so yes, a little pricey, but easier to swap out for trying different distros, I was thinking.

sudo apt-get install iozone3
iozone -a -e -I -i 0 -i 1 -i 2 -s 80M -r 4k

Gives a relatively quick and good test without it going on for ages.

Dunno, you will have to test A2 vs A1.

I think the SD 3.0 reader doesn't take advantage of microSD Express, so the minimum guaranteed IOPS doesn't materialise.
I think Class A1 is the fastest the reader will take and use to its full extent.


Nice, good to know. Will definitely try this tonight :slight_smile: thanks for the hint

Not sure what I am doing wrong, but the results I get for the 64 GB A2 uSD card do not seem indicative of the speed I am seeing when just farting around and booting. Also, I cannot seem to declare my M.2 as a target for the test; whether I use "iozone /dev/nvme0n1" or "iozone /u10", I get the same speed results as if running against root. :confused:

Just mount the M.2, cd to the mount dir, and run; iozone tests the current directory.

lsblk                                          # see your disks
mkdir m2
mount /dev/xxx m2                              # replace /dev/xxx with your NVMe partition
cd m2
iozone -a -e -I -i 0 -i 1 -i 2 -s 80M -r 4k

PS if that was your A2, yeah, pretty bad; my A1 is faster.

Thanks, I am obviously too tired at this point haha.
I did mount the M.2 but thought I could just pass the mount point for the test to run there, instead of moving to the mount point. Will try that next :wink:
And yes, it is super slow in this test, but that seems wrong; the system is very responsive and very quick to boot. The SanDisk Extreme should not be that slow.

Trying out, and just running updates for, the Manjaro KDE version now :slight_smile:
So far it's quite slow compared to the Openbox that runs on the ARM64 image, but it seems very "polished" and complete :slight_smile: very nice.
Will let it update completely, reboot, and see how it fares :slight_smile:

Not surprisingly, the M.2 disk comes up with better numbers in the test. :slight_smile:
Samsung SSD 250GB 970 EVO Plus M.2


PS SanDisk Pro 32 GB A1

 Command line used: iozone -a -e -I -i 0 -i 1 -i 2 -s 80M -r 4k
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
           81920       4     4763     4971     8408     8400     6711     4916

But got you beat

        Command line used: iozone -a -e -I -i 0 -i 1 -i 2 -s 80M -r 4k
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride                            
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
           81920       4   128739   332329   601247   615462   578960   336148    

Awesome numbers :slight_smile:
What are the 2nd-round results from?
M.2 brand and make?

Lols, I couldn't beat that Evo 970, so I cheated.

As well as adding some zram swap, which works well on the Rock Pi 4, I cheated and added a zram dir and ran the test from there.
zram dirs make really good, exceptionally fast application directories with zero block wear, if you ever need one.
You can also create a zram-backed log in a small 50M zram dir and simply avoid block wear there too, if you so wish.
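
Roughly like this if you want to try a zram dir yourself, a minimal sketch assuming /dev/zram0 is free; the size and algorithm are just examples:

sudo modprobe zram                                     # creates /dev/zram0
echo lzo | sudo tee /sys/block/zram0/comp_algorithm    # set alg before sizing
echo 512M | sudo tee /sys/block/zram0/disksize
sudo mkfs.ext4 /dev/zram0
sudo mkdir -p /mnt/zram
sudo mount /dev/zram0 /mnt/zram                        # run iozone from here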

Still waiting for my 4-port SATA, but I doubt I will be anywhere near your Evo Plus :slight_smile:


Haha, fair enough, you did have me scratching my head there for a sec :smiley:
And yes, that's a pretty sweet (and old-school) way to speed things up, as long as it is for non-important data.

Cool, I did see the post about the 4-port SATA haha. That is starting to look like a pretty cool NAS setup if you do ZFS and pool those SATA disks, with the ZFS cache running in that zram :wink:
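
Something like this, maybe; device names are purely hypothetical, and a zram L2ARC vanishes on every reboot, so it would only ever be a read-cache experiment:

sudo zpool create tank raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd    # 4-disk RAIDZ pool
sudo zpool add tank cache /dev/zram1                                # zram-backed L2ARC (volatile)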


StuartIanNaylor/zram-config

This looks really cool, will definitely have a play with this :wink: cheers man.
RAM disks are as old as I am, but back then memory cost the tip of a fighter jet, so it might be cool to dedicate a little of the spare RAM most of us have these days to something like this.

I also have an idea to play around with some of the cluster possibilities of these SBCs. I am not trying to create anything crazy; it's more for the sake of messing around and learning something in the process. My attention is fleeting and short-lived and my time is very sparse haha, but I like to try new stuff when time permits. It's always a trade-off: I used to have plenty of time-ish, so I messed around with Unix and Gentoo Linux, but now that work takes most of my time (macOS and Winslows) it's hard to find the time… /rant

Anyway, my idea, which is currently an "on the back of a napkin after a drunk night out" kind of note, is to try some of the distributed computing that I have seen others do with the Raspberry Pi.

But with the hardware on some of the newer 1 GbE NIC SBCs, connected over WebSockets for very low latency, distributing commands via MQTT pub/sub for communication (think of all nodes subscribing to a master node that can send RPC calls over MQTT to all of them at the same time; a rough sketch follows below). Maybe each node could also be part of a dynamic zpool owned/controlled by the master node, with dynamic adding/removing of the SBC nodes you attach; their storage contribution could then come from a zram-mounted disk (and yes, there is a good chance someone has already done this).
Okay, as you have likely noted by now, this idea is super crazy and full of holes that I have not even come close to considering yet, but I like it. Maybe only some of this is possible, but hey, it might be a fun project.
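
For the pub/sub part, I am picturing something like this with the mosquitto clients; the broker host and topic are made up, and blindly running published commands is obviously unsafe outside a sandbox:

# on every worker node: subscribe and run whatever the master publishes
mosquitto_sub -h master.local -t cluster/cmd | while read -r cmd; do sh -c "$cmd"; done
# on the master node: one publish fans out to all subscribers at once
mosquitto_pub -h master.local -t cluster/cmd -m "uptime"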

Anyway, this is WAY off topic, so sorry about that; I just thought that something like the Rock Pi with its 6 cores could be a good candidate for a project like this in the future, and using your zram might be a cool addition? :slight_smile:

zram-config is just a protest project about some very bad scripts, but zram is pretty good; also, in 5.1 the lzo-rle algorithm for ARM is supposed to be amazingly fast whilst getting ~3x compression.
The algorithm is your choice, with zlib/zstd having great text compression, up to ~10x, but slower and more CPU-hungry.

The 2 GB Rock Pi 4 is probably the best value for money, and with zram, if you change swappiness and page-cluster, it's like what they do in Chromium OS / Android: a zram disk sized at 50% of RAM (roughly 3x compression), page-cluster 0, swappiness 100.
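
That is, something along these lines (my reading of the values quoted above, not a tested config):

sudo sysctl vm.page-cluster=0    # swap single pages, suits zram's low latency
sudo sysctl vm.swappiness=100    # swap to zram aggressively
# plus a zram swap device sized at ~50% of RAM (~150% effective at 3x compression)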

But, like I said, if you do have more RAM (4 GB) you can also make extremely fast working directories.
I use overlayfs for CoW: the original data is brought in as the lower layer and only writes are copied up to zram, so it can serve large directories in minimal RAM, whilst on shutdown it merges down to stay persistent.
It really needs a more modern kernel, as a lot of overlayfs fixes have landed since, but it works fine for most uses, especially logs; swap doesn't use overlayfs.
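
The layering looks roughly like this (paths are hypothetical; zram-config automates all of it):

sudo mkdir -p /mnt/zram/upper /mnt/zram/work /srv/app-merged
# lower = original on-disk data (CoW source); upper/work live on the zram mount
sudo mount -t overlay overlay \
    -o lowerdir=/srv/app,upperdir=/mnt/zram/upper,workdir=/mnt/zram/work \
    /srv/app-merged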

Your Evo 970 is so fast that it's almost near RAM speed, but iozone gives you an indication.
I am humming and hawing about ZFS with 4 disks, wondering if RAIDZ is really much of an advantage over mdadm, but I am going to test it and see.
Just trying to get OMV working fine on 4.4, maybe also with Nextcloud and LibreOffice Online in a Docker container, with Samba using a 4-disk RAID.
The 1 Gb Ethernet on the Rock Pi 4 is excellent, but via USB or the g_ether gadget it's actually pretty stinky, about 60% of the dedicated 1 Gb NIC port.

If you struggle making your own Void image, Manjaro minimal with i3 on 5.2 will probably be just as good, if not better…


Yeah, actually I turned an old MacBook Air (mid-2011) into a Manjaro machine with the Awesome WM and it just screams in use :slight_smile: So it might be a good fit in the future; not decided yet, but I will have it running for a while and evaluate.

As for ZFS, I think the advantages are pretty overwhelming in terms of snapshots, copy-on-write, self-healing, dynamic RAID, etc. :slight_smile: at the cost of a few more CPU cycles, though. But if it is a dedicated NAS, those CPU cycles are likely not needed for anything else anyway.

Dunno yet :slight_smile:

PS a slight update to the image, if you want to have a quick look.
OMV Arm64 Image
https://1drv.ms/u/s!AocmAh35i26QiRoVLn1ttnlorw8g
Debian user:password rock:rock
OMV user:password admin:openmediavault

sudo systemctl enable resize-helper.service    # then reboot to resize /

PS Lucid, the 4-port SATA turned up today; below is a single 250 GB Evo 850 SATA.

	Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
	Output is in kBytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 kBytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride                                    
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    40059    51432    60812    63636    31464    50149                                                          
          102400      16   106962   142357   160286   165256    95699   139100                                                          
          102400     512   325343   335507   317536   323487   312129   338324                                                          
          102400    1024   340292   351986   340852   339575   337433   352246                                                          
          102400   16384   444705   452061   479606   482634   479726   454479    