Hello from Dawson Creek, BC, Canada

New to Radxa (Rock Pi 4b), not new to Linux (1.2.13) or programming (FORTRAN).

I received the URL by email to acknowledge that I did want to sign up. The page that appeared in my browser did not say anything about allowing javascript, so pushing the button the first time did nothing. I am a dinosaur, and do as little with javascript enabled as is possible.

I live close to where the Alaska Highway begins, at Dawson Creek, BC, Canada. Most of my computer needs are in some sense related to farming. But since I have worked in other fields, my definition of farming may not be what yours is. I will have robots at some point (who wants to prune trees?). I think the Rock Pi 4b is going to be involved in either serving time (NTP) or GPS base station type stuff (including RTKlib).

Many years ago, I ported most of Perl to QNX-2.x. I once built a personal repository of OpenWRT and built just about every package they had at the time for my hardware. The only other extensive cross compiling I’ve done is to compile gcc for a Sun from a PPro based machine running Linux (could have been yggdrasil (sp?) or Debian).

I’ve got a PinebookPro coming, which is another rk3399 device.

Most of my machines run Devuan (I hate systemd), and I will try to keep my RPi type machines on as common an OS as I can. Possibly Debian, but maybe I can figure out how to leverage Devuan in that direction. In reading some forum topics, it seems that there are occasionally packages not available. I could try to set up a Debian repository here based on what you have, and try to build packages for Rock Pi 4b. I could send diffs to someone if that is useful.

Gord

3 Likes

Welcome!

Low-level support (kernel and below) is what is (still) critical here. Perhaps diffing Debian and Devuan is simpler on this level, but so far there was too little interest to bring it up. It’s some work to get going, but when you are done, you are automatically backed with infrastructure and something that covers all those devices … except RPi :slight_smile:

1 Like

Hello Gord!

I would also like a non-systemd Debian / Armbian.

At the moment Debian is re-evaluating its support for non-systemd systems; let’s see how that turns out:

This will likely influence the amount of work needed in Armbian too.

1 Like

Hello Igorp and aaditya.

Left to my own devices, I thought the first place to play with compiling was with Das U-Boot. And reading there, it seems that a person wants to start with the Embedded Linux Development Kit (ELDK), in its most current release.

But, it seems that you think I should start with Armbian.

It would seem that I need to start with a git source tree of some kind. I tried to download the eldk_build, and that comes back as either a typo or I need to become a “member” in order to download it.

Downloading Armbian, and the rockchip-bsp git tree, seemed to take less time.

I have to do this compiling as root? Sorry, I don’t do sudo. Fine, do the su. Try to set EXPERT mode:

It seems you ignore documentation and run an unsupported build system: beowulf

Actually, cat /etc/devuan_version produces

beowulf/ceres

which is the mixture of testing and unstable.

Pull compile.sh into emacs, and it seems Igor wrote it. Hello Igor. :slight_smile: Oh, and Igor wrote lib/general.sh too.

I will guess that my system is probably similar to disco. I am running a 5.2.0-3 (Debian naming) kernel. I have gcc-[6789] installed, as well as llvm-[56789] installed (I have interests in OpenCL with amdgpu).

I suppose what people would like is for me to set up a container with Ubuntu disco in it, and do my compiling in that. I have used a few containers, but I have run across an awful lot of container problems. I believe all the containers I have used are via vagrant. Do you recommend moving to a container, or editing your shell scripts to get past this Ubuntu-ism?

Igor, I am assuming you are from Russia (or a former republic), and I believe Russia covers more time zones than Canada does (5.5). So, it could be morning for you now, or in a few hours. I can wait for you to have your morning coffee. My Mom’s family were Germans from Volhynia (part of the Ukraine). Sadly, I am pretty much hopeless in German, and have never studied Russian or Ukrainian. I have made perogies from scratch. :slight_smile:

1 Like

In this case you need to add another parameter, docker, and it will prepare a Docker image/container for you. You don’t need to do anything on your own unless you are running a system that was not derived from Debian; in that case you would need to install Docker manually and then run the script.

When compiling with Docker, you don’t need sudo. For native building you do. For most things sudo is not needed, but we need it to create a loop device, which is our image where stuff gets installed. Until we find a solution for how to deal with that, sudo is a necessary evil (I know it’s bad).

It’s my copyright for legal reasons, but I didn’t write all of the code (that would be impossible). I am among the main authors/maintainers.

No, no need to edit anything. The script has to be left intact; there are configuration files for customisation.

A few hours to the west: Slovenia. :grinning:

Okay, so I will see how things go with docker.

When I was playing football in Edmonton, Alberta, Canada, there might have been some people from Slovenia. Near where I am now, there are a lot of people from the Sudetenland whom I played football with.

I’m following the recipe at https://docs.armbian.com/Developer-Guide_Building-with-Docker/

All the machines on my LAN do apt-get type operations via an apt-cacher (?) proxy on my server. I am guessing I need to register the key there as well.

I have both my server and this client machine set up with that docker key. I guess I need to go read the manpage on apt-cache (or ngcache or …) to deal with this warning:

W: Failed to fetch https://download.docker.com/linux/ubuntu/dists/bionic/InRelease Invalid response from proxy: HTTP/1.0 403 CONNECT denied (ask the admin to allow HTTPS tunnels)

(some of that warning was deleted).

packagecloud.io has a page on setting up apt-cacher-ng (which is the proper name of the cache I am running) to use SSL/TLS.

It seems I need to add a PassThroughPattern for download.docker.com to acng.conf. And I need to escape the periods in download.docker.com. And I probably have to append “:443” to that pattern as well.
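For reference, the server-side change amounts to something like this (a sketch; the directive is spelled PassThroughPattern in my acng.conf, and the exact regex is my guess at what matched):

```
# /etc/apt-cacher-ng/acng.conf
# Allow HTTPS CONNECT tunnels to download.docker.com
# (periods escaped, port 443 appended).
PassThroughPattern: download\.docker\.com:443
```

After editing the file, restart the apt-cacher-ng service on the server so it picks up the change.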

On allowing the PassThrough on my server, apt-get update doesn’t have a problem any more.

The next step is to install docker-ce. By asking apt-get to install (but not allowing yet), I get some of the following:

The following NEW packages will be installed:
apparmor aufs-dkms aufs-tools cgroupfs-mount containerd.io docker-ce
docker-ce-cli pigz
The following packages will be upgraded:
libseccomp2

I have had problems with apparmor in the past; so my inclination is to change the command to:

apt-get install docker-ce ; dpkg --purge apparmor

Does building Armbian somehow need apparmor running? Purging apparmor does not remove libapparmor.

Okay, that allowed me to install docker-ce on this computer.

Running compile.sh (as root) still complains, but to me it seems I am missing something.

Compile.sh assumes that the user is running an instance of Ubuntu (bionic). I am not running that; I am running Devuan beowulf/ceres. So I need to set up a container which provides the Ubuntu environment, and then within that I can do my compiling. Or am I misunderstanding something?

Been doing other things.

I installed a bunch of vagrant things on this computer, and at the moment I am doing a first install of generic/ubuntu1904 (disco). So that should get me an amd64 environment which reports as something that armbian allows me to compile in.

Devuan packages installed were vagrant-libvirt vagrant-hostmanager and vagrant-sshfs.

I have done very little with VMs, containers or anything similar. I do know there is a lot of imprecision and inaccuracy in how the various VMs and containers are described. And as near as I can tell, the people pushing VMs, containers or anything similar are not rushing to educate people.

From what I read in the Armbian guide, if I were running on Apple (iOS) or Windoze, I could install an Armbian compiling environment with docker. Docker leverages the installed OS; it doesn’t try to replace it (which is what a VM does). So anybody trying to compile Armbian using docker should always fail, because the environment provided isn’t Ubuntu/bionic (or better).

Supposedly some people on iOS and Windoze are succeeding, so my understanding of what is going on must not be correct.

I have 20 or so minutes for the vagrant generic/ubuntu1904 environment to download, so I have time to speculate/comment.

I am running the Armbian build system on my notebook running Debian Buster. Setup is very, very simple … you don’t need to create anything at all. The script does that for you (if your host package manager is apt, which on Devuan it is).

I have no idea. If Docker needs that, you have to install it. If you have concerns about this and that, rather move to a fully virtualised environment. It’s pointless to waste time on things that are supposed to work out of the box.

It creates an Ubuntu bionic image inside the container. It is irrelevant what is on your host. The automated install only needs the apt install command for installing Docker, but you could run this script on Mac, RedHat, Slackware …

Never used Vagrant, and I heard there are some issues with it at the moment. Use Docker, it must work!

Sorry for being out of touch. My Mom is 87 and has been having cluster headaches and sleeping problems.

When I entered
./compile.sh EXPERT=yes docker

it did nothing.

I then went through the process of installing vagrant. I noticed I wasn’t yet running the newest kernel that I had installed on this computer (a Ryzen 1600 with 32GB of RAM and an RX-570 that can be used for graphics or mining; it has a little BIOS switch). But I also noticed that the vagrant install process (or the part of vagrant that is involved with DKMS) had noticed that my BIOS was set to not allow virtualization.

So, I shut down the computer, got into the BIOS and changed that virtualization setting and got back to the OS. Now vagrant/dkms configuration goes to completion.

Just tonight, I reran

./compile.sh docker EXPERT=yes

and it ran and ran. I am now at a question about building only u-boot, or building everything. I’m inclined to build everything. Hopefully there is a log somewhere where I can look for problems. :slight_smile:

Thanks for the help so far.

In a bit, I am going to go to a lookout point, to see if any alpha Monocerotid meteors make it out here. SpaceWeather says not a chance.

I told compile.sh to build everything, Ubuntu/disco, development, and every other bad option. :slight_smile: The compile eventually failed, with an “out of space” message. I still have almost 3TB on this particular disk, so it must be the size of the container. I had that happen in vagrant once before. There was a trick to making the container bigger. But, maybe one uses sshfs to do the compiling somewhere on “this” filesystem?

I see that PinebookPro laptops apparently have been shipping, although there is a 2 week delay in the middle somewhere. So perhaps in a couple of weeks I’ll have another Rockchip RK3399 computer to also try compiling stuff for? Although there is a trackpad and an NVMe problem at the moment.

2 Likes

My root partition is completely full. I can see where docker is doing some stuff in /run (which is on the root partition), but I don’t see this big chunk of storage. I’ll try finding where all this storage is trying to live tomorrow.

Thanks to all (in this thread, or in messages I’ve read). It took a bit to set things up (at /home/Docker), then remove what docker had set up at /var/lib/docker, and then set up a symlink. I then restarted the compile of pretty much everything on Debian/buster (that supposedly being a better platform for panfrost). The compile went to completion. It gives me a place to start from.
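For anyone repeating this: instead of a symlink, Docker can be pointed at a different storage directory via the data-root setting in /etc/docker/daemon.json. A sketch, using the path from my setup:

```json
{
  "data-root": "/home/Docker"
}
```

Stop the docker service before moving the old /var/lib/docker contents over, then start it again so it uses the new location.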

Time to type this in again, somewhat from memory.

I received a QNine NVMe-SSD/USB enclosure. So, I mounted a 1TB SSD in it, and started to prepare it for files. The finished images I downloaded from Armbian assume a significantly different situation than what I am setting up.

Before I get into that, I decompressed the 7zip compressed Buster desktop image, producing 4 files. The text file suggests that I should thank Igor for setting this up. So, thanks Igor! The text file gives some magic to verify the signature. Gpg says it can’t check the signature: no public key. The sha256sum check works just fine.
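For reference, the checksum step amounts to this (a self-contained sketch; the file names are stand-ins, not the real Armbian image names):

```shell
# Create a stand-in file, record its checksum, then verify it,
# the same way the downloaded .sha file is checked against the image.
printf 'pretend image data' > Armbian_demo.img
sha256sum Armbian_demo.img > Armbian_demo.img.sha
sha256sum -c Armbian_demo.img.sha   # prints "Armbian_demo.img: OK"
```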

The image that Armbian supplies assumes a person is writing to a device, not to a partition. I am going to try partitioning this similar to a desktop: a swap partition, and partitions for home, boot, tmp, usr, var, usr/local and var/log. The home partition would be btrfs; all the others would be ext4, except for ext2 for tmp. The SSD was partitioned by gdisk using a protective MBR, and a 128 MiB gap between partitions. Slightly more than half of the SSD was partitioned.

I think I am going to have to dig stuff out of the container where I am practicing compiling stuff. That would be the easiest way to make a filesystem copy of most of the OS, leaving the boot loader and some other details for eMMC and/or microSD.

You have to import the public key to verify the signature:

gpg --keyserver ha.pool.sks-keyservers.net --recv-key DF00FAF1C577104B50BF1D0093D6889F9F0E78D5

If this key server doesn’t work for you, try others …

Those devices are between the embedded world and the desktop, and the OS is adapted to this fact. That’s why it’s a bit different. Leave the carefully adjusted swap settings alone; you don’t want to kill performance. We use compressed memory to extend memory, not slow swap … observe how things are made. It’s a huge improvement over what you are used to on the desktop. There you don’t need to save resources; here you must.

1 Like

Okay, I am starting to get a hold of this javascript editor. For me, having the SHA256 match was probably enough.

Are you saying to not have a swap space?

I’ve always set up multiple partitions. Not that I “have to”. But I thought I would point out why I am doing some of this.

The reason to have /tmp on its own partition is so that it can use a filesystem that does not have a journal (as nobody should rely on what is in /tmp, more or less by definition).

I don’t have strong reasons for putting /var on its own partition. But the reason to put /var/log on its own partition is to keep runaway logging problems from filling the filesystem the logs are being written to. /var does have other parts of the filesystem which historically can also get out of control (/var/mail and /var/spool come to mind).

I don’t normally think about backing up /usr, but /usr/local is a part of the filesystem I do like to backup.
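Sketched as an /etc/fstab, the scheme I described earlier looks roughly like this (the device names are placeholders for the partitions gdisk created, not the real ones):

```
# Placeholder devices; adjust to the actual GPT partitions.
/dev/sda2   /           ext4   defaults          0 1
/dev/sda3   /home       btrfs  defaults          0 0
/dev/sda4   /boot       ext4   defaults          0 2
/dev/sda5   /tmp        ext2   defaults,noatime  0 2
/dev/sda6   /usr        ext4   defaults          0 2
/dev/sda7   /usr/local  ext4   defaults          0 2
/dev/sda8   /var        ext4   defaults          0 2
/dev/sda9   /var/log    ext4   defaults          0 2
/dev/sda10  none        swap   sw                0 0
```

The /tmp line uses ext2 (no journal) per the reasoning above; btrfs takes 0 in the fsck pass column since it does its own checking.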