Rock 5B loses Ethernet connectivity upon Docker startup - Recurring issue with r8125 driver and APIPA routes

Hello Radxa community,

I am experiencing a persistent and highly frustrating issue with my Radxa ROCK 5B: it intermittently loses Internet connectivity. I've managed to isolate the trigger as the startup of the Docker service. I would greatly appreciate any insights or assistance regarding this behavior.

My Configuration:

  • Hardware: Radxa ROCK 5B
  • Operating System: Debian Bookworm (Radxa official, kernel 6.1.84-6-rk2410).
  • Ethernet Driver: r8125 (version 9.015.00-NAPI, compiled out-of-tree). Before compiling this one, I also tried the stock driver version that came with the kernel.
  • Network: Connected via Ethernet cable to an OpenWrt router (BPI-R4) on the LAN (192.168.1.0/24). Rock 5B's static IP: 192.168.1.246.
  • Services: I run multiple Docker containers using Docker Compose (ARRs, Jellyfin, etc.).

The Problem: The Rock 5B loses Internet connectivity. Pings to 8.8.8.8 or google.com result in From 169.254.x.x icmp_seq=X Destination Host Unreachable. This indicates that traffic is attempting to route via an APIPA address (169.254.x.x) instead of the correct gateway (192.168.1.1).
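
For reference, these are the quick checks I use to confirm the symptom. They are standard iproute2 commands; the addresses in the comments are from my setup:

    # Which route would actually be used for Internet traffic?
    ip route get 8.8.8.8

    # List all default routes; when the problem occurs, a
    # "default dev vethXXXXX scope link" entry shows up here.
    ip route show default

    # Brief view of all interfaces and their addresses (look for 169.254.x.x)
    ip -br addr show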

Isolated Behavior (Key Tests & Conclusions):

I performed extensive tests to isolate the cause:

  1. BPI-R4 Router Excluded:
  • Other devices on the same LAN (192.168.1.0/24) connected to the BPI-R4 have normal connectivity.
  • The fw4 firewall configuration on the BPI-R4 (/etc/config/firewall) is correct, and nft list ruleset confirms that forwarding and NAT rules are active and functional for both LAN segments, allowing traffic to the WAN.
  • Conclusion: The issue is NOT the BPI-R4 or its firewall. Connectivity breaks on the Rock 5B.
  2. Problem Isolated to Docker on the Rock 5B:
  • Test 1 (24 hours without Docker): I rebooted the Rock 5B and let it run for 24 hours without starting the Docker service. Throughout this period, the Rock 5B maintained Internet connectivity without any issues.
  • Test 2 (Starting Docker): Immediately after starting the Docker service (sudo systemctl start docker), the Rock 5B lost Internet connectivity, with pings showing From 169.254.x.x Destination Host Unreachable.
  • Conclusion: Docker service startup is the direct trigger for the loss of connectivity.

Details of Connectivity Loss on Rock 5B upon Docker Startup:

When Docker starts, I observe the following behavior on the Rock 5B:

  • Network Interfaces:
    • The main enP4p65s0 interface maintains its correct IP (192.168.1.246/24) and remains UP, LOWER_UP.
    • Multiple virtual veth interfaces (created by Docker for containers) appear UP and receive APIPA addresses (169.254.x.x/16) instead of valid IPs from Docker networks (172.x.x.x).
  • Routing Table (ip r):
    • A new default route appears: default dev vethXXXXX scope link.
    • This APIPA-backed route takes precedence over the correct default route (default via 192.168.1.1 dev enP4p65s0), redirecting all Internet traffic through a non-functional interface.
  • Docker Daemon:
    • The Docker daemon sometimes manages to start (active (running)), but host connectivity is lost.
    • Other times, the Docker daemon fails to start (failed (Result: exit-code)), with messages like Start request repeated too quickly.
    • Crucially, I haven't been able to obtain detailed Docker daemon debug logs (journalctl -xeu docker.service shows -- No entries --, and log-driver configuration in daemon.json hasn't produced visible logs), which hinders identifying the root cause of Docker's internal failure. (A sketch of commands for capturing debug output and temporarily repairing the route follows this list.)
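
Since journalctl shows no entries for docker.service, one way to get any daemon output at all is to run dockerd in the foreground with debug enabled; the second half of the sketch is the manual stopgap for the routing table. This is a rough sketch for my setup (enP4p65s0, gateway 192.168.1.1); vethXXXXX is a placeholder, and repairing the route by hand does not always bring connectivity back in my case:

    # Stop the service and its socket, then run the daemon in the foreground
    # with debug output (assumes the stock docker.service/docker.socket units)
    sudo systemctl stop docker.service docker.socket
    sudo dockerd --debug 2>&1 | tee /tmp/dockerd-debug.log

    # Stopgap: drop the bogus link-local default route and restore the real
    # gateway (replace vethXXXXX with the actual name shown by "ip r")
    sudo ip route del default dev vethXXXXX scope link
    sudo ip route replace default via 192.168.1.1 dev enP4p65s0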

Mitigation Attempts (Unsuccessful so far in preventing the core issue):

I have attempted various configurations in /etc/docker/daemon.json and sysctl, without success in preventing the core problem (a rough sketch of these configurations follows the list):

  • "ip-forward": false and "iptables": false: Prevented Docker from starting, but host connectivity was maintained (confirming Docker was the trigger).
  • "bip": "172.23.0.1/16": To change the default docker0 network range.
  • net.ipv4.conf.all.accept_local = 0 in sysctl: To try to prevent the kernel from assigning routes to APIPA IPs.
  • systemd configurations for docker.service (After=network-online.target, Wants=network-online.target).
  • UFW has been disabled, confirming it's not interfering.
  • Attempted a Docker data-root reset, which allowed Docker to start, but the connectivity issue then reappeared upon Docker's successful startup.
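
For completeness, this is roughly what the daemon.json, systemd drop-in and sysctl variants mentioned above looked like; treat it as a sketch of what I tested (the file names are arbitrary, and the ip-forward/iptables test simply replaced the keys shown here), not as a fix:

    # /etc/docker/daemon.json ("bip" variant; the other test used
    # "ip-forward": false and "iptables": false instead)
    {
      "bip": "172.23.0.1/16",
      "debug": true
    }

    # /etc/systemd/system/docker.service.d/wait-online.conf
    [Unit]
    After=network-online.target
    Wants=network-online.target

    # /etc/sysctl.d/99-no-accept-local.conf (applied with "sudo sysctl --system")
    net.ipv4.conf.all.accept_local = 0

    # Apply the drop-in with: sudo systemctl daemon-reload && sudo systemctl restart docker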

Final Conclusion:

The problem is a critical, low-level interaction between Docker's startup (specifically the creation/manipulation of veth interfaces and bridges), the r8125 Ethernet driver (compiled out-of-tree), and the Debian kernel version on the Rock 5B. This combination appears to lead to a race condition or incompatibility that causes the kernel to erroneously prioritize routes through APIPA-assigned veth interfaces.

Has anyone else experienced this behavior with the Rock 5B, Debian Bookworm, the r8125 driver, and Docker? Are there specific kernel versions or r8125 driver versions recommended to resolve network stability issues with Docker? Are there any advanced nftables (for OpenWrt) or iptables (for Debian) configurations that can prevent the kernel from creating or prioritizing these default dev veth... routes?

Thanks in advance for your help.

As always, give different images and kernels a shot.
If you have a clear test case then this should be easy; Armbian just switched to the 6.15 edge kernel, and there are a few other choices there. As always, other things may be broken, but you can find out about this particular one.

Hi, a related report:

After doing rsetup system upgrade on a fresh install yesterday, the onboard Realtek 8125 controller did show up in lspci, but did not show up in ip/ifconfig/dmesg at all.

So I downgraded the kernel from "6.1.84-6-rk2410" back to the bundled "6.1.43-15-rk2312". But in all situations, networking on the LAN and WAN (R8125 and I225) interfaces as operated by NetworkManager was totally broken. Thread here: A horrible failure report in latest Radxa Debian - no networking due to kernel, NetworkManager or other component broken. ifupdown a fix?

The old Debian 11 Bullseye with a 5.x kernel worked OK all the time. I'm surprised my attempt with the Rock5B Radxa Debian 12 Bookworm and its 6.x kernel has been so very problematic.

If you have any thoughts or insights please let me know, many thanks.

Thank you very much for your response and for sharing your experience. It indeed confirms that I'm facing a very similar, if not the same, set of issues. Your report directly aligns with my observations and validates many of my conclusions.

I appreciate you mentioning your kernel downgrade attempts and the state of the network interfaces under NetworkManager. My own tests led me down a very similar path with the same 6.1.84-6-rk2410 kernel.

Here’s a summary of my findings, which I believe will provide further insights:

Confirmed Problematic Combination: My Rock 5B, running Debian Bookworm with the 6.1.84-6-rk2410 kernel and the r8125 driver (both stock and a manually compiled 9.015.00-NAPI version were tested), consistently exhibits network instability. 

Network Manager State: I can confirm that NetworkManager is also disabled on my Rock 5B, leaving systemd-networkd as the active network manager, which gives me connectivity to the internal network and even to the Internet (as long as I keep Docker down). A sketch of that configuration is below.
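
Approximately like this; the file name and the DNS line are illustrative, the rest mirrors the static setup described above:

    # /etc/systemd/network/10-enP4p65s0.network
    [Match]
    Name=enP4p65s0

    [Network]
    Address=192.168.1.246/24
    Gateway=192.168.1.1
    # DNS is illustrative; use whatever your LAN provides
    DNS=192.168.1.1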

Direct Trigger: Docker Startup: My most critical finding is that the network corruption is definitively triggered by the Docker service startup.
    Without Docker running: The Rock 5B has perfect Internet connectivity for extended periods (tested for 24 hours).
    Upon Docker startup: Connectivity is lost almost immediately.
Specific Symptoms on Rock 5B when Docker starts:
    The enP4p65s0 (main Ethernet) interface remains up and has the correct IP (192.168.1.246/24).
    Docker's virtual veth interfaces often get APIPA addresses (169.254.x.x/16).
    A critical issue is that the kernel's routing table gets corrupted: a default dev vethXXXXX scope link route (pointing to one of these APIPA veths) takes precedence over the correct gateway route (default via 192.168.1.1). This causes all Internet traffic to fail with Destination Host Unreachable.
Lack of Docker Daemon Logs: A major hurdle in my debugging is that journalctl -xeu docker.service consistently shows -- No entries -- upon Docker's failure to start or when it causes the network corruption, even with debug logging enabled in daemon.json. This prevents a clear understanding of Docker's internal issues.
Unsuccessful Mitigations:
    Standard Docker daemon.json tweaks (including bip changes, ip-forward:false, iptables:false).
    sysctl adjustments (net.ipv4.conf.all.accept_local=0).
    systemd docker.service drop-in configurations (After=network-online.target, Wants=network-online.target, delayed startup).
    A script I developed to auto-detect the APIPA route and restart Docker (a simplified sketch follows this list) no longer restores connectivity once the network is corrupted by Docker; a full system reboot is currently required.
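
For illustration, the watchdog logic is roughly along these lines (a simplified sketch, not the exact script; interface and gateway are from my setup, and it must run as root). As noted, it no longer helps once Docker has corrupted the network, but it shows what is being checked for:

    #!/bin/bash
    # Sketch: if a link-local default route via a veth appears, remove it,
    # restore the real gateway and restart Docker.
    IFACE=enP4p65s0
    GATEWAY=192.168.1.1

    # A corrupted table contains e.g. "default dev veth1a2b3c4 scope link"
    BAD_ROUTE=$(ip route show default | grep -E '^default dev veth')

    if [ -n "$BAD_ROUTE" ]; then
        logger "apipa-watchdog: removing bad route: $BAD_ROUTE"
        # intentionally unquoted so the whole route spec is passed to ip
        ip route del $BAD_ROUTE
        ip route replace default via "$GATEWAY" dev "$IFACE"
        systemctl restart docker
    fi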

Your report strongly reinforces my belief that this is a fundamental, low-level incompatibility between the Realtek r8125 driver, the Debian Bookworm kernel 6.x, and potentially how systemd-networkd (or even Docker itself) interacts with this specific network stack. The problem seems to manifest when Docker’s network components are initialized.

I will continue to monitor the thread you linked and will report if I find any breakthroughs or if Radxa releases a kernel/driver update that addresses this. Your confirmation of the ifupdown fix is very encouraging, and I might resort to that if no other solutions emerge for systemd-networkd.


Thanks for the suggestion. I’ve already tested different kernel versions (stock and a custom-compiled one) and confirmed the issue persists.

I understand that a clear test case should make it easier, but as detailed in my main post, the specific problem is a low-level interaction between the r8125 driver, the Debian 6.x kernel, and Docker’s network initialization, leading to routing table corruption. This is proving very difficult to isolate and fix without detailed kernel/driver debug logs.

I was indeed considering a full system reset of my Rock 5B to a fresh Debian installation, but given the persistent nature of the problem, I’ve decided to try a different approach first.

My current plan is to install an alternative network interface card (NIC) in my Rock 5B, hoping it will replace the problematic r8125 controller as the primary network interface. This will help determine if the core issue lies specifically with the r8125 driver’s interaction with the kernel/Docker, or if it’s a broader network stack problem on the Rock 5B with kernel 6.x.

I will attempt to make the new NIC the main network interface and then reinstall and start Docker to see if the network corruption reoccurs.

In parallel, I will continue to monitor the Radxa forum and official repositories for any kernel or driver updates that might address the r8125 issues. Hopefully, Radxa can provide a solution that stabilizes the onboard Ethernet.

I’ll report back my findings with the alternative NIC.


Hi, the news for today from me: I connected the Rock5B R8125 to a better-configured Ethernet network, and the DHCP client in NM worked for the R8125. So I'm happy.

However, because NetworkManager still behaved super-unstably yesterday, as I reported in the other thread, I will not touch NM from now on. If I want a second or third Ethernet interface on this Rock5B, I will use ip/ifconfig in a pessimistic, barebones way, and not touch NM.

I also don't think I'll run rsetup's update function any more, to be safe. In brief, Yuntian says they'll release a new Debian 12 Bookworm image which will require a fresh installation from scratch. I presume that image is the next point at which the Radxa Debian may work a lot better than the current Radxa Debian. Please refer to the other threads for more details.

On separate topics, as @RadxaYuntian pointed out in the other thread, Armbian might work fine for Rock5B too.

In Armbian, the Rock5B has its own download page https://www.armbian.com/rock-5b/ and there is also a web forum section here https://forum.armbian.com/forum/262-radxa-rock-5b/ . The biggest documented drawback with Armbian appears to be that, as I understand it, they still have not imported Radxa's patches for USB-C PD, and therefore in Armbian the USB-C power only works in the non-negotiated (i.e. 5 V) mode, and any particular USB-C PD charger might not work.

If you have particular problems with the Rock5B Radxa Debian 12 Bookworm, how about flashing Rock5B Armbian Debian 12 Bookworm to a MicroSD card, booting from it, testing whether you get the same or different outcomes, and sharing what you learn here?

There is no such thing.

I do not know what they are exactly doing. I was just explaining what would happen when they say they don’t support PD.


Oh. Just curious: when they write "PD is broken for the 5B model (background) on most revisions that are in the wild and is causing boot loop.", and in the background article they explain that the firmware/bootloader must carry a value into the Linux kernel/drivers -

What does your Radxa Debian bundle actually do? Because I always saw the Rock5B negotiate 20V PD; there were no PD negotiation problems. Did you implement a PD negotiation driver in Linux, or how is the PD negotiation to 20V done?

Many thanks

It's the same as what they describe. We negotiate PD, but it happens later, in the Linux kernel, so some power supplies time out and reset the board.


Thanks a lot for the update! This is very insightful and aligns with my experiences.

It’s good to hear you got the R8125 working on a “better configured Ethernet network” with NM’s DHCP client. However, your decision not to touch NM anymore due to its instability is completely understood and reflects my own observations. I also agree that rsetup update seems to cause more issues than it solves in the current Debian Bookworm image.

The news about a new Debian 12 Bookworm image from Radxa that will require a fresh installation is EXCELLENT. This is exactly what I was hoping for, as it suggests Radxa is aware of and addressing these underlying networking issues in the current image. I will definitely wait for this new image and perform a clean installation with it, as it seems to be the most definitive official solution.

Regarding Armbian: Thanks for the links and the detailed feedback on its status. I was indeed looking into Armbian as a potential alternative. The information about the USB-C PD patch not being imported is a crucial detail for me, as I rely on stable power negotiation for my Rock 5B. This makes Armbian a less ideal long-term solution for my setup if that issue persists.

However, your suggestion to flash Armbian Debian 12 Bookworm to a MicroSD and test it out is a smart diagnostic approach. I will consider doing this as a temporary test environment to see if the core networking stack behavior with Docker (and the APIPA routing table corruption) is different in Armbian’s implementation. If it proves stable in Armbian, it would point even more strongly to a specific issue with the Radxa Debian kernel/network config.

I’ll report back with my findings on that if I proceed.

Thanks again for your continuous help and insights!

I’m updating my previous post regarding the persistent network connectivity issues on my Rock 5B when Docker starts up. Following the valuable insights from this thread, I conducted a crucial test to isolate the problem further.

Previous Understanding: My Rock 5B running Debian Bookworm (kernel 6.1.84-6-rk2410, with the r8125 Ethernet driver) consistently lost connectivity when Docker started, showing 169.254.x.x APIPA routes taking precedence. The main suspect was the r8125 driver's interaction with the kernel/Docker.

New Test: Disabling Ethernet and Using WiFi as Primary Connection

To definitively rule out the r8125 Ethernet driver as the sole cause, I performed the following test:

  1. Configured WiFi (wlP2p33s0): I successfully set up the WiFi interface (wlP2p33s0) using systemd-networkd and wpa_supplicant (a sketch of this configuration follows the list). The WiFi obtained a valid IPv4 address via DHCP and established a connection to my access point.
  2. Disabled Ethernet (enP4p65s0): I then administratively brought down the enP4p65s0 (Ethernet) interface.
  3. Confirmed Host Connectivity via WiFi: The Rock 5B maintained stable Internet connectivity via WiFi, confirming the WiFi hardware and basic systemd-networkd setup were functional.
  4. Initiated Docker Service: With WiFi as the only active network interface providing Internet connectivity, I proceeded to manually start the Docker service.
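
The WiFi bring-up in step 1 was done with configuration along these lines (a sketch; the SSID/passphrase and file names are placeholders, and it assumes the packaged wpa_supplicant@.service unit):

    # /etc/wpa_supplicant/wpa_supplicant-wlP2p33s0.conf
    ctrl_interface=/run/wpa_supplicant
    network={
        ssid="MY_SSID"
        psk="MY_PASSPHRASE"
    }

    # /etc/systemd/network/20-wlP2p33s0.network
    [Match]
    Name=wlP2p33s0

    [Network]
    DHCP=ipv4

    # Enable the per-interface supplicant and reload networkd
    sudo systemctl enable --now wpa_supplicant@wlP2p33s0.service
    sudo systemctl restart systemd-networkd

    # Step 2: administratively bring down the onboard Ethernet
    sudo ip link set enP4p65s0 down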

Results of the WiFi Test (Crucial Findings):

  • Problem Reproduces: Immediately after Docker started, network connectivity on the Rock 5B was lost, just as it was with the Ethernet connection.
  • Same Symptoms: The routing table (ip r) became corrupted, showing default dev vethXXXXX scope link routes (pointing to 169.254.x.x APIPA addresses on Docker's virtual interfaces) taking precedence over the valid WiFi gateway route (192.168.1.1).
  • r8125 Driver is NOT the Sole Culprit: This test unequivocally demonstrates that the issue is not specific to the Realtek r8125 Ethernet driver. The same routing table corruption occurs regardless of whether the primary Internet connection is provided by the onboard Ethernet or the onboard WiFi. (A short diagnostic sketch follows this list.)
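
One diagnostic worth adding for anyone reproducing this: since the APIPA addresses appear on the veth interfaces regardless of the uplink, it is worth checking which component is actually managing and addressing those veths. A hedged sketch with standard networkctl/iproute2 commands (vethXXXXX is a placeholder):

    # Which links does systemd-networkd consider managed?
    networkctl list

    # Detailed state of one affected veth, including which .network file
    # (if any) networkd matched to it
    networkctl status vethXXXXX

    # Default routes; the "proto" field hints at what installed each route
    ip route show default
    ip -d route show default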

Updated Conclusion:

The problem is a fundamental, low-level incompatibility or bug within the Rock 5B's kernel 6.1.84-6-rk2410 (Radxa official Debian Bookworm build) network stack when interacting with the Docker daemon's network initialization processes. It affects the kernel's ability to correctly manage routing when Docker's virtual network interfaces (veths) are brought up, leading to invalid APIPA routes being prioritized.

This confirms the broader issue hinted at by other users in this thread who experienced similar network problems with kernel 6.x and Debian Bookworm on the Rock 5B.

Test different stuff: the really old 5.1, the vendor 6.1, the edge 6.15, as well as different userspaces (Debian, Ubuntu). Usually it's as easy as burning a different image and re-using the same test case. Some important functionalities are broken at the kernel or driver level, others just in packages or their configuration.

Debian, up to a certain version, had an issue with iptables and needed special configuration for it. This issue was present on most builds, including x86. Yet again, it's easy to compare that with something different.

Or with Debian itself 🙂

Great 🙂

This problem is easy to avoid with a different PD adapter,
and you can always power up the board with plain 12 V.
Armbian makes it easy to test several things and to build a customized image with specific kernel options. Sometimes a new update breaks something, but that's nothing new in the software world.