Synology Docker - routing table entries do not survive reboot

What command did you use to re-add the route, and was it run inside the Docker container or on the NAS itself? I’m wondering if I can incorporate that into the script I’m running: have it run every 5 minutes, check whether it can reach the ZT network, and fix itself if it can’t.
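
Roughly what I have in mind is a sketch like this (192.168.XXX.1 is a placeholder for a known peer on the ZT network, and the actual fix would be whatever command you used):

#!/bin/sh
# Health check, scheduled every 5 minutes via DSM Task Scheduler.
# 192.168.XXX.1 stands in for a known host on the ZeroTier network.
if ! ping -c 1 -W 2 192.168.XXX.1 > /dev/null 2>&1
then
    : # the command that re-adds the route would go here
fi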

Thanks for all of your troubleshooting efforts. However, I still cannot replicate this issue on any of my units. What you’re saying certainly makes sense: a missing route and a down interface would cause these problems. Also, ZT never brings its interfaces down unless one leaves a network, so if the interface is down, I strongly suspect something else is setting it to that state.

A little info on my main test setup:

  • DS216+II
  • DSM7
  • Docker version 20.10.3, build b455053
  • Did not use Docker Compose
  • Nothing else other than stock applications installed

I wonder if it’s related to this issue that Nebula also has with Docker on Synology: https://github.com/slackhq/nebula/issues/256

For now, I have a workaround in the form of a script that runs hourly and re-adds the route if it’s missing. I’m wondering, though, whether the device name ztwdjclgcv is static or whether it can change (see the sketch after the script). Here is the script for anyone interested. We have all our client NASes connecting to the same ZeroTier network (with flow rules preventing communication between them), so this may not work for everyone and will have to be customized regardless. I did verify that this works against the 3 NASes that were currently unreachable.

#!/bin/sh
# Runs hourly: re-add the ZeroTier route if it has gone missing.
EXIST=$(ip route show 192.168.XXX.0/24 | wc -l)
if [ "$EXIST" -eq 0 ]
then
    route add -net 192.168.XXX.0/24 dev ztwdjclgcv
fi
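
On the device name: as far as I know, ZeroTier derives the Linux interface name deterministically from the network ID, so it should stay the same for a given network. If you’d rather not hardcode it, something like this might work (a sketch, untested on DSM; it assumes the container is named zt, is joined to exactly one network, and that the listnetworks field layout matches your zerotier-cli version):

# Pull the device name out of "zerotier-cli listnetworks" (skips the header line).
ZTDEV=$(docker exec zt zerotier-cli listnetworks | awk '$1 == "200" && $3 ~ /^[0-9a-f]+$/ && length($3) == 16 { print $(NF - 1) }')
[ -n "$ZTDEV" ] && route add -net 192.168.XXX.0/24 dev "$ZTDEV"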

Looks like you’ve already added the comment to your script in the meantime :wink:
Thanks for sharing it.
In my case I would need to bring the interface back up first (I also added some sleep to make sure the interface is up before adding the route).

#!/bin/sh
# Re-add the missing route, bringing the ZeroTier interface back up first.
EXIST=$(ip route show 192.168.XXX.0/24 | wc -l)
if [ "$EXIST" -eq 0 ]
then
    # "zt" is the name of the ZeroTier container.
    docker exec zt ifconfig ztwdjclgcv up
    # Give the interface a moment to come up before adding the route.
    sleep 10
    route add -net 192.168.XXX.0/24 dev ztwdjclgcv
fi
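
A variant that polls instead of sleeping a fixed 10 seconds might be slightly more robust (a sketch; it assumes the container runs with host networking, so the host’s ifconfig can see the interface):

#!/bin/sh
EXIST=$(ip route show 192.168.XXX.0/24 | wc -l)
if [ "$EXIST" -eq 0 ]
then
    docker exec zt ifconfig ztwdjclgcv up
    # Wait up to 30 seconds for the UP flag to appear on the host side.
    i=0
    while [ $i -lt 30 ] && ! ifconfig ztwdjclgcv 2>/dev/null | grep -q "UP"
    do
        sleep 1
        i=$((i + 1))
    done
    route add -net 192.168.XXX.0/24 dev ztwdjclgcv
fi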

I’ll give it a go next time the connection drops.

@zt-joseph: Could be that the state is set by something else, but that’s really hard to debug. And the fact that ZT barely logs anything at all doesn’t really help, to be honest :confused:
Do you have a Docker image with debugging enabled?

I’m just wondering how the interface is supposed to come back up after a host or container restart. (I don’t have a lot of Docker experience, so this might be a stupid question.) On the host, /etc/sysconfig/network-scripts/ifcfg-ztxxxxxxx just contains BOOTPROTO=static (nothing like ONBOOT=yes, no IP range or netmask, …).
If the zt container only configures the interface when joining or leaving a network, and the host does not contain any config…
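
For comparison, a RHEL-style config that would bring an interface up at boot looks roughly like this. I don’t know whether DSM’s network scripts actually honor these keys, and ZeroTier normally configures its interface itself, so this is purely to illustrate what’s missing from the file:

# /etc/sysconfig/network-scripts/ifcfg-ztxxxxxxx (hypothetical contents)
DEVICE=ztxxxxxxx
BOOTPROTO=static
ONBOOT=yes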

For debugging, try (sudo) ip monitor on your host, then start the ZeroTier container. I don’t know if this will work.
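
Concretely, restricting it to link and route events should cut down the noise (standard iproute2 syntax):

# Watch for interface state and routing table changes while the container starts.
sudo ip monitor link route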

@cleverit: I’ve done some testing today with your script (with the ifconfig up added). Works like a charm :slight_smile:

I’ve not seen any disconnects over the last week, so I’ve been unable to dig further into the source of the issues.

Regarding the instructions at Synology NAS | ZeroTier Documentation: adding tun.sh as a boot script seems to do nothing (the tun device starts just fine without it), so I have removed it. Because it’s added this way, it is also flagged by Security Advisor as a malicious boot script. Maybe it makes sense to add it as a boot script via the UI Task Scheduler instead, to avoid that? (If it’s needed at all.)
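
If someone does want it as a Task Scheduler boot task, a root user-defined script along these lines should be roughly equivalent (a sketch based on what the documented tun.sh does; the module path is the one the docs use, so double-check it on your DSM version):

#!/bin/sh
# Create the tun character device if it doesn't exist yet.
if [ ! -c /dev/net/tun ]
then
    mkdir -p /dev/net
    mknod /dev/net/tun c 10 200
    chmod 666 /dev/net/tun   # world-accessible, as on most Linux systems
fi
# Load the tun kernel module if it isn't loaded already.
lsmod | grep -q "^tun " || insmod /lib/modules/tun.ko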
