Multiple ZeroTier clients in Docker on the same machine do not work well

I have a Proxmox server. Inside it I have an LXC container (Ubuntu 22.04), and inside that I run ZeroTier in a Docker container. This is my docker-compose:

services:
  zerotier-vw:
    container_name: zerotier-vw
    # https://hub.docker.com/r/zerotier/zerotier/
    image: zerotier/zerotier:${DOCKER_ZEROTIER_VW_VERSION}
    command: ${ZEROTIER_NETWORK_ID}
    environment:
      - PUID=${DOCKER_USER_ID}
      - PGID=${DOCKER_GROUP_ID}
    volumes:
      - ${DOCKER_VOLUME_ZEROTIER_VW}:/var/lib/zerotier-one
    devices:
      # pass the TUN device through so ZeroTier can create its virtual interface
      - /dev/net/tun:/dev/net/tun
    privileged: true
    cap_add:
      - NET_ADMIN
      - SYS_ADMIN
    # host networking, so the ZeroTier interface is visible on the LXC host itself
    network_mode: host
    restart: always
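The variables come from an .env file next to the compose file. For illustration, a hypothetical .env (all values here are made-up examples, not my real ones):

# .env - example values only
DOCKER_ZEROTIER_VW_VERSION=latest
ZEROTIER_NETWORK_ID=1234567890abcdef
DOCKER_USER_ID=1000
DOCKER_GROUP_ID=1000
DOCKER_VOLUME_ZEROTIER_VW=./zerotier-one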

It worked fine for a year or more. Recently I created a second container based on the same docker-compose, but with a different ZEROTIER_NETWORK_ID and DOCKER_VOLUME_ZEROTIER. Of course, the service name and container name are also different (see the sketch below). I started it and it seemed to work fine - I could ping its IP. But the next day I noticed I couldn't ping it anymore. The connection seems fine, both containers are healthy, and in the ZeroTier panel I see the last connection was about a minute ago. So it works, but it doesn't :D. I restarted both containers and everything started working again. That was yesterday. Today I'm at work and again I can't ping this machine. I can ping every other machine on the ZT network, but not this one.
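For context, the second service is essentially a copy of the first with its own state volume and network ID - a minimal sketch (the zerotier-second name and the *_2 variables are placeholders, not my actual names):

services:
  zerotier-second:
    container_name: zerotier-second
    image: zerotier/zerotier:${DOCKER_ZEROTIER_SECOND_VERSION}
    # joins the second ZeroTier network
    command: ${ZEROTIER_NETWORK_ID_2}
    volumes:
      # separate state directory, so this client gets its own node identity
      - ${DOCKER_VOLUME_ZEROTIER_2}:/var/lib/zerotier-one
    devices:
      - /dev/net/tun:/dev/net/tun
    privileged: true
    cap_add:
      - NET_ADMIN
      - SYS_ADMIN
    network_mode: host
    restart: always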

Does this mean that multiple ZT networks with multiple Docker containers are a bad idea? How should this be set up?

OK, it seems the cause of the problem has nothing to do with the number of simultaneous connections. My network is shut down at night with a Shelly plug - the switches and modem, but not the Proxmox server. When the network comes back up, the ZeroTier clients stay disconnected. I now realize I was wrong before: in the ZeroTier panel, the last successful connection was indeed exactly before the last network shutdown.

For now, as a workaround, I created a cron job that restarts both Docker containers at 8 AM (see below), and this morning everything worked as it should.
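The cron entry is trivial - a sketch from root's crontab (zerotier-second stands in for my actual second container name):

# restart both ZeroTier containers every morning at 8:00, after the network is back up
0 8 * * * docker restart zerotier-vw zerotier-second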

But if you have a better/cleaner solution for this problem, please share it here. I will keep this topic open to see whether this workaround holds up.
