1.5.0 testing report

Hi,

I’ve been doing some testing with 1.5.0, building from source. Just two issues I’ve noticed so far.

Firstly, I had problems getting it to build on CentOS 7 with GCC; using clang it builds fine.

Then I noticed that I was only able to talk to ZT peers with IPv6 connectivity. It seems that on an interface with both public(?) IPv4 and IPv6 addresses it will only listen/bind on the IPv6 address. Loopback is an exception: I see it listening on 127.0.0.1:9993 and [::1]:9993, but on e.g. “enp0s3” only the IPv6 address is bound. It does seem to behave (i.e. bind the IPv4 address) if the interface only has a link-local IPv6 address and no “real” one. And only IPv6 peers show up in “zerotier-cli listpeers” output.

Otherwise, bonding seems to work. :slight_smile: I haven’t tested exhaustively yet, but with “balance-aware” it uses about the same CPU to get slightly less bandwidth (as measured with iperf3) than without bonding. My test systems for that are VMs on a CPU-constrained Atom, so that seems fair enough! It’s about 170 Mbit/s with bonding (the same as 1.4.6 without bonding in the same setup) and closer to 200 Mbit/s with bonding off on 1.5.0.
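For reference, this is roughly how I enabled it. The local.conf setting name (“defaultBondingPolicy”) is what I pieced together from the bonding documentation, so treat this as a sketch rather than the definitive config, and the iperf3 target is just a placeholder for the peer’s ZT-assigned address.

# enable balance-aware bonding in local.conf, then restart the service
sudo tee /var/lib/zerotier-one/local.conf <<'EOF'
{ "settings": { "defaultBondingPolicy": "balance-aware" } }
EOF
sudo systemctl restart zerotier-one

# measure throughput across the ZT network (iperf3 -s running on the other VM)
iperf3 -c 192.168.195.x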

Anyway, I’ll keep testing; I just wanted to mention the GCC build and IPv6-only issues in case you hadn’t seen them. And of course, to thank you for all your hard work. :slight_smile:

Adrian


Thanks! I think there was a change to prefer IPv6 over IPv4. Are you saying clients with no IPv6 connectivity won’t connect at all?

The GCC that ships with CentOS 7 is too old. We use clang to build our packages ourselves.

I have a dual-stack network and “works for me.” Can you post some more details like what your interfaces and routes look like?

Hi Adam,

Thanks for looking at this. Hopefully the below gives you the picture. I built a fresh CentOS 7 “minimal” host, updated everything and tested with 1.4.6 first to make sure I wasn’t going mad (it worked fine on this setup for IPv4 and IPv6 peers/planets), then downloaded master.zip from GitHub and built a 1.5.0 RPM with “make” and “rpmbuild -bb zerotier-one.spec”. I then upgraded with “rpm -Uvh” and it was back to IPv6 only for ZT on the NIC.
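In case it helps anyone else, the whole build went roughly like this; the package names, extracted directory and RPM output path are from memory, so treat it as a sketch rather than exact commands.

# prerequisites: clang (the stock GCC 4.8 is too old) plus the RPM build tools
sudo yum install -y clang rpm-build unzip
# unpack the GitHub master.zip and build with clang instead of GCC
unzip master.zip && cd ZeroTierOne-master
make CC=clang CXX=clang++
# build the RPM from the spec in the tree, then upgrade the installed package
rpmbuild -bb zerotier-one.spec
sudo rpm -Uvh ~/rpmbuild/RPMS/x86_64/zerotier-one-1.5.0*.rpm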

I’ve also tested on another host where I have both public IPv4 and IPv6 on an Internet-facing interface but only private IPv4 on an internal one. There, ZT only listens on IPv6 on the NIC with both, but listens fine on IPv4 on the internal one.

(I had some fun trying to format the below for the new chat system… hopefully it’s readable.)

uname -a
Linux localhost.localdomain 3.10.0-1127.19.1.el7.x86_64 #1 SMP Tue Aug 25 17:23:54 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

cat /etc/centos-release
CentOS Linux release 7.8.2003 (Core)

ip a ls enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:a9:bd:a4 brd ff:ff:ff:ff:ff:ff
inet 203.[redacted].158/26 brd 203.[redacted].191 scope global noprefixroute dynamic enp0s3
valid_lft 41893sec preferred_lft 41893sec
inet6 2403:[redacted]:a00:27ff:fea9:bda4/64 scope global mngtmpaddr dynamic
valid_lft 2591803sec preferred_lft 604603sec
inet6 fe80::a00:27ff:fea9:bda4/64 scope link
valid_lft forever preferred_lft forever

ip r
default via 203.[redacted].190 dev enp0s3 proto dhcp metric 100
192.168.195.0/24 dev ztks5vsi4d proto kernel scope link src 192.168.195.124
203.[redacted].128/26 dev enp0s3 proto kernel scope link src 203.[redacted].158 metric 100

ip -6 route
unreachable ::/96 dev lo metric 1024 error -113 pref medium
unreachable ::ffff:0.0.0.0/96 dev lo metric 1024 error -113 pref medium
unreachable 2002:a00::/24 dev lo metric 1024 error -113 pref medium
unreachable 2002:7f00::/24 dev lo metric 1024 error -113 pref medium
unreachable 2002:a9fe::/32 dev lo metric 1024 error -113 pref medium
unreachable 2002:ac10::/28 dev lo metric 1024 error -113 pref medium
unreachable 2002:c0a8::/32 dev lo metric 1024 error -113 pref medium
unreachable 2002:e000::/19 dev lo metric 1024 error -113 pref medium
2403:[redacted]::/64 dev enp0s3 proto kernel metric 256 expires 2591472sec pref medium
unreachable 3ffe:ffff::/32 dev lo metric 1024 error -113 pref medium
fc7b:8769:ac00::/40 dev ztks5vsi4d proto kernel metric 256 pref medium
fe80::/64 dev enp0s3 proto kernel metric 256 pref medium
fe80::/64 dev ztks5vsi4d proto kernel metric 256 pref medium
default via fe80::203:1dff:fe02:c92e dev enp0s3 proto ra metric 1024 expires 1272sec pref medium

netstat -nap | grep zerotier
tcp 0 0 127.0.0.1:9993 0.0.0.0:* LISTEN 999/zerotier-one
tcp6 0 0 2403:[redacted]:9993 :::* LISTEN 999/zerotier-one
tcp6 0 0 ::1:9993 :::* LISTEN 999/zerotier-one
tcp6 0 0 2403:[redacted]:31957 :::* LISTEN 999/zerotier-one
tcp6 0 0 2403:[redacted]:31958 :::* LISTEN 999/zerotier-one
udp 0 0 0.0.0.0:39913 0.0.0.0:* 999/zerotier-one
udp6 0 0 2403:[redacted]:9993 :::* 999/zerotier-one
udp6 0 0 2403:[redacted]:31957 :::* 999/zerotier-one
udp6 0 0 2403:[redacted]:31958 :::* 999/zerotier-one
unix 3 [ ] STREAM CONNECTED 17555 999/zerotier-one

zerotier-cli info
200 info 4101210034 1.5.0 ONLINE

zerotier-cli peers
200 peers

17935e059e 1.4.6 LEAF 105 DIRECT 8820 8820 2403:[redacted]:842:7fc4:fe6:9983/53575
17d709436c 1.4.8 LEAF 169 DIRECT 2519 2519 2001:19f0:6001:2c59:beef:db:ad64:9c9e/33526
3a46f1bf30 - PLANET 164 DIRECT 3539 3377 2a02:6ea0:c815::/9993
62f865ae71 - PLANET 109 DIRECT 3539 3433 2001:49f0:d0db:2::2/9993
778cde7190 - PLANET 226 DIRECT 3539 3313 2605:9880:400:c3:254:f2bc:a1f7:19/9993
992fcf1db7 - PLANET 273 DIRECT 3539 3263 2a02:6ea0:c024::/9993
ccbb4c0ff5 1.4.6 LEAF 388 DIRECT 8545 8536 2403:[redacted]:fa:3c52:66c9:45d7/9993
ee46156b43 1.4.6 LEAF 2 DIRECT 3539 3538 2403:[redacted]:adc8:2ba5:f955:e0cc/9993

… and just for fun I had included ping/ping6 output for zerotier dot com but discobot thought I was posting links so… just take it from me that I have working connectivity!

Thanks again,

Adrian

As another data point, I’ve found that IPv4 starts working if I add another IPv4-only Internet interface (this one is NAT’d) and put in a default IPv4 route with a lower metric.
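For the record, the extra route looked something like this; the gateway and interface name are placeholders for my NAT’d IPv4-only NIC, and the existing IPv4 default via enp0s3 has metric 100, so anything lower takes precedence.

# add a lower-metric IPv4 default via the NAT'd interface (placeholder values)
sudo ip route add default via 10.0.3.2 dev enp0s8 metric 50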

I finally got around to testing on Windows and macOS as well (packages from download.zerotier.com/PRERELEASES) and I have the same problem on my dual-stack hosts on those platforms.

Just to tie off this thread: the problem turned out to be a bug in the code that classifies IP addresses into scopes. Instead of excluding only a single /24 test network, it made all of 203.0.0.0/8 ineligible for binding. Just a one-line fix, which is in 1.6.1.
