Will ZeroTier support over 4000 nodes?

I would like to replicate the Firecracker 4000-VM demo on a single server (GitHub - firecracker-microvm/firecracker-demo: A demo running 4000 Firecracker microVMs) with a ZeroTier client running inside each VM. I looked at other network-based solutions — including WireGuard-based setups, the Docker bridge, and CNI bridge/ptp and tc interfaces — and they all have shortcomings; a Linux bridge, for example, will not support more than 1024 ports. Will ZeroTier handle it, and what is the performance impact? I am planning to take this idea from the demo into production.


Hello Alex,

This is interesting and I look forward to seeing this happen. Keep us in the loop here! I’m not aware of any limitations that would be an issue as long as you choose a large enough subnet. I’m not entirely certain how bridging plays a role here, but if you do have one node designated as a bridge, be sure to give it adequate compute and network resources and you should be good.

If you do run into scalability issues we’re very interested in knowing what they are and can work with you to mitigate them.

Good luck!


Thank you, @zt-joseph. If you have any hints on how to automate ZeroTier deployment, or on using it with Nomad, let me know. So far the biggest obstacle to using ZeroTier consistently inside VMs and servers is the interface name (currently ztnpgwep3r on the nearest RPi 4). Can ZeroTier create a consistent interface name, like ztoverlay, on join?

Sure,

Check out this SO question and answer on how to use the devicemap file. (I know, we should put this in our docs.)
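For reference, a minimal sketch of that approach, assuming the usual Linux install path /var/lib/zerotier-one and a `<network ID>=<interface name>` line format for the devicemap file (verify both against the linked answer before relying on this):

```shell
# Assumed path and line format; the network ID below is hypothetical.
# Map network 8056c2e21c000001 to a fixed interface name "ztoverlay".
echo "8056c2e21c000001=ztoverlay" | sudo tee -a /var/lib/zerotier-one/devicemap

# The mapping takes effect when the service (re)creates the interface.
sudo systemctl restart zerotier-one
```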

As for automation, I’ll refer you to @sean.omeara since he wrote our Terraform provider and may have some good advice. I don’t have any knowledge specific to Nomad unfortunately.

The interface name on Linux is deterministic per network: for a given network, the interface name will always be the same on every Linux node.

Good to know. Just checking that I got it right: if I spin up 1000 VMs, each VM will have the same fixed interface, ztnpgwep3r, which I can query from a script?

As long as they’re all joined to the same network, yes.

Thank you. Maybe there is an easy way to read the node’s IP from the ZeroTier CLI? That was the whole point of figuring out the interface name.

Each instance has a local REST API that can probably tell you anything you want to know. You can also get JSON out of the CLI by adding -j to the command-line arguments.
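As a sketch of the CLI route: the snippet below parses `zerotier-cli -j listnetworks` output to pull a node’s assigned addresses. The field names (`nwid`, `assignedAddresses`) are my assumption about the JSON shape — check them against a live node.

```python
import json
import subprocess


def assigned_addresses(raw_json, nwid):
    """Pull the assigned addresses for one network out of
    `zerotier-cli -j listnetworks` output.

    Field names ("nwid", "assignedAddresses") are assumptions
    about the JSON shape; verify on a live node.
    """
    return [
        addr
        for net in json.loads(raw_json)
        if net.get("nwid") == nwid
        for addr in net.get("assignedAddresses", [])
    ]


def live_addresses(nwid):
    """Query a running node (requires zerotier-one and suitable permissions)."""
    raw = subprocess.run(
        ["zerotier-cli", "-j", "listnetworks"],
        capture_output=True, text=True, check=True,
    ).stdout
    return assigned_addresses(raw, nwid)
```

In each VM you would call `live_addresses("<your network ID>")` from your provisioning script instead of scraping the interface name.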

Hi Alex!
For your use case, I would recommend the following:

  • Calculate and assign IP addresses through the API (no_auto_assign_ips, ip_assignments) instead of relying on controller-provisioned addresses. You’ll have a much easier time rendering information into configuration files and such.

  • With IPv6, the addressing scheme saves you a lot of packets on the wire for free (look into 6PLANE as well)…

  • If you must use IPv4, use flow rules to block broadcast traffic if Windows is involved.

Pro tip:

  • Remember that ZeroTier simulates an Ethernet switch, so there’s actually nothing stopping you from manually setting arbitrary addresses on the interface if you need to.
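On the 6PLANE point above: as I understand the scheme (a 0xfc prefix, then the upper 32 bits of the network ID XORed with the lower 32 bits, then the 40-bit node ID, then ::1 — verify against the ZeroTier manual), each node’s IPv6 address is computable offline, which is handy when pre-rendering configs for 4000 VMs:

```python
import ipaddress


def six_plane_address(nwid, node_id):
    """Compute a node's 6PLANE IPv6 address.

    Assumed layout (check the ZeroTier manual): 0xfc, then the
    upper 32 bits of the 64-bit network ID XOR the lower 32 bits,
    then the 40-bit node ID, then ::1 in the host portion.
    """
    folded = (nwid >> 32) ^ (nwid & 0xFFFFFFFF)   # 32-bit fold of the network ID
    addr = (0xFC << 120) | (folded << 88) | (node_id << 48) | 1
    return ipaddress.IPv6Address(addr)


# Hypothetical network ID and node ID:
print(six_plane_address(0x8056C2E21C000001, 0xABCDEF1234))
# → fc9c:56c2:e3ab:cdef:1234::1
```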

Here are some automation references:

Automatically authorizing a member with curl:
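In case the link doesn’t render here, a hedged sketch of what that call looks like against ZeroTier Central — the endpoint path and JSON field names are my assumptions from the Central API, and the token, network ID, and member ID are placeholders:

```shell
# Placeholders: $ZT_TOKEN (a Central API token), network 8056c2e21c000001,
# member abcdef1234. Endpoint shape assumed from the Central API.
curl -s -X POST \
  -H "Authorization: token $ZT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"config": {"authorized": true}}' \
  "https://api.zerotier.com/api/v1/network/8056c2e21c000001/member/abcdef1234"
```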

If you’re using Terraform, here are some examples for creating identities and bootstrapping nodes

-s


Amazing reply, @sean.omeara. Thank you for the advice; I will look into it and the links. I may have to come back in a week.

One more thing I forgot to mention, re: Nomad

This exists: GitHub - hashicorp/consul-terraform-sync: Consul Terraform Sync is a service-oriented tool for managing network infrastructure near real-time.

This will let you use our Terraform provider to create memberships (network authorizations) in response to changes in the Consul catalog.

I’ve been meaning to find the time to experiment with it… I’d LOVE to talk to you if you get this working =)

-s

