Direct Kubernetes API access?

Is it possible to use ZeroTier to connect to a Kubernetes cluster through kubectl from a machine across the internet? Currently the cluster has been assigned a 192.168.x.x local IP and is therefore only reachable from its home network. Would I need some kind of jump host running ZeroTier, or is a more direct connection possible, e.g. through a pod or some kind of service running inside the cluster?

For kubectl access, you’ll need a router or bridge running ZeroTier to get you into the network that the cluster IP is on. This may be possible to do via a pod, if that pod is assigned an address in the same network as the cluster IP.
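
A rough sketch of that pod-as-router idea might look like the following (untested; hostNetwork puts the ZeroTier interface in the node’s own network namespace, the network ID is just an example, and you would still need to enable IP forwarding and NAT on the node and add a managed route for the 192.168.x.x subnet via this node’s ZeroTier address in ZeroTier Central):

apiVersion: v1
kind: Pod
metadata:
  name: zt-router
spec:
  hostNetwork: true                 # put the ZeroTier interface in the node's netns
  containers:
  - name: zerotier
    image: zerotier/zerotier:1.6.5
    args: ["8056c2e21c000001"]      # network ID to join (example)
    securityContext:
      privileged: true
      capabilities:
        add:
        - NET_ADMIN
    volumeMounts:
    - mountPath: /dev/net           # the container needs /dev/net/tun
      name: dev-net
  volumes:
  - name: dev-net
    hostPath:
      path: /dev/net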

Outside of kubectl, it’s also easy to run ZeroTier as a sidecar container in a pod and access the services in that pod via ZeroTier, with something like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kube-hello-world
  name: kube-hello-world
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: kube-hello-world
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: kube-hello-world
    spec:
      containers:
      - image: my/nginx-server:0.0.1
        imagePullPolicy: IfNotPresent
        name: kube-hello-world
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - image: zerotier/zerotier:1.6.5
        args: [ "8056c2e21c000001"]
        imagePullPolicy: IfNotPresent
        name: kube-zt
        resources: {}
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - SYS_ADMIN
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /dev/net   # exposes /dev/net/tun, which ZeroTier needs for its virtual interface
          name: dev-net
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /dev/net
          type: ""
        name: dev-net

Note that this is a very naive implementation: the identity is regenerated on every start of the container, so each restart produces a new node ID (and, on a private network, a new member that has to be authorized again).
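
One way around that (a sketch only, and it assumes replicas stays at 1, since each ZeroTier member needs its own identity) is to persist /var/lib/zerotier-one, the directory where ZeroTier keeps identity.secret and identity.public, on a volume that outlives the pod. The additions to the Deployment above would look roughly like this, with a pre-created PersistentVolumeClaim whose name is hypothetical:

      containers:
      - name: kube-zt
        volumeMounts:
        - mountPath: /var/lib/zerotier-one   # identity.secret / identity.public live here
          name: zt-state
      volumes:
      - name: zt-state
        persistentVolumeClaim:
          claimName: zt-state                # hypothetical, pre-created PVC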

ZeroTier can also be run as a DaemonSet if you want access to the individual nodes in the cluster as well.
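
A minimal DaemonSet sketch (untested; the hostPath volume keeps each node’s identity stable across restarts, and the network ID is again just an example):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: zerotier
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: zerotier
  template:
    metadata:
      labels:
        app: zerotier
    spec:
      hostNetwork: true
      containers:
      - name: zerotier
        image: zerotier/zerotier:1.6.5
        args: ["8056c2e21c000001"]           # network ID to join (example)
        securityContext:
          privileged: true
          capabilities:
            add:
            - NET_ADMIN
        volumeMounts:
        - mountPath: /dev/net
          name: dev-net
        - mountPath: /var/lib/zerotier-one   # persist the node's identity on the host
          name: zt-state
      volumes:
      - name: dev-net
        hostPath:
          path: /dev/net
      - name: zt-state
        hostPath:
          path: /var/lib/zerotier-one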

Wow, definitely more than one option. The sidecar approach might actually be a good fit for my use case. Will give it a shot 🙂 Thanks @zt-grant

@davosian Let me know how this works out for you. I’m interested in doing the same, but hoping to find enough people to contribute devices to the cluster.

Well, some initial testing was successful: I can attach the sidecar to a sample podinfo container, join the ZT network, and access the pod on its HTTP port 9898 just fine:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
  namespace: dev
spec:
  minReadySeconds: 3
  revisionHistoryLimit: 5
  progressDeadlineSeconds: 60
  strategy:
    rollingUpdate:
      maxUnavailable: 0
    type: RollingUpdate
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9797"
      labels:
        app: podinfo
    spec:
      containers:
        - name: podinfod
          image: ghcr.io/stefanprodan/podinfo:6.0.3
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 9898
              protocol: TCP
            - name: http-metrics
              containerPort: 9797
              protocol: TCP
            - name: grpc
              containerPort: 9999
              protocol: TCP
          command:
            - ./podinfo
            - --port=9898
            - --port-metrics=9797
            - --grpc-port=9999
            - --grpc-service-name=podinfo
            - --level=info
            - --random-delay=false
            - --random-error=false
          env:
            - name: PODINFO_UI_COLOR
              value: "#34577c"
          livenessProbe:
            exec:
              command:
                - podcli
                - check
                - http
                - localhost:9898/healthz
            initialDelaySeconds: 5
            timeoutSeconds: 5
          readinessProbe:
            exec:
              command:
                - podcli
                - check
                - http
                - localhost:9898/readyz
            initialDelaySeconds: 5
            timeoutSeconds: 5
          resources:
            limits:
              cpu: 2000m
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 64Mi
        - image: zerotier/zerotier:1.6.5
          args: ["mynetworkid"]
          imagePullPolicy: IfNotPresent
          name: kube-zt
          resources: {}
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
                - SYS_ADMIN
            privileged: true

Now I am trying to add the sidecar to a Helm chart that does not provide for sidecar containers in its values file. I am trying to modify the deployment with kustomize patching, but I have not been successful so far (which says more about my kustomize knowledge than anything else).
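
For reference, the direction I am experimenting with is a strategic merge patch: Kubernetes merges the containers list by name, so a patch that lists only a container with a new name should append the sidecar to the chart’s Deployment. Something like this, where the resource file and Deployment name are placeholders for whatever the chart renders:

# kustomization.yaml
resources:
- rendered-chart.yaml          # placeholder: output of `helm template`
patchesStrategicMerge:
- zt-sidecar-patch.yaml

# zt-sidecar-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mychart-app            # placeholder: must match the chart's Deployment name
spec:
  template:
    spec:
      containers:
      - name: kube-zt          # new name, so the merge appends this container
        image: zerotier/zerotier:1.6.5
        args: ["mynetworkid"]
        securityContext:
          privileged: true
          capabilities:
            add:
            - NET_ADMIN
            - SYS_ADMIN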

Then I would have to address the issues mentioned by @zt-grant. For one, I want to reuse the identity once it has been created, rather than have each new pod rejoin the network with a fresh one.
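
My current idea for that (untested, and again single-replica only) is to pre-generate an identity, store it in a Secret, and have an init container copy it into the state directory, since ZeroTier needs write access to /var/lib/zerotier-one and Secret mounts are read-only. Roughly, with placeholder names, and with the kube-zt container mounting zt-state at /var/lib/zerotier-one as well:

      initContainers:
      - name: seed-identity
        image: busybox:1.35
        command: ["sh", "-c", "cp /identity/* /var/lib/zerotier-one/"]
        volumeMounts:
        - mountPath: /identity
          name: zt-identity
        - mountPath: /var/lib/zerotier-one
          name: zt-state
      volumes:
      - name: zt-identity
        secret:
          secretName: zt-identity          # placeholder: holds identity.secret / identity.public
      - name: zt-state
        emptyDir: {}                       # shared with the kube-zt container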

Thanks for the info.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.