SSH with GUI programs - can I replicate local connection speed with Zerotier?

Over SSH connections started with the -X (or -Y) flag, I’m experiencing
drastically different performance for GUI programs depending on where the
remote computer is:

  • When the remote computer is in the same local network, Firefox is
    responsive, and there is no noticeable lag when I trigger a forward search
    from VimTeX to Zathura.

  • When the remote computer is at another location, it takes 10+ seconds
    for Firefox to redraw, and there is a slight lag before Zathura renders a
    new jump event.

Typically, pinging the remote machine over Zerotier got me

  • rtt min/avg/max/mdev = 10.964/12.883/17.748/1.840 ms

And, pinging the machine locally got me

  • rtt min/avg/max/mdev = 0.440/0.533/0.656/0.081 ms.
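
For intuition on why RTT matters so much: classic X11 is chatty, and many operations block on a synchronous round trip, so an interaction needing N round trips costs roughly N × RTT in pure waiting before bandwidth even enters the picture. A back-of-the-envelope sketch using the measured averages (the round-trip count n is a made-up illustration, not a measurement):

```shell
# Pure latency cost of n synchronous X11 round trips at the two measured RTTs.
# n = 100 is a hypothetical round-trip count for one UI interaction.
awk 'BEGIN {
    rtt_lan = 0.533   # ms, local ping average from above
    rtt_zt  = 12.883  # ms, Zerotier ping average from above
    n = 100
    printf "LAN: %.0f ms  Zerotier: %.0f ms\n", n * rtt_lan, n * rtt_zt
}'
# prints: LAN: 53 ms  Zerotier: 1288 ms
```

So the same interaction that feels instantaneous on the LAN can cost over a second of pure waiting over Zerotier, which is consistent with the sluggish redraws described above.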

Is the slowness for remote GUI programs fully explained by the roughly 25-fold
difference in round-trip time? Or am I missing some Zerotier/SSH tricks that
could speed things up? In my setup, the remote machine is behind an enterprise
firewall. Because of the noticeable lag, I usually first remote into a Windows
10 machine on the same network using a Windows Remote Desktop connection, and
then start a local SSH connection to the Linux server running Linux Mint 20.2
(Uma). Both the RDP and SSH connections use Zerotier addresses.

I’ve tried reverse SSH tunneling before, but I recall experiencing the same
slow remote-GUI issue over the forwarded port. Performance-wise, given that
Zerotier also runs on the jump box (which is set up as a “moon”), can I expect
remote GUI programs to perform the same whether accessed through the reverse
tunnel or the Zerotier IP?

As suggested in this post, I tried ssh -YC instead of ssh -Y; the -C flag enables compression. This indeed improved the rendering speed for GUI programs.
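
If you’d rather not type the flag each time, compression can also be enabled per host in ~/.ssh/config. A minimal sketch, where the Host alias is a placeholder:

    # Only for the distant site: trusted X11 forwarding plus compression
    Host remote-site
        ForwardX11Trusted yes   # same as ssh -Y
        Compression yes         # same as ssh -C

Compression trades CPU for bandwidth, so it tends to help on slow links and can hurt on fast LANs.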

I had the same problems and set up ssh ControlMaster, which allows SSH sessions to multiplex over one connection. I tested the latest ssh build with 20 iperf connections and got nearly the same throughput as without Zerotier: AWS to residential broadband got me 80 Mbps down and 14 Mbps up, which is about 100% of my available broadband.

In your SSH config:

    Host *
        ControlMaster auto
        ControlPath ~/.ssh/cm_socket/%r@%h:%p

Then mkdir ~/.ssh/cm_socket.

Any time a connection to a remote server exists, it’ll be used as the master for any other connections.

The ssh_config(5) manpage should give you the necessary hints to set up more restrictive configurations. If you need to disable ControlMaster for a given connection, you can pass -S none to ssh (or set ControlPath none).

To set up the master connection, run ssh -M -S ~/.ssh/remote-host user@remotehost. Then run ssh -S ~/.ssh/remote-host user@remotehost for subsequent connections to the same host.

Hope it helps

Thank you for your tips. Multiplexing is totally new to me, and it works right out of the box with the following lines in ~/.ssh/config:

host *
    ControlMaster auto
    ControlPersist yes
    ControlPath ~/.ssh/cm_socket/%r@%h:%p

I’m on OpenSSH_8.4p1 in a WSL instance running Pengwin, and it does not seem to accept either the -M or -S flag. Still, with those lines in the config file, all subsequent SSH connections from the same local machine go through a shared connection.

I checked it both ways:

  1. Upon closing the connection, the message changes from “Connection to xxx closed” to “Shared connection to XXX closed”.
  2. Or, as mentioned in this page, ssh -O check HostNameOrAddress will return Master running (pid=????) if all goes well.
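
For completeness, the control socket can also be managed explicitly with ssh’s -O subcommands (remotehost is a placeholder):

    ssh -O check remotehost   # is a master running for this host?
    ssh -O stop remotehost    # stop accepting new multiplexed sessions
    ssh -O exit remotehost    # tear down the master and all shared sessions

These talk to the master through the ControlPath socket, so they work even when the master connection was started in the background.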

Now, to combine with what I learned from running with or without compression, I am using the following settings in ~/.ssh/config:

host *
    ServerAliveInterval 120
    TCPKeepAlive yes
    Compression no
    ForwardX11 yes
    Ciphers aes256-ctr

    ControlMaster auto
    ControlPersist yes
    ControlPath ~/.ssh/cm_socket/%r@%h:%p

I chose Ciphers aes256-ctr because it seems to be fast, and it is among the faster cipher types available on Pengwin (for WSL).
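
If you want to compare cipher throughput yourself, a quick-and-dirty benchmark is to pipe zeroes through ssh and read the transfer rate that dd reports (remotehost is a placeholder):

    # dd prints throughput (e.g. "1.2 GB/s") on stderr when the transfer ends
    for cipher in aes128-ctr aes256-ctr chacha20-poly1305@openssh.com; do
        echo "== $cipher =="
        dd if=/dev/zero bs=1M count=256 | ssh -c "$cipher" remotehost 'cat > /dev/null'
    done

On a LAN the cipher is often the bottleneck; over Zerotier the link itself usually dominates, so differences between ciphers shrink.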

If you set up an AWS/Azure/cloud instance, can you run iperf between the cloud instance and your remote node, and see what you get on download and upload? I’m not too familiar with SSH capabilities on WSL.

Sounds like you got a better setup.

What’s the goal of testing against popular cloud instances with iperf? Is it to check how good the upload/download speeds are for my home (or the remote site) without Zerotier?

Thanks again for your notes on multiplexing the SSH connection. My new SSH connections are now established a lot faster than before. Six seems to be a good number of concurrent SSH sessions to keep open with the remote machine (over Zerotier).

Then, for remote GUI performance, I can finally use VimTeX + Zathura with nvim-qt hosted on the remote machine fluidly. I can barely tell the performance difference now. (Caveats: Firefox remains barely usable, and the initial launch of nvim-qt is 3-6 times slower than launching it from a Linux machine on the local LAN. I may have my X server to blame.)

Lastly, I found tmux helpful for launching those 6 SSH sessions to the remote machine. I used Tmuxinator with the following config:

# ./.tmuxinator.yml

name: Multiplex
root: ~
# Controls whether the tmux session should be attached to automatically. Defaults to true.
attach: false

# Specifies (by index) which pane of the specified window will be selected on project startup. If not set, the first pane is used.
# startup_pane: 1

windows:
  - P:
      panes:
        - ssh Machine1
        - ssh Machine1
        - ssh Machine1
        - ssh Machine1
        - ssh Machine1
        - ssh Machine1
  - G:
      panes:
        - ssh Machine2
        - ssh Machine2
        - ssh Machine2
        - ssh Machine2
        - ssh Machine2
        - ssh Machine2

I installed my own controller and moons in the cloud, blocking the Zerotier planets (the IPv4 and IPv6 ZT root servers) so that nodes only communicate with my moons, and I saw a performance uplift. If you check your downloads and uploads against a cloud instance and see whether they are close to your available bandwidth, that will narrow down your latency issues.

That’s great your performance has improved.

The X server isn’t the fastest; I tweaked turbojpeg and use NVIDIA acceleration to get around that issue. You could try kasmweb, which will be faster than your X server; I got native 4K streams working on a full desktop.

I use screen, and I do love tmux. Awesome tools.

Thanks for sharing your findings.

RE: Kasm
Just to confirm: with Kasm hosted properly on a home server, I always have to access it through a browser, right? Per this doc, end users always access the server via a browser.
Another limitation I ran into is that users on Kasm seem to have no access to files in, say, /home/my_local_username/. I assume Kasm is intended to provide containerized apps and desktops to end users, so these end users won’t have access to the files on the server?

I think what I’m really after is a way to speed up GUI apps that are running as “Windowed apps” when accessed remotely. Locally, I’m perfectly happy with X-forwarding of GUI programs. I think my next step is to find a way to hook up to those windowed apps over other protocols.

With Kasm you can map drives into the session. It’s a light client through a browser. X performance has been covered, and going back to my original pointer, iperf results will be crucial. Cloud hosting will be your friend to reduce latency.

hope it helps

@james.tervit Thanks for your suggestions. I think these are the steps I’ll take:

  1. With Kasm, map the local drive/folder for a user, and check out the performance in the browser;
  2. For VNC or (Free)NX, look into whether there are ways to host windowed apps under each protocol.

Kasm will be the max you can achieve; if you have NVIDIA GPUs it will be near perfect for a small use case. I have tested VNC and it is limited without a rewrite, so I gave up. FreeNX, like all these apps, tries to imitate RDP and thin-client protocols, and does it fairly well, but I find them unpredictable. I turned to writing my own app for acceleration, and I am achieving the following, also tested with no impact through Zerotier. This does take a very expensive infrastructure and a lot of coding to achieve:

Full HD (2K, 1920 × 1080) ~ 5.3 GByte/s (1.0 ms)
4K (3840 × 2160) ~ 11.2 GByte/s (2.08 ms)

Good to know that Kasm is about the max achievable, and that it takes extensive effort to get 1080p/4K remote access working.

I also tried VNC (but not FreeNX), and found the performance deteriorating over Zerotier. (Kasm also suffered from slow mouse movement over my half-baked Zerotier setup.)

Reflecting on the fact that I only work in two physical locations (home+office), I’ll call it a day and go with the following:

  1. Pair a Windows machine with a Linux server, both at home and at work (4 desktop machines in total);
  2. For “RemoteX”, remote into the Windows machine with RDP, and use ssh -X to connect to the Linux server locally.

From what I observe over the same half-baked Zerotier setup (with only one VPS as the moon), RDP + ssh -X strictly dominates Kasm/VNC in performance.

That’s about the best you can expect without industrial acceleration. Out of curiosity, what’s the latency between your two locations?

Thanks again for confirming. Your advice should save me hours of pointless effort in the future :slight_smile:

Out of curiosity, what’s the latency between your two locations?

Sure. My office is behind multiple layers of infrastructure that I have no knowledge of. The other day, all my machines in the office suddenly went offline; my admins told me that the LAN port on the wall was supposed to have been switched to another VLAN one or two years ago. They reauthorized the port and everything went back online.

Here is what I see when I ping machines in the office from home:

  • For the Windows machine with a very fast RDP connection, the ping ends with rtt min/avg/max/mdev = 16.184/32.740/61.107/11.402 ms.
  • Yet for the Linux machine with barely usable VNC performance, the ping ends with rtt min/avg/max/mdev = 11.125/12.112/14.124/1.016 ms.

At least for remote-accessing machines at work, RDP with 20-40ms latency seems to do a better job than VNC with 10-20ms latency.

You’re welcome; that is what the community is all about, helping each other. Happy to help you save time :-) I wasted plenty of it hahaha.

I would check your connectivity locally at home to see what you get, to discount firewalls and middleware. Then you can see if the performance locally is acceptable. When you’re remote, do you get a peer-to-peer connection with ZT, or is it being relayed?
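
The peer-to-peer question can be answered from either end with the ZeroTier CLI; the peer address and columns below are illustrative, so treat the exact layout as approximate:

    sudo zerotier-cli peers
    # <ztaddr>   <ver>  <role> <lat> <link>   ...
    # abcdef1234 1.10.1 LEAF   13    DIRECT   ...

A RELAY link means traffic is bouncing through a root or moon, which adds latency; DIRECT means the two nodes managed to punch through the NAT/firewalls and talk peer-to-peer.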

When I go remote I use cloud connectivity, as home broadband can be unpredictable.

Oh, I forgot to mention: I happen to live across the street from where I work. Is 10-40 ms latency expected, or should it be a lot lower? I’d like to know where to end the pursuit before I start :slight_smile:

At home, my ISP is Comcast, and the workplace is managed by the University IT.

10-40 ms is quite a lot for a single hop. I would isolate everything at home first with a gigabit switch.

You said Comcast, so I’m assuming a cable modem connection. 10 ms is about the best you can expect on a cable modem just due to the inherent nature of DOCSIS; any connection over it will have a minimum of about 10 ms latency.

Yes, cable modem. Thanks for setting the expectation!
