SSH has no Host header
54 points by carlana
Or use different ports for each VM
Or use a jump host with internal hostnames and IPs :)
ssh -J ssh.exe.dev undefined-behavior
Single IP, cheap and easy.
Author here. I prototyped a jumpbox service. I may still release it. But it's only easy if you're used to -J, if not, it's yet another way a service drifts from the goal, which is to be as close to having a computer as possible. The entire mission of this product is "I just want a computer."
You can use a match statement in the ssh config with a wildcard, so like
Match host *.ssh.exe.dev
ProxyJump ssh.exe.dev
then ssh foo.ssh.exe.dev. It would require the 2 lines to be added to the ssh config, but if that's not too much of a lift for people it could work.
I think we're too far apart here on what "easy" means. I realize SSH options can be expressed in the config file. But as soon as instructions tell the user to do that before they can use their computer, we are better off shipping a gcloud or some aws ssm equivalent, because that can be expressed as a one-line curl ... | sh. The goal is to make something that looks like a computer and take the complexity in our implementation, rather than the daily lives of users.
I'm a happy exe.dev customer, but I probably never would have given it a chance if I had to install some vendor-specific software with curl ... | sh - the risk of it crapping over my system is too high for a service I might end up not using. On the other hand, I would have had no problem adding 2 lines to my SSH config.
This is only easy for SSH clients that support jump hosts. SSH client libraries sometimes don't support this, or if they do, they sometimes need it to be configured explicitly (since they might not be able to read or even access ~/.ssh/config)
Jump hosts also incur double encryption and double headers. This might be okay for some use cases, but I can see why a platform might want to avoid it.
This is a good solution, and it also helps limit a lot of the broad-scale dumb SSH attacks out there
Or have a per user config on a jump host that sets up a tunnel for you. You never see the jump host, but you get double tunneled ssh
I truly don't understand who exe.dev is aimed at. Perhaps people who want to pretend to be OPs but don't really know what they're doing?
The cheapest option is $20/month, for 2 CPU, 8GB RAM and 25GB disk.
For $22 Hetzner will give you a machine with 4x the CPU, 2x the RAM, 12x the disk and 200x the data transfer. What's more, they don't force all traffic through their proxies. If you want more than one "machine", LXC exists, LXD exists, or if you're a masochist, Docker exists.
But besides all that: honestly, the most egregious reason not to use exe.dev is that, like so many others, they use the term "bandwidth" when they mean "traffic" or "data transfer allowance".
If I woke up with a billion dollars tomorrow task #2 would be "file false-advertising lawsuits over companies that misuse the term bandwidth".
There are lots of examples of people paying a premium over the minimum of some technical specification in order to get something that's harder to specify. E.g. ease of use
Hi, exe.dev person here. Sorry about my misuse of bandwidth. You don't need a billion dollars, an email will get it fixed. :)
To explain our prices: they are high right now because we are paying high margins on the underlying technology to keep the disks regularly replicated. These are not simple bare metal machines subject to single point of hardware failure. Unfortunately, when you start a business unless you use a lot of capital upfront, you have to eat that margin. Luckily we have enough interest in the product that I am busy negotiating hardware that will let us dramatically drop the data transfer costs. We believe we can improve CPU over time at the current cost too. (RAM is trickier because of the state of the market right now.)
But more generally to your point, if you're fine spinning up a machine, not having your disks replicated off-machine, and comfortable running your own VMs, taking care of TLS yourself, and running your own auth proxy for sharing, you should do it. The target of this product is people like me who don't want to manage Docker/Proxmox/etc any more. I'm tired of moving my blog every few years, and want a way to spin up toy programs without discovering how my machine broke while I was not looking at it. That's not everyone and I think that's OK!
> Sorry about my misuse of bandwidth. You don't need a billion dollars, an email will get it fixed. :)
Haha good response.
I get that there are different markets for different people. It just seems so bizarre that you can find a market for people who simultaneously:
More power to you if you can make a profitable business out of it.
I hate that SSH has no host header. It makes it annoyingly hard to self-host git services (like forgejo) which allow cloning and pushing via SSH.
The nice, modern way to self-host Forgejo would be to use a set of Docker containers orchestrated by docker compose. But that system naturally can't grab external port 22 for its SSH server, since port 22 is used by the host system.
With HTTP, this is an easy problem to solve: just run the container's HTTP server on some other port (ideally not accessible from the outside world), and use a reverse proxy like nginx which listens on ports 80 and 443 and proxies the request to the right internal port based on host. But since SSH doesn't have a host header, we can't do that.
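For reference, the HTTP side of that setup is a few lines of nginx; this is only a sketch with made-up hostnames and ports (3000 is a common Forgejo HTTP port):

```nginx
# Sketch: nginx picks the backend by the Host header of the request.
server {
    listen 80;
    server_name git.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;  # the container's internal HTTP port
        proxy_set_header Host $host;       # preserve the original Host header
    }
}
```

SSH has no equivalent of `server_name`, which is exactly the problem being discussed.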
That leaves you with two options:
I have tried to research how to achieve #1 and it seems really ugly with openssh (though I might have missed something). So I landed on #2, and I hate it.
As folks have said:
Note that VPS services manage to run a profit on US$5/month hosts with an IPv4 address, so that's my border for cheap. This one appears to be $20/month, with some very odd options that suggest that they shouldn't be used for 'production' at this point.
https://exe.dev/docs/pricing - $20/month, for up to 25 VMs. So 25 IPv4 addresses?
You get no IP at all
On the networking side, we don't give your VM its own public IP. Instead, we terminate HTTPS/TLS requests, and proxy them securely to your VM's web servers. For SSH, we handle ssh vmname.exe.xyz
Under the control of one entity. It's not at all clear to me why anyone would want to partition that way, but if you do, the set of {using one of them as a jumphost | using nonstandard ports | using IPv6 } should cover everything without inventing a new layer of things to go wrong.
This is rather shortsighted. ssh shouldn't ever require a third party to be trustworthy unless that third party were already trustworthy, in which case we'd simply use ssh -J
Anyone who says that ssh -J is so complicated that we need to trust some third party is a charlatan and should be ostracized. That'd be utterly ridiculous.
Whoever uses this service is already trusting the provider.
Sure, but there's an important distinction.
ssh -J means you have a key (or keys) used to connect to the jumphost and to the end system. The jumphost and end systems can be totally separate entities and don't even need to be related to each other at all. For instance, the jumphost could have both IPv4 and IPv6, and the end system could be IPv6 only, giving you access even if you don't have IPv6.
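That split-trust setup can be written down in ~/.ssh/config; the hostnames and key paths below are made up for illustration:

```
Host v6only
    HostName target.v6.example.net   # IPv6-only end system
    ProxyJump jump.example.com       # dual-stack jump host, possibly run by someone else entirely
    IdentityFile ~/.ssh/id_target    # key for the end system; the jump host can use a different one
```

The connection to the target is encrypted end to end; the jump host only ever relays ciphertext.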
What's proposed here means that the end system needs to trust the jumphost / proxy, because the unencrypted traffic intended for the end system can be seen by the jumphost / proxy. This, for instance, is what Cloudflare does when people use Cloudflare to MITM their own encrypted traffic. This is an absolutely terrible, horrible, no good idea.
"I want to build a basket that can hold everyone's eggs" isn't a good thing at all, unless you're willing to overlook the super obvious flaws with such an idea. Any system that gives unfettered access to your servers' ssh traffic is nothing but a security nightmare.
> Any system that gives unfettered access to your servers' ssh traffic is nothing but a security nightmare.
Alas, for a provider like exe.dev there are likely many systems that if compromised would have a path to accessing customer SSH traffic:
There's lots of ways providers can screw these up. (I'd hazard a guess that their SSH decrypting proxies are likely the easiest to get right of those!)
That's not an excuse to do no security, but recognising that the design space is broader than "you must terminate SSH on the VM".
Can't this also be solved with sslh? [0] Probably too much overhead for general usage?
But that would have been my spontaneous solution/trial.
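For reference, a minimal sslh invocation looks something like this (addresses are examples; flag names per recent sslh versions):

```shell
# sslh listens on one port and forwards by sniffing the first client bytes:
# SSH clients go to the local sshd, TLS goes to the local web server.
sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --tls 127.0.0.1:8443
```

Note that sslh only demultiplexes by protocol, not by hostname, so it solves port-sharing between SSH and HTTPS but still can't route one SSH port to multiple SSH backends.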
Clever idea.
exe.dev is a surprisingly fun service.
It offers the flexibility of an Ubuntu machine (with HTTPS site), with the ease of a web based UI to direct LLM code agents, that can then build and deploy on those Ubuntu machines.
I don't think I'll continue my subscription, since I prefer to DIY that sort of infra, but I expect this (or something similar) to be useful for many.
When SSH connects, it presents a public key, and comes in via a particular IP address. The public key tells us the user, and the {user, IP} tuple uniquely identifies the VM they are connecting to.
Huh. How do they go about actually proxying the connection to the VM? AFAIK, the start of an SSH connection goes something like this:

1. The client and server exchange version strings
2. They perform a key exchange, which derives a session identifier bound to that connection
3. The client requests the userauth service to try and authenticate, and then sends their keys

So let's say we got to this point, and the proxy now knows where the client wants to connect to. What happens then? The connection the proxy makes to the VM will have a different session identifier, so if the client tried to authenticate, that would fail.
Do they drop the connection, and mark the next connection from the given IP to be passed on directly to the VM? That seems very fragile. Does the proxy use a custom protocol to connect to the VMs, as opposed to SSH? I assume they would've mentioned that.
Assuming they have the ability to add an SSH key to a machine, couldn't the proxy just establish a separate SSH connection to the machine and forward everything? The only issue is that this gets more challenging with things like agent forwarding and X11 forwarding.
In that case they probably should mention they're MiTMing all my connections?
The "we took the time to build this for exe.dev" remark originally made me think that they wrote their own reverse proxy, but looking at their Github org, they have an (unchanged) fork of sshpiper, so they're probably using that.
Looking at the README it does seem like they do what you said. The reverse proxy has its own private key, so you probably aren't able to mess with authorized_keys on the VMs. That kinda sucks, the options you can set in there are pretty useful sometimes.
As an aside, many of the commits in the sshpiper repo are coauthored by Claude. I'm not really sure if I'd want my VPS provider to MiTM my connections with a vibecoded proxy, but I'm also not in the target audience for exe.dev ¯\_(ツ)_/¯
They're hosting your VM. They don't need to MiTM it. They can simply read its memory or disk directly.
If I independently discovered that my server is doing this, I'd assume I'm being hacked. If Amazon started doing this, news outlets would report on it.
I'm not necessarily concerned with the exe.dev themselves, as much as the box that's running the proxy. It has root access to all customer VMs, but customers also connect to it directly. I have much more trust in the security of OpenSSH than of sshpiper.
That's no longer necessarily true: with e.g. AMD Secure Encrypted Virtualization or Intel TDX, things can be set up so it's very hard for the hosting provider to peek inside the VMs. Obviously not what's on offer here, and this sort of hardware is notoriously hard to get correct (and you still have to trust the chip vendor), but it's neat stuff.
This is a fascinating solution, though it would only work for this specific type of product.
I recently wanted to host a soft-serve instance over SSH on the same box as a normal SSH instance without opening additional ports. I ended up using the public IP as a jump box to a localhost IP, and adding that to my git config.
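That setup can be captured in ~/.ssh/config; the hostnames here are illustrative, and 23231 is soft-serve's default SSH port:

```
Host git.example.com
    HostName 127.0.0.2          # loopback address soft-serve binds on the box
    Port 23231                  # soft-serve's default SSH port
    ProxyJump example.com       # the box's public sshd acts as the jump host
```

Then a remote like git@git.example.com:myrepo routes through the public sshd to soft-serve without exposing a second port.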