This Website is Served from Nine Neovim Buffers on My Old ThinkPad
126 points by gnyeki
I did the exact same thing for firenvim, writing the bare minimum a websocket server needs in order for a webextension to connect to it.
It’s funny, 15 years ago I’d probably have dismissed the ability to implement an HTTP server directly inside a text editor as useless bloat, and nowadays I can’t do without it.
That is gloriously cursed. Thank you for putting that together and sharing it!
Pretty cool, reminds me of elnode :)
Fun fact: not only was elnode created quite a while ago, it was also used to implement the main elisp package repo at the time: https://github.com/nicferrier/elmarmalade
This is why god has abandoned us :O :D
The aiohttp app preloaded content into memory, but AFAIUI the Nginx benchmark didn’t? Couldn’t this be another possible reason for the performance difference?
You could probably avoid mucking around with Nginx configs pretty easily by just putting the files on a tmpfs. You’d still pay Linux kernel VFS overhead but… that seems fine for the vibe of this benchmark? :D
Why does the whole article compare against aiohttp? Is nginx not largely C?
I think because their implementation uses non-blocking IO à la aiohttp
Nginx also uses non-blocking IO, that was its original claim to fame compared to Apache: async IO in a single thread in a single process.
The way I approached it is, why isn’t aiohttp also faster than Nginx? Although in some sense, the answer is obvious because CPython interprets bytecode and LuaJIT compiles the hot path to machine code. I still think that the details of how LuaJIT avoids doing work are interesting. But you may be right, comparing to Nginx may be more relevant.
In the benchmark, both Nginx and nvim-web-server were serving a static website and both had logging disabled. This means that neither[1] did any disk IO and both were serving the content from RAM instead. Assuming that Nginx reads or mmaps the content from the disk, the kernel still pulls it from the page cache rather than the disk.
If this is correct, then the difference is that Nginx can incur a context-switching penalty because it gets the content by making system calls. nvim-web-server avoids this because the content is loaded into a Lua table, so retrieving it happens entirely in userspace.
This is just a wild guess though.
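For illustration, a minimal Lua sketch of that idea (my guess at the shape of it, not the actual nvim-web-server code):

  -- Hypothetical sketch, not the actual nvim-web-server code: preload each
  -- file into a Lua table so request handling never touches the disk.
  local content = {}

  local function preload(path, file)
    local f = assert(io.open(file, "rb"))
    content[path] = f:read("*a")  -- whole body kept in RAM
    f:close()
  end

  preload("/", "index.html")
  preload("/style.css", "style.css")

  -- At request time the response body is a plain table lookup,
  -- entirely in userspace:
  local body = content["/"]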
Emacs has a few httpd implementations too of course. I sometimes use simple-httpd.el to serve content locally. For the buffer-based serving described in the post you could do something like:
  (defservlet mypage text/html ()
    (insert-buffer-substring "mypage.html"))
edit: I was able to get about 3500 reqs/sec out of it.
I thought I was badass for running my site on a raspberry pi.
Little did I know that Gábor Nyéki would come along and blow me out of the water.
I have nothing but respect.
Thanks everyone for the kind feedback! I’m happy that you’ve found this interesting.
I hadn’t come across firenvim, elnode, and simple-httpd.el before. Firenvim is very cool. And I shouldn’t be surprised that Emacs already had a web framework 12 years ago. :)
This is great. It would be handy if it could (optionally) set up (and then take down) port forwarding automatically with UPnP.
dang, that’s cool and cursed.
Side question: my main concern about using old laptops as servers is power consumption. Is that not a concern for you, or is it less than I estimate it to be?
It looks like its light-load (a.k.a. official battery-life measurement mode) power consumption should be around 5 W? Monetarily, that should come to under $2 a month in most places; waste-wise, the energy invested in manufacturing is usually large enough that squeezing maximum reuse out of existing low-power hardware still beats replacing it with more efficient models.
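As a rough sanity check of that figure (the electricity price here is my own assumption, not something quoted above):

  5 W × 24 h × 30 days ≈ 3.6 kWh/month, which is about $1 at $0.30/kWh.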
Unless you’re using specifically low-power server hardware or sharing hardware on a VPS, a laptop will probably compare favorably in terms of power consumption.
Very cool! How does the DNS work? Hosting public websites off a home computer is something I’ve wanted to do periodically, but that always feels like the complex/annoying/money-involved sticking point.
What I do is just TCP relay from a VPS. Then my VPS and local machines are responsible for keeping a network between them. With things like Tailscale or Wireguard, it’s pretty easy these days.
Appreciate it, I’ll have to check that out!
My recommendation: use something like HAProxy, Nginx, or Caddy to do the HTTP front-end part on the VPS.
Use Caddy if you find Nginx hard to understand, or if you don’t want to deal with TLS yourself and want Caddy to handle it for you (its killer feature). Otherwise use Nginx. HAProxy is for when you have a lot of non-HTTP stuff to handle.
Tailscale, Nebula, etc. for the link between your local machines and the VPS.
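For the Caddy route, a minimal sketch of the VPS-side Caddyfile might look like this (example.com and the Tailscale-style 100.64.0.2 address are placeholders, not anything from this thread); Caddy obtains the TLS certificate itself and proxies requests over the tunnel to the home machine:

  example.com {
      reverse_proxy 100.64.0.2:8080
  }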
I’ve been hosting webpages from my home for many years, with different ISPs, and it’s not very difficult, but it’s true that every ISP router is different. However, newer ones support DynDNS out of the box, which is useful if your ISP assigns you dynamic IPs. So you register a domain with a DynDNS provider like No-IP, and the home router updates the IP there automatically. That domain doesn’t need to be the public-facing one, since you can point a public-facing domain at it with a CNAME record.
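As a sketch, the records end up looking something like this (all the names and the IP are placeholders, not ones from this thread; the first record lives at the DynDNS provider, the second in your own zone):

  home.example-ddns.net.  A      203.0.113.7  ; kept current by the router
  www.example.com.        CNAME  home.example-ddns.net.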
Also, you might want to set up one machine as the DMZ host; this redirects all incoming traffic to that machine, which can be a Linux server running your firewall, proxies, etc.
Lastly, some ISPs, like my last one, use CGNAT. You can’t do this behind CGNAT, since you’re stuck behind NAT on a router you don’t control. With that ISP, opting out of CGNAT cost an extra €1 per month; they call it “advanced connectivity”. Older ISPs don’t use CGNAT on their fiber networks.
I remember a time when it was possible to host your main mail server on a residential IP with a DynDNS host, as a friend from uni did in the early ’00s. That was before Gmail and before other big email providers went nuts about refusing to deliver mail from random people’s tiny mail servers.
I deployed an anti-spam system in 2003, soon after the rise of botnet spam. It was a time when Windows PCs were compromised as soon as they connected to the network and used to send spam. I was relatively late to anti-spam work, and by then it was already routine to reject mail from dial-up and residential IP addresses in general. More than 90% of attempts to deliver mail were spam from compromised Windows home computers.
Sounds about right. I don’t remember the exact years (I can only say it was later than 2003, since we hadn’t met before then), but I think at some point she switched away from that due to deliverability issues, probably around 2005/06.
With some clever CDN sprinkled on top and a bit of SSI (!) this setup could handle some real business load. Maybe just keep the modern React stack at bay, otherwise Vercel is gonna knock on your door a bit too soon 😅