No one can force me to have a secure website
110 points by theelx
Additional context: https://www.youtube.com/watch?v=M1si1y5lvkk
No abstract.
It's good to deprecate HTTP in web browsers and to discourage people from using it, actually.
You shouldn't use encryption only when it's necessary, because then the connections that do need it stand out. If not using HTTPS on blogs were the norm, censorship circumvention tools such as WebTunnel would be much less viable.
If your browser didn't complain about the lack of HTTPS, there would be much less collateral damage if a censor decided to block TLS outright. I don't think a complete TLS block has any precedent, but China does seem to censor TLS 1.3 and ESNI, because those make it harder for them to selectively block sites. The more sites use these technologies, the less viable this is.
It doesn't matter that your particular website doesn't need TLS. This is an ecosystem issue; you could maybe compare this to herd immunity. TLS does have its fair share of flaws, but it's better than nothing.
Also, while I hate the entire WebPKI ecosystem, it's still better than TOFU here. It's great that Tom trusts his ISP and doesn't think that his country would MiTM his connections, but that isn't the case everywhere. Obviously, you can still just ask your local CA for a fake facebook.com certificate, but with certificate transparency hopefully that won't happen as frequently anymore.
Note that I'm specifically referring to ("consumer"?) web browsers here. I do actually dislike sites that forcefully redirect you to HTTPS, as for a static site that doesn't tend to provide any real security.
People who argue against mandatory HTTPS frequently also either forget or omit the fact that besides confidentiality, HTTPS also protects authenticity.
In the discussed scenario of "random anonymous viewer on a blog", confidentiality is perhaps not super important, but authenticity is. If a MITM attacker such as a naughty ISP can manipulate traffic, they can for example inject their own ads into this content. Yes, this has happened before.
The usual counterargument is that HTTPS here would only "paper over" the real problem, which is to have an untrustworthy ISP. I have two main retorts:
Go on and show me the magic spell that creates a trustworthy ISP that serves my home.
So because you have bad ISPs, everyone in the world who puts things on the web needs to work part-time as a certificate maintainer (for free)? This seems to work around the problem at everyone else's expense, rather than fixing it.
Yes, that's somewhat flippant, but the extra burden is a real factor affecting the likelihood of someone starting a blog on their own domain rather than on Medium, or running their own forum instead of using Discord. It's bad for the decentralized internet.
This seems to work around the problem at everyone else's expense, rather than fixing it.
I disagree; it's precisely the opposite. "I have a bad ISP" is not a problem people can realistically solve; "I can't be bothered to set up Let's Encrypt" is.
Uncompensated work is an overused lens. There are definitely situations where it's appropriate.
But we're talking about a one-time setup. There are even servers that do it for you. Hardly a part-time job.
Doing things for others is good for the soul.
It is not a one-time thing, though. Something breaks and you have to figure out what, certbot needs an update, better run some kind of monitoring to see if it's still up, etc …
If Dreamhost didn't do it for me, I don't know if I'd bother. There we see it again - centralization through friction!
Using TLS via Let's Encrypt is as simple as installing Caddy with your package manager and writing a normal reverse proxy config in /etc/caddy/Caddyfile, which is these three lines:
yourhost.com {
    reverse_proxy localhost:8080
}
where your app is at localhost:8080, and you're done. Notice how there's no SSL config? It happens automatically. The certs are also renewed automatically. The maintenance burden is running sudo apt upgrade.
What you are saying was true ten or fifteen years ago, but we've come a long, long way.
yeah. the "bad ISPs" i'm concerned about are the ones engaged in state-level censorship. i think it's kinda good to be concerned about that, y'know
Do people not use shared hosting anymore? There's plenty of great providers around, and when I got into web stuff as a kid I remember them being the primary recommendation if you wanted to, well, start a blog or run your own forum. If you want to self-host your own httpd, well, yeah, that's exactly what you signed up for - but realistically you can just use something like Caddy and TLS will Just Work unless you have weird needs.
Go on and show me the magic spell that creates a trustworthy ISP that serves my home.
Hell, Tor exits can get a bad rep sometimes but if an exit was pulling that shit it would very quickly be marked as a BadExit. If I was using such an ISP I'd probably just use Tor most of the time.
Another side benefit is that HTTPS prevents data corruption on the wire. I've seen bad modems and cables cause bit flips with plaintext HTTP. With HTTPS, the integrity checks (MACs) that are meant to detect tampering also catch accidental corruption.
tcp has checksums
I have experienced bugs caused by network corruption that TCP checksums didn’t catch. Rare, and we switched over to TLS for internal traffic everywhere (this was years ago) because of it.
They suck. Over a few hundred megabytes with a bad phone line or buggy PHY I've seen them let bit flips pass by.
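The weakness is structural: the Internet checksum is just a 16-bit ones'-complement sum, so any pair of compensating bit flips, or a reordering of 16-bit words, passes undetected. A minimal illustration (toy data, not a real packet):

```python
def internet_checksum(words):
    """RFC 1071 Internet checksum: ones'-complement sum of 16-bit words, inverted."""
    s = 0
    for w in words:
        s += w
        s = (s & 0xFFFF) + (s >> 16)  # fold the end-around carry back in
    return (~s) & 0xFFFF

original  = [0x1234, 0x0001]
corrupted = [0x1230, 0x0005]  # two compensating bit flips across two words
reordered = [0x0001, 0x1234]  # same words, swapped order

# Both corruptions slip past the checksum unnoticed:
print(internet_checksum(original) == internet_checksum(corrupted))  # True
print(internet_checksum(original) == internet_checksum(reordered))  # True
```

A TLS MAC, by contrast, is keyed and collision-resistant, so either change would fail the record's integrity check.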
I click on links to malware sites all the time, because the major commercial search engines all prioritize malware sites above the actual indy blog sites that I would prefer to be visiting. HTTPS is not protecting me from this. Google is one of the main reasons for the push to HTTPS only. Perhaps they were originally concerned about security- and privacy-boosting routers and HTTP proxies that strip out Google ad revenue malware. Products that did this at the HTML level used to exist before Google went to war against HTTP, and I used to work on one. Whatever their reasons, Google is the biggest single source of malware that I need to protect myself against.
The point is that HTTPS is not protecting me from evil shit being injected into web pages. That's Google's main source of revenue. What protects me is my ad blocker.
[I am glad that it is still possible to find web browsers that run the full version of uBlock Origin, although Google has been fighting a war against ad blockers as well.]
I do get a benefit from HTTPS for web sites that ask for my credit card or for authentication. And the sites that do this tend to be well enough maintained that the certificates are valid.
Most sites I visit are not in this category. I prefer indy blog sites and indy information sources that are not full of SEO, adware/malware and AI slop. These same sites are often maintained by people who do not have the time or the high technical skills required to maintain valid SSL certificates. Do not tell me that this is easy and that the problem doesn't exist; the empirical evidence says otherwise. As a result, I frequently have to navigate needlessly obstructive security warnings put up by my web browser. I don't mind getting a warning as long as I can dismiss it with a single click, but this Google-orchestrated problem with the browser trying to prevent me from visiting indy, advertising-free web sites has instead been getting worse. Google has also fought a war against these web sites in their search result rankings.
In short, I agree with Tom's position. Web browsers should be user agents that are aligned with my wishes and policies; they shouldn't be weaponized by Google to enforce Google's policies against my interests.
but it's good to deprecate HTTPS
This is a typo and you meant HTTP, right? Pretty sure that's what you meant from the rest of your comment.
It's great that Tom trusts his ISP and doesn't think that his country would MiTM his connections, but that isn't the case everywhere.
I didn't get this impression from the video? I think his point is, for basic websites like his, HTTPS doesn't protect you from anything and is a burden on the user and developer. Which, at least as an owner of a static site, I agree with.
Though I also think that HTTP should be deprecated in web browsers. I like your "herd immunity" metaphor.
But what about local web development? Does that mean I now need to get a certificate for purely internal machines and leak my internal network topology? [1] Or do I have to run my own private CA now?
I do actually dislike sites that forcefully redirect you to HTTPS, as for a static site that doesn't tend to provide any real security.
I thought the argument here (during the "Encrypt All The Things" campaign that Google forced on us) is that bad ISPs could insert their own advertising on plain HTTP requests and it would be better to have a failed HTTP->HTTPS redirect than a forced use of HTTP. Or something like that.
Only tangentially related, but when Gemini (the protocol, not the AI) was being developed, there was a large group of people who hated the mandatory use of TLS. This group was split in half, one half wanted it to be optional, and the other half wanted to replace TLS entirely with some crazy encryption scheme they just read about that was supposedly faster, easier, more secure than TLS. I always found that funny.
[1] Via the certificate transparency lists being generated?
But what about local web development?
Deprecate, not remove.
bad ISPs could insert their own advertising on plain HTTP requests and it would be better to have a failed HTTP->HTTPS redirect than a forced use of HTTP
If the ISP is using some sort of reverse proxy to insert ads into your HTTP requests, only accepting incoming connections over HTTPS won't stop them - their ad-injecting proxy could just connect to your site over HTTPS instead. I believe that's what sslstrip does, for example.
the hack is fun, but any person who has ever used a network that injects malware ads into non-encrypted pages would understand viscerally why encrypting all the traffic is important. it's less of a problem now (partially, I'd think, because all the traffic is encrypted anyway), but they should not have this capability.
there’re also governments that actually do MITM attacks on everyone, and jurisdictions where even opening a page on the internet could be potentially illegal depending on its contents. having your certificate issued by someone in another jurisdiction (with required SCTs + CAA) could be very useful to hinder that.
I started my career on the Doubleclick malware detection pipeline at Google. The advertisers also inject malware ads into encrypted pages; I'd wager that this is a lot more common than networks injecting malware.
If your concern is bad networks injecting things, you may want to sit down when you hear what gets served by websites intentionally.
By the way, I get the SSL_ERROR_UNSAFE_NEGOTIATION error in Librewolf that is brought up as "how dare curl refuse to interoperate with my site"...
If the site was truly HTTP only, HTTPS only mode would just tell me that there's no secure version and I can use HTTP if I want (the warning is more humane than the Chrome error complained about in the paper).
But httpv manages at least the insecure handshake, signaling it wants to use a secure channel instead of rejecting the connection, so there's no recourse without me manually switching the protocol to http, which makes the situation arguably much worse than if nothing was done at all.
I did also pick Librewolf, which intentionally enabled stricter negotiation rules than regular Firefox, so you can totally argue that I did this to myself..... or you could also just not do this in the first place.
Harping on about how CAs are untrustworthy gatekeepers, then deciding that your chain-of-trust problem starts at the CAs (especially with certificate transparency nowadays) instead of the browser vendors who pick those CAs is... a choice.
Entertaining, of course, but I personally put "papers formatted for printing instead of plain HTML but entirely consumed on the web" in a similar irritation bucket to mandatory HTTPS.
This is a paper submitted for the conference SIGBOVIK so when they finish their proceedings you can buy a bound volume: https://sigbovik.org/
SIGBOVIK also livestreamed their conference this year: https://www.youtube.com/watch?v=JazxeftHDwY
Tom7 also presents the paper there.
I'm immediately less inclined to be sympathetic to the author's "can't make me" tone because this document was published as a PDF that also shows no regard for accessibility, judging by the fact that it's not tagged.
Tagged?
FWIW, the author wrote his own personal typesetting system that, by his own admission in this paper, is still very much unfinished. I don't think we should expect every such endeavor to immediately nail accessibility and such. Let him have fun.
I love this…
Hah, seeing your username made me wonder if you'd ever met Tom at CMU.
I love his projects, and always watch what he publishes. I bet you'd appreciate https://www.youtube.com/watch?v=Y65FRxE7uMc
Considering it's a SIGBOVIK submission, maybe add a satire tag? I got seriously Poe's Law'd here.
This is one where it's important to read to the end to actually get the message.
Does the idea of being locked out of your own website bother you? Perhaps even more so than the idea that someone who can manipulate internet traffic can pretend to be your website? Or the possibility that Certificate Authorities could simply decide that enough is enough with your miscellaneous malfeasance, and refuse to issue you new certificates?
My read is that "force" is the operative word in the title- that the main issue is "centralized authority over what sites browsers will talk to", not with encryption per se.
(One of these days I need to figure out how DANE works...)
Totally agree with the sentiment here. Encryption should absolutely be an option for sensitive data or badly behaved networks... but throwing up big scary warnings for plaintext connections is dumb.
It's especially annoying when hosting services for my local network. Let's Encrypt obviously won't issue a certificate for a non-publicly accessible server, so in order to get TLS, I have to:
... or ... just use plaintext HTTP. On a properly set up private network, the chance of anyone doing anything bad is effectively zero. All the sketchy IoT devices are isolated, and even if they weren't, they shouldn't be able to see anyone else's traffic on a well-configured network.
In addition to being easier, it's also faster: The biggest source of CPU load on a web server is handling encryption, and on a single-board computer, it can really slow things down.
(Of course, the solution is to fix the browser instead of implementing placebo encryption)
Set up a DNS server.
As long as you own the domain, you could use DNS-01. DNS-PERSIST-01 will make it even easier, once it's out.
- Sign the server's key with the CA's key.
- Configure all my devices to trust that CA key.
If you run your own CA then you could use IP SANs instead and skip the DNS. (Not that DNS doesn't have its own advantages, but you don't need it.)
Even if there are some sketchy IoT devices on it, they shouldn't be able to see anyone else's traffic.
Those are some pretty famous last words.
(although I do still recommend isolating those.)
You do still have to be able to access them somehow, though, so total isolation isn't an option either.
Though it does suck that all of these are pretty firmly in the land of "nerd shit", John Doe on the street probably isn't going to go buy a domain name.
for some use cases TOFU (as seen in gemini://) might be a better match than the now required PKI.
The big problem with TOFU is that there's no revocation mechanism. If a server is compromised and uses Let's Encrypt, for example, then the signed certificates are short lived. Someone who steals the private key can impersonate you for a week, but after you roll over the private key and re-sign the cert, their old one will eventually stop working. If you report the compromise and it's added to the CRL, then the stolen cert stops working immediately. In contrast, with TOFU the attacker's certificate keeps working forever and your new one looks malicious.
TOFU works well for your own infrastructure (e.g. in ssh) because you know when to revoke certs. It works okay for things like chat, where you mostly want a guarantee that the person you're talking to today is the same person you talked to yesterday, and where you may have some other out-of-band mechanism for verifying keys. It's not good for communicating with servers that you don't know and don't have an out-of-band communication channel to.
thanks for explaining some use cases i was too lazy to detail.
revocation remains a partly unsolved problem. first, there is a window of opportunity for the attacker between revocation and the client finding out about it. second, checking revocation discloses your IP and browser fingerprint to the CRL/OCSP server. third, CRLs are subject to "soft fail": if the CRL server becomes unreachable (e.g., under a DoS attack), browsers often don't stop connecting to websites.
Why?
TOFU means that the cert you trust is either a self-signed one from the site operator, or a malicious MiTM cert. There's literally no way of telling.
Many gemsites just piggy-back their certs off Let's Encrypt, so you get frequent warnings about cert changes as the LE certs expire. This trains users just to hit the "trust cert" button.
TLS with TOFU on Gemini only provides confidentiality and integrity, not authentication.
There are in fact people who are (were? it's not immediately clear from the link) computing GCDs across RSA moduli to find reused prime factors. There's a good chance that the keys containing "entertaining" numbers were found by this project (and perhaps revoked, as a service).
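For anyone unfamiliar with the technique: if two RSA moduli accidentally share a prime factor, a single GCD recovers it and breaks both keys. A toy sketch with tiny primes standing in for real 1024-bit ones (batch-GCD scales this idea to millions of harvested keys):

```python
from math import gcd

# Toy primes; real RSA primes are 1024+ bits.
p, q1, q2 = 101, 103, 107
n1 = p * q1  # modulus of key 1
n2 = p * q2  # modulus of key 2: reuses the prime p

shared = gcd(n1, n2)
print(shared == p)         # True: the shared factor falls out of one GCD
print(n1 // shared == q1)  # True: both keys are now fully factored
```

With independent primes the GCD is 1 and nothing leaks, which is why this only breaks keys generated with bad randomness.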
The video is pretty fun. I'm not sure if it explains https and PKI only to people who already know it or if these things cannot be explained and so we might as well make some silly drawings. I missed some ironies and said "huh" in a distant tone.
there are many fallacies around TLS. One is that the hundreds of trusted certs in a browser (take Firefox) are all trustworthy. Another is that they're necessary at all to encrypt traffic. Why is there so much FUD about self-signed certs?
I love everything Tom puts out. This, too!
Having only watched the video, my takeaway of his main criticism of HTTPS is the inherent centralization that comes from reliance on CAs. I share those concerns. It made me wonder: are there any modern, practical projects that help facilitate substituting a web of trust for CAs in HTTPS/TLS?
I know webs of trust went out of fashion a bit with the likes of PGP (which I still use regularly and like the idea of, despite its... what shall we say, less than stellar user experience and myriad of footguns). But the idea is appealing. Especially if you can alternate between both worlds.