One machine can go pretty far if you build things properly (2022)

39 points by veqq


david_chisnall

There are two reasons to want a second computer: performance and fault tolerance.

The interesting thing with the second is that, often, hardware is much more reliable than software. Yet people will scale to dozens of machines for fault tolerance, each running exactly the same codebase and vulnerable to the same faults.

For example, as far as I am aware, AWS is the only major cloud provider that runs more than one hypervisor across its fleet (they mix Xen and KVM, and have some very exciting in-house tooling to migrate VMs between the two). The Verisign-run DNS root servers are a mix of Linux and FreeBSD, with each platform running two different DNS server implementations, so a vulnerability in any one implementation or one OS takes out at most half of the servers and leaves the rest operating.

In the ‘90s we learned that software monocultures were dangerous. And then we forgot and declared victory when we’d replaced one monoculture with a different one.

acatton

I've said it many times: most websites could handle their traffic with a Raspberry Pi running PostgreSQL and a well-written Go application, with Varnish in front of it.

Shared PHP hosting services figured this out years ago. They ran hundreds of sites on servers with the power of about ten Raspberry Pis. Thanks to statistical multiplexing, they could absorb significant traffic peaks on individual websites while oversubscribing customers.
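The statistical-multiplexing argument is easy to see with back-of-envelope numbers (mine, entirely made up for illustration): because hundreds of sites rarely peak at the same moment, the aggregate load sits near the sum of the averages, leaving a large pool of headroom that any single site can borrow during its burst.

```go
package main

import "fmt"

// Illustrative numbers, not measurements: a modest box sustaining
// 2,000 req/s, hosting 500 sites that average 1 req/s each.
const (
	serverCapacity = 2000.0 // req/s the server can sustain
	numSites       = 500    // oversubscribed customer sites
	avgPerSite     = 1.0    // long-term average req/s per site
)

func main() {
	baseline := float64(numSites) * avgPerSite // aggregate average load
	headroom := serverCapacity - baseline      // capacity left for bursts

	// A naive "fair share" split would give each site a tiny slice,
	// but since peaks are mostly uncorrelated, one site can burst to
	// many times that share while the others idle near their average.
	fairShare := serverCapacity / float64(numSites)
	oneSiteBurst := headroom + avgPerSite

	fmt.Printf("baseline load: %.0f req/s of %.0f\n", baseline, serverCapacity)
	fmt.Printf("fair share: %.0f req/s per site\n", fairShare)
	fmt.Printf("one-site burst ceiling: %.0f req/s (%.0fx its fair share)\n",
		oneSiteBurst, oneSiteBurst/fairShare)
}
```

The same arithmetic is why oversubscription fails when load is correlated (e.g. every site gets a traffic spike from the same news event) — the headroom pool is shared, not reserved.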