Clustered PostgreSQL
16 points by gmem
I feel like “clustered” means “multi-master” or “partitioned” nowadays, no? Like I wouldn’t call standbys or read-replicas a cluster because they aren’t doing equivalent work.
Patroni is great. We’ve used it for a few years and it works extremely well. We also use it with etcd as the DCS and HAProxy in front.
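For reference, the usual way to wire HAProxy to Patroni is to health-check Patroni's REST API (port 8008 by default), which answers 200 on /primary only from the current leader, so HAProxy routes writes to exactly one node. A minimal sketch, with hypothetical hostnames:

```
listen postgres_write
    bind *:5432
    option httpchk GET /primary
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server pg1 pg1.internal:5432 check port 8008
    server pg2 pg2.internal:5432 check port 8008
    server pg3 pg3.internal:5432 check port 8008
```

On failover the old leader starts failing the /primary check and its sessions are shut down, so clients reconnect and land on the new leader.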
I wish there was a Patroni for MySQL.
I happened to do something similar a few months ago, but by driving CoreDNS's SkyDNS plugin from a bash script. I went this way because I felt slightly uneasy about running a single HAProxy endpoint when the etcd cluster is already meshed. Then again, this was mostly an exercise in running something atop etcd (which is itself maintained as an auto-configured WireGuard mesh via wirenix). But now I am wondering again – am I missing something about the HAProxy setup, or is it really the case that people spend the effort to build a redundant PostgreSQL cluster and then slam a single-point-of-failure HTTP proxy in front of it?
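For context on the DNS approach: CoreDNS's etcd plugin reads SkyDNS-format JSON records, so publishing the current leader boils down to the failover script writing one key (the key path and address here are illustrative):

```
etcdctl put /skydns/internal/db/pg-master '{"host":"10.0.0.5","ttl":10}'
```

Clients then resolve pg-master.db.internal against CoreDNS instead of pointing at a proxy, and a short TTL bounds how stale the answer can be after a failover.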
HAProxy functions as a TCP proxy for the PostgreSQL protocol here, not as an HTTP proxy.
Why do you think the HAProxy is a single point of failure? It’s stateless, you can just run multiple ones. We tend to co-locate it with the client application, so it just connects to localhost:5432 where HAProxy is listening.
Alternatively, if your PostgreSQL client supports it, you can have it find the leader (the only writable member) itself among a list of PostgreSQL servers.
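For example, libpq (PostgreSQL 10+) supports this natively via a multi-host connection string – the hostnames and database name here are placeholders:

```
postgresql://app@pg1:5432,pg2:5432,pg3:5432/appdb?target_session_attrs=read-write
```

libpq tries the hosts in order and keeps the first connection that accepts writes, so the client finds the leader without any proxy in between. The JDBC driver has an equivalent targetServerType parameter.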
Why do you think the HAProxy is a single point of failure?
The architectural diagrams I recall seeing around the web showed HAProxy as a single application node outside of the database cluster. It seems I took those too literally – both of your suggestions sound like much more reasonable interpretations of what I am supposed to do.
I too was a pg on k8s skeptic, but things have advanced a little since I last checked. Seeing that EnterpriseDB was involved with cloudnative-pg was pretty much what pushed me over the line. They're the Postgres experts, so it's a pretty high endorsement.
The storage story has come a pretty long way too. I've talked to a bunch of companies that have been running Postgres on k8s without many tears. As ever, the problems they were facing were Postgres perf cliffs rather than anything k8s-specific, so again that's a little reassuring.
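For the curious, a CloudNativePG cluster is declared with a short manifest – this sketch assumes the operator is already installed and that a default storage class exists:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-demo
spec:
  instances: 3       # one primary plus two replicas; failover is handled by the operator
  storage:
    size: 10Gi       # each instance gets its own PersistentVolumeClaim
```

The operator also creates read-write and read-only Services, so applications in the cluster get leader-following routing without running HAProxy themselves.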