My PostgreSQL database got nuked lol
70 points by Aks
FYI, podman doesn't mess with your host firewall, which is why I switched to it.
Not entirely: it won't nuke your configuration, but if you're using firewalld then the port-forwarding rules added by Podman for the --publish setting will bypass any other firewalld settings, essentially punching a hole in your firewall.
For example, if you block a certain IP from accessing your server then they can still connect over any forwarded ports. I spent way too much time trying to find out why blocked IPs were still able to connect to my server, only to find out it was because of this.
The workaround is:
- Keep publishing ports with --publish or PublishPort (when using quadlets)
- Enable StrictForwardPorts in firewalld
- Add explicit forwarding rules in your firewalld zone for the ports that should actually be public, e.g. <forward-port port="443" protocol="tcp" to-port="443" to-addr="10.88.0.2" /> where 10.88.0.2 is the IP of the container connected to the standard Podman network bridge

For services that don't need to be public, like databases, you can also skip forwarding ports entirely and have clients connect to the fixed container IP. The "I hate firewall configuration" approach. :)
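On the command line, the steps above look roughly like this (a sketch, not a drop-in recipe: the "public" zone, the container IP, and whether your firewalld version supports StrictForwardPorts are all assumptions to verify against your setup):

```shell
# 1. keep --publish / PublishPort= as-is in the container or quadlet config
# 2. stop published ports from bypassing firewalld
#    (StrictForwardPorts is a newer firewalld.conf option; check your version)
sudo sed -i 's/^StrictForwardPorts=.*/StrictForwardPorts=yes/' /etc/firewalld/firewalld.conf
# 3. explicitly forward only the ports that SHOULD be public,
#    assuming the default "public" zone and container IP 10.88.0.2
sudo firewall-cmd --permanent --zone=public \
  --add-forward-port=port=443:proto=tcp:toport=443:toaddr=10.88.0.2
sudo firewall-cmd --reload
```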
Probably not any new knowledge to most people who run these things, but hey, I felt like sharing what I learned. I'm glad it was not anything super critical.
I'm glad the vandalism happened before you'd committed any real data to the system. And I'm not happy that someone vandalized your system, but it makes me feel better to learn that I'm not the only one who was unhappily surprised by the docker behavior. (FWIW, learning that kind of thing is why I like to self-host some low stakes things. I do read documentation, much of the time, but there are some things you only put together when you work through a thing from beginning to end.)
I'm glad the vandalism happened before you'd committed any real data to the system.
Vandalism, or free pentesting? :)
Thank you for sharing! This is not my area of expertise, and I'm sure I'm not alone. And your post sparked a discussion about how others are handling firewalls with their deployments, which is cool.
(Granted, every single thread in the discussion is scaring the heck out of me. But hey, I'm learning useful stuff—and/or learning I should stick with a boring apt install for my DB and trust my distro's defaults?)
I remember in 2005 learning web development for the first time, and hearing that postgres was better than mysql, but when you install postgres thru apt, you get a configuration that's intended for production, and the set of steps it takes to create a user with a password that will let you connect from your application was very fiddly, and if you did the steps in the wrong order it would look like it worked but it would not work. The documentation at the time was awful, saying things like "consult your site administrator for details of how to authenticate"; just useless for a beginner.
I'm convinced the main reason mysql took off was that as a brand new user, it was just easier to install and configure in a way that would allow you to connect, and that if postgres had done a better job at this, it would have taken much less time to beat mysql. I knew postgres was higher quality, but people make snap decisions based on superficial impressions, and "it took me 5 minutes to get this working vs 30 minutes to get that working" counts for a lot.
Of course, now we have the exact opposite problem, that the defaults appropriate for development are being mistakenly applied to production. You can't win!
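For contrast, the equivalent dance today is only a few commands (a sketch assuming Debian-style packaging and peer auth for the postgres OS user; the user and database names are made up):

```shell
# create an application role and database as the postgres superuser
sudo -u postgres createuser --pwprompt appuser   # prompts for a password
sudo -u postgres createdb --owner=appuser appdb
# allow password logins over loopback by adding a matching pg_hba.conf rule:
#   host  appdb  appuser  127.0.0.1/32  scram-sha-256
sudo systemctl reload postgresql
# then the application connects with:
psql "host=127.0.0.1 dbname=appdb user=appuser"
```

Getting the pg_hba.conf rule and the reload in the right order is still the part that trips people up, which matches the "wrong order looks like it worked" experience above.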
in 2005 [...] hearing that postgres was better than mysql
Damn, lucky you! It took me a couple of years to figure that out. At the time, I was doing Rails and some Drupal hosting and MySQL was like the "default" or even the only option. The LAMP stack was pretty entrenched and Postgres wasn't even on my radar. Certainly nobody was blogging about it in any of the places I frequented.
When I finally tried out Postgres, it was in frustration after MySQL kept crashing on a somewhat large data import task. I wasn't doing anything weird, but the MySQL server just plain died on that workload. Switching to Postgres was quite easy in Rails by then, and it had no trouble whatsoever with my data. That's also when my eyes were opened that databases don't have to be shitty: proper transactions, no arbitrary and unreasonable limitations (like being unable to index text columns longer than 256 chars), no strange behaviour around Unicode, etc.
But trying to make others see this was still hard at the time. New projects which only supported MySQL were still quite common!
Oh, I didn't mean to imply that in 2005 I was not one of the foolish programmers who made decisions based on surface-level criteria. I tried postgres briefly and found it was a pain in the ass, so I used a worse database instead; knowing it was a worse database.
Is there actually anyone out there who learned about this "Docker bypasses your firewall" behavior through docs? I guess humanity slots into one of three categories:
(I'm in category b btw.)
It makes me a little sad that unix domain sockets are less popular for this sort of thing. They perform better, and there's no method for accidentally exposing them to the Internet.
My immediate intuition would be that Unix domain sockets wouldn't work with containers at all, so you'd have to use TCP. A brief search finds a random post on Medium suggesting that if you bind mount the Postgres socket directory out of the container it will work, which I had no idea about (if it's true)!
You can bind-mount directories that contain UNIX domain sockets (if not the sockets themselves, I'm not sure). Shared volumes with sockets work perfectly fine, as do host-mounted volumes.
One thing that's a bit annoying is that usernames may differ across containers, so UID-based auth (which is a blessing in Postgres) doesn't always work like you'd want it to.
Yes, UID-based auth would be hard to give up if I moved to containers. It is so nice to just have proper auth with no passwords or other secrets. I just create the DB and it is perfect. And I can easily access it with a superuser account if I need to debug without navigating container networking to access the postgres port.
It's true. Sharing domain sockets between containers is just volume mounting the specific socket file in question.
My immediate intuition would be that Unix domain sockets wouldn't work with containers at all, so you'd have to use TCP. A brief search finds a random post on Medium suggesting that if you bind mount the Postgres socket directory out of the container it will work, which I had no idea about (if it's true)!
It's just in the file system namespace, so if you make that directory available in multiple containers, it should just work.
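Concretely, sharing the socket directory between containers is just a shared volume (a sketch; the image tag, volume name, and password are placeholders, and the client still has to satisfy whatever auth rules the server's pg_hba.conf applies to local connections):

```shell
# put the server's socket directory on a named volume...
docker run -d --name db -e POSTGRES_PASSWORD=secret \
  -v pgsock:/var/run/postgresql postgres:16
# ...and mount the same volume into the client container; pointing -h at a
# directory makes psql use the Unix socket there instead of TCP
docker run --rm -v pgsock:/var/run/postgresql postgres:16 \
  psql -h /var/run/postgresql -U postgres -c 'SELECT 1'
```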
Honestly one of the saddest things about the modern internet is how actively hostile it is to people just messing around trying web things. It shouldn't be this hard to run a little hobby server without stepping on several security shaped rakes.
Eh, I am not sure how far you want to go back in time, but this has always been a thing as far back as I can remember. Which, as far as my memory of people running hobby servers goes, is three decades. What has changed is that the whole stack of hosting (except hardware) has become much more accessible due to cheap VPS availability.
Even then, when the web was mostly still LAMP based, as soon as someone did something stupid with their fancy new PHP script there would be people abusing it.
I'd say it is entirely possible there are now more rakes in play. But there have always been plenty to be wary of to begin with.
Yeah, I mean deploying WordPress at all (a very popular thing to do) was a giant security rake, at least until fairly recently. And then there's MongoDB's infamous default of no access control.
Realistically, even a lot of professional developers are "just messing around", especially when trying to use an unfamiliar technology. The gospel of "secure by default" has not made it far enough.
For future reference, there is a docker compose linter that is very good and would have helped you catch this exact issue sooner: https://github.com/zavoloklom/docker-compose-linter
Another vote for ufw
Important to note that ufw doesn't work out of the box with Docker; you need to add custom config in order to make it work: https://github.com/chaifeng/ufw-docker?tab=readme-ov-file#solving-ufw-and-docker-issues
ufw is a wrapper around iptables, and Docker does crazy stuff with iptables that bypasses the default routing rules. I was bit by this myself; shocked one day to find that several internal services were actually open to the internet despite me setting strict firewall rules explicitly blocking their ports.
Good to know! I use ufw and run all my services as regular systemd services listening on Unix domain sockets, except Immich, which uses Docker Compose. I'm pretty sure its defaults are configured correctly, but I'd like to try getting rid of Docker and packaging it for Debian.
Yep, docker is some voodoo sometimes. I try to always set the address in the ports after learning about the same thing the post is about. And I do it with podman too because I am paranoid.
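"Setting the address in the ports" means prefixing the mapping with a host IP, e.g. in a compose file (a sketch; the service and image names are placeholders):

```yaml
services:
  db:
    image: postgres:16
    ports:
      - "127.0.0.1:5432:5432"  # reachable from the host only, not the internet
```

Without the 127.0.0.1 prefix, Docker publishes the port on all interfaces, which is exactly the behavior this thread is about.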
Must have been a bit of cold sweat when you realized everything was open to the internet?
Must have been a bit of cold sweat when you realized everything was open to the internet?
Some kid (literally 13 years old, he said) broke into my personal TODO app and left me a bunch of messages lol. As far as ways of finding out go, that's pretty good all things considered.
Most of the actual sensitive stuff like databases were listening on 127.0.0.1 so weren't exposed, otherwise I would have had bigger issues (and realized sooner).
What exactly does ufw do in the situation where no services listen on external IPs except deliberately public ones? From this thread, it seems to be giving a false sense of security and I don't see how it improves posture. If there are no listening sockets, there are no "open ports" for it to protect. (This can be verified with ss -lnp.)
Is it just a defense in depth thing or is there some deep magic I don't know about?
Just defence in depth and default deny. It’s a very simple UI which covers the majority of basic firewall use cases.
I've definitely had cases where Debian's default-enable policy for systemd services has meant something installed as a dependency decided to start running and listening on all interfaces. ufw just makes it so you have to explicitly expose things, and the ability to just run ufw allow ssh or similar for most services makes it easy to use.
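The default-deny workflow mentioned here is just a handful of commands (standard ufw usage; the allowed services are examples, and note that none of this constrains Docker-published ports, per the rest of the thread):

```shell
sudo ufw default deny incoming   # nothing gets in unless explicitly allowed
sudo ufw default allow outgoing
sudo ufw allow ssh               # shorthand for 22/tcp, via /etc/services
sudo ufw allow 443/tcp
sudo ufw enable
sudo ufw status verbose          # review the resulting rules
```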
TIL. Thanks for sharing! You are definitely not the only person who is just messing around with this stuff!
I think my things are safe from this kind of attack since I have an extra firewall set up at VPS-vendor's web console, but I will also check my docker compose files to see if I've made the same mistake.
Update: All of my docker compose files have the same issue as the author's. While trying to fix this, I somehow nuked my own PostgreSQL database in one of my containers. No idea why. Luckily I have backups.
Why did you expose the port at the docker level at all? If the backend is in docker as well, exposing the port becomes entirely unnecessary.
Just remove that part. Docker has an internal network, so containers can access each other without the "ports" section.
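A sketch of what that looks like (names are placeholders): the backend reaches the database by service name over Docker's internal network, so the db service needs no ports: section at all.

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    # no "ports:" — only other services on this compose network can reach it
  backend:
    image: my-backend:latest
    environment:
      # "db" resolves to the container via Docker's built-in DNS
      DATABASE_URL: postgres://postgres:example@db:5432/postgres
```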
That only works if the only thing that needs to reach the postgres container is also being run using that docker compose file. OP was running a web application natively on the host that needed to reach the database.
It's always a good idea to have strong passwords set up for databases even if they're not publicly exposed. You never know when some absurd default configuration will bite you.
Also, in the past Docker didn't work well with UFW at all in some configurations and would override restrictions and expose ports anyway. This is probably fixed in some way by now, but I don't trust that UFW will work and put a firewall in front at network level instead (some firewall rules in hetzner) (also no shade to UFW, UFW is great, the issue is everything else).
The strong password point is a good one. Do you use env values, or set a hard-coded strong password in the compose file?
I tend to put it directly in the compose file, but that's because I usually generate the compose file with a Python script.
Any option is good though, whatever fits best for you. I don't think there's any difference security-wise if the files have the correct permissions and only a privileged user can use Docker.
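For reference, the env-variable variant looks like this (a sketch; compose substitutes ${POSTGRES_PASSWORD} from the environment or from a .env file next to the compose file):

```yaml
services:
  db:
    image: postgres:16
    environment:
      # the :? form fails fast with an error if the variable isn't set,
      # instead of silently starting with an empty password
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?set POSTGRES_PASSWORD in .env}
```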
Heh, I made this mistake in college. My college didn't NAT users, so we were all given WAN IPs, which was cool, but meant if you made this mistake, you were putting your servers on the public internet.
One day I got shut off by our infosec department, and they told me I was mining bitcoin, which I wasn't. In that moment I remembered that a few days earlier I had seen my postgres container spike to 100% CPU, which I then killed and thought nothing of. The infosec guy and I were friendly already, so we had a good laugh, and that was the end of that. Lesson learned!
I think you don't need to specify the ports if you don't want to make your db accessible from outside the network