Why I (still) love Linux
53 points by draga79
I'm a generation (if not two) younger than the hardcore Linuxheads. When I started using Linux full-time (circa 2015), systemd wasn't a newcomer vying for the throne but the status quo, PulseAudio was still a thing, and Xorg was still dominant but stagnating.
So I've only really experienced the big switches from PA to PipeWire and Xorg to Wayland, and neither really caused many issues for me. Wayland was a bit rough at the start, occasionally minor stuff didn't work or didn't work how I'd want, but there's been a lot of fixes since and I've not had any major issues for years now. And, in the case of the PA->PW switch, I've only had issues disappear and performance improvements.
So while I recognise why such shakeups might unnerve OP, my experience has been much more positive and optimistic, especially since Valve joined in and gave WINE/Proton a big boost. The (desktop) Linux of today is no longer some niche thing you could only game on or use daily if you were willing to make it your craft. It is now something an Average Joe can use as their everyday daily driver, working just as well as Windows, if not much better for many purposes. And it's done all this without fundamentally locking anything down.
(And, of course, none of this invalidates OP's experience. I'm sure if I had a couple decades of experience behind me and were a staunch believer in the UNIX principle, etc., I'd probably see things differently. But in my own—differently biased—experience, things have gotten better and they are only getting better.)
I was around for both the "ALSA" -> ALSA + PulseAudio transition (OSS -> ALSA, too, kind of) and the convergence to systemd. Scare quotes around "ALSA" because it's obviously more complicated than that. "Unnerving" is a bit of an understatement for both of those :). By 2015, both Pulse and systemd were okay, but early releases were a complete dumpster fire.
With systemd it wasn't so much a quality problem as lots of things just not being covered and requiring all sorts of new hacks. Of course, the previous init generation was itself a web of hacks; it's just that it was a web of hacks everyone already knew, so lots of people saw no reason to exchange one set of clunky hacks for a new set of clunky hacks. In time, many of those hacks were "absorbed" into the main systemd project and got proper integration, and systemd got... I wouldn't say good, but it went out of sight for most people. And, with various subsystems being integrated, it was a massive help for people who'd actually had to write complex init scripts.
With PulseAudio, on the other hand, it was 100% a matter of quality. For the first few years PA was a complete mess, and reporting bugs was a very surreal experience, so it improved very slowly. It made everything about Linux sound worse. Just when sound on Linux had gotten kind of stable and largely worked out of the box, we got thrown back to the early '00s of troubleshooting sound servers. It started improving after a few years, and by 2015 it was pretty much okay, as far as I remember, but the first releases, as adopted by IIRC Ubuntu and Fedora, were completely broken.
A lot of the bad blood comes from the transition experience. If you started using Linux around 2015 or so, you mostly saw a steady improvement. If you started using Linux around 2000 or so, you saw fast and steady improvement (2007 Fedora was light years ahead of, say Red Hat 7), followed by a massive dip in quality and stability in various parts that everyone was starting to take for granted as working (PulseAudio, Gnome and GTK3, lots of KDE code etc.). When things started improving again, mobile and web had sucked the air out of desktop development, so it took years to get back to what people perceived as the baseline that we'd started the rewrite frenzy from.
Edit: all the "bad blood" part being said, 2025 Linux desktops are a massive improvement over even 2015 Linux desktops. There are things that I miss from older desktops, but they're all subjective; if you look at the objective indicators, from app availability and compatibility to how much you have to peruse HCLs before buying a new computer, 2003 me would've never imagined we'd be here.
> With PulseAudio, on the other hand, it was 100% a matter of quality. For the first few years PA was a complete mess, and reporting bugs was a very surreal experience so it improved very slowly. It made everything about Linux sound worse. At a point where it was just kind of stable and largely worked out of the box, we got thrown back to the early 00s of troubleshooting sound servers. It started improving after a few years and by 2015 it was pretty much okay, as far as I remember, but the first releases, as adopted by IIRC Ubuntu and Fedora, were completely broken.
I think this attributes many of the bugs to the wrong place. PulseAudio, simply by being a sound server, which is fundamentally different from ALSA, happened to resurface thousands of bugs in the underlying sound drivers. Those drivers had never worked well before; they only got the bare minimum functioning, because no common program had exercised them fully.
Of course PA had its own share of bugs, I'm not denying that, but I think this gives the fuller picture. It also explains why PipeWire worked so smoothly from day 1.
PulseAudio wasn't the first sound server to be built on top of ALSA. KDE, Gnome and Enlightenment all shipped with their own sound servers at some point, and all of them had ALSA backends. When PA's first public release hit (2008-ish?) some of them had been largely useless for a few years, and I think a lot of people disabled them, so maybe that caused more bugs to surface, sure.
There were a bunch of other issues that "technically" weren't PulseAudio's fault, for very defensive definitions of "technically". For instance: the initial resampling code was better than ALSA's, but it was also a massive CPU hog, so it looked like PA ate CPU while doing nothing; it sounded exactly the same to 95% of the people, but they saw the fans spinning. Flat volumes exposed a bunch of different assumptions in different APIs, and for a few years everything was either too loud or too quiet, and lots of people blamed PulseAudio. Technically it wasn't PulseAudio's fault; but if everyone's holding it wrong, at some point you have to think maybe the interface is bad.
But even without those, early PulseAudio releases were just atrociously bad. I was working at a Fedora shop when the first PulseAudio release hit and it crashed all the time, on all hardware. Even after it got sufficiently stable that you could go a day without a crash, it was a pain to use. Audio devices would get switched automatically, audio quality on Bluetooth -- a major selling point -- would degrade out of the blue and so on.
Audio calls on Linux were basically a no-go for at least a year or two, and it had literally just started to work (Skype for Linux was launched... what, 2007? 2008?) Disabling PA worked for a while, but then it started getting integrated everywhere so it was getting harder to use things without it, and it was so bad that we just resorted to resurrecting a few old laptops and slapping old pirated Windows copies on them just so we could do audio calls again. They ran nothing but Skype. I installed those in 2009; they were still in use in 2012 when I left that place. This wasn't just some obscure hardware, we had like six generations of machines there, all of them with super common hardware because we all ran Linux, so we picked hardware specifically for it.
And it didn't help at all that, for a few years, it was simply impossible to report issues constructively. Even small bugs would get debated to death until upstream would relent that something really is a bug in PulseAudio, rather than misconfiguration or a bug in ALSA. Early design decisions remained set in stone even when they were clearly unproductive. Some of this was ridiculous -- like, entire communities spent years working around the flat volumes nonsense before flat volumes were dropped as a default years later.
> when the first PulseAudio release hit and it crashed all the time
Back in the '90s, I ran BeOS 5 for a while. It had a userspace sound subsystem. Periodically, you'd get a dialog popping up telling you that the sound subsystem had crashed and been automatically restarted. Without the dialog, you might have noticed a very brief hiccup in the sound. In contrast, Linux and Windows of the same era had in-kernel sound things that would sometimes get into a broken state and require a reboot to fix. Some of the Linux device drivers were staggeringly bad: the SoundBlaster Live! driver crashed the kernel at least once a day.
I always saw that as a benefit of a userspace sound system: it crashes, it restarts, no one notices. The fact PulseAudio failed to make that a benefit tells you a lot.
PulseAudio being so crash prone early on was a very tragicomic thing. At the time, both aRts and ESD were old enough and, I guess, mature enough, that a lot of more recent clients didn't gracefully handle losing connection to the sound server. That was on the clients, of course. Except PulseAudio crashed so often that a lot of people blamed it for their applications hanging, because it looked like trying to play sound was what crashed the application. It didn't help that desktops in that era played sounds... surprisingly often. You'd find you couldn't log out anymore, for example, because that caused a jingle.
On the one hand that was kind of good, because it exposed a lot of code that should've handled things gracefully anyway.
On the other hand... people tried to do the obvious thing and handle crashes gracefully -- and it turned out that was not easy at all. It wasn't necessarily hard to detect that the daemon had crashed, for cargo cult Unix values of "easy": you could poll the file descriptor to check that it's writable and do a non-blocking write on it. But even after PulseAudio had restarted (which itself took a while and caused more than a very brief hiccup), it didn't always properly resume device state and volume.
That caused all sorts of hilarious things in the office. About once a day someone's laptop speakers would start blaring because Pulse had crashed and, upon restarting, it had decided to ignore their wireless headphones and instead go straight to speakers.
> Edit: all the "bad blood" part being said, 2025 Linux desktops are a massive improvement over even 2015 Linux desktops. There are things that I miss from older desktops but they're all subjective; if you look at the objective indicators, from app availability and compatibility to how much you have to peruse HCLs before buying a new computer, 2003 me would've never imagined we'd be here.
I totally agree. FreeBSD is powering the PlayStations, and the world loves it. Linux is powering Android phones, many desktop computers, etc.
Open source is much more relevant today. We definitely won this war.
> FreeBSD is powering the PlayStations, and the world loves it. Linux is powering Android phones, many desktop computers, etc. Open source is much more relevant today. We definitely won this war.
Eh, I'm not as positive. Yes, open source powers those devices, but so what? The platforms are locked down walled gardens. It might as well not be OSS at all.
This. Can you take a random Android phone and run a stock kernel on it? Mostly, no. The majority of hardware relies on custom hacky drivers that only work with specific Linux versions. We "won the war" in the sense that on the face of things, they all run "Linux", but it's in name only. When you really get down to it, it doesn't actually run a standard Linux in the sense that you as a user get to benefit from the freedoms it offers.
Thank you for sharing your experience. I agree with you: many things have improved drastically when we're considering the desktop experience. I can play games on my Linux-powered MiniPC and everything's smooth and working. This is, by far, better than it was 15 years ago. But if you're managing important data and servers (like me), those improvements count less than the other things I mentioned in the article. I have one workstation, and I can replicate it. If btrfs "eats" my data, it won't be a problem, just a waste of time restoring from the backup. But if a specific network interface has a new name after a reboot of an important, distant server, that could be a problem. And I manage hundreds of servers, so I need reliability and long-term stability. Linux-based servers have impressive uptimes, so I'm not talking about bugs. But yes... PulseAudio vs. PipeWire isn't something I care about: all I want is for my desktop to emit sound :-)
Heh, I started in 2021 or 2022? I don't remember exactly. Most of these switches, except X11 to Wayland, had already happened. And for me, the move to Wayland was rather seamless.
It's been a great time and, in a weird way, a boost to my mental health: I found more fellow nerds to chat with!
I'm still proud that I managed to compile a kernel that got debian to run Neverwinter Nights on my GeForce 2. Big ups to icculus for the port. It ran Quake3 well too.
Even if your btrfs, after almost 18 years, still eats data in spectacular fashion.
XFS 4 life
ext4 is king. Everyone talks about bit rot, but losing entire files and directories is okay as long as everything was checksummed and scrubbed while it worked.
I was choosing between ext4 and xfs recently and went with ext4. Not too long after I found myself needing to shrink the filesystem to leave more space for LVM snapshots. Was glad I didn’t choose xfs as it doesn’t support shrinking.
Agree, ext4 is pretty solid too.
I faced problems with ext4, too (not too long ago), but it didn't cause any data loss. Ext4 is solid (enough). I wish we could snapshot it (I know we can use LVM snapshots or dattobd, but that's not native snapshotting)
Same, though it was a few years back (2021). A power loss on my RPi led to complete lights out — there wasn't even an ext4 filesystem on the SD card to fsck afterwards.
(I recovered what I could and continued to use that SD card, and still do actually. So I doubt it was a hardware issue.)
> I faced problems with ext4, too (not too long ago), but it didn't cause any data loss.
Can you elaborate on this? Are there any known ext4 bugs currently?
A couple of servers suddenly started to show this:

    EXT4-fs (sda3): error count since last fsck: 6
    EXT4-fs (sda3): initial error at time 1761766928: ext4_journal_check_start:61
    EXT4-fs (sda3): last error at time 1762729201: ext4_mb_generate_buddy:757
The systems were still usable, but the errors continued to appear. Those were VPSes with disks on Ceph (one) and on ZFS (the other), both on Proxmox.
I lost data with it (XFS), too. But it was in 2006, ages ago.
Yes, BTRFS is probably... better, now.
With BTRFS I lost data some days ago. I meant XFS.
You lost data with XFS, that was not because the disk was broken?
No, the disk was OK. An unexpected power outage, and everything seemed to be fine. But the system couldn't work properly. Many of the files on my hard disk were gone, zeroed. Unrecoverable. Luckily, I had a backup. But they weren't open files, so that was quite strange. The disk was fine; it's still usable today.
The GPL double-edged-sword issue is an interesting take that I'd never thought about, but I don't think BSD licensing removes the problem. In both cases forks are possible; yes, with the GPL the fork must remain accessible, but you can still decide the direction. This happened with Android for some time. If most companies don't start their own forks, it's because they get value from the other companies too.
We could debate for hours about the "Unix philosophy": whether we ever had it, whether the BSDs and illumos are more Unix-philosophy-friendly than Linux (illumos systems usually have SMF, which is systemd's cousin), or even whether it's desirable at all. There are other principles in software design that could serve better; even in free software we have Emacs, which is not the Unix philosophy at all. I think Linux, above all, is about pragmatism. As you mention, there are some complex distros like openSUSE where everything seems to work. You may need some very ugly hacks, but it works! It gives you the flexibility to solve things in different ways so apps continue to work, to adapt to different edge cases, etc., for example with networking, sound, graphics...
I was also very interested by this take, as I'm a big copyleft proponent and often write on my blog about how permissive licenses allowed enshittification. So I'm happy to read a different take on the subject.
This post also reminds me of my series "20 years of Linux on the Desktop".
I am a big opponent of copyleft, but I don't really agree with the point in the article. Nothing you do with a license can prevent a big company from paying all of the developers and moving the project in a direction that you hate. Your choices then are either:
The GPL neither helps nor hinders here. GPL with a CLA can be worse, because then the big company can fork the project and change the license to something that means you can't adopt their changes; without a CLA, they can't relicense it like that.
If you want to avoid this then you need to do a bunch of things:
First, favour software-engineering practices that encourage small loosely coupled components. If an entire project is ten thousand lines of code, one person can maintain it easily. If someone throws resources at it and creates a version you don't like, it doesn't affect you. Linux, as a monolithic kernel, and systemd, as a set of very tightly integrated daemons, are both subject to this kind of control. FreeBSD would be as well, but has mostly benefitted from being the second-biggest option in an environment where people who want to exploit the ecosystem are myopically obsessed with the most popular. If Linux slipped to second place, I'd expect FreeBSD to suffer in the same way.
Second, widen the set of potential contributors who see the value of it. SILE is my current favourite example of a Free Software project. It's written in Lua (which is easy to learn) but, far more importantly, it's written in such a way that it encourages people to poke at the insides. The GPL would be harmful here because someone copying a bit of the SILE typesetting code into their document to monkey patch a part of the layout pipeline now has to worry about the legal implications, whereas the permissive license means I did this for my latest book and didn't have to care. Projects with easy extension points make it easy for people to both make changes and also to value the ability to make changes. What proportion of Linux users have ever made a single change to the Linux kernel? Even with the disproportionately high number of C programmers among Linux users, and discounting any Linux-based things like Android and TiVo, I'd be shocked if the number were even 1%. It's hard to explain to people why the freedom to modify a program to better address your needs is valuable when even people skilled in the art find it difficult to do so. But if you have ten thousand users all making changes routinely, it's far harder for someone to hire enough people to unbalance the contributors.
Third, normalise the idea that projects are often 'done'. This is easier to do if projects are small and loosely coupled. If a project solves a requirement and is complete, the worst someone can do is create an alternative to it. And, if the original still solves your requirements, then you can just ignore it.
I like it because...
In Linux you can configure everything! And you will configure everything!