KDE Linux deep dive: package management is amazing, which is why we don’t include it
27 points by strugee
[…] There’s package management to get add-on software… and then there’s package management for assembling the base OS on your own computer.
The former is incredibly convenient and amazing. The latter is a Tool Of Power™ best suited for use only by OS builders and true experts.
I feel it’s a problem that historically we’ve used this amazing tool for both of those use cases[…]
Isn't this, like, a selling point of modern BSDs? The clean base-system/packages separation.
I feel like that's worth bringing up, though I'm not really sure how one would map such a separation onto Linux.
Linux packaging is much like the BSD ports system, except that where BSDs have a monolithic "base" system that the packaging system is built on, Linux distros also distribute their "base" system as packages that you're not supposed to uninstall ("Priority: required" in Debian-speak).
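To make the "Priority: required" idea concrete: Debian records a priority for every package in dpkg's stanza-based metadata format. As a hedged illustration (the stanzas below are invented, not a real /var/lib/dpkg/status), here is a sketch of picking out the packages a Debian-style system considers part of its non-removable base:

```python
# Sketch: find "Priority: required" packages in dpkg status-style metadata.
# The stanzas below are a made-up excerpt, not a real /var/lib/dpkg/status.
STATUS = """\
Package: base-files
Priority: required
Status: install ok installed

Package: zsh
Priority: optional
Status: install ok installed
"""

def required_packages(status_text):
    """Return names of packages whose Priority field is 'required'."""
    required = []
    for stanza in status_text.split("\n\n"):
        fields = dict(
            line.split(": ", 1)
            for line in stanza.splitlines()
            if ": " in line
        )
        if fields.get("Priority") == "required":
            required.append(fields["Package"])
    return required

print(required_packages(STATUS))  # ['base-files']
```

On an actual Debian system, `dpkg-query -W -f='${Package} ${Priority}\n'` shows the same field for installed packages.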
This blog post is describing something a lot more like macOS, iOS or Android, where you have a monolithic base OS that is vastly larger than what BSD would put in "base" or Linux would mark "required", including GUIs and printer drivers and Bluetooth codecs and such, and then applications downloaded as binary blobs, sandboxed for security reasons.
base OS that is vastly larger than what BSD would put in "base" or Linux would mark "required", including GUIs and printer drivers and Bluetooth codecs and such,
Fwiw, both NetBSD and OpenBSD include GUIs, and NetBSD at least has printer and Bluetooth audio support in the base system. (I would expect OpenBSD to have all of them in the base system too, but I'm less familiar with it.) Now, NetBSD is still going to have less software than KDE Linux out of the box, but it is meant to be a fully workable desktop system without the need for extra add-on packages.
I guess you're mainly familiar with FreeBSD? Of the major BSDs I'd say it's got the narrowest base system.
Ah, I did not know that!
Yes, I'm mostly familiar with FreeBSD and people bending over backward to get their software to build with the C compiler in the base system so they didn't have to depend on the full-featured one in ports.
That's something I've always strongly disliked about modern BSDs. For example, if I want zsh to be the default user shell on my system, I want it to exist in a proper system location like /usr/bin/zsh, because it is clearly an important core component of my system, and not treated as some optional add-on package, which is what being stored at /usr/local/bin/zsh would suggest to me.
I've been running Fedora Silverblue on my main laptop for over five years and Fedora Kinoite on my travel laptop for the last several months, and I really enjoy this approach to desktop computing. I am very happy that it's not just an idea pushed by my distro of choice but something pursued by the developers of the desktop environments themselves (the GNOME and KDE folks).
Everyone will have different opinions on where the split should happen, and I still prefer the traditional "all package management" setup on things like servers. On devices that I treat more like appliances (my laptops, family computers, etc.), I really like the idea of an immutable base that is very difficult to break. I don't see the old way going away maybe ever, but I don't see much reason not to continue the advancement of this new way. Having more people working on this is only a good thing.
I think the split between "base OS" and "addon" package management is a huge mistake. It's inefficient, both in terms of wasted storage, as well as in terms of duplicated work, as we've effectively invented the wheel twice. We already have multiple deployment systems that are capable of installing packages that form an OS. Why did we need to make a bunch of different deployment systems specifically for installing addon packages, and then repackage all of our software for this new scheme, instead of reusing or building on top of the existing tech?
The obvious solution to all these problems is (of course) Nix, which provides the flexibility of the split approach when necessary, but also provides the efficiency of the "one package manager for both use cases" approach when possible.
Simpler is not always better. Sometimes it is better to cater directly to specific requirements instead of trying to build a general solution that is slightly less tailored to each case.
I don't believe that's the case here. Is there anything useful that these specific implementations provide that Nix's design does not, that would balance out the duplication of storage use and human effort?
Yes, many things, especially given that Nix is designed as more of a build tool than a package manager. For starters, a user interface that wouldn't make a normal user want to completely give up after ten minutes.
Is there anything inherent to Nix's design that makes such a user interface impossible? Is there something inherent to the design of the other tools that specifically enables usage through a fancy user interface?
Nix is mostly just a build tool, so it could be used to build the images described in the article on the back end, or for any other shape of packaging system. It is not designed to be a package manager, so its interface makes for a clunky experience compared to dedicated package managers that are not first and foremost build tools, and that have been around for ages.
That's what I'm saying. You could use Nix to build all those packages, and then trivially copy them to the target system using whatever mechanism you like. That's literally what Nix was designed to do.
You are not saying just to use Nix as a build tool; you said to use it as a package manager. These are different things.
"Nix as a package manager" is basically nothing because all the interesting stuff happens at build time, and it produces artifacts that are absolutely trivial to deploy. I kept saying "Nix's design" because I'm not even partial to that particular implementation, any "component store" style of software deployment should do the trick.
I feel like I'm being gaslit lol
How pedantic do you want me to be? Either way, I think we can agree that storing software in a component store is far more efficient (in terms of storage, packaging effort and deployment effort) than a bunch of (mutable or immutable) disparate FHS trees, right?
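A minimal sketch of the "component store" idea this subthread keeps returning to: each package lives at an immutable path derived from a hash of its identity and inputs, so multiple versions coexist instead of overwriting each other in a shared FHS tree, and deployment reduces to copying store paths. Everything here is invented for illustration; real Nix hashes the complete build recipe (sources, flags, dependencies), not just a (name, version) pair.

```python
import hashlib

STORE = {}  # path -> contents; stands in for /nix/store on disk

def store_path(name, version, inputs=()):
    """Derive an immutable store path from the package's identity and inputs.

    Simplification for illustration: real Nix hashes the full build recipe.
    """
    key = f"{name}-{version}:{','.join(sorted(inputs))}".encode()
    digest = hashlib.sha256(key).hexdigest()[:8]
    return f"/store/{digest}-{name}-{version}"

def install(name, version, inputs=(), contents="..."):
    """'Build' a package into the store; re-installing the same thing is a no-op."""
    path = store_path(name, version, inputs)
    STORE.setdefault(path, contents)
    return path

# Two zsh versions coexist instead of clobbering a single /usr/bin/zsh.
a = install("zsh", "5.9")
b = install("zsh", "5.8")
assert a != b and a in STORE and b in STORE

# "Deployment" is copying store paths to another machine's store verbatim.
remote_store = dict(STORE)
assert remote_store == STORE
```

In real Nix, that last copy step is roughly what the `nix copy` command does.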
I remember as a high schooler picking up openSUSE as my operating system because I heard that it was the best KDE operating system. Then after some time I broke the system accidentally by adding third-party RPM repositories to it and then doing a system upgrade. At the time, I didn't really have the expertise to fix it, so I just scrapped everything and switched distros. I think the approach described in the article is really sensible and probably would be the best default for a lot of users, maybe the majority of users, based on my experience accidentally breaking Linux multiple times; the openSUSE breakage wasn't the last either.
All this being said, I do find Flatpaks to be a flawed application distribution method at least based on my time using them back in 2021. I wrote about that regarding a specific experience here with Spotify, and another time I found the 1Password integration from the browser to the host OS was broken only in Flatpak. The long story short is that software often just doesn’t work how it should when it’s packaged in Flatpak. Maybe things have gotten better since I last tried it.
I think it's safe to say it's gotten a lot better over the last four years, but I have very limited experience using proprietary software in flatpaks. Everything I've installed from Flathub in recent history has worked as expected.
I see why they went with the base-image approach, but it's a little frustrating to see because the core motivator they mention - systems deteriorating and accumulating cruft over time - is exactly what NixOS very successfully solves, or at least has for me. KDE Linux's approach is basically "accumulation of (non-visible, forgotten) state causes problems over time, so let's eliminate that state by freezing the base system entirely in configurations we've chosen and tested." Which... works, but you lose out on capabilities and flexibility, which they mention but decide is worth it.

The thing is, by making system configuration declarative and centralized, you still fix the "accumulation of forgotten changes" problem (by making the changes all visible in the same place), but without losing the ability to change your system. It just seems like a point in the decision space they didn't really discuss or acknowledge.
Not arguing NixOS is suitable as a base for KDE's distro, but I do think there could plausibly be a "GUI-first" distro that still exposed a declaratively-configured base, so that you could get a presented view of "here's all the ways this system is different from as-distributed."
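That "here's how this system differs from as-distributed" view could, in principle, be computed by diffing a declared configuration against the base image's package manifest. A toy sketch of that presentation, with invented package names and versions (nothing KDE- or NixOS-specific):

```python
def system_diff(base, declared):
    """Report packages added, removed, and version-changed vs. the base image."""
    added = {p: v for p, v in declared.items() if p not in base}
    removed = {p: v for p, v in base.items() if p not in declared}
    changed = {p: (base[p], declared[p])
               for p in base.keys() & declared.keys()
               if base[p] != declared[p]}
    return {"added": added, "removed": removed, "changed": changed}

base_image = {"plasma": "6.2", "bash": "5.2"}               # as-distributed
my_system = {"plasma": "6.2", "bash": "5.2", "zsh": "5.9"}  # declared config

diff = system_diff(base_image, my_system)
print(diff["added"])  # {'zsh': '5.9'}
```

A GUI-first distro could render exactly this diff as its "what have I changed?" screen, which is the visibility property the comment above is after.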