Convert Linux to Windows
30 points by linkdd
This was an early goal of Lindows (now Linspire). From Wikipedia:
“Lindows, Inc. was incorporated in July 2001 by Michael Robertson and began selling products in January 2002. Robertson’s goal was to develop a Linux-based operating system capable of running major Microsoft Windows applications. It based its Windows compatibility on the Wine API. The company later abandoned this approach in favor of attempting to make Linux applications easy to download, install and use.”
Also, there are many libraries (not named GTK) that do value ABI stability. This old site (https://abi-laboratory.pro/index.php?view=tracker) tracks some popular libraries.
It’s still possible to run 30+ year old copies of Netscape Navigator (4.x) on Linux. You do have to start with the ELF-based releases, though, because the kernel removed the ability to execute a.out/qmagic binaries a while back.
I low-key hope Rust could be a force of change here. Part of the Rust community has a strong culture of specifying interfaces and of maintaining backwards compatibility. So perhaps one day some Rust folks will get radicalized enough to build a userland crate that provides a stable, comprehensive 1.0 API for developing desktop applications in Rust (of course, this also means they’d have to implement the APIs, probably starting with pid 1, but the implementation work is not as hard as maintaining interface stability).
The old-school UNIX people had the same philosophies. You’ll see it in Linux (at least for userspace), FreeBSD, glibc, and a bunch of other core bits. I suspect it’s more of a ‘system programmer’ thing rather than a language-specific thing.
Unfortunately, the problems are all of the higher-level things layered on top. Part of this is because the problems change more rapidly up the stack. The core POSIX APIs for network and filesystem access still solve the same problems they did 30 years ago (though QUIC slightly changes the layering and may affect the networking bits).
When requirements change more frequently, you have to think more carefully about APIs to make sure that they’re extensible, and you have to ship more quickly to remain relevant. That makes it a hard problem.
I’d say it’s not so much about the rate of change as it is about scope. POSIX is easy, because the bytestream abstraction carries so much.
When you get to GUI desktop stuff, my understanding is that there’s simply no narrow waist. You’ve got to have a somewhat expansive ABI/API, which is naturally harder to guard with respect to stability.
I’ve written about this before too; the problems you’re going to face are:
Apple?
The Apple Music app for Windows that replaced iTunes (and the TV app) is WinUI. It’s the only third-party WinUI app I’ve seen that wasn’t some deep Microsoft fanboy thing from an ISV. Even Microsoft just uses Electron.
Why are people so insistent on library ABI stability? The kernel syscall interface is stable, so that’s the platform you build upon. That means if you want libc, you ship your own libc. If you want to load dynamic libraries, you ship your own loader. Simple.
This is a well-understood and normal practice on other platforms; the only difference is that the baseline includes more functionality. Why is Linux treated differently? Windows provides some stable libraries, but for anything beyond that, the same idea applies. If you want XYZ functionality, you ship your own XYZ library. You don’t hope and pray that xyz.dll exists somewhere on the system, and if that happens to break all the time you don’t write blog posts claiming that “there is no good way to distribute binaries” for Windows. You simply rely on what the OS is guaranteed to give you, and for anything else you ship your dependencies.
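To make the “syscalls are the contract” point concrete, here’s a minimal sketch of my own (not from the article): a C program that uses no libc at all and talks to the kernel directly. The x86-64 syscall numbers are the only interface it assumes.

    /* Sketch: no libc, no loader, just the stable x86-64 Linux syscall ABI.
       Build with: cc -static -nostdlib demo.c */

    static long raw_syscall3(long nr, long a, long b, long c) {
        long ret;
        __asm__ volatile ("syscall"
                          : "=a"(ret)
                          : "a"(nr), "D"(a), "S"(b), "d"(c)
                          : "rcx", "r11", "memory");
        return ret;
    }

    void _start(void) {
        static const char msg[] = "hello from bare syscalls\n";
        raw_syscall3(1, 1, (long)msg, sizeof msg - 1); /* write(2): fd 1 */
        raw_syscall3(60, 0, 0, 0);                     /* exit(2): status 0 */
    }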
Linux is treated differently because glibc is The Way You Do Things, and glibc doesn’t support it. You can’t statically link glibc, and you can’t bundle all the dynamically linked components (even with a custom build and rpath, ld.so is tied to the glibc version and must be accessible at an absolute path). Your only realistic alternative is musl, which has several limitations; one not mentioned on that page is that dlopen is unimplemented, and the majority of development tooling does not work with it. If you can’t work with that, and you need tooling that works and aren’t going to write it yourself, there is nothing you can do to make actually portable software; the user must have installed something up front to at least enable your use of a custom glibc build, whether that be Nix, Docker, Snap, etc. From this view, Wine is just another runner.
And no, on Windows the same idea does not apply. No platform besides Linux stabilizes its syscall interface. Microsoft shuffles them on purpose so you don’t get any funny ideas. Your system interface on Windows is the C API of kernel32.dll and ntdll.dll.
But glibc is backwards compatible and uses symbol versioning extensively to support this. You can build something against an old glibc and run it on a newer one with no problems (the converse is not guaranteed to work), modulo the occasional bug, which gets fixed when reported.
And ELF allows rpath to be specified relative to $ORIGIN, which is expanded to the path of the main-program executable, so you can include directories that live somewhere private to the program and contain your versions of shared libraries (which makes complying with LGPL requirements easy).
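You can even lean on symbol versioning explicitly when building on a new system for old targets. A hedged sketch of my own (not from this thread): the GLIBC_2.2.5 tag below is the x86-64 baseline and is an assumption you’d verify with objdump -T against your target libc.

    /* Build: cc symver.c -lm
       Pins pow to an old glibc symbol version so the binary still loads on
       older systems; newer toolchains would otherwise pick pow@GLIBC_2.29. */
    #include <math.h>
    #include <stdio.h>

    /* Redirect references to pow in this translation unit to the old version.
       (GLIBC_2.2.5 is an assumption; check `objdump -T` on your libc.) */
    __asm__(".symver pow, pow@GLIBC_2.2.5");

    int main(void) {
        volatile double x = 2.0, y = 10.0; /* volatile defeats constant folding */
        printf("%g\n", pow(x, y));         /* prints 1024 */
        return 0;
    }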
Yes. This seems to mostly just work, in my experience.
Julia’s Yggdrasil project compiles stuff for a pretty old glibc version and distributes generic binary packages for Linux that work on every system I’ve tried them on except NixOS.
I’ve also managed to get some very old binaries from Loki games running on newer Linuxes by grabbing a few old libs and setting a custom LD_LIBRARY_PATH. The Windows versions don’t run on Windows 11.
It is quite unfortunate that the ELF format mandates that the reference to the interpreter be an absolute path. This can be worked around with a static executable that executes the interpreter directly. Now you can ship your own dynamically linked everything, i.e. it solves all the problems. Support for something akin to $ORIGIN in the ELF interpreter field would be the better solution, though.
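To make the workaround concrete, here’s a rough sketch of such a static launcher (mine, not from the thread; all paths are hypothetical placeholders, and a real launcher would resolve them relative to itself). It execs a bundled copy of ld.so on the real binary, using ld.so’s --library-path option to point at the bundled libraries.

    /* Sketch of a small *static* launcher: run a bundled dynamic linker on
       the real (dynamically linked) binary. Paths are hypothetical. */
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        char *args[argc + 4];
        args[0] = "./lib/ld-linux-x86-64.so.2"; /* bundled interpreter */
        args[1] = "--library-path";             /* ld.so option: search here */
        args[2] = "./lib";                      /* bundled shared libraries */
        args[3] = "./bin/app";                  /* the real executable */
        for (int i = 1; i < argc; i++)          /* forward the user's args */
            args[i + 3] = argv[i];
        args[argc + 3] = NULL;
        execv(args[0], args);
        perror("execv");                        /* reached only on failure */
        return 127;
    }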
Re: Nix, you absolutely do not need to have Nix installed in order to run software built by Nix. Just copy the closure with your favorite file transfer mechanism. The only limitation is that it needs to end up at the same absolute path.
And no, on Windows the same idea does not apply.
It absolutely does. The idea is just that there is a stable baseline. It doesn’t matter that Windows shuffles syscall numbers, the stable interface is, as you said, some system libraries. But that is the idea. People pretend that the stable baseline doesn’t exist on Linux because there are no stable system libraries, when the stable baseline is in fact the syscall interface, not the libraries.
This can be worked around with a static executable that executes the interpreter directly.
It turns out this is solvable, but it’s non-trivial. Specifically, if the underlying program reads /proc/self/exe, you can’t just execute the interpreter with execve. In my experience, it’s not all that uncommon for programs to use /proc/self/exe, and it’s nontrivial to get it working in a portable way when you ship your own dynamic linker.
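A tiny sketch of my own shows the failure mode: the program below prints whatever /proc/self/exe resolves to. Run normally, it prints its own path; run as an argument to the dynamic linker, it prints the linker’s path instead, which breaks any program that locates its resources this way.

    /* Run it two ways:
         ./selfexe                              -> prints its own path
         /lib64/ld-linux-x86-64.so.2 ./selfexe  -> prints the *linker's* path */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char path[4096];
        ssize_t n = readlink("/proc/self/exe", path, sizeof path - 1);
        if (n < 0) { perror("readlink"); return 1; }
        path[n] = '\0';
        printf("%s\n", path);
        return 0;
    }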
The only limitation is that it needs to end up at the same absolute path.
To me, that’s a pretty big limitation. We’ve really ended up in a place where users expect to just grab a binary from GitHub or whatever, and to have that be runnable without needing to jump through hoops to set it up. This idea of needing to install to a fixed absolute path leads to some problems: do you use /nix/store as the fixed path? If so, you’ll need to have different installation instructions depending on whether the user has Nix installed or not.
…when the stable baseline is in fact the syscall interface, not the libraries.
Other platforms offer libraries plus a dynamic linker, which is key. That, plus the use of absolute paths for the dynamic linker itself in the ELF binary, make Linux alone unsuitable as a target for shipping applications. You need to pick a libc to target as well. And, as already touched on in this thread: musl can be shipped portably but isn’t a universal drop-in replacement for glibc, and glibc really wants to be installed at the system level.
It turns out this is solvable, but it’s non-trivial.
What’s non-trivial about it? Does it matter as long as it works?
We’ve really ended up in a place where users expect to just grab a binary from GitHub or whatever
Then all the dynamic linking stuff is off the table anyway.
This idea of needing to install to a fixed absolute path leads to some problems
If you’re installing more than one closure and you don’t have any software to manage store paths, what you could do is have each closure in its own directory, and then bind-mount them to /nix/store in a user namespace. That limits what you can run with it, but it’s probably good enough for most regular applications. If you really want to, you can go the Windows route and have installers and uninstallers.
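Roughly, and only as a sketch under assumptions (unprivileged user namespaces enabled, an empty /nix/store directory already present on the host, hypothetical paths, error handling trimmed), the namespace trick looks like this:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mount.h>
    #include <unistd.h>

    static void write_file(const char *path, const char *data) {
        int fd = open(path, O_WRONLY);
        if (fd >= 0) { write(fd, data, strlen(data)); close(fd); }
    }

    int main(void) {
        char buf[64];
        int uid = getuid(), gid = getgid();

        /* New user + mount namespace: we gain CAP_SYS_ADMIN inside it. */
        if (unshare(CLONE_NEWUSER | CLONE_NEWNS) != 0) { perror("unshare"); return 1; }
        write_file("/proc/self/setgroups", "deny");
        snprintf(buf, sizeof buf, "0 %d 1", uid);
        write_file("/proc/self/uid_map", buf);
        snprintf(buf, sizeof buf, "0 %d 1", gid);
        write_file("/proc/self/gid_map", buf);

        /* Stop mount events propagating back to the host's namespace. */
        mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL);

        /* The bind mount is visible only inside this namespace. */
        if (mount("./my-closure", "/nix/store", NULL, MS_BIND, NULL) != 0) {
            perror("mount"); return 1;
        }
        /* Hypothetical store path; real ones carry a hash prefix. */
        execl("/nix/store/example-app/bin/app", "app", (char *)NULL);
        perror("execl");
        return 1;
    }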
Other platforms offer libraries plus a dynamic linker, which is key. That, plus the use of absolute paths for the dynamic linker itself in the ELF binary, make Linux alone unsuitable as a target for shipping applications.
The combination is the problem. If you either provide stable libraries + loader, or fix the ELF format, the problem is solved. I think fixing the ELF format would be better than enforcing the requirement of a certain set of libraries.
What’s non-trivial about it? Does it matter as long as it works?
Well, I wrote about how I tackled it for Brioche some time ago (https://brioche.dev/blog/portable-dynamically-linked-packages-on-linux/). Since then, I also came across sharun (https://github.com/VHSgunzo/sharun) as another implementation of the same idea.
So yeah, it’s fine “as long as it works”, but ~0 software uses it in practice. The exceptions I’m aware of are the ~80 packages currently in Brioche’s ecosystem and the 11 packages linked in the sharun README. It’s still largely uncharted territory, and there’s very little in the way of tooling that would help you produce working, shippable software structured like this today.
Then all the dynamic linking stuff is off the table anyway.
To be more precise: my goal is to be able to ship software 1) portably and 2) in a way that can be installed / run without root (unless root is actually needed, e.g. for special capabilities). A binary is the most common and easiest version of that, and I’d put Flatpak / AppImage bundles in that same category, along with a portable tarfile or similar, assuming it could be extracted and run from any folder.
[…] what you could do is have each closure in its own directory, and then bind-mount them to /nix/store in a user namespace
…isn’t that just Docker/AppImage/Flatpak with extra steps? I guess I don’t see what value Nix would bring for this approach.
I think fixing the ELF format would be better than enforcing the requirement of a certain set of libraries.
100% on board for that! Honestly, it’s kind of baffling to me that ELF doesn’t support ${ORIGIN} for the interpreter field (is that part of the ELF specification, or just a detail that the Linux kernel could address easily?)
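For what it’s worth, the interpreter is just a literal NUL-terminated path stored in the binary’s PT_INTERP program header, and the kernel’s ELF loader opens it verbatim (which is why patchelf can rewrite it). A quick sketch of my own that dumps it, 64-bit ELF only, most validation omitted:

    #include <elf.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        if (argc != 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        Elf64_Ehdr eh;
        if (fread(&eh, sizeof eh, 1, f) != 1) { perror("fread"); return 1; }
        for (int i = 0; i < eh.e_phnum; i++) {
            Elf64_Phdr ph;
            fseek(f, (long)eh.e_phoff + (long)i * eh.e_phentsize, SEEK_SET);
            if (fread(&ph, sizeof ph, 1, f) != 1) break;
            if (ph.p_type == PT_INTERP) {
                char *interp = malloc(ph.p_filesz);
                fseek(f, (long)ph.p_offset, SEEK_SET);
                fread(interp, 1, ph.p_filesz, f);
                printf("PT_INTERP: %s\n", interp); /* e.g. /lib64/ld-linux-x86-64.so.2 */
                free(interp);
            }
        }
        fclose(f);
        return 0;
    }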
What you’re talking about works, unless you need graphics drivers or similar. Can’t ship your own libc if you want to use such libraries provided by the distro.
Graphics drivers are only a problem for NVIDIA. If you stick to in-tree functionality, Mesa works fine.
What is so great about Linux? If you like Windows, why not use its kernel ABI too? Why not use a Windows compatible kernel and take advantage of all the drivers that have been written for Windows?
There are large differences in e.g. filesystems between Linux and Windows. Your beloved Win32 programs assume Windows abstractions, so why waste time mapping them to Linux’s way of thinking about file systems?
In other words, why does this article not emphasize ReactOS more? Isn’t ReactOS basically Wine plus a kernel and all the system management software around it (services.msc and such)?
My impression is ReactOS is great if you don’t mind your Windows kernel support being stuck 25 years in the past. For the most part, Linux has better drivers than the ones ReactOS is stuck running.
But being stuck 25 years in the past is, roughly speaking, what this post suggests. He mentions a self-contained exe. Aren’t present-day Windows applications built with things that didn’t exist 20 years ago? .NET frameworks and whatnot?
I gave ReactOS a whirl recently (within the last year). In a VM, so no driver issues. I think it was stuck on very old browsers, so really it didn’t seem very usable for general-purpose computing (because this requires a modern browser nowadays).
I actually think ReactOS is a great idea, but for some reason, it’s less useful than Wine right now.
A browser is a user-space component. Surely if a browser works well in Wine, it works well in ReactOS too. So how is this related to the question of whether to use the Linux kernel or not?
With Wine, you don’t need everything to work in Wine; you can use Linux software too.
So using the Linux kernel can cover deficiencies in Wine.
Also, I think ReactOS cannot run everything that Wine can.
It’s probably a good idea; most daily applications don’t need the extra performance of a native implementation.
The link isn’t loading for me, so this might be ignorant, but if it is using Wine, there may be little to no extra performance anyway. Wine is basically just yet another toolkit providing a cross-platform API, not that different from Qt or GTK, just with a custom binary format loader (but the contents of that binary are not that different).
It’s not even a non-native implementation; there’s no VM there, just a “wrong” cross-library-call ABI format.
WASM with an HTML UX might be a different way to solve this same problem. The upside: you get cross-architecture compat too.
As others have mentioned, Wine is fine for games and other things with custom UI/UX, but things using the Windows UX tend to be annoying.
This would be nice, but it would have to be near-perfect for me to switch. I use Windows because it “just works” for me; I never run into the same issues my *nix friends do, and that’s important to me because I’m just not into configuring my environment (old-age laziness?). I can round-trip between Photoshop, Illustrator, Figma and Node.js, Go and Docker to build what I need to do my job, and I can close my windows and fire up a game to relax. If any of those flows has friction then it’s just a no-go, and that’s why I’ve never switched to Linux (yet).
It sounds like a Sisyphean task, but this post does give me hope. I’ve worked a lot with the Win32 stuff, building software that needs to run on XP, 11 and everything in between, so the argument around compatibility makes sense to me. Isn’t ReactOS doing something like this? Are there any examples of groups building something akin to what’s outlined in this post? The critical question for me will always be how stable it is: can I install anything and just use it without needing to dive into obscure commands and Stack Overflow questions? That’s the bar, I think, for folks like me.
Excellent post though!
I’ve been on Linux for 15 years (from Damn Small Linux, to Ubuntu, to Debian, to Archlinux and quickly back to Debian, to Gentoo, to LFS, and back to Debian again) before switching to Windows some 7 years ago.
There are 2 main reasons why I switched:
With Windows, I am not tempted to tinker with it and spend more than 5min customizing it, because you simply can’t customize it beyond choosing the taskbar’s color and your wallpaper. And I’m fine with that.
The gaming aspect is only secondary, as most of my games run on Linux anyway.
I can second that on the tinkering; years ago I used to love Rainmeter and other Windows mods, and if I’d used Linux back then I’d have probably done the same!
As for the dev env, interestingly (so it seems to some) I don’t even use WSL, despite feeling the same way about Git Bash, Cygwin and that mess. Instead I use Nushell, which I think is a really beautiful example of a well-considered cross-platform shell: it follows all the rules of the OS it’s installed on (Mac and Windows for me) instead of trying to shoehorn Linuxisms into my Windows or Mac. I’d highly recommend it, even if you stick with WSL! Other than that, yeah, I find the stock install pretty satisfactory and haven’t needed to customize a lot.
I approach this from the other side these days and I’m pretty happy - Windows 11 + a tiling window manager as a desktop environment over NixOS running in WSL.
As someone who started programming for Windows relatively recently (in the last 5 years), for all of the baggage of the Win32 API, I am very impressed by the level of backwards compatibility.
Which tiling wm? Does it handle your Nix GUIs too?
I use komorebi for my tiling WM needs on Windows (mainly because it’s a project I wrote from scratch for my own needs; it’s nice when a piece of software does exactly what you want it to).
On the (very rare) occasion that I need to use a Linux GUI app I’ll spin up VcXsrv and it handles any GUI apps running in WSL as expected. I don’t recommend using WSLg because unlike VcXsrv, the windows it spawns do not (yet?) respond to Win32 API window positioning calls correctly.