The Downfall and Enshittification of Microsoft in 2026
114 points by Caio
2025 wasn't a great year either: retiring Windows 10 while requiring new hardware to upgrade during an economic downturn, and aggressively pushing unpopular features nobody asked for.
Whatever led to this sequence of floundering and cartoonishly stepping on rakes feels like it should be a case study.
Just pure greed. Never been in short supply at MS HQ, but this time short-termism has finally defeated the "long con" faction.
I don’t agree (having worked with the Windows team for a few years). There were several factors:
TPM 2 for Windows 11 was an important baseline for a load of security features that they were being hammered for not having. Apple and most Android phones had security coprocessors that did more than TPM 2, and so it was viewed as table stakes for security by a lot of government and corporate customers. The Microsoft sales org is focussed entirely on these kinds of customers because of the company's scale. Getting a $1M contract is barely worth a salesperson's time. The ones that move the dial in their org are at least tens, if not hundreds, of millions of dollars. This means that they simply do not talk to SMEs, let alone individuals. They have a partner ecosystem to sell to these people, but the feedback channels are not good.
They have an ingrained culture of lying to management. Partly this is incentives being misaligned, but there's also fear of things leaking to journalists. No one wants a slide deck about how far behind a competitor they are to leak, so mid-level managers delete those slides when they present to senior leadership. This means that senior leadership has no idea how bad things are. They get a rose-tinted picture and reduce headcount in the org (and the mid-level manager gets promoted for making things look good).
Windows is full of technical debt. The virtual memory subsystem alone is more code than a minimal Linux configuration. It has a load of copied-and-pasted code that made SQL Server marginally faster on one specific thing. Only three people in the company understand it (and one is semi-retired). Win32k is a massive attack surface, with vast numbers of things in the kernel that have no business being there. All of this happened for good reasons in the '90s, but modern computers look nothing like the computers for which these optimisations were done, and they now hurt both maintainability and performance. Worse, most 'anti-malware' systems hook into these things, so Microsoft can't change them significantly without other vendors crying antitrust and claiming it was done to make Defender the only usable option. Apple isn't a convicted monopolist, so it was able to kick all third-party code out of the kernel. Windows still lets third parties run code in interrupt handlers (ever wondered why interactive performance is so bad on Windows? Take a look at how many things run in ISRs: every other OS moved to a policy of 'ISRs wake threads, they don't do anything directly' twenty years ago).
A lot of the senior people on both the product and engineering sides don’t use competitors’ products at all, so they have no idea of what the competition looks like. They’ve been using Windows and Office for decades and believe it’s better. And they’re so used to Windows and Office that, if they do briefly try something else, the fact that it’s different makes it seem worse.
There are a bunch of cultural issues that reward shipping new features without any reference to user demand or how they integrate with other things. It’s not quite at the level of Google’s ‘get promoted for shipping a new chat system’ phase, but it’s quite bad.
But, first and foremost, senior leadership has no vision. I asked a bunch of senior folks what they thought computing should look like in 10-20 years. None of them had an answer. You can’t define a strategy unless you have an idea of what the strategy is aiming towards. And, because this comes from the top, no one with any kind of vision stays long enough to get promoted, with very few exceptions. It was noticeable that about half of the people I respected and enjoyed working with quit in the year before I did, and none of the ones I thought were a waste of space did (some got promoted).
There are more issues, and now that the stock price is tanking, the board should be asking Nadella some hard questions.
TPM 2 for Windows 11 was an important baseline for a load of security features that they were being hammered for not having.
I don't get it. Those security features could be made optional, enabled only for people who have the necessary level of TPM. Home users don't care and shouldn't need to upgrade their hardware for that reason. They don't even need secure boot. If some corp wants to play with trusted computing, the cost of that and the choice should be entirely on them.
They don't even need secure boot
And when they don't have it, MS gets panned whenever there's any large-scale exploit that secure boot could have prevented. And when they don't have secure full-disk encryption, Microsoft gets panned when they lose their laptop and all of their stuff is leaked. And when they don't have WebAuthn with keys managed on a separate device, Microsoft gets panned whenever there's a Windows vulnerability that allows at-scale credential harvesting. And every single one of these events draws a comparison with macOS, iOS, and Android that all do have these features in the baseline hardware that they support.
The problem was that management could see the cost to Microsoft of not doing it, the benefit to their big customers of doing it, and did not factor in the pain to the majority of their ecosystem.
And when they don't have it, MS gets panned whenever there's any large-scale exploit that secure boot could have prevented. And when they don't have secure full-disk encryption
But that doesn't follow. You can have full-disk encryption without secure boot (I have exactly that). Secure boot only protects you from evil-maid-style attacks, where someone replaces your bootloader to extract the decrypted data later. And that's seriously not something that will happen to home users (when did it ever happen at scale?) - they're more likely to lose data to a hardware failure that leaves their disk encrypted and locked (see the number of posts about lost BitLocker access on Reddit). There will be a thousand simpler ways to hack them anyway. Meanwhile, corps can enforce whatever hardware and domain restrictions they want.
I came across this series of posts a few days ago that seems to be relevant here.
I have seen a lot in my decades of industry (and Microsoft) experience, but I had never seen an organization so far from reality. My day-one problem was therefore not to ramp up on new technology, but rather to convince an entire org, up to my skip-skip-level, that they were on a death march.
[...]
I later researched this further and found that no one at Microsoft, not a single soul, could articulate why up to 173 agents were needed to manage an Azure node, what they all did, how they interacted with one another, what their feature set was, or even why they existed in the first place.
Windows is full of technical debt...this happened for good reasons in the ‘90s
I know it upsets engineers a lot, but I don't think users are annoyed about things that have been true in Windows for 30 years. They're annoyed at what keeps happening now.
A lot of the senior people on both the product and engineering sides don’t use competitors’ products at all
This used to be true, but these days it seems driven by envy. There was an article posted here a while back about a shell developer lamenting that they were asked to do things by designers who invariably used Macs, not Windows; the result being a system that constantly tries to visually emulate a different platform without achieving polish or a coherent experience.
They have an ingrained culture of lying to management.
Agree with this. Being a senior leader at Microsoft looks awful. It's hard to control software projects at the best of times, but when everyone is lying, leaders would need to be psychic. I think it manifests as "emperor's new clothes": everyone tells managers that the manager's idea is being implemented and is going great, and the only way to make changes is to replace the entire leadership bench, which happens every 5-7 years, and the entire public can see it happen as the product violently lurches.
I know it upsets engineers a lot, but I don't think users are annoyed about things that have been true in Windows for 30 years. They're annoyed at what keeps happening now.
They don't care directly; they care about the consequences. Windows did a lot of things that were overfit to mid-'90s PC designs. As those designs changed, Windows added code rather than rethinking core abstractions. This makes adapting to newer systems much harder than on, say, Android or macOS, which built things designed for newer systems. New features driven by feedback aren't built to do what users want; they're built as a compromise between what users want and what is possible within the current design, and the technical debt in Windows skews that compromise heavily towards the latter.
There was an article posted here a while back about a shell developer lamenting that they were asked to do things by designers who invariably used Macs, not Windows
I thought the term "dogfooding" became famous because it was standard practice at MS. Does this not extend to everyone in the company?
(ever wondered why interactive performance is so bad on Windows? Take a look at how many things run in ISRs: every other OS moved to a policy of ‘ISRs wake threads, they don’t do anything directly’ twenty years ago).
I had in fact wondered this exact thing! Recently migrated my MIL off Windows to macOS (she tried Linux first but couldn't find a DE she enjoyed using).
Repurposed her old laptop as a Linux desktop machine for our local Scouts group, and was surprised by just how much more responsive Mint seemed than Windows 10 on the same hardware.
Preventing third parties from running code in ISRs would help a lot here, but it would also break a lot of existing device drivers. This is one of the problems with the DoJ ruling: if Microsoft breaks third-party things, that causes antitrust scrutiny, even if they do it for good reasons.
For a lot of these things, the right thing would be for Microsoft to announce the new rules and a timeline for enforcing them, then move all of their code to following them, then only certify drivers that follow the rules, then block ones that don’t, over a period of five or more years. A lot of that should have started ten or more years ago (providing a sane copy in/out model rather than having drivers poke userspace memory directly would have been a good start. You can’t adopt SMAP without that) but that would require leadership with a vision of where the platform should be.
Unfortunately, a lot of major customers depend on unmaintained drivers (or other kernel-mode things) and so even a five year deprecation model will not get them upgraded.
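The 'ISRs wake threads, they don't do anything directly' policy is easy to sketch as a toy model (illustrative only; a real ISR runs under far stricter constraints than a Python function, and all names here are invented for the sketch):

```python
import queue
import threading

# Toy model of the "ISRs wake threads" policy: the interrupt handler
# only records the event and signals a worker; all real processing
# happens in ordinary thread context, never in the handler itself.
events = queue.Queue()
processed = []
done = threading.Event()

def interrupt_handler(device_id):
    # Must be minimal: no I/O, no heavy work, just hand off and return.
    events.put(device_id)

def worker():
    # Runs in thread context, where blocking and long work are allowed.
    while True:
        device_id = events.get()
        if device_id is None:  # shutdown sentinel
            break
        processed.append(f"handled-{device_id}")
    done.set()

t = threading.Thread(target=worker)
t.start()
for dev in (1, 2, 3):
    interrupt_handler(dev)
events.put(None)
t.join()
print(processed)  # ['handled-1', 'handled-2', 'handled-3']
```

The point of the split is that latency-critical interrupt context does a bounded, tiny amount of work, while anything slow is deferred to a schedulable thread.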
Windows is full of technical debt. The virtual memory subsystem alone is more code than a minimal Linux configuration. It has a load of copied and pasted code that made SQL Server marginally faster on one specific thing. Only three people in the company understand it (and one is semi-retired).
Some years back, I ran into a Windows 10 bug where enough VirtualAlloc2 calls w/ MemExtendedParameterAddressRequirements would irrecoverably destabilize the whole OS. It seemed like the kernel just kind of got stuck while attempting to satisfy the address requirements. This, to be clear, was in a 64-bit process, with plenty of address space to fulfill the requirements, and the system as a whole being roughly 180 GB from OOM.
So if this is accurate, I guess I shouldn't be too surprised that a bug like that shipped.
The biggest place where this bites people is the commit-charge mechanism. This requires all space in memory plus swap to be fungible. The total memory plus swap space is added together and this is the total available commit. Whenever a process requests a page or the kernel allocates a page on its behalf for kernel state associated with the process, that commit is charged to the process. If there is no available commit, the allocation fails.
It guarantees you can limit the maximum amount of memory a process can consume. If you limit all processes to less than the total available space, you will never get an exception when you touch a page[1], though you may still see swapping or lazy allocation.
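The mechanism can be modelled in a few lines (a toy sketch, not the real implementation; units are pages):

```python
class CommitAccountant:
    """Toy model of NT-style commit charge.

    Every committed page must be backed by *some* fungible page of RAM
    or swap, so the commit budget is RAM + swap, and a commit request
    fails up front when that budget is exhausted -- even if plenty of
    physical RAM is still free.
    """

    def __init__(self, ram_pages, swap_pages):
        self.limit = ram_pages + swap_pages
        self.charged = 0

    def commit(self, pages):
        if self.charged + pages > self.limit:
            return False  # allocation fails immediately, no lazy overcommit
        self.charged += pages
        return True

    def release(self, pages):
        self.charged -= pages

# 100 pages of RAM plus 20 of swap gives a commit budget of 120 pages.
acct = CommitAccountant(ram_pages=100, swap_pages=20)
assert acct.commit(120)    # fine: exactly the budget
assert not acct.commit(1)  # fails, even though most RAM is untouched
```

This is the key contrast with Linux-style overcommit, where the same request would succeed and only fail (via the OOM killer) when pages are actually touched.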
This was a really good approach in the NT3.51 days. Treating all memory as fungible let them swap out almost everything. When you swap out a page, the not-present PTE stores the swap device index and the offset where the page lives in its unused bits. You can then swap out the page-table page and everything associated with a process except the page-table-root page, then demand-page everything back in. This is amazing when your processes often take a multiple of the amount of available memory and you want to have the foreground one use the most real memory and then be able to swap in others as they're needed, and not have things fail.
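The encoding trick can be sketched as simple bit-packing; the field widths below are invented for illustration and do not match the real NT PTE layout:

```python
# Illustrative packing of a swapped-out page's location into a
# not-present PTE. With the present bit (bit 0) clear, the MMU ignores
# the remaining bits, so the OS is free to reuse them. Field widths
# here (4-bit device index, 36-bit offset) are made up for the sketch.
PRESENT = 1 << 0

def encode_swapped(device_index, offset):
    assert device_index < (1 << 4) and offset < (1 << 36)
    return (offset << 5) | (device_index << 1)  # present bit stays 0

def decode_swapped(pte):
    assert not (pte & PRESENT), "page is resident, not swapped out"
    return (pte >> 1) & 0xF, pte >> 5  # (device_index, offset)

pte = encode_swapped(device_index=3, offset=0x1234)
assert decode_swapped(pte) == (3, 0x1234)
```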
The immediate downside of this is that a lot of processes (especially Electron apps!) allocate a lot of memory that they don't touch. Processes may easily allocate 2-3x the memory they need on the common path. On Windows, this then causes other processes to fail allocations unless you allocate a lot of swap. On my desktop with 128 GiB of RAM, I needed to allocate a 512 GiB swap file to be able to actually allocate all of the RAM. I don't think I ever saw more than 8 GiB of the swap file actually used; it just had to be present to avoid commit being exhausted long before RAM. And that causes a load of system instability, because almost no software actually checks all possible allocation paths (and, if it did, a lot of it can't do anything other than exit gracefully when allocations fail).
Oh, and Windows has a low-memory notification for GCs to use when doing a GC would be better than swapping. This is triggered only when RAM is low, not when commit charge is low, so you can have 30 GiB of free RAM, no process able to allocate memory, and no GC being told to free things.
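The mismatch can be shown in a two-line sketch (names and the threshold are illustrative, not the real Windows API):

```python
def low_memory_signal(free_ram_gib, free_commit_gib, ram_threshold_gib=2):
    # Sketch of the mismatch described above: the notification looks
    # only at free physical RAM; free_commit_gib is ignored entirely.
    return free_ram_gib < ram_threshold_gib

# 30 GiB of RAM free but commit exhausted: no process can allocate,
# yet garbage collectors are never told to release anything.
assert low_memory_signal(free_ram_gib=30, free_commit_gib=0) is False
assert low_memory_signal(free_ram_gib=1, free_commit_gib=50) is True
```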
But there are more consequences of this. Something like MTE adds metadata to memory (but not necessarily all memory). This means a page of RAM may need a bit more than a page of swap. This breaks the entire abstraction. Storing the extra metadata also requires completely rearchitecting the swap subsystem. Adopting MTE would require a from-scratch redesign of core parts of the Windows virtual-memory subsystem. On FreeBSD, Linux, or XNU, it’s a fairly simple addition, because they allow allocation during page out.
Similarly, modern operating systems typically have compressed swap. Ideally, you keep a small number of pages in RAM before paging them out, then compress them and write them out. But this can’t be done natively with the Windows pager, so instead they do a complicated dance where the pages are assigned to another process that does the compression. This subtly messes up the commit-charge accounting (a swapped-out page may use much less than one page after compression).
There are a lot of things like this, where decisions that were good (even essential) for performance around 1995 have become core abstractions that now can’t be changed.
[1] This is actually not quite true. You can make CoW mappings of files, and they don't count toward your commit charge until you take the fault, at which point you can get an exception. If you want to do overcommit on Windows, you can do it like this.
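A rough cross-platform analogue of the footnote's trick is a private copy-on-write file mapping; on Windows this would be a PAGE_WRITECOPY section, and in Python's mmap module ACCESS_COPY has the same semantics (writes fault in private anonymous pages and never touch the file):

```python
import mmap
import os
import tempfile

# Create a 4 KiB file of zeroes to map.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * 4096)
    path = f.name

# ACCESS_COPY = private copy-on-write view: writes are visible through
# the mapping but are backed by private pages, not the file itself.
with open(path, "r+b") as f:
    view = mmap.mmap(f.fileno(), 4096, access=mmap.ACCESS_COPY)
    view[0:4] = b"data"  # taking the write fault creates a private copy
    modified = bytes(view[0:4])
    view.close()

with open(path, "rb") as f:
    file_bytes = f.read(4)
os.unlink(path)

assert modified == b"data"        # the mapping saw the write
assert file_bytes == b"\x00" * 4  # the file on disk was not changed
```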
Microsoft's misdeeds and behavior are well documented, over decades.
Explaining them away as bureaucratic dysfunction, does not give MS credit for their frequent, active choices.
I would note that TPM2 is not the hardware requirement that's hard to satisfy. Every relevant CPU in the past 10+ years has it (albeit some earlier ones required switching it on in the BIOS for some reason).
Given that 2010-2019 saw almost no advancement in CPU performance whatsoever[1], that's a lot of hardware that could still be perfectly viable.
[1] AMD was floundering and Intel was resting on their laurels. It wasn't until Zen 2 that things started moving again.
In 2015 Microsoft required that all machines certified for Windows 8 include TPM 2.0, so even low-end machines had to ship it. But prior to that, higher-end CPUs already shipped with it.
So I think it highly unlikely to have a machine that both meets Windows 11's other requirements and lacks TPM2. I would concede there may be such a CPU, and it may still be in use by someone, but even so I don't think it's at all common.
I have a system with a 6700k. Perfectly good CPU, it handles everything Windows 11 throws at it. But Windows 11 requires hacks to run because Microsoft decided it's too old.
Same boat with a 4790k. It's fine for my very occasional jaunt into PC gaming. Hell, I even played Marathon just fine in the server slam.
But no, it's basically doomed because it can't move to W11 since TPM 2 is a hard requirement.
It's stupid, the CPU is more than capable still.
Sure. But that's not because of the TPM 2.0 requirement, is my only point. It would fail the requirements regardless.
I love it. The worse Windows gets, the easier it is for Linux to reach parity on the Desktop. Which by now it probably has, depending on your distro, window manager and use-cases.
Yeah, Microsoft is genuinely helping Linux reach parity with Windows. Not because Linux is getting much better (there are still lots of weird unnecessary problems, and even more limitations and incompatibilities than before thanks to Wayland), but because Windows is getting worse while Linux stands still.
As the meme goes:
> does nothing
> wins
I feel like it's a whole lot more than doing nothing. "Don't get worse" entails a lot of hard work, as anyone who maintains code and products can attest to. It just doesn't come with fancy announcements and promotions.
This is fair. A lot of work has gone into supporting new hardware as well as old hardware, and the whole Wayland transition has been a monumental (yet arguably necessary) time sink which has taken a ridiculous amount of work to get to a point where it's not worse than X11. What it's lacking is more or less made up for by what it does better. (Though I won't consider Wayland "better" until I stop regularly encountering needs which would be easy in X11 but which, under Wayland, turns out to be really hard or impossible, or require some obscure GNOME extension, or some proprietary app to be updated. Examples include configuring libinput, remote desktop, push to talk in various programs.)
And we probably shouldn't discount the actual areas of improvement either. It used to be that for a large segment of the population, Windows could get almost infinitely bad before they would consider switching, because they play games and you just couldn't really get games to work on Linux. The fact that the trade-off is no longer "I will not be able to play any games", but instead "some games may be a bit more buggy or a few have anti-cheat which makes them not work but most just work out of the box", is an immense achievement.
Next up is getting folks off of the other Microsoft-owned platforms like GitHub, Azure, npm, LinkedIn, & Teams. It's a slap in the face how many job application forms require accounts on Microsoft's GitHub + LinkedIn social media platforms when developers are on the front lines of knowing the harm/neglect Microsoft symbolizes.
I run a small office on all-linux desktops, and have done for 10 years. So much of our work is web-based now it's a breeze.
I don't understand how bad some of the regressions are.
Here's one that I experience at least a few times a week. File Explorer can get into a funk where it doesn't notice updates to a directory and its contents without clicking refresh.
I can right click -> new folder and nothing shows up. Refresh and there is my new folder.
Right click -> rename and type in a name for my new folder, hit enter and it's still called "New Folder". Refresh and the rename went through.
It's baffling that Explorer can't even keep track of its own modifications.
I see this often with the file-open/file-save dialog: I've downloaded a file and know it is in the target directory, but even "order by date" does not show the file. I have to close and reopen the dialog to list the file properly. Absolutely unfathomable bug.
Nearly all software has been declining: games, social media, OSes, productivity tools, business systems -- you name it. Microsoft is an easy target, and we all feel great punching down on them, but let's not pretend they're an outlier among greats. Nearly all software has become shittier, buggier, slower -- including some of the former greats, like Apple.
I don’t think it’s really “punching down” to criticize the dominant player in the home and enterprise desktop market.
Most, if not all, of the criticism that Microsoft is getting for Windows 11 is self-inflicted. By and large people seemed content with Windows 10; the company shoved ads and half-baked AI cruft into the OS and now acts surprised that its users are unhappy.
Is any of that exceptional? Apple has ads and upsells for its services, and is in the process of adding third-party ads. Most apps have ads, tracking, upsells, and run like garbage.
Was this written by AI?
That post is remarkable not because it is ... but because it ...
That is the indictment.
What makes this especially damaging is not just the ... It is the ...
That wording matters.
The result is ...
And the entire section "The Mood on X", including its title.
I had the same thought… but then I missed several pieces lately that other people convincingly argued WERE LLM-generated, so I don't really trust myself to distinguish properly… which is frightening!
Worse, this isn't a blog post about, e.g., why enabling swap is important. If it's LLM-generated, it carries the same moral significance as the hit piece against the matplotlib maintainer. The moral significance is not diminished just because the target is Microsoft.
Maybe 2027 will be the year of no new Windows software for many developers.
Or perhaps more realistically, soon we’ll have the year of “Windows no longer a priority for new software”
I certainly wouldn’t want to bother with shipping stuff for Windows.