Something Is Rotten in the State of Cupertino
51 points by carlana
I struggle to comprehend getting this upset about Apple quietly choosing to scale back, delay, or entirely cancel some of their LLM slop features. Even in the scripted demos all of it looks like useless poison, and the less of it that’s genuinely shipped in macOS and iOS the better.
If anything, joining the GenAI hype-train is the part that damaged Apple’s credibility, and it’s heroic that someone in Apple leadership may be exhibiting the good sense not to release garbage that doesn’t work.
quietly choosing to scale back, delay, or entirely cancel some of their LLM slop features
I feel like the complaint in the blog post is less about that, and more about how they are doing the opposite - they are loudly promoting it, while having delayed it twice without ever showing an actual demo of it working. Everything shown so far has been pre-recorded concepts.
In fact, I feel that Gruber doesn’t really care what’s promised, and more that they keep promising it without showing any sign of ever delivering.
They’re never actually going to do it! It’s a conspiracy to give MS FOMO to put more AI in Windows so it becomes worse quicker, and more people will want to use Apple!
Yeah, totally agree, it’s not about the AI. It’s about company culture and trustworthiness. Steve Jobs’ Apple was built on the idea that anything shown at a keynote, no matter how completely out there it was, was “available now”. The iPhone was only announced before its official release because they had to do an FCC filing. Or, in other words, products were advertised for what they could do now, rather than what they could maybe do in the future.
This Siri vaporware, like the Vision Pro before it, seems to exist because stockholders expect it to exist, rather than because Apple has “finally cracked it” and created an amazing product to ship to consumers. Apple would have laughed at Microsoft for doing this in the 00s. It’s symptomatic of a company focusing on its next financial quarter rather than long-term viability. Not good. For long-time Apple users like myself, it’s yet another warning sign that the future could get ugly and that I should have an exit strategy ready in case the enshittification becomes really bad.
For my personal machines, I made that final jump five years ago and haven’t looked back. We’ll be keeping it cozy for you over in Linux land! More are always welcome.
But, alas, I’m writing this from my employer-issued MacBook, because for me it’s not a hill worth dying on just yet. I disable all of the worst features as best I can, but it’s still a drag. Helps with the work-life balance, though: I’m never tempted to use this machine on my own time.
some of their LLM slop features
I understand the reflex to call everything “AI”-related “LLM slop”. But if you think about it, interpreting natural language within a natural-language context and generating machine commands accordingly is pretty much the thing LLMs are supposed to do.
So I don’t think this is helpful.
No, what LLMs are supposed to do is take natural language and then complete it with more natural language. And they’re great at syntax and grammar and semi-sensible word choice, though they’ve always been bad at producing text that actually carries useful meaning. Generating machine commands is not something LLMs have ever been even half decent at.
I don’t agree with this at all. You just need to use NotebookLM on a bunch of meeting transcripts to realize how useful the capability can be when applied to the right problem domain.
Sorry, but that’s untrue. The first ever transformer, from Google’s Attention Is All You Need, was a seq2seq model. It is incorrect to equate decoder-only models (GPT et al.) to LLMs in general.
I struggle to comprehend getting this upset about Apple quietly choosing to scale back […]
Really? It makes perfect sense to me, and the post was a relaxing and pleasant read for me, because the motivations involved are clear and plain and it feels like a bit of common sense is starting to prevail.
Here is how I read it, in case it aids you in your struggle.
If you don’t know this – which is all step-by-step obvious to “AI” skeptics such as myself – then the motivation is obscure. If you did look at the early LLM bot results and concluded it wasn’t “AI”, then it’s clear.
Comparison: cryptocurrencies and blockchain. It’s really a very very slow, extremely resource-intensive, distributed database. Cryptocurrencies were going to change the world, but the skeptics thought they wouldn’t. They didn’t. Then other extensions of blockchain were going to: smart contracts, web3, NFTs. Skeptics said they wouldn’t, based on understanding the limitations of the underlying tech. They did not in fact change the world.
If you look at the underlying misunderstandings then the sequence of events is clear, obvious, and was eminently predictable.
But many still cling to the fallacies and they shout angrily at me when I say this. I predict that people will angrily decry this reply.
Generative “AI” is a scam. It has a tiny number of real uses, which are extremely niche, and the rest is nothing but wild exaggeration. GenAI will implode and be largely forgotten in a few years, and so will the blockchain.
I can hardly believe I find myself quibbling with your reasoning, which I do think is basically sound. But…
With these in mind, I don’t expect LLMs to go away after the inevitable bubble burst. I expect them to become normalized and spread out, and for expectations to gradually adjust to their inherent limitations.
OK, I concede your points.
The only tiny thing I’d say is that it seems LLMs are good at generating program code – but that doesn’t mean the program code is any good… :-)
I just want a “Death Valley” macOS release where they strip all the AI stuff out and don’t regularly try to turn it back on every time you install a software update. Make it a thing you can install from the App Store if you really want, but please do not silently opt everything in to using it.
That, and fixing bugs. No new features, just fix stuff.
This is one of the reasons why free software is far superior to anything else.
The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
However, I agree with you – if you have to use macOS (or Windows), it is always better to have optional AI (and other problematic or spyware features) than mandatory ones.
*shrugs*, I’m not sure that’s true about OSS being superior because it doesn’t add new features. Maintenance is usually seen as hard, boring work. I am way too lazy to look at how many times KDE and GNOME have been re-written from scratch instead of fixed. Lots and lots of times. More times than bugs have been fixed? Probably not quite that many times… yet ;)
Linux has this problem more generally. Instead of fixing a particular subsystem in the kernel, just replace it. How many audio systems have we had in Linux land now? I stopped keeping track a while ago.
Certainly there are some OSS projects that don’t do this, the BSD’s come to mind.
As for AI, I’m with you. Optional, not forced AI please.
Gnome 3 came out 14 years ago now… and I think maybe it was the right decision? I did use Mate for a few years, and I’m still a bit bitter that you cannot mix and match desktop environments and window managers, but I think PaperWM has surpassed my previous favorite xmonad + Mate.
Yes, but I’m talking about all the individual components that make up Gnome or KDE. If you read the release notes or follow bug reports, they totally re-write the components all the time, and close all the old bugs saying well in X version written in Y language those bugs don’t apply anymore. Except of course new bugs show up and many of the old bugs re-appear…. Eventually the bugs languish and some new person comes along and rewrites the component/app in Z language and bumps the version again.
This is not a Gnome only trend.
Ah, thanks for the clarification. You might have a point, but do we really know how much closed-source software does this?
I’m sure rewrites happen, but only if the business people in charge are not paying attention to the software side of the house. It generally makes no business sense to rewrite existing code. They would rather fire some people and keep the code barely maintained, so they can get more cash, vs. putting resources towards a rewrite.
At least that’s been my experience. But you are right, we don’t really know.
It depends in part on what distribution you use. For example, I’ve been upgrading (in an officially supported way) from one Fedora release to the next since 9 or maybe 12 (it’s been a long time), and I can count the number of times I had to fix something on one hand (and if I waited to upgrade until the new release had been out a couple months, I might not have needed to manually intervene but I was too eager to upgrade).
I think I agree with your original point–FOSS projects are more likely to move on to the new than maintain the old–but in practice it hasn’t really mattered to me. Again only speaking for Fedora’s default GNOME install, the moves to PulseAudio and later to PipeWire were seamless. PipeWire is a dream, by the way. In the same period of time there have been far fewer major releases of Windows, but the changes have been drastic and unpleasant in comparison.
If you compare Fedora 12 to 42, sure, wildly different. But if you simply used them to do your work, upgrading along the way, I don’t think most people would notice. I also don’t think most people read the release notes like we do. The more things can just work the less it matters what components changed from one release to the next.
If I was building my own distro I might care more about all those underlying changes, but I can’t build my own macOS or Windows.
EDIT: My bad, PulseAudio came in Fedora 8, before I would have been upgrading from one release to the next. But that was in 2007, and the only other sound system change in the last 18 years was PipeWire, which definitely was a seamless upgrade.
I agree: as users, as long as you are on the happy path, it has mostly “just worked”. But as soon as you get off the happy path in Linux, you will probably find lots of bugs, especially if you play in GUI land (Gnome/KDE/etc.).
If you aren’t prepared to fix those bugs yourself, it’s anyone’s guess if they will ever get fixed (especially, it seems, in GUI programs). Some OSS maintainers are very good about fixing bugs or returning a Won’t Fix. Lots of others are less likely to do either of those things. In my experience, especially with GUI programs, the chances are the bug will get closed every few years, untested, with a ‘new version X is a re-write, this bug isn’t valid anymore’. It’s anyone’s guess whether the re-write fixed that particular bug or not.
Well, it’s not exactly the same but Microsoft keeps putting out new ways to write apps every few years:
https://www.irrlicht3d.org/index.php?t=1626
(What Microsoft is of course good about… you likely can run applications made in any of those nowadays without much trouble.)
I’m pretty sure most closed source companies do this. I have been involved in many rewrites at different jobs! Joel’s article about the topic didn’t come out because only OSS projects do this :D
That, and fixing bugs. No new features, just fix stuff.
Staying good and getting better is worth more for a foundational workstation machine than anything else.
They managed to make Gruber mad at them. Seems bad for Apple.
To me, the important bit that’s missing from this article is that the current generation of AI is fundamentally tied to the financial market, not to the market for products. Like blockchain/Web3, investment in generative AI is supposed to provide a story that makes these companies’ inflated P/E ratios seem reasonable, by promising a new field of unlimited growth for companies that are now actually in a mature industry. Whether it provides any products that are actually useful to consumers (including businesses) is beside the point; the products are only a prop for telling the story.
The thing is that Apple is a product company. To the extent they’re telling a story, it’s that buying and using their luxury products will make you happier and more productive than using the commodity products in the same categories. Promising AI features may seem like something natural and even necessary for them, given that it’s the current hype cycle in the tech industry, but it’s actually crossing the streams. Touting AI is basically sending the message that their product doesn’t actually matter, and given Apple’s product orientation, that was probably not intentional, and doesn’t actually reflect their plans.
I don’t think Gruber is conscious of this, given that he says, “Generative AI is the biggest thing to happen in the computer industry since previous breakthroughs this century […]. Nobody knows where it’s going but wherever it’s heading, it’s going to be big, important, and perhaps profitable.” But he’s got to be feeling the disconnect, he’s just partially mistaken about where it comes from.
Maybe somebody should sit him down and make him read https://www.wheresyoured.at/wheres-the-money/
$ pbpaste |wc -w
9538
That’s … a lot. It kind of makes me want to ask an AI chatbot to summarize it.
This is making a very good point about how to recognize vaporware.
For the next half-hour Jobs berated the group. “You’ve tarnished Apple’s reputation,” he told them. “You should hate each other for having let each other down.”
Tim Cook should have already held a meeting like that […]
Wow. My reaction was more along the lines of, I’m glad I never worked in a place like that.
Here’s a different perspective, from a film-maker/artist who recently passed away:
“If I ran my set with fear, I would get 1 percent, not 100 percent, of what I get. And there would be no fun in going down the road together. And it should be fun. In work and in life, we’re all supposed to get along. We’re supposed to have so much fun, like puppy dogs with our tails wagging. It’s supposed to be great living; it’s supposed to be fantastic.”
I have been a Mac user on the desktop since 2007, but this coupling of products just annoys me. It has always been there to some extent on macOS, and strongly on iOS. But with AI it ticks me off more than in other areas. There are so many different considerations (privacy, quality, accuracy, etc.) that I just want to pick my own AI tools, and given the rate at which the field is still progressing, my preference can change by the week or month.
IMO the role of the OS is to provide good extension points that different AI products can hook into. Apple having a stake in selling Apple Intelligence makes providing extension points less of a priority, or perhaps even undesirable, for them.
Of course, it’s their product and they can do what they want (though perhaps not under the DMA anymore), but it makes me less happy with their devices as a user.
(Yes, I know it’s a walled garden, etc.)
I like Gruber but I’m very much in the opposite camp. I hate Apple shipping software because they said they would in a keynote. I don’t turn to Apple because they hit all their OKRs on a spreadsheet.
If they’re not happy with it, don’t ship it. That’s why I pay the Apple Tax. I don’t want half done mostly working sorta broken crap like almost all the AI products I’ve tried. Do it right or don’t ship it. I actually don’t care at all if these features ever launch, but I’m certainly not going to get upset with Apple for doing what I’ve begged them to do for years.
I don’t think Gruber’s argument is that Apple should have shipped the feature even if it doesn’t work - he’s arguing that Apple shouldn’t have announced it (and used it in marketing materials to sell more iPhones) if they didn’t yet have a working project that they could reliably ship.
That’s not the point. Assuming the software is shipped when ready anyway, there’s still the problem of advertising features and selling hardware based on something that is far away from shipping, and may have even been just a fake concept art.
Apple is generally very strict about not talking about future features, and they typically announce features when they exist, and only need polish and bugfixing before shipping. The AI Siri was apparently different — they’ve announced it as a real feature before they’ve implemented it.
It’s the discrepancy of showing a feature “working” in TV ads for general public, but not having the feature working enough to prove to anyone outside Apple that it even exists.
His entire point was that, unlike other releases, they never even got to half done or sorta broken.
They just advertised pure vaporware. Straight up fraud.
I smell a class action lawsuit coming, and Apple deserves it. If you bought an iPhone 16 based on the AI promise, I’m sorry you fell into Apple’s trap.
An LLM ingesting all the stuff I view on my screen sounds absolutely terrifying.
Microsoft has this in the works (Recall). Just need for the disgust and fury to die down before trying to launch it again.
Recall’s blunder was storing all this information regardless of the user’s intent, or even consent.
In principle, I don’t see a problem with an assistant using on-screen info when explicitly asked to, assuming the data isn’t sent anywhere without consent. Apple is hinting at implementing this with care about privacy and consent, we’ll see when/if that actually ships.
Current versions of Android ship this as the “hold home button” action. With a very useful “Translate” button which helps a lot with text inside images.
As a technologist, “ingesting all of the stuff I view on screen” sounds like a terrible violation of privacy and security.
As a consumer, “being able to ask ‘what was the name of that article I read two years ago about the monk who used volcanos to prove the earth was older than 6000 years’” sounds like it’d be really useful.
For better or worse, there are a lot more consumers than technologists.
(It’s this article btw)
I think a big issue with Recall wasn’t that MSFT was ingesting all that content and sending it to their datacenters to act as digital butlers, it was that they made elementary mistakes in the implementation that made it relatively easy for 3rd parties to access the data too.
FWIW I believe Apple will do a better job in that specific part if and when they release their version - maybe called iRobot?
Yeah, Recall wasn’t sending anything at all to their datacenters - but plenty of people didn’t trust Microsoft when they said that, or were worried that they might change their mind in the future.
The bigger problem was that Recall was logging everything you did with your computer to files on local disk, which meant that anyone else with access to your computer (your IT department, an abusive spouse etc) could spy on everything you had done.
Thanks for the clarification. My first instinctive visceral reaction to Recall was “I must find out how to turn it off as soon as it’s activated”. I didn’t dig too much into the details.
I don’t see a problem with an on-device LLM “ingesting” screen contents on request, especially where “ingesting” is for context, not training.
I… kinda agree?
I avoid Apple products actively, but there are many things I admire about Apple. #1 is their accessibility track record, and perhaps #2 is that although I think they sometimes do things I don’t like and I don’t consider them 100% honest, compared to almost everyone else, they ship way less crap and their marketing is more honest.
So hopefully they will not ship crap, but it’s true that they have betrayed their own marketing practices which I think have taken them to where they are now.
But… the whole AI hype is so big that who knows if this is an event of the proportions that the article mentions.
AI isn’t easy, especially something powerful enough to do what Apple has promised while running on-device. I’m glad Apple is taking the slow, cautious, privacy-preserving approach. Better to take their time and get it right.
And to those upset about the iPhone 16 and not having all the AI stuff right now, well you have some of the AI stuff and a great phone. Relax. There are certainly bigger things to worry about nowadays.
yeah it’s called “Tim Cook is an old fuddy-duddy”
I can’t even read it. Gruber generally can’t be trusted. His incentive is to rile Apple fans up and position himself so that he can keep milking subscription revenues.
Everything he does should be viewed from that lens.
Don’t they mean the city of Cupertino?
Comparable! I don’t think you’ll get Gruber’s poetic license revoked with such nitpicks.