The Resonant Computing Manifesto
18 points by atharva
Many utopian personal computing manifestos have been written, going back to the 1970s.
What's new is that the earlier manifestos emphasized how computers would make you more powerful and improve your agency.
This manifesto depicts users as passive, with the AI acting as the user's "big brother", deciding what the user needs, and giving it to them.
We can now build technology that adaptively shapes itself in service of our individual and collective aspirations.
Software should be open-ended, able to meet the specific, context-dependent needs of each person who uses it. [Note the passive voice.]
No mention of the user making choices in order to adapt the computer to their needs. Now the AI does this.
I was going to point to the "adaptable" principle here, but you quoted that and pointed out the passive rather than active voice. That's a solid critique.
I finally found the piece it reminded me of, which was focused on user needs and agency and didn't look anything like a manifesto: https://catgirl.ai/log/comfy-software/
users as passive, with the AI acting as the user's "big brother", deciding what the user needs, and giving it to them.
replace "AI" with "script", "macro", "makefile", "Terraform", "CRD", etc etc. Not meant as snark, but really, let's consider that delegation and self-regulation have always been an overarching goal of computers since their invention. Do you know, and can you debug, every control loop in your k8s cluster? I view malleable/on-the-fly software as enabled by LLM codegen as simply an evolution of that line of thought.
the only real problem of "malleable software" is that, at least currently, only frontier models are able to generate it properly. And frontier models require frontier infra to run. And that's why we all pay rent to the providers, and in fact are subjects in their digital feudalism scheme. These are the real stakes, and user ownership of the means of digital production is, as ever, the only meaningful goal of this manifesto.
Some of the open weight Chinese models (like Kimi K2) are in the same ballpark of capabilities as the leading closed models now when it comes to code generation. And you can run those Chinese models on ~$20,000 of hardware (2x 512GB M3 Ultra Mac Studios running MLX).
I expect within a few years you'll be able to run today's leading open weight models (like K2) on hardware that's more in the $2,000-$4,000 range.
I think these are positive trends.
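To make the hardware claims above concrete, here's a back-of-envelope sketch of the memory needed just to hold a model's weights. The parameter counts, quantization level, and 20% overhead factor are illustrative assumptions, not exact specs for any particular model:

```python
# Rough memory estimate for running an open-weight model locally.
# Numbers below are assumptions for illustration, not official specs.

def memory_gb(params_billion: float, bits_per_param: float,
              overhead: float = 1.2) -> float:
    """Approximate memory (GB) to hold the weights, with ~20%
    headroom assumed for KV cache and runtime overhead."""
    weight_bytes = params_billion * 1e9 * (bits_per_param / 8)
    return weight_bytes * overhead / 1e9

# A ~1T-parameter MoE model at 4-bit quantization:
print(round(memory_gb(1000, 4)))  # ~600 GB: plausibly fits 2x 512GB machines
# A ~30B model at 4-bit:
print(round(memory_gb(30, 4)))    # ~18 GB: consumer-hardware territory
```

This is only the weights-at-rest picture; real throughput also depends on memory bandwidth and the MoE's active-parameter count, which this sketch ignores.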
I think the original manifesto implies, or is dangerously easy to misread as implying, that the LLM will be driving the software constantly while in use.
There are at least three views of how one could use an LLM for malleable software: making long-lived changes from time to time; regularly generating a full "tool of the week"; or involving the LLM constantly during the workflow. These have quite different implications and exit strategies.
In the first option you can switch providers and keep your current tooling. Maybe even experiment with the partially-wrong answers of a local LLM to see if it can explain enough to you — enough for you to review and guide its work, pushing it toward your needs beyond its one-shot capabilities.
In the third option you are indeed a serf in digital feudalism.
Of course, for the first option to be feasible, you need to pick platforms with a strong anti-churn bias, because you need the part you modified not to require constant further changes. This is welcome for any malleable software, regardless of LLM use. Make, for example, is backward compatible enough that Makefiles in some places live on as malleable software, undergoing reproduction with mutation and exhibiting emergent evolutionary effects…
We don't need models to be able to write full apps, though. They only need to be able to write the snippets of code that extend the malleable base software.
In my experience (I'm building one such "malleable base"), small models are not there yet, but my personal hope is that, in a couple of years' time, small-ish models (~30B params) will have become good enough to write decently correct snippets, and the hardware to run those models will be "high-end, but not in the mortgage range anymore".
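To illustrate the "snippets, not full apps" point: a malleable base can expose one small, stable extension point, so the model only has to produce a short function body rather than a whole application. This is a minimal hypothetical sketch, not the commenter's actual system; all names here are invented:

```python
# Hypothetical "malleable base": the host app owns the registry and
# the stable API; user- or LLM-written snippets only fill in small
# functions that plug into it.

from typing import Callable

TRANSFORMS: dict[str, Callable[[str], str]] = {}

def register(name: str):
    """Decorator the base app exposes to extension snippets."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        TRANSFORMS[name] = fn
        return fn
    return wrap

# A snippet at the scale a small model might plausibly one-shot:
@register("shout")
def shout(text: str) -> str:
    return text.upper() + "!"

print(TRANSFORMS["shout"]("hello"))  # prints: HELLO!
```

The design point is that correctness pressure on the model stays low: the base app validates and sandboxes what gets registered, and a bad snippet breaks one transform, not the whole tool.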
But I also don't have such a bleak view of the current "inference market" (though I might be naïve). As a user, there are tens of competing providers I can choose from. I just hope the giants don't end up buying them out one by one.
It's unthinkable now that twenty years ago, mainstream OSes allowed users to configure and customize the UI. And, surprisingly enough, no form of "AI" was involved. ;)
The article presents a problem: scaling up tech leaves fewer resources to optimize for edge cases, and you end up with a product that doesn't fit anyone well by trying to please everyone.
And then:
This is where AI provides a missing puzzle piece. Software can now respond fluidly to the context and particularity of each human—at scale.
Are we fucking serious right now? The core problem, and even the beginning of the article says this, is essentially that tech products don’t care about their users enough because they’re scaling too much or too fast to treat them as individuals.
But instead of giving people tools to customize their own experience, you’re handing them over to an unthinking, unfeeling machine.
The problem is that the capacity to care for users and their edge cases is quickly outstripped by overscaling, and instead of not growing too fast for our own good, we’re going to force something on people that many folks distrust and that is, by its very nature, antithetical to user agency?
When I saw this was tagged vibecoding, with the title and the, well, vibes of the page, I thought it would argue for smaller-scale experiences, carefully crafted by humans.
This article reminds me of how sometimes when awful people go to therapy, they don’t become better people, they just learn a bunch of new vocabulary with which to launder accountability.
The manifesto presents a claimed solution to the problem: the entire computing experience getting worse from enshittification (strict Doctorow sense).
Then you look at the signers ... and it's the VCs who made it this way. Literally the guys who did this.
This leaves me not sanguine.
No wonder their formula is magical AI that doesn't exist.
I am very confident that LLMs can help users customize their software more effectively than the last ~30 years of attempted software mechanisms for helping users customize their software.
This is one of the most promising applications of the tech in my opinion.
As @David_Gerard said:
The manifesto presents a claimed solution to the problem: the entire computing experience getting worse from enshittification (strict Doctorow sense).
Then you look at the signers ... and it's the VCs who made it this way. Literally the guys who did this.
This leaves me not sanguine.
No wonder their formula is magical AI that doesn't exist.
I have less than zero confidence in LLMs being able to do this work well enough, in theory or in practice.
I don't get it. What do VC signatures have to do with LLMs being able to help people customize their software?
It’s like asking a fox to lock the henhouse.
More like hundreds of people signed a note saying the new henhouse looks nice and two of them happened to be foxes.
This is one of the most promising applications of the tech in my opinion.
Completely agree! Putting an LLM in the hands of end users in a way transforms them all into software developers, in the sense that now they can write code. This makes many end-user programming "patterns" (formulas, userscripts, extensions) that until now were accessible only to the computer-savvy available to everyone.
don’t care about their users enough
The disaster is that they care, just in a negative way. They absolutely care enough to actively break anything that tries to customise the UX, especially if it drives users away from their dark patterns.
I thought it would argue for smaller scale, carefully crafted by humans, experiences.
On one hand, they do argue for the only way experiences can be crafted by humans at a small enough scale — actually let the user craft it.
On the other hand, their proposed solution seems to include a constantly in use LLM, apparently with long-term persistent state, which seems to be an even more sure path to software constantly gaslighting its user than the already impressive gaslighting achievements of the current state of software development.
It's not even a local LLM explaining to the user how to supervise this same LLM to get help with the needed customisations, a case where I could imagine a meaningful discussion of the hurdles…
There's a feeling you get in the presence of beautiful buildings and bustling courtyards. A sense that these spaces are inviting you to slow down, deepen your attention, and be a bit more human.
What if our software could do the same?
and, more importantly, which KPIs would that improve?
urbit, you've invented urbit
I had to look that one up and ... huh. https://en.wikipedia.org/wiki/Urbit
Urbit is a decentralized personal server platform based on functional programming in a peer-to-peer network. The Urbit platform was created by far-right blogger Curtis Yarvin. [...] has received seed funding from various investors since its inception, most notably Peter Thiel, whose Founders Fund, with venture capital firm Andreessen Horowitz (A16Z) invested $1.1 million.
It's not just the people behind it, they also embedded their worldview in the architecture. https://lobste.rs/s/prlffn/urbit_good_bad_insane has some (not very productive) debate on that, with more reading.
edit to add: From the horse's mouth, there are 32 bits reserved for "human" level entities but they call it "planets", that can then break down further into moons/"devices" (of that person, presumably) and comets/"bots" (on those devices).
A crucial difference between Urbit and other networks is that planets are scarce. Even when the network is fully populated, there are only 4 billion. Early in Urbit's life, most stars and galaxies are not yet operating, so far fewer are available. No one will ever be able to get planets trivially and for free.
4 billion. Let me check the world's population count real quick...
That would be relatively simple to fix, but seriously, it's a (in their words) bad neighborhood to start out in.
politics aside, the assembly language and higher-level language (which are esolangs, please let's just say they're esolangs) are enough to deter most reasonable people from seriously using it
and that it doesn't work. Literally doesn't work. e.g. Holium - who are 100% believers in what Urbit and Yarvin want - who had to move off Urbit to a more, ah, functional functional programming stack because Urbit didn’t actually work.
c.f. https://x.com/HoliumCorp/status/1764753991534047398
Urbit has a messaging app, and fans can put an Urbit address in their bio on socials. Perhaps that's enough use case.