Competence as Tragedy
79 points by Johz
79 points by Johz
Nicely put. I've been thinking lately about another aspect of AI in general that's somewhat related. What's different about AI, compared to other technologies, is that it turns us all into Tony Starks, or Elon Musks or DJ Trumps if you will.
We become privileged CEOs with our heads in the cloud, passing our days dealing with the "important high level stuff", willfully ignoring the details, oblivious to the consequences.
That's the thing that disturbs me the most about AI - it implies a positioning of oneself in relation to one's environment that is inherently callous and arguably immoral, both socially and politically.
Sometimes, serendipity allows one to discover someone saying almost exactly what I was planning to write:
I don't quite know that I'd go as far as immoral, because that implies that any sort of high-level approach, ignoring the details, is immoral, and I don't think that's true. In any well-rounded endeavour, you want to have a mix of people, some of whom are looking at the big picture, and others of whom are worrying about the details.
I think the danger with AI is that you end up in a situation where no-one's worrying about the details at all, and everyone's working at the high-level view. Like, with a compiler, most developers will work at a higher level, not worrying too much how the compiler is working, but there are a lot of people who spend a lot of time making sure that compiler is working and worrying about all the details that are necessary to make sure it's producing efficient code. But with LLMs, if you delegate your understanding of your codebase entirely to AI, then there is no person in the world who is paying attention to those details. That feels quite uncomfortable to me.
But with LLMs, if you delegate your understanding of your codebase entirely to AI, then there is no person in the world who is paying attention to those details.
The fix for that is not to delegate your understanding of your codebase entirely AI.
I've started thinking about this in similar terms to when I worked at a large company with many different engineering teams.
If another team built an API for my team to use I wouldn't go and review their code on a line-by-line basis, but I'd at least take a look at their documentation and maybe glance at their repo to make sure tests exist.
If that API later started misbehaving for me I'd go and dig into the code to figure out what's going on.
I'm starting to treat code by LLMs in a similar way - if it's security-adjacent I'll review it closely, but for increasing portions of it the fact that it works when I run it and the stakes are low reduces my desire to dig into the details unless I need to.
The big difference is that when you delegate to another person, that other person is responsible for what's going on. You don't need to understand the details, because you can trust that they do. But an LLM can never be responsible for the code it outputs — rather, it's always the responsibility of the person who used the LLM to output that code.
This, to me, is the biggest danger of LLMs in software engineering — that people delegate their understanding of a codebase to LLMs, and only later, once things start going wrong and they need to understand more in detail, do they actually try to understand what's been written.
In fairness, this can already happen in companies where one person who has written the code understands it, and then leaves, in which case the effect is similar to if an LLM had written the code — in both cases, there's no-one in the company who can take responsibility for the details of the code. But with a person leaving, we recognise that that's a dangerous situation to be in (and talk about bus numbers and sharing knowledge and code review as solutions). Whereas I see a lot of people who quite willingly delegate all of the details of their codebase to LLMs, and do not look in detail about what sort of code is being generated.
I also think that programmers are already far too lackadaisical about the effects that their work can have on others. The Horizon scandal in the UK demonstrates that even relatively dull admin projects can end up costing people their lives if it's not done correctly. Or at the other end of the spectrum, there was the site recently that calculated the lifetimes lost to having to work around buggy MacOS code. My worry is that LLMs will make all of this a lot worse, because software developers are essentially abdicating responsibility for their code.
(To be clear, I'm not saying that LLMs aren't useful, or even this kind of "vibe coding"-style delegation of responsibility to a machine. Like, sometimes I need a script to fetch some data, and it's completely okay if that script is completely unmaintainable. But that's not most of the code that I write.)
But an LLM can never be responsible for the code it outputs — rather, it's always the responsibility of the person who used the LLM to output that code.
That's entirely true - it in this case I'm the person who used it to output that code.
I'll take full responsibility for that, and I also benefit from having built some level of trust in the model I'm using based on having prompted it in similar ways in the past.
Of course, that makes switching or upgrading models painful because I don't yet have a level of trust in my ability to predict how the new ones will respond to the prompting patterns I've seen success with in the past.
I've started thinking about this in similar terms to when I worked at a large company with many different engineering teams.
If another team built an API for my team to use I wouldn't go and review their code on a line-by-line basis, but I'd at least take a look at their documentation and maybe glance at their repo to make sure tests exist.
In the large company you're presumably able to do this because you know that someone is still reviewing the code and not inserting malicious statements or else they'd be fired.
This further assumes the API boundary, as ensured by the test coverage is sufficient which I'd posit it's not in many professional scenarios. Even assuming the tests cover a sufficient portion of the API, it does nothing to ensure there are not malicious statements intermingled with the implementation providing the correct answer to the test.
LLM use is probably more akin to accepting an OSS dependency without line by line review or by merely taking a quick glance at the docs and tests. Although this is an increasingly popular occurrence, doesn't make it any better an idea.
I really enjoy your takes on the current trends, but if there's something I've learned over the last 15 years as a full-time programmer, it is that sometimes we are terribly bad at calling out the stakes. In a disproportionate majority of times, the stakes are much, much higher than we first assumed.
This is something I tell excitable new vibe codes all the time: it's fine to vibe code something for personal use, but you've got to stay aware of the stakes and avoid using vibe coded projects in places where bugs could harm other people.
The problem there is that understanding the stakes involved in systems is itself a non-obvious skill that can take years to develop!
For years, I've been in a position where the amount of code I write is variable and low. It's not every day; usually I'm meeting other teams and diagramming and negotiating what's needed and just so much comms. Lately I find it's necessary to get coding time in regardless, because I'm not comfortable becoming oblivious like you described. You can typically rely on management or mentors who came from your same experience, but now my deepest experience isn't current and needs to be.
Reminds me of a verse from Expert In A Dying Field by The Beths
Hours of phrases I've memorized
Thousands of lines on the page
All of my notes in a desolate pile
I haven't touched in an age
And I can burn the evidence
But I can't burn the pain
And I can't forget it
How does it feel? (How does it feel?)
To be an expert in a dying field
And how do you know? (How do you know?)
It's over when you can't let go
You can't let go, you can't stop, can't rewind
Love is learned over time
'Til you're an expert in a dying field
This is the second time I've seen someone reference this song recently, and I hadn't heard it before. Went to go listen, then went and bought the album on Bandcamp. It certainly speaks to a moment.
I've always interpreted this song about the pain of leaving a relationship - even if you know it's the right thing, you are making all those bits of knowledge about how to love that specific person obsolete. None of it will matter in your future.
But it can be about programming in the age of AI too if you want it to be!
Exactly this song has been triggering mildly depressive episodes in me about my field for the past few years (both formal methods and software engineering). Funny to run into someone who also interprets it this way. I like the song, though, in a bittersweet way. Weird how it's actually a love song and works just as well.
Exactly this song has been triggering mildly depressive episodes in me about my field for the past few years (both formal methods and software engineering).
FWIW, in EU, I've seen formal methods job openings going from incredibly niche to rare in the last few months. This increase in popularity comes from LLMs, as some founders have the thesis that these will make formally verified code cheaper. There is some hope.
This is my take. LLMs benefit mostly the "barbell" of accuracy: either pretty inconsequential stuff (like reviewing my posts on Lobste.rs) or important logic that must be formally verified. Everything in-between benefits less, and the closer to the center of mass of this barbell things are, the less they benefit. I'd argue that most of commercial software sits right there.
I don't understand why so many people are so concerned about AI and development. Maybe it's because I've been around a bit, but honestly I'm not that old.
When I was first getting into undergrad it was
they are going to move development jobs out of the US and you won't have a job when you graduate
Which definitely happened, but not as successfully as people thought. Shortly out of undergrad, my first major project was...fixing a project that had been moved out of the US.
and then shortly after all of that
no/low-code is going to change software as we know it
and basically that didn't happen. It was a silly premise to begin with since a lot of the idea/tooling had been around forever already
and now it's
AI is going to change software as we know it and is going to take our jobs
It's cool that Claude can refactor your code so quickly, but refactoring code is not your entire job. I've also seen LLM code reviews that gave completely birdbrained feedback ESPECIALLY around security. I think LLMs in software are changing things, but I've already seen basically the AI/LLM equivalent of the first ever project I worked on...which was a rescue project from the thing other people were worried about.
Model T was launched in 1908. But horses were still used massively in World War II, with Germany and USSR using millions of horses. https://en.wikipedia.org/wiki/Horses_in_World_War_II
The Wikipedia article also has this fun sentence: "The United States economy of the interwar period quickly got rid of the obsolete horse; national horse stocks were reduced from 25 million in 1920 to 14 million in 1940." Even adjusting for the fact the population and economy grows over 20 years, 14 million is still a lot of horses, clearly it wasn't that obsolete.
My daughter visited a town at the top of a mountain in Italy where donkeys are still used for transport, because cars can't climb the only access route (lots and lots of steps). Police in the US still have horses.
Wikipedia tells me that were 6.6 million horses in the US in 2023; taking care of horses is still a career you can have.
Software change is faster, yes. But then again, Indeed.com currently lists around 8 job openings which state that knowledge of COBOL would be helpful.
(This example is courtesy of The Shock of the Old, by Edgerton, if I remember correctly.)
There was this moment a few years ago where I first wrote racket, following along the little schemer. And in the process, as I was thinking about a problem, for the first time ever I wrote code that I thought was beautiful. I've had this experience several times since, especially when writing functional langs (most recently with OCaml and then F#) where I wrote some code and was struck by its beauty.
The weird thing is, when I showed this code to other programmers, many of them were put off. Sure, it was beautiful. But it wasn't "readable", and "too clever". And this was, and continues to be, a trend in the industry IMO: we want replaceable cogs of engineers who can write code that any other engineer can change, and any aesthetics gets in the way of that. I'm glad the author received compliments for their elegant code, but that hasn't been my experience in the industry, and even on this site people will happily argue against writing code like that.
So, I don't know. Was it LLMs that killed beauty in code?
I want my hobby code to be beautiful. I want my work code to earn money. Because I work to earn money.
Upvoted, I really enjoyed it as a literary piece, and I don't think it is dead wrong. But, although I might be wrong, writing readable code is just a tiny fraction of what programmers are supposed to do, and I'm not even talking about the higher-level stuff. Bits are still traveling through buses and cables, core dumps and stack traces are going nowhere. If anything, the sanctuary where our skills matter will be there in the foreseeable future.
One can pick hooves with a brand new tool without forgetting their ways around a horse.
I can understand loving the craft for the sake of the craft itself, without any concern for its practical value, but that definitely is not the perspective I share.
I got into programming and software development because I love solving problems, not technology for the technology sake. I think it is beautiful and fascinating in many, many ways; but only as a means to an end - utility is a part of the beauty.
Because of that, to be honest, I welcome the current transition - new tools, new possibilities and more productive ways to solve problems using software. It still requires as much, more in fact, competency. Just of a different kind
This is a very different vibe from “vibe coding.”
I’ve been acutely aware that there’s a venn diagram of things you:
For many on earth there is no overlap between all three. So to have been in that sweet spot for awhile, is a thing of fleeting beauty.
I did get into programming for the money, sort of. I wanted to make things and make a living making things. I could have gone with “woodworking” but considered I would rather not have to work through retirement.
It’s a constant fight and struggle to find balance to find moments of joy. I think a trick is to convince yourself that the search for that joy and balance is fun too. If not, then it's not sustainable.
Edit: changed “can do” to “good at” in the list
I see other tragedies in the making. The young generation appears to have a very hard time on the job market, and it will take some time until it gets better, which is very sad. We had CASE tools, outsourcing etc. etc. before, and every cycle was hard. Another tragedy is that the AI took the technology (training an AI, building a tool, not using it) away from the financial reach of the common nerd. Yet another one is that interesting infrastructure work (kernel, OS, programming language, crypto algorithm, network protocol) is hard to monetize (don't see the AI taking over these domains).
Sort of reminds me of the shift in painting once the camera became commonplace. Instead of despairing that their job of literal representation went away, they found impressionism and other ways of seeing that still communicated truth, rather than simply literal fact.
I feel like all this AI doomerism comes from softare developers who are already experienced and, thus, know already
Which are anything but trivial skills! I myself need to learn them! And these skills are needed both to give a meaningful command to an LLM agent, and to even evaluate whether the output is any good. These skills are honed through years of experimentation, "playing", and deeply thinking about your work. If you were to put an LLM agent in the hands of a beginner, or in general someone who doesn't get 'the craft' yet, they will not be able to get the same results as an experienced developer, because they don't even have an expectation for the output of the LLM, and don't know what to ask from it.
I feel like this doomerism is a manifestation of impostor syndrome: people just don't appreciate their own skill that is used even when using LLM agents, and think that it's the agent doing all the work or something.
I am struck by the author reading All the Pretty Horses and The Crossing but then going to read The Road instead of finishing The Border Trilogy by reading Cities of the Plane. That's the real tragedy.
Jokes aside, I think it's an interesting connection, and the picture crowprose paints is a good one: Your skills won't save you, the world will move on without you and make you obsolete.
I will say that I don't think John Grady Cole's tragedy is his competence.
I enjoyed this piece.
If you're feeling that mood, I highly recommend the short documentary Farewell EtaoinShrdlu, about the final days of manual typesetting at the New York Times, in 1978. A world of lifelong craftsmen, impressively complex machinery, and a whole culture vanishes overnight.
My wife does manual typesetting (seriously manual, not Linotype). Her most modern piece of equipment is from about 1965 and her main press is a century old. It’s the printing equivalent of retrocomputing.
What I find interesting is that she’s performing the exact same activity that you would apprentice for in 1900, or go to vocational school for in 1950. (She literally uses a textbook from the 50s to understand the techniques.) It would have afforded you a lifelong career. Now it’s an esoteric, some might even say eccentric, activity that is in economic terms pointless; it’s just an art form.
There are still a few crusty old guys (and they are all guys) on the mailing list who remember what the “real” working culture was like, but the community is mostly graphic designers, artists, and hobbyists nowadays. So the culture has shifted quite a lot. But they all still value the craft — in fact the craft itself is now the only thing valuable about the whole field!