Programming is Dead: The Future of Software Engineering
13 points by hampton
I feel like this take is exhausted at this point. I agree that we're at an inflection point in history, but I don't think this is news to anyone anymore.
The author's "get over it" tone of voice is also unhelpful, beyond driving clicks.
The smug blog posts will continue while the clicks are flowing.
Frankly, I believe some have made appreciable productivity gains. But there's also plenty of incentive to make incendiary posts to ride the wave. In a sense, we're at peak blogger-voice like we had in 2013 or so where brand-building became more important than sharing information.
It's a weird situation. I've shared almost nothing about my LLM workflow except with my coworkers. I have measurable results from LLM work, in dollars saved, on projects I almost certainly would not have done on my own (because they would have been too time-consuming). I also have many less measurable results where I believe LLMs led me to write better code (by letting me prototype several designs) or to find research results I wouldn't have found otherwise.
But I don't write blog posts about it, because I'm not doing anything new. I'm basically always one step behind, paying attention to the things that have survived at least one round of hype. It's the same with my coworkers – we're using LLMs largely for boring things like searching a big codebase. So all the voices saying "LLMs are a moderate productivity improvement and good for some things" tend to get drowned out.
So, why would I use this person's UUIDv7 implementation when I can just vibecode one up like he did? And how is it "high performance?" Against what? Where are the benchmarks? Just because it's written in a compiled language? Or does just saying so make it so?
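For what it's worth, the kind of baseline I'd want to see is trivial to produce. A minimal sketch (my own code, not the author's) using Ruby's stdlib Benchmark, comparing against the obvious stdlib reference points; `SecureRandom.uuid_v7` only exists from Ruby 3.3:

```ruby
require "benchmark"
require "securerandom"

# Any "high performance" claim needs a reference point. SecureRandom.uuid
# (v4) is the obvious stdlib baseline; on Ruby 3.3+, SecureRandom.uuid_v7
# is the direct comparison.
N = 100_000

Benchmark.bm(10) do |x|
  x.report("uuid v4") { N.times { SecureRandom.uuid } }
  if SecureRandom.respond_to?(:uuid_v7) # Ruby 3.3+
    x.report("uuid v7") { N.times { SecureRandom.uuid_v7 } }
  end
end
```

Without at least something like this, "high performance" is just vibes.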
While this might read like negativity, I believe these questions are where it's at, even if they sound snarky.
People are rushing to show off that they get it better than others by doubling down on "AI will do that." But they very quickly hit the ceiling of "AI will just do anything," and at that point the discussion has passed the limit of absurdity.
Why don't people just ask ChatGPT how to get rich and then have ChatGPT do whatever it suggests?
Regarding the negativity, I'm starting to understand some analysts' exasperated rhetoric better. Arguments circle around, and you never know which of the conflicting arguments will cast you as opposing the very side you're agreeing with.
I’ve been successfully using agents inside some of the most critical and complex backend systems at Square.
Who carried the pager for those systems? As an SRE, I expect that the answer to this question will reveal how the business actually aligns its incentives. I do think that being oncall for a production service that one has vibecoded is an adequate punishment for having vibecoded in the first place.
If agents aren’t working well in your codebase, you need to figure out why that is instead of throwing your hands up.
Oh, it's because the agents are incompetent. Previously, on Lobsters, I invited the community to try vibecoding in three different aspects: vibing on my code, vibing with my code as an example, and vibing with two other peoples' code as examples. The task that involves reusing my existing code is the easiest of the three. You can read three graded attempts at that easy task; no agent was able to replicate tagless-final encoding style nor did any of them go searching for e.g. my blog post explaining both the style and this particular codebase (also discussed previously, on Lobsters).
[T]oday is the very worst these Agents will ever be. These models can only get better[.]
I'm reminded of Invader Zim's commentary upon being told that he has made a conflagration worse; he famously replied, "Worse…or better?" Here, I'd ask: better…or worse?
In terms of code review, the language from your list I can review best is Ruby. Your Ruby uuidv7 is overall poorly factored, with poor code reuse. There are two distinct implementations with different signatures and error classes. The fact that some methods take blocks is documented but not demonstrated, suggesting an extraneous feature that doesn't correspond to a need in the API design. There are a lot of magic numbers. Some methods are far too long. Monotonic clocks aren't a guaranteed feature of Ruby runtimes, but I'd consider not letting the user provide a clock and instead choosing a standard monotonic clock.
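To make the factoring point concrete, here's a hypothetical minimal sketch of UUIDv7 per the RFC 9562 layout: one implementation, one signature, and the layout numbers named instead of left as magic constants. This is my own illustration, not the reviewed library's code:

```ruby
require "securerandom"

# Hypothetical minimal UUIDv7 (RFC 9562 layout): 48-bit big-endian
# Unix-epoch milliseconds, then 4 version bits, 12 random bits,
# 2 variant bits, and 62 more random bits.
UUID7_VERSION   = 0x7
TIMESTAMP_BYTES = 6   # 48 bits of milliseconds
RANDOM_BYTES    = 10  # remaining 80 bits, before version/variant fixup

def uuid7(now_ms = Process.clock_gettime(Process::CLOCK_REALTIME, :millisecond))
  bytes = [now_ms].pack("Q>").byteslice(8 - TIMESTAMP_BYTES, TIMESTAMP_BYTES) +
          SecureRandom.random_bytes(RANDOM_BYTES)
  bytes.setbyte(6, (bytes.getbyte(6) & 0x0F) | (UUID7_VERSION << 4)) # version nibble
  bytes.setbyte(8, (bytes.getbyte(8) & 0x3F) | 0x80)                 # variant bits "10"
  bytes.unpack("H8H4H4H4H12").join("-")
end
```

Note that this sketch deliberately takes no user-supplied clock, and it makes no monotonicity guarantee between calls in the same millisecond; a real library would add a sequence counter there.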
There is so much wrong with this post: the entire tone is off, and the assertions are entirely wrong.
AI sucks at solving problems, and THAT IS ENGINEERING!
Because you have to specify exactly what you want, to the point that you already know the problem space, you could just as quickly write the software yourself; you're making every decision anyway. I personally would rather write code than write AI prompts.
If AI allows me to create a quick function for something tedious, so be it, that is not "writing all the code"
I know I may not be "using the best prompts" or practices, but AI continually fails to do even the most basic things for me. It is not capable of making tradeoffs on an implementation, e.g. a native library vs. executing a shell command. The agent just runs away trying things over and over, boiling our oceans, and it really has no clue what it's doing...
Software engineering is far from being driven purely by AI chatbots. If that's all you do, then you're just a bot jockey, writing a form of software that is neither creative nor presents real challenges.
This AI hype has gotta be judged by the concrete systems it has developed that are proven and tested.
You’re not exactly wrong, but agents enable other ways of working - for example, quickly trying different approaches.
Again, it’s this forced comparison to the Industrial Revolution, which doesn’t really fit. It’s a poor comparison because the Industrial Revolution changed mechanical operations; it didn’t replace people’s judgement in the moment, which is exactly what AI is poised to try to replace. If all AI did was speed up the production of code, but produced that code at the level and with the features that we’ve collectively judged to denote quality, none of this would be so contentious. No one is railing against intellisense-style autocomplete or spell checkers in their code base, or vim motions and snippet templates. But people are railing against something that makes clearly opinionated outputs based on its own probabilistic judgement, not based on the engineer’s judgement or taste.

Similarly with civil engineering vs construction workers. Construction workers ASK for clarification when they’re unsure. They push back based on experiential knowledge. They don’t just blindly build on assumptions and then rebuild all over again, over and over, when more context becomes available (at least good construction teams don’t). They don’t build a bedroom without a door just because the engineer didn’t draw one on the drawings. They express and use their judgement. A good contractor will tell you when something is a bad idea.

Similarly, the architectures we value most highly as a society are the ones that had maximal thought, taste, and deep understanding of construction techniques and the limitations of approaches encoded into their design; these things were known on the engineering side from experience, deep research, and experimentation. Are we saying we don’t need that granular level of understanding because the speed at which we can produce code is a better trade than higher-quality results based on know-how, personal judgement, and craft?
Would you rather eat a crap nondescript piece of white, bleached bread with zero flavour? Or an artisanal baked loaf with character, flavour and texture?
Which is cheaper?
Ask people what they want when flying, and they'll say things like more leg room, wider seats, a good meal. But over and over again, people show what they want by what they pay for, and they won't pay for leg room, wider seats or even food.
As a counter to your "Construction workers ASK for clarification when they’re unsure ..." the book Why Buildings Fall Down is fascinating reading, because sometimes, construction workers don't ask for clarification.
This piece uses "programming" as a term for the act of turning requirements into written code - I sometimes think of that as "typing code into a computer", and agree that it's the thing that LLMs and coding agents are taking a huge chunk out of:
The audience is people who think their job is to get a relatively clean set of tickets and requirements handed to them, and that they need two weeks to complete that task.
There's still a very healthy career available for professionals who build software beyond that process of exclusively turning a spec into code.
Do we have any reason to assume this to be the case, though? The author says something similar:
Engineers still have careers. Programmers (construction workers) do not.
If a type of work involves reasoning, pattern matching, and creativity, and both the inputs and outputs can be stored digitally ... yes, LLMs can take a big chunk out of that. The reason most "Software Engineers" enjoyed programming is that most of us knew that, between software engineering/architecture and writing code, the latter was often far more challenging and interesting. Challenging engineering projects definitely exist - they're just not that common.
The reason I mistrust these kinds of articles is that, in some way, they seem like an elaborate way of coping with the changes: people writing a story in which there is still space for them, because they're engineers, not programmers, as if programming were the easier job.
Don't get me wrong, I'm trying to cope with these changes myself. If there is a future where cheap, ubiquitous commodity AI makes safe, reliable software for groups and people that previously didn't have the market power to build it, that's certainly a public benefit worth the loss of my employment opportunities.
But a future where employers will not pay me to program, but will pay me for the far less exacting and challenging task of prompting an LLM (which, for some unexplained reason, can't be automated) seems contradictory.
What does that career look like though? It’s a completely different thing than what most of us signed up for
Right. You become a manager… not of humans, but of machines. The machines don’t need to take time off & are as clever as many developers (myself included). They need help doing novel things (& even correction sometimes, when the training data is filled with anti-patterns), since they survive on doing the “average” thing… but the bots aren’t gonna provide camaraderie around the code base. The dev team will be much smaller as a result, & if you liked techy (or, let’s be real, others on the autism spectrum lol) coworkers, you might be the only one.
The worst part is we all valued the Commons & open source; now the corporations have vacuumed it up without attribution & sold it back to us, while open source is left with nothing but maintenance burdens: your new patches will be AI code sent for review, & if not, they will just use an AI, not to contribute back to the Commons but to re-implement your work proprietarily. We still need folks doing some basic churn in open source, but what was for the good of humanity & your fellow coders is now just ‘free’ training for the AI. The gatekeeping is gone, but many are having identity crises with the realization that, following this trend, yeah, it is a “completely different thing than what most of us signed up for”.
I think in terms of the concrete future this does mean that we’ll be doing more specifying and proving. I think it’s also clear we haven’t figured out card looms yet. We’re at the stage of power looms making threading faster if used right.
I think it’s also clear that the trend will be for more agent-friendly codebases. Which probably means more considered design.
As to the “ones who prospered” op can jog on - the ones who prospered were the ones who had the chance to build skills instead of being laid off. Individual merit in the job has much less to do with survival than luck and connections.