I don't know if my job will still exist in ten years
16 points by lalitm
It’s unseemly to grieve too much over it, for two reasons. First, the whole point of being a good software engineer in the 2010s was that code provided enough leverage to automate away other jobs.
The rare member of the "leopards eating people's faces" party who understands their own face is edible. No snark directed towards the author--I think the consistency is healthy.
And as uncomfortable as it is for us, I don't think automating away jobs is a bad thing--my leopards references don't imply a moral judgment towards automation.
In my first job, I automated a process that had previously required hours of labor by nurses per patient. Because this is very expensive, and because the job was not part of emergent patient care nor required by law or insurance, it simply didn't get done most of the time. With our automation, it got done more often, and many clinics we worked with actually hired more nurses to handle patients identified as high risk by our software.
It is possible to deploy automation in a way that both provides a lot of value and doesn't disrupt lives. It's capitalism - private ownership of the automation and the value it creates, basically - that leads to these bad outcomes.
All this to say, I don't think working in software inherently makes you a leopards eating people's faces supporter.
I don't know; I've had to read the output for a project, and it kept doing stupid stuff. If LLMs are so good at fixing bugs, why do they also introduce them? As long as the tools don't understand what they're doing, they're doing little more than wasting time and electricity. Understanding still seems quite far away, at least with the current algorithms. And they're only cheaper than dev time because the prices themselves are lies.
Oddly enough, I think current cutting-edge models are good both at introducing bugs and at identifying and fixing bugs once they learn of them (even, frankly, some impressively obscure ones), but they are not as good at discovering that the bugs exist in the first place. Even if you get them to implement tests, they fairly often miss important edge cases.
> If the LLMs are so good at fixing bugs, why do they also introduce them?
How good are the automated tests?
And who writes the automated tests?
If it's the LLM, it's still random output that doesn't understand what it's testing.
I have seen pages of tests for trivial stuff (like whether addition works) that barely touch the business logic, or that contained business-logic bugs and would not have worked with real data.
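A minimal sketch of the pattern described above, with a hypothetical business rule invented for illustration (none of these names or numbers come from the thread):

```python
import math

# Hypothetical business rule, invented for illustration:
# 10% off orders of 10+ items, and a total can never go below zero.
def apply_discount(price: float, quantity: int) -> float:
    total = price * quantity
    if quantity >= 10:
        total *= 0.9
    return max(total, 0.0)

# The kind of trivial test described above: it exercises Python
# itself, not the business logic.
def test_addition_works():
    assert 1 + 1 == 2

# The tests that actually matter sit at the rule's boundaries:
def test_discount_threshold():
    assert apply_discount(2.0, 9) == 18.0               # below threshold: no discount
    assert math.isclose(apply_discount(2.0, 12), 21.6)  # at/above threshold: 10% off
    assert apply_discount(-1.0, 3) == 0.0               # total is floored at zero

test_addition_works()
test_discount_threshold()
```

The gap being pointed at is that generated suites tend to be heavy on the first kind of test and light on the second.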
The LLM writes them, you review them to make sure it didn't do anything stupid. If it did you tell it what to change.
This is one of those things which the November-onwards frontier models (Opus 4.5+, GPT 5.1+) are significantly less likely to screw up than previous models. I'm finding they have much better taste in tests now.
A previous manager of mine trusted the tests more than the code. I constantly bitched at him that the tests themselves might not be good: they reflected the understanding of the person who wrote them (me, mostly) rather than being valid tests (communication at The Enterprise was not that great).
The way I understand LLMs, they’re imitators more than anything. As such, I can see them doing an average job, though perhaps not yet.
Thing is, "average" these days is terrible. The state of our industry is so abysmal that the last thing I want is to automate the same kind of crap code I’ve been seeing the past 20 years on the job. We first need to get our act together and pick up the various ways to build actually good software — software that just works, performs reasonably well, and isn’t an unmaintainable mess under the hood. Then we apply machine learning. Maybe.
Oh, and there’s the reliability, trust, and liability thing. If the AI comes up with a machine checked proof that the code it generated is correct, perfect: I’ll learn to write formal specs. Until then though, quite a few niches will still need humans at the helm.
The more I use LLMs and the more time passes with them being available, the less I'm worried about losing my job to one. My main concern all along has been the anti-democratic method that these tools have taken in supposedly "reshaping the nature of work", but mostly as an excuse to lay people off.
I don't have a problem with the tools, I don't even think copyright is a useful concept, but I don't like it when people's livelihoods are put on the kill line because it would raise someone's stock price by 2% this quarter and net 15 other guys like a billion bucks. That's not even AI-specific, people did that back in the day with automatic looms and rightfully had their factory smashed.
But rich people haven't learned that lesson yet. We can cooperate to make society better, it doesn't have to be this Lord of the Flies bull-ish. We have a massive corruption problem in the US because everyone handles rich people with kid gloves. We need accountability up in here!
They'd just have to get better and more reliable at doing the things they can already do.
So far, my impression is that they got better, but they didn't get more reliable. So I wouldn't use "just" here.