Do I belong in tech anymore?
62 points by Shorden
I left my job a month ago for similar reasons; I felt like my job was to provide adult supervision to the people on my team who had given up thinking for themselves. It was honestly terrifying to watch their ability decay over the course of a year, in some cases going from a skilled, trusted professional to someone regularly making junior level mistakes several times a week. It was sad to see what I had spent years building fall apart so quickly, but I had to get out. Luckily I found another gig where I wasn't required to use any LLM stuff, hope others can do the same.
Your co-workers were like this before AI was a thing. Leadership had the same narrow and close-minded view of you and your job before AI too. You were judged in very similar ways then as you are now. The value of what you do hasn't changed at all, and as other commenters have said it probably never mattered that much and never will.
AI just amplifies what was there already. The fact that the golden age of Silicon Valley has passed, and software devs are no longer rare, in-demand heroes, contributes to this as well.
As far as the political stuff you talk about, I largely think that follows with the fact that the balance of power has shifted from employee to employer in tech in a lot of places. This year, for the first time in my career, I felt like I was in direct competition with my coworkers for my job rather than working with them as a team.
AI companies care about profits and growth more than things like the greater good, the environment, or whatever. Just like all the other companies in the world. Politicians and other people in positions of power use AI to entrench and reinforce their power, just like all other technology.
And yeah, AI kind of sucks for someone who takes pride in their ability to write code. I haven't been in a flow state since 2025, and it used to be something I experienced more days than not.
idk what position I'm even taking here. I guess I'm just using this as an opportunity to vent or something. Who cares.
Ironically, what I’ve gained from AI is a deeper appreciation for human communication, in all its messy imperfection. The point of a code review is not simply for good code to make it into a codebase, but to build institutional knowledge as people debate and iterate and compromise, slow as it may be. Friction is good.
Felt this.
Time was you might give the same piece of code review feedback to the same person five to ten times and then the knowledge would probably stick for them. We were helping each other to grow at the same time as we produced code.
Agent adoption beyond a certain threshold takes that away. Your code review feedback is just another prompt for the agent. The person who's nominally running the agent doesn't have time to read and internalise any of the actual ideas expressed, because the whole point of using the agent is to save exactly the time they would have spent doing that, so they can run more agents.
I get that agents open some doors but the whole philosophy of work that's inherent to really extreme adoption does seem to structurally relegate human communication outside the critical path of the work. My current job is nowhere near as extreme as the workplace described in this post and I hope it never is. It sounds very very lonely.
We know at work who are the big users of AI. I have noticed a big trend in our code reviews.
The AI power users consistently require the most detailed reviews, with obvious UI breaks on their branches that you’d never miss if you were looking at the result of your change.
But more to the point, I will consistently give, say, a comment of feedback on a particular way of approaching something and offer a suggestion to fix it, mentioning that "this is a better way to do this" / "more consistent with the codebase", etc.
over and over, they will fix only the instance of that issue that the comment was pegged to, leaving the rest of the PR untouched when the same issue was present multiple times, even when I mention those other instances in the original comment
And then more PRs come through with the same issues, and I keep on having to essentially go line by line and tell them each instance to fix
All in all, I just know that these people are not reading the code they submit, and are not able to implement feedback or take it on board because they’re not concerned with the quality of the code in the first place
over and over, they will fix only the instance of that issue that the comment was pegged to, leaving the rest of the PR untouched when the same issue was present multiple times
I have had the same experience.
I wonder if my nits and gripes even matter. The AI-inclined amongst us produce a lot of code. That code does something. Do its little warts, failures, deceptions, etc. matter? The world is transitioning to a low-trust relationship with many products.
LLMs maybe give you an answer to the question you intended to ask. Now products maybe do what you intended to request.
I kinda think that the vast majority of software that people use never needed to meet the standards I set for the software that I use.
I'd say it's all in the first paragraph:
Does any of this work actually matter?
If it does not, you will never feel fulfilled, and therein lies the culprit. No matter which large language model you use, or whatever shiny new tool your CEO forces you to use, we as professionals and people who love the craft should never forget why we do the things we do.
Aside from this, I'll say I'm a bit tired of reading posts like this, and I'm under the impression that we programmers need to be a bit more detached from the current day-to-day job and take it as other professionals do: as a job.
Furthermore, and this is a completely personal opinion, I believe this post is simply too negative, with statements like:
Ethically: Generative AI tools, powered by data centers which consume vast amounts of water and pollute our environment, are built on the collective theft of the works of millions of artists, developers, authors, and other creatives, supercharge the spread of disinformation and fascism, have repeatedly provoked psychosis and suicide, and concentrate wealth in fewer hands while providing cover for widespread layoffs.
Yes but no: many of these issues are and were caused by political movements and think tanks outside technology, and we are blaming it all on AI tools or Palantir; it's just an escape goat. AI didn't vote (yet). I'd recommend watching Trauma Zone by Adam Curtis.
Eventually, as Christopher Alexander used to say, we will connect as humans, making computers do computer stuff.
I really like the idea of an escape goat. It's not fast, but you don't have to pay for gas.
There has indeed been a significant uptick in the use of escape horses in armed robberies. Just like escape goats, escape horses remain unaffected by the recent volatility and price hikes in global oil markets. But escape horses are also fast and versatile, quite the opposite of escape goats in this regard.
Yes but no, many of these issues are and were caused by political movements and think tanks outside technology
You are saying, who cares about B causing C, because A also causes C. I think in abstract it should be pretty clear that's not logical. Instead, the conclusion should be to reject both A and B, not to focus on only A or B.
Do we belong in a world in which the human mind is devalued?
Can we simply survive now that people in power, be they official dictators or just ordinary oligarchs, don't need smart people anymore?
This is a profoundly unserious time in many spheres of life. Also, tech has always been quite unserious, and AI amplifies that further.