AI will fuck you up if you’re not on board
5 points by rmoff
Now we're getting AI slop that opens by admitting that AI slop is ruining the internet. Mildly amusing in its irony, but slop nonetheless. Nothing here is new; the same things have been reiterated again and again ever since 2023.
What in the article indicates it's slop, and not hand-written content you disagree with?
It's slop in the sense of having a polished appearance but lacking originality. None of the points it makes are novel, and it doesn't connect them in novel ways either. The paragraph about agentic AI, for example, simply repeats the basic marketing promise of agentic AI. Everyone who has heard of agents knows that this is the idea.
Whether the piece is AI-generated slop as well is kind of beside the point, though it is quite rich in em-dashes and the author is a self-admitted AI user, so I'll give it at least a 50/50 chance.
It's slop in the sense of having a polished appearance but lacking originality.
Boy do I have bad news for you about almost everything ever written.
Sure, Sturgeon's law is far older than GenAI, as are stock photos, muzak etc. There has always been a market for slop, but AI is shifting the balance by making it so much cheaper to produce. I think we'll soon wish that only 90% of everything was crap.
In the comments on their article, the author states that they have not read Ed Zitron's writing on AI finances. While I don't agree with everything EZ says, I think this betrays the blind spots usual among LLM proponents, especially the oft-overlooked externalities of LLM use/production.
Even with all the horrible generation of fakeness all across the internet, I would think very differently about the technology if it were usable (at the same quality) on, let's say, high-end commodity hardware, and without psychologically scarring knowledge workers and draining the water reservoirs of whole communities.
Incorporating into one's workflow an immediately skill-deteriorating tool, based on a non-replicable SaaS whose creators vow to take your job, with completely opaque future costs, is a pretty big ask in my opinion. It is definitely not a no-brainer to anyone not running around with blinders on.
"oft-overlooked externalities of LLM use/production"
Are they really oft-overlooked? They certainly aren't on Lobsters or Hacker News or Bluesky or any of the other online spaces I hang out in.
I don't know Bluesky apart from what gets bridged into my Mastodon feed, but I still spend too much time on the orange site (usually until I can't bear to read another comment full of neoliberal "Geschichtsvergessenheit" (the inability or unwillingness to remember historical precedent)).
It gets brought up a lot, but it is usually not accepted as a valid counter to anything. The workers don't seem to matter; the energy is only produced in extremely dirty ways until the SMRs are online (sure); the water wouldn't have to be consumed if closed-circuit cooling were used (which it isn't, because there's already not enough energy as it is, and closed-circuit cooling costs even more energy); inference isn't so bad and we could simply stop training (and every other competitor would surely do the same); and besides, humans also need energy and then even dare not to produce anything in their first ~20 years.
In short: I don't agree.
I think the examples show the situational nature of the hype train.
Anytime you hype something or react to hype, remember that variation.
There's another interesting shape of a hype cycle I want to mention here: nuclear technology. In the 1950s, people called it the Atomic Age, and it seemed certain that it was going to be everywhere: it was used to paint clock dials, kids could get little nuclear science experiment kits, and every home and car was going to be powered by a micro nuclear reactor.
That didn't exactly pan out. Nuclear energy is still massively relevant, as is nuclear medicine and of course nuclear weapons. Nobody disputes that the tech is "here to stay". But we've largely banished it from domestic life, mostly due to safety concerns.
I don't exactly see the same happening to LLMs, because the cat is out of the bag as you can run a (shitty) open weights model on your consumer PC right now. But the safety hazards may still prove to be a serious issue, especially as the major AI companies seem to be in a race to the bottom w.r.t. how many safeguards they can remove.
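(If you want to see how low that barrier already is, here's a minimal sketch of running an open-weights model locally using the llama-cpp-python bindings; the model filename is a hypothetical placeholder for whatever GGUF file you've downloaded.)

    # Minimal sketch: running an open-weights model on a consumer PC
    # with llama-cpp-python (pip install llama-cpp-python).
    # The model path below is a placeholder, not a specific recommendation.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/some-open-model.Q4_K_M.gguf", n_ctx=2048)

    result = llm("Explain what an open-weights model is, in one sentence.",
                 max_tokens=64)
    print(result["choices"][0]["text"])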
I'm not sure. Nuclear technology went into microwave ovens, fertilizers, atomic clocks, etc. It has become one of those ubiquitous things that is so ingrained in everyday life that we don't think about all the ways we benefit. It really is everywhere.
I could see the safety concerns going the way that Instagram and TikTok accounts for kids and teenage girls are going, in that some governments are slowly waking up to potential harms for categories of users and starting to regulate access. I guess it's also possible it goes the way of gambling addiction services and regulating that industry, which is a little more disheartening, as it's essentially "know your limits, and it's on you to ask for help", but I'd still support a robust safety net even in those cases.
In any of these cases, though, I think it's safe to assume any sort of regulation will be slow going and take years to get up to speed.
This raises a (to me at least) interesting ethical question.
Anyone who's been paying attention to the AI assisted programming discourse, both generally and here on Lobsters, will surely agree that this message is going to be deeply unpopular with the AI holdouts who it is trying to reach.
If you've already decided that AI is overhyped nonsense, or environmentally and socially harmful, or irredeemably unethical in the way it is trained, or actively makes programmers and the code that they produce worse, or all of the above... then being told that your career is under threat if you don't adopt it is the last thing you want to hear.
So, the ethical question: If you have come to the conclusion yourself that a) this stuff isn't going to go away, b) it really can produce significant productivity gains for people who learn how to use it such that c) people who don't adopt it are going to face career challenges as a result... is it ethical not to engage in those conversations with people just because you know they don't want to hear it?
Of course this now has uncomfortable parallels to that evangelical Christian thing where if you're certain people who don't "embrace Jesus Christ as their savior" will go to hell then the only ethical thing to do is become an irritating missionary.
In all of my writing about AI-assisted programming I've avoided writing that "this is going to hurt your career if you don't figure it out" because nobody wants to hear that... but I'm starting to question if this is the right stance for me to take, or if I'm just being a coward about it because I don't want to get yelled at.
In all of my writing about AI-assisted programming I've avoided writing that "this is going to hurt your career if you don't figure it out" because nobody wants to hear that... but I'm starting to question if this is the right stance for me to take
Perhaps the dichotomy you've drawn up is false? If you genuinely want to reach those AI holdouts (of which I'm firmly one), you can't argue from premises that match only your worldview and not my own. You have to meet me where I'm at. You have to see things from my point of view, genuinely and not in a patronizing way. Upping the level of hyperbole does not get your point across faster, or more urgently; it only makes you sound more evangelical and confirms my suspicions that you aren't seriously considering my side.
AI isn't going to take my job if I don't pick it up immediately. Period. There is some probability (although how much is unknown) that it will change my job in ways I may view as negative. There is some probability it will change my future job opportunities in ways I may view as negative. But in all of these cases there are so many unknowns and assumptions that just yelling "Get on board or get left behind!" is both disingenuous and borderline insulting, regardless of what you think your intentions are.
You have to meet me where I'm at. You have to see things from my point of view, genuinely and not in a patronizing way. Upping the level of hyperbole does not get your point across faster, or more urgently; it only makes you sound more evangelical and confirms my suspicions that you aren't seriously considering my side.
Thanks, I think that's a much better explanation for why this kind of rhetoric wouldn't be likely to have a positive impact.
So you're done arguing for actual advantages and just go for inevitability? Then don't say "It will hurt your career" but "This career is incompatible with your values".
I am coming around to the position that this career is incompatible with my values. I was kind of able to hide from it for a long time by working in a public interest organization. I didn't have to write anything privacy-invading, anything related to advertising, anything with military use, anything national security-related. But now, without any of that changing, my job is becoming unethical out from under me, just by virtue of the tools I'm to be required to use.
I don't like React, but I'd recommend learning it sooner rather than later to somebody whose only concern is getting a job as a webdev.
But that is only if I view it through a certain lens (getting a tech job) and discard other ethical issues.
I do think giving such advice lacks a certain spine. The LLM hype machine feels bigger than a few other things I've seen, and I don't know how to deal with it, because "deal with it responsibly" hasn't worked out for a fair bit of other software-related things (gambling, gaming, social media, data harvesting).
If LLMs have done one thing, it is opening eyes to the tech sector's general obliviousness/don't-care attitude toward the consequences of its actions, through the sheer scale of it all.
Hah, as a fellow React-holdout that's a pretty solid comparison.
I think React usually results in over-engineered, slow-loading, inaccessible web apps that often should have been web pages - and that new engineers who start with React often miss out on fundamentals that would make them stronger if they learned them first.
But.... I would never advise a new engineer not to learn React, because it's an important technology to be familiar with if you want to get hired!
I think React usually results in over-engineered, slow-loading, inaccessible web apps that often should have been web pages - and that new engineers who start with React often miss out on fundamentals that would make them stronger if they learned them first.
Funnily enough (and this may well have been your intent), that's a darn good summary of my view on LLM-engineered software!
But.... I would never advise a new engineer not to learn React, because it's an important technology to be familiar with if you want to get hired!
The big difference here is that I can (and do) work perfectly well on a codebase with coworkers who mostly do use LLMs, which isn't possible for a library/framework.
This might be a (possibly EU/US?) cultural difference, but a troubling question for me is the number of students and young professionals who choose to learn things just because they perceive that it will get them hired more easily, or that there is money in it, instead of learning things that spark joy for them and stimulate their interest.
I can't believe that this is all driven by necessity and all of these people do that because they are terrified of unemployment. I can only understand this as people valuing how much money they make more than they value what they know, what they like and who they are, and this is deeply troubling to me.
This might be part of why it is so difficult for pro- and anti-LLM folks to understand each other: they each consider their position strictly rational and just can't get (A) how someone could possibly voluntarily choose to be "left behind" or (B) what someone could possibly find so important that it overrules all the ethical issues of LLMs.
As far as I am concerned, I think that no argument on the accuracy of LLMs could ever convince me to use one. I also think that if there ever came a time where I couldn't get a job in my field that didn't imply LLM use, then I would do something else.
"This career is incompatible with your values" is a bit strong, but I suspect that some people will find that their values restrict them to a much smaller slice of the available job market.
And if that's a choice they want to make with full information of what's going on then that's fine - I still think veganism is a good comparison here (though it sometimes offends vegans), since vegans knowingly let their values significantly restrict their options for food.
There are plenty of companies I wouldn't work for on the basis of my own values. The challenge with AI is that we may find ourselves in a world where the majority of companies that develop software encourage its use. Even then, I expect there will still be employers that don't (highly regulated industries, leadership who hold out against the wave) for a long time to come.
We also find ourselves in a world we inhabit and shape with our actions, which is why I spend more time encouraging people to envision what that should look like than to accept things as inevitable. It may be hard or even impossible, but that's never a reason not to try to fight for a better world.
I personally don't go for veganism (which, while limiting, can also be freeing and, first of all, healthier if done well) as an analogue, but for investing in weapons manufacturing stocks. The only way to profit is to accept hurting others.
Programming without AI, using exclusively Free Software, and veganism all share a common ethic.
Yes, they are inconvenient at times, but the work that you do to compensate is not a sacrifice or a meaningless cost; it's an investment in yourself.
Seeing them as an investment is one way to justify the costs. It makes more sense if it's something you personally enjoy or have decided is a core value. For those for whom it's neither, there's almost no practical benefit.
The practical benefit is that you liberate yourself from dependence on sociopathic and massively destructive corporate entities that view you only as a means to make profit.
"This will hurt your career" is very hyperbolic and you have been right to avoid saying it.
Have you heard the good news? [1]
Corridor Crew used AI to solve a workflow issue they have when they use green screens for filming. Not only was it impressive how well it worked and how much time it will actually save them, but more importantly, they trained the AI exclusively with data they generated, and they open-sourced their solution so other filmmakers could use it [2]. So my question to you, Simon, is: what problem is AI-assisted programming actually solving?
The issue isn't using AI just to use AI; it's using AI to solve an actual problem. Aside from the ethical complaints, maybe another reason the anti-AI crowd holds out is that they don't see the "problems" it "solves" as being worth using it for.
[1] Just a bit of proselytizing humor there.
[2] They trained the AI on their own equipment, so they might have side-stepped the environmental impact complaint. They did side-step the copyright violation complaint by using their own training data, and they side-stepped the rent-seeking complaint by releasing it.
The long and short of it is that the mandate coming down from on high at many large tech companies (and perhaps small ones) is "show us you use AI or we are going to show you the door".
Even if AI is a fad (I don't think so), it will last long enough to filter the hard-line holdouts out of not just their current job but the industry, as you won't get hired without showing AI usage skills. You're betting that it is a fad and that your savings will cover you until it's over. I think that's a bad bet.
Maybe software engineering just isn't the job you want it to be anymore, and so being filtered out is a good thing. But maybe it still is, and you haven't grown your AI skills because you just don't like it.
We experience edicts about our jobs we disagree with on a daily basis: a design we think is bad, a programming language we don't like, etc. It took me a long time to fully internalize "disagree and commit". Using AI in your job is a "disagree and commit" moment. But I don't think you'll have as long as I did to truly get there before your runway has run out.
Personally, this article by Scott Smitelli was convincing enough for me not to go all-in on vibecoding/agentic coding just because of FOMO. I use it sometimes, but I still prefer to write code myself.
(I wanted to submit the article, because I think it is very good, but new users can't submit articles about vibecoding :)
In all of my writing about AI-assisted programming I've avoided writing that "this is going to hurt your career if you don't figure it out" because nobody wants to hear that... but I'm starting to question if this is the right stance for me to take, or if I'm just being a coward about it because I don't want to get yelled at.
I get this from the opposite side. I've considerably warmed to AI and use it quite a bit. However, I have very bad interactions when asking difficult but well-intentioned questions, especially about system maintenance of large codebases (millions of LoC of either previously existing or AI-generated code).
this reads like a sales post on LinkedIn
I wrote this blog title as a joke on LinkedIn, but enough people egged me on that I then fleshed it out into a full article. If that was you and you were joking…oops.
To sum up: blah blah blah.
Written at least two years too late, given that it's maybe the 35th post I've read saying all the same things
sigh...
Unless inflammatory reactions are the goal, this article adds little value to an already saturated topic. Even the arguments repeat themselves between authors.
What’s stopping some junior half your age who is actively adopting AI running rings around you and taking your job?
vs
So as a senior, you could abstain. But then your junior colleagues will eventually code circles around you
Source: We Mourn our Craft
It's really more of a volume multiplier than a force multiplier in my experience. SOTA AI models are very good at producing large volumes of code [1]. That isn't great in itself, and it's made worse by the models' fairly limited ability to see the bigger picture of the code base, which causes them to inevitably further complicate the architecture.
With a lot of coaxing you can make them produce shorter and simpler solutions, but it very much feels like herding cats in my experience: 15 minutes of the model one-shotting the problem, followed by one or more days trying to make the solution less shit.
[1] Openclaw seems to grow at a rate of over 200,000 lines per week.
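For what it's worth, a growth figure like that is easy to sanity-check against any repository you have cloned locally. Here's a rough Python sketch (my own, not anything from the Openclaw project) that totals the added-line counts from git's numstat output for the past week; it ignores renames and skips binary files, which numstat reports as "-":

    # Rough sketch: count lines added to a local git repo in the past week.
    import subprocess

    out = subprocess.run(
        ["git", "log", "--since=1 week ago", "--numstat", "--pretty=tformat:"],
        capture_output=True, text=True, check=True,
    ).stdout

    added = 0
    for line in out.splitlines():
        parts = line.split("\t")  # numstat lines: added, deleted, path
        if len(parts) == 3 and parts[0].isdigit():
            added += int(parts[0])

    print(f"{added} lines added in the last week")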
actually using it and getting real value from it
I may be in a different bubble, but our company grew from 20 to 200 employees in a really short time because of "big data" (I'd consider that getting "real value" from it), one of my digitally illiterate relatives got into crypto (besides which, my social media walls were flooded with it), and another one bought a house from the bitcoin dividend, so yeah, that one was (is!) real value indeed (for them at least).
I agree that we need to know our tools; I myself have already made a few apps with the help of (or exclusively with!) AI. The problem is not on this level. What bothers me most is that most people use it carelessly, often as a "glorified Google search" (tm), and don't care about the result. And usually I am the one there to review, fix, validate, fact-check and explain whether those results are good or not. AI will (and does) "fuck me up" whether I'm on board or not :(
bought a house from the bitcoin dividend, so yeah, that one was (is!) real value indeed (for them at least).
One could argue that value wasn't created as much as wealth was transferred upwards in a pyramid scheme.
What bothers me most is that most people use it carelessly, often as a "glorified Google search" (tm), and don't care about the result.
I am convinced that AI is intended to replace search. You will not be allowed to access basic data, only the conclusions that billionaires want you to reach.
our company grew from 20 to 200 employees in a really short time because of "big data" (I'd consider that getting "real value" from it)
Adding nine times the employees is a cost. In a well-run company, of course, each employee delivers more value than they cost; did the company also add nine times the orders or customers? That's the really important thing.