AI Angst
23 points by equeue
Huh, why was this merged? It does quote the “AI angst” article once, way down in the post, but it’s overall far from a response to it. It spends much more time taking apart https://fly.io/blog/youre-all-nuts/, which has also been on lobste.rs recently.
It feels like the Go programming language is especially well-suited to LLM-driven automation.
I have read many critiques of golang over the years, but this is probably the most succinct and savage yet. (But he is far from the first to make it.)
If this is true, how does it reflect negatively on the language?
Because it implies that Go requires a lot of highly predictable boilerplate in order to get things done.
It also implies that a lot of highly predictable boilerplate is sufficient to get things done, which is an interesting angle and sounds less like a criticism.
Conflating predictability and boilerplate seems like quite a stretch.
Is it? I mean, without getting too much into semantics, the whole irritation of boilerplate is that it is frequently copied and pasted and then lightly modified. What’s more predictable than that?
“Boilerplate is predictable” certainly makes sense. “Anything predictable is boilerplate” doesn’t seem at all obvious.
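For concreteness, here is a toy sketch of the kind of boilerplate people usually mean in this argument (my own example, not anything from the article): Go error handling repeats the same few lines after nearly every fallible call, which is exactly the sort of thing an LLM can complete without breaking a sweat.

```go
// A toy sketch (not from the article) of the error-handling pattern
// usually meant by "predictable Go boilerplate": the same few lines
// after nearly every call that can fail.
package main

import (
	"fmt"
	"os"
)

func readConfig(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", fmt.Errorf("reading config %q: %w", path, err)
	}
	return string(data), nil
}

func main() {
	// "app.conf" is just a made-up filename for the sketch.
	cfg, err := readConfig("app.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(cfg)
}
```

Whether that predictability is a feature or an indictment is, of course, the whole argument.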
All links to primary sources for it are dead, but there’s something Rob Pike said about Go a long time ago that kind of stuck with me, so I looked it up:
The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software.
I’m a Lisp fan, so I’m not exactly C-centric or predisposed to liking Go, but there are a lot of things about it that I don’t hate. There are worse metrics to pick for a language, and IMHO they nailed the one they picked pretty well, too.
That would explain some aspects of its ease on LLMs to some degree, too, I guess.
That quote was extensively discussed on this site a year ago. @denz got the cites:
https://lobste.rs/s/tlmvrr/i_m_programmer_i_m_stupid#c_itjpt0
Edit - in those same comment threads, @hwayne writes:
I hope in the next few years the pendulum swings back and we start valuing creativity and cleverness in programmers again.
I wonder - will that even happen in our current timeline of LLM-assisted coding? Any marginal advances in productivity gained from using a non-mainstream LLM-unfriendly language will be overwhelmed by developers pumping out Python/Go from LLMs.
I’m somewhat optimistic, with the caveat that creativity and cleverness aren’t evenly distributed, and they never were, really.
Some 25 years ago or so, one of the guys I learned programming from was working on the frontend for a huge train reservation system. This was pre-AJAX, shortly after peak Windows client-server era. Last I spoke to him, he was very fond of the code he’d written at that time – he’d basically ended up working on what we’d now recognise as a dynamic DOM, except over MFC, which required a lot of clever hacks to work over slow-ass modems in the middle of nowhere hooked to computers that barely met the minimum specs to run Windows 98. He still thought programming was fun – but he wasn’t doing frontend app development for reservation systems, either.
There are applications and systems where creativity and cleverness are necessary. But they’re not the applications and systems that our industry treats (sometimes incorrectly) as “solved problems”. It’s not a good thing and it certainly plays a role in the brittleness we experience in modern computing platforms, but I do think it’s still possible to do cool stuff in our field despite it.
Edit: I went and read the comment you linked to after I posted this message, since I was primarily interested in the cites, but FWIW I definitely agree with @denz’s message there.
More to the point for what we’re discussing here, I don’t think Go “works well” with LLMs because it’s a language for n00bs, I think it has some ingredients that makes it both useful for fresh graduates and easy to get a good training set for. And which I like, independently, too. Some boilerplate, sure, but a lot of community consensus on which boilerplate, not too much language feature churn, few duplicate features. It’s a language that’s remarkably easy to learn by example.
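To illustrate what I mean by consensus boilerplate (a toy sketch of mine, nothing from the post): table-driven tests have the same shape no matter what is being tested, so once you’ve seen one you’ve effectively seen them all, and presumably so has the model.

```go
// A toy table-driven test, sketching the community-consensus shape:
// a slice of cases, a loop, and t.Run for each case. Abs is a
// made-up function just to have something to test; in practice this
// would live in abs_test.go alongside abs.go.
package abs

import "testing"

func Abs(x int) int {
	if x < 0 {
		return -x
	}
	return x
}

func TestAbs(t *testing.T) {
	tests := []struct {
		name string
		in   int
		want int
	}{
		{"positive", 3, 3},
		{"negative", -3, 3},
		{"zero", 0, 0},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := Abs(tt.in); got != tt.want {
				t.Errorf("Abs(%d) = %d, want %d", tt.in, got, tt.want)
			}
		})
	}
}
```

None of that shape is enforced by the language; it’s just what everyone does, which is my point about consensus.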
One could use Go as a “neutral-level language” that more expressive languages compile down to, but that you’re still expected to read & write, unlike assembly.
You could, but due to their refusal to allow loading code at runtime, any such “more expressive languages” would still lack a repl, so that’s not a very interesting area to explore for me.
The quoted bit about how the future of work is constant anxiety about being replaced by AI also suggests a way to get some money out of all this: Get enough workers terrified of losing their jobs, and they’ll be more likely to put up with pay cuts, longer hours, worsened working conditions, etc.
I really liked this, even if I disagree with some parts of it. The way I see it, interacting with LLMs is inherently risky:
If you don’t know about the subject you’re asking about, you don’t have a good baseline to judge the correctness of the output. You could ask for citations, but there’s a high chance you won’t be able to properly evaluate those citations either.
If you do know about the subject, you still need to verify that the output makes sense. But since LLMs are non-deterministic, you need to be on guard 100% of the time. Most of the time the output might appear fine, which breeds complacency. Eventually you’ll be tired, or having a bad day, or already conditioned to accept that the output is correct. This is no different from any other automation that still relies on human action: as with self-driving cars or autopilot, the better the automation appears to be, the higher the chance of human error when intervention is required.
For me it would be like willingly going to cult meetings or going to a palm-reader on the off-chance that I learn something interesting. The chances with LLMs might be different, but the risk is similar (not only on an individual level, but also society-wide as more people rely on them to make decisions).
(aside: this post doesn’t deserve to be merged with the “AI angst” one, although it mentions it in passing)
Because a lot of writing is just as much about the author convincing themselves as it is about them addressing the reader.
Well put.
Before LLMs arrived, the critics believed that existing software dev was flawed, largely inadequate, and a recipe for future crises, whereas the fans thought things were great but needed to be faster.
The LLM tools are all geared towards making existing – highly flawed – approaches to software development go faster, without addressing any of the fundamental issues that have caused recurring software crises over the years.
I’m going to have to ask for a citation here. But setting aside that detail, which is meant to explain the difference between AI skeptics and believers, I didn’t come away with any assurance of why it’s a scam and not a once-in-a-lifetime invention.
Is the point of this post that we can’t know whether it is a scam without years of study — and engaging in its use now is irresponsible? I guess I can agree if so, but that is to me the equivalent of throwing a bucket of water on a house fire. It’s not like there’s any regulation to prevent the rampant testing-in-the-wild of AI while many people on the fence are pulled in because of FOMO and hype.
Perhaps those not yet bought in will have to wait for someone to ring the alarm bells about environmental impact — and clearly, mind you, because I keep hearing XYZ say it’ll melt the ice caps and ABC say it’ll be like putting another light bulb into everyone’s home.
Seriously, since LLMs by design emit streams that are optimized for plausibility and for harmony with the model’s training base, in an AI-centric world there’s a powerful incentive to say things that are implausible, that are out of tune, that are, bluntly, weird. So there’s one upside.
I enjoy weirdness too, and I get that this is a bit tongue-in-cheek… but is that really an upside? Especially over the long run, this just sounds like pressure to increase variance, which will in turn decrease the effectiveness of inference. There will still be a middle ground, but there will be less in it and it will be less useful.
This blog post is confusing to me because I don’t think I understand the author’s threat model.
The author says “LLMs are psychological hazards” and “you shouldn’t experiment with psychological hazards” and then gives homeopathy as an example. I agree that homeopathy is a good example of when something might seem like it works but not really work. But writing code isn’t like homeopathy! The failure mode is not “someone maybe dies”, it’s “I have to revert my PR”. And it’s pretty clear whether the tool is working - I explicitly see and feel and use the tool, as opposed to just trusting and waiting for medicine to work. This feels like a false analogy to me.
The author then proposes that we should wait for scientific studies. Why? If my LLM-powered autocomplete clearly feels useful to me, why should I wait for someone to study whether it’s actually good? It’s just like any other tool in my toolkit. Should I also wait for someone to tell me that LSP is useful?
I just don’t understand where the harm is occurring here.