Why AI Isn’t Ready to Be a Real Coder
6 points by adamcstephens
Verbose code, not to the point.
…yet.
Honestly, I doubt this. Large Language Models do not have the capacity to think. Unless some drastic revelation happens in ML, we’re not getting LLMs anywhere near what constitutes a “real coder”. Development requires reasoning and thinking skills, which LLMs don’t possess.
Deep Blue also couldn’t think, but it beat Kasparov at chess all the same. It turns out that it is possible to play grandmaster-level chess without thinking a single thought.
I don’t know how many of the things humans think about turn out to be doable by machines that solve them without thinking. I’m not very convinced by “but surely that will never happen to us”.
Billions and billions of dollars are being pumped into making us unemployed and rendering our skills worthless.
Chess is a computational problem. Kasparov, chess GMs, and engines alike look ahead through branching paths and evaluate board states. Programming requires critical thinking and reasoning.
Sure. And we didn’t even need fancy ML to get computers to play chess well.
But for a very long time, people thought that chess was a problem that inherently required thinking - because humans who play chess well not only think, but come up with complex strategies, leverage knowledge about the opponent’s play style, mentally model what an opponent might make of a move, engage in deception, etc. As late as the 1970s, Douglas Hofstadter was speculating that by the time computers would be playing grandmaster-level chess, they’d also be advanced enough to sometimes refuse to play chess because they’d rather do something else.
It turned out to be a lot easier than that - rather than trying to get the computer to do all the difficult-to-model things a human chess player would do, you could just basically brute-force it.
I don’t know how many of the things that require creativity and critical thinking by humans turn out to be doable by machines without those things. Sure, I’d love to believe that our field is special and won’t get destroyed by AI, but what I would love to believe is not necessarily the same as reality.
(Gods, this is so depressing.)
FWIW, I hope you’re right! I would hope for nothing more than to spend the coming few years thinking about things like compiler optimizations, programming languages, memory safety and software security, rather than thinking about how to remain housed and fed when all skills I can sell (and, indeed, my entire “kind of person”) have deprecated to worthlessness.
Problem is, my hopes are directly opposed to those of a trillion-dollar industry and the governments of several very large economies.
The well-paid chess players are still human, and human chess is booming. Chess engines actually help people learn about chess and make it fun in new ways. So chess wasn’t “destroyed by AI”; if anything, the opposite happened. If the method of argument here is to just extend that to “our field,” then maybe things are looking good, actually?
To argue otherwise would be to claim programming is somehow “special” compared to chess… Which you said we’d love to believe but have to contend with reality.
It is entirely possible that recreational programming and competitive programming will experience a kind of boom. After all, the internet didn’t kill ham radio, and there are people who maintain and drive 19th-century steam locomotives.
They can simulate thought, which confuses people. They mimic human text, which tends to express thoughts.
I’m not sure about that. The issue of trust, security, and more generally alignment of human subject matter experts and LLM agents has a good chance of being unsolvable.
Honestly, AI in general sends me off in the wrong direction way too many times by giving incorrect information. What is the point of using info you can’t trust 100%?