Your LLM Doesn't Write Correct Code. It Writes Plausible Code

49 points by kitschysynq


amw-zero

This is also true of humans.

marginalia

I think the problem with code, any code, whether written by a human or a program, is that the solution space is very large, and there are exponentially more bad, convoluted solutions than good, simple ones.

"If I had more time, I'd write a shorter letter" and all that.

AI models, at least right now and I think for the foreseeable future, don't really have the ability to grasp the bigger picture of anything they're working on, unless it's some trivial greenfield widget. This makes them extremely prone to append-only coding.
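By "append-only coding" I mean tacking on a new near-duplicate function for every new requirement instead of reshaping what's already there. A toy sketch (hypothetical code, names invented for illustration):

```python
# "Append-only" style: each new requirement gets its own near-copy.
def total_price(items):
    return sum(i["price"] for i in items)

def total_price_with_tax(items, tax):
    return sum(i["price"] for i in items) * (1 + tax)

def total_price_with_tax_and_discount(items, tax, discount):
    return sum(i["price"] for i in items) * (1 + tax) * (1 - discount)

# What grasping the bigger picture looks like: one function, refactored
# as requirements arrive, subsuming all three of the above.
def total(items, tax=0.0, discount=0.0):
    return sum(i["price"] for i in items) * (1 + tax) * (1 - discount)
```

Each appended variant is locally plausible, which is exactly why a model that only sees the immediate request keeps producing them.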

You can coax them into producing good code by pointing out all the mistakes they are making and all the inefficiencies they are introducing, but this is a fairly tedious process that often takes orders of magnitude longer than just writing the implementation yourself.

Whenever we've improved programmer tooling, with higher-level languages or IDEs or whatnot, we've only seen projects get larger and more complex. This trend seems to continue unabated into the Claude age.