AI Is Like a Crappy Consultant
14 points by krig
I submitted this story to hopefully have a conversation about this. Personally, I don't use any kind of LLM coding assistance and never have. I don't even like using Intellisense/autocomplete; I find that it tends to interrupt my flow. I know that I am in the minority here, though, so I don't expect most people to agree with me.
However, this part of the article really bothered me:
I did find one area where LLMs absolutely excel, and I’d never want to be without them:
AIs can find your syntax error 100x faster than you can.
They’ve been a useful tool in multiple areas, to my surprise. But this is the one space where they’ve been an honestly huge help: I know I’ve made a mistake somewhere and I just can’t track it down. I can spend ten minutes staring at my files and pulling my hair out, or get an answer back in thirty seconds.
To me, this seems like it would really be a bad idea in the long run. Those situations where I feel stuck and really have to dig deep to figure out why something is broken are exactly the moments where I feel that I am gaining a deeper understanding of the problem I am trying to solve, the tools I use or the programming environment in general… If I have a cheat code that lets me skip all the hard parts of the level, won’t I just remain a beginner forever?
Since this is specifically about syntax errors, what deeper understanding are you hoping to gain from spotting them? Is this not something that any editor with syntax highlighting makes obvious anyway?
Well, if the syntax highlighting makes it obvious, it's not really an issue at all and I don't see what the LLM is providing in that case either. But for those situations where you really have to look hard to figure it out... yeah, I feel like the real issue there tends to be that I haven't quite got the mental model in place, and that figuring out why things are broken is the path to really understanding what is going on.
The kind of syntax error I'd expect LLMs to help with is something like a missing closing brace.
If you use a conventional parser, it will probably tell you that the error is at the end of the file: once a closing brace goes missing, it's usually still syntactically valid to keep opening and closing braces (or brackets) within the now-enclosing block, so nothing looks wrong until the parser runs out of input. If you run something like clang-format, it will mis-indent everything after the point where you missed the closing brace, but that isn't always obvious either.
In contrast, a probabilistic model that’s trained on a load of code with braces in the correct place probably has a pretty good idea of where the brace should be. If you asked it what was wrong, I wouldn’t be surprised if it could, with high probability, point to the missing close brace location.
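For what it's worth, the same shape of problem is easy to reproduce in Python with an unclosed bracket. A small made-up demo (the exact message and line number vary by interpreter version):

    # Runnable demo: compile a snippet that is missing a closing ')' and see
    # where CPython reports the error (message and line differ across versions).
    broken = (
        "totals = sum([1, 2, 3]\n"   # closing ')' missing on this line (line 1)
        "\n"
        "def report():\n"            # pre-3.10 blames this line ("invalid syntax"),
        "    print(totals)\n"        # 3.10+ says "'(' was never closed" at line 1
    )

    try:
        compile(broken, "<broken>", "exec")
    except SyntaxError as err:
        print(err.lineno, err.msg)

The reported location can be several lines away from the actual mistake, which is exactly the gap a model trained on well-formed code can help bridge.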
My thing is, why allow broken syntax at all, ever?
Broken syntax is one of the things that makes programming actually-not-easy for noobs. The moment you most need help from your tools, they hang you out to dry.
In my formulation, the IDE just doesn't ever let you make unbalanced braces (or any invalid syntax).
But then you get the nightmare where you type an open brace and the IDE inserts a close right away. Which is just terribly confusing and slows me down every time.
My feeling is that using an LLM is like having access to a walkthrough when playing one of the old 2D adventure games.
It becomes too tempting to quickly bail out and ask the LLM to solve even simple problems like a missing brace, but the price you pay is a shallower experience, which will come back to bite you later.
But maybe I am wrong about that.
If the tool helps you with this, I say great, but in the long term this isn't a use case for LLMs.
They just kind of shine an embarrassing spotlight on the fact that we don’t have good enough non-AI tools to find our own syntax errors yet.
An example where an LLM found a typo, and where figuring it out on my own wouldn't have given me any extra insight: a Python function (scikit-learn's KDTree constructor) expected a keyword argument that's either a list or None.
I had passed in False instead, and everything ran fine but the clustering algorithm returned just a single cluster. The typo was obvious once I saw it, of course.
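The shape of the bug was roughly this (a made-up function for illustration, not the real scikit-learn signature): a value like False slips straight through a parameter documented as "a list or None" and quietly degenerates the result.

    import numpy as np

    # Hypothetical sketch, not the actual KDTree/clustering API: a parameter
    # documented as "a list of per-sample weights, or None for uniform weights".
    def weighted_values(points, sample_weight=None):
        points = np.asarray(points, dtype=float)
        if sample_weight is None:
            sample_weight = np.ones(len(points))
        # False is never rejected: np.asarray(False) broadcasts as 0.0, so every
        # point silently gets zero weight and the output collapses.
        return points * np.asarray(sample_weight, dtype=float)

    print(weighted_values([1.0, 2.0, 3.0]))                       # [1. 2. 3.]
    print(weighted_values([1.0, 2.0, 3.0], sample_weight=False))  # [0. 0. 0.]

Nothing raises, nothing warns; you only notice when the downstream result looks wrong.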
This resonates with me. I’ll often know that an LLM will spit out the answer I need immediately, but I’ll figure it out myself because I worry those skills will atrophy if I don’t use them. But it’s conflicting because I’m intentionally wasting time doing something that’s been automated.
This resonates with me. I use coding assistants, but I always read the output completely. Recently I've been using one mainly for the frontend of a personal project. My colleague (who has only basic programming knowledge) wanted to try vibe coding most of it. Initially the output was much better than I expected: it looked nice and was somewhat functional. But he quickly hit a wall where it wasn't able to do very simple tweaks such as fixing visual bugs or changing the layout slightly.
So we gave up on that and I started writing it myself, using a coding assistant. After a while, I found the random suggestions very annoying. They saved time in some cases, but most of the time they suggested things I didn't want, which broke my flow and distracted me. Eventually I settled on disabling autocomplete and only using it via inline chat or as a regular chatbot. The most value I got from it was when I decided to migrate from one component library to another. In that case it sped things up a lot, though it also hallucinated some things and pulled in random libraries. I always read the output myself and never trust only the observed program behavior.