claude code is not making your product better
20 points by carlana
This is something that AGI-singularity-fearing people also don't seem to understand: complexity scales exponentially (probably even faster). Eventually even the smartest person/IQ/model/agent hits a steep wall of complexity as the idea/system/project/codebase/feature set grows. That's why reality at large is computationally irreducible.
Every software project goes relatively smoothly early on, until the exponential growth in complexity takes off and dwarfs everything. Good architecture, design, and quality just delay the complexity takeoff moment. So if you have competent people who did a good design and took care of quality, you might be able to hold on for maybe 10x more size/features/performance/wow, but even they will eventually reach the wall.
LLM assistance allows producing a lot of features/code at a certain (arguably average) quality much faster. Which just means you are going to reach the wall much faster. Which is great e.g. for growth, experiments, and tasks that were relatively easy (low-complexity) but time-consuming, but it is not what allows you to "build things that were not done before" and/or "large and complex projects". For that you need the "keep complexity at bay" improvements, which LLMs don't really provide right now.
It’s a classic problem. It’s easy to see the immediate benefits of something when there’s a direct cause and effect. It’s much harder to see the negative effects of something when there is a long delay between cause and effect.
http://bastiat.org/en/twisatwins.html
The cost of using code agents is not direct; it comes from the loss of collective understanding of how a system works, among many other indirect costs.
That said, hierarchical organization does provide a path towards managing complexity at scale. We can see incredibly complex structures emerge in nature this way. A system can be built out of independent units that compose together to build larger structures.
A robust system needs to be resilient to shocks and able to adapt on its own which means that parts have to be able to fail independently with localized damage. Organizing the system into nested subsystems creates cells that talk to each other to do their job. They don’t need to know the internal processes of other cells, and act as stable subassemblies. Each level can self-organize and maintain resilience within its own domain because it’s not bogged down by what’s happening elsewhere. This is basically a way to form abstractions where you encapsulate incidental complexity behind an interface which encodes the semantics. A good way to look at hierarchies is to treat them as connective tissue between components of large systems.
Erlang OTP is a good example of this approach in software where large systems are built out of isolated processes that pass messages to one another. These processes can die and have errors without bringing the whole system down.
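To make the shape of that concrete, here is a minimal Python sketch of the same idea (purely illustrative; the worker/supervise names are mine and this is nowhere near real OTP). Each worker runs in its own OS process and only talks via messages, so an unhandled exception kills that worker alone, and a toy supervisor notices and restarts it:

    # Minimal "isolated workers + supervisor" sketch using the stdlib
    # multiprocessing module. Illustrative only, not production code.
    import multiprocessing as mp
    import time

    def worker(name, inbox):
        while True:
            msg = inbox.get()          # workers only communicate via messages
            if msg == "boom":
                raise RuntimeError(f"{name} crashed")  # failure stays inside this process
            print(f"{name} handled {msg}")

    def supervise(name, inbox):
        while True:
            p = mp.Process(target=worker, args=(name, inbox))
            p.start()
            p.join()                   # returns when the worker process dies
            print(f"{name} died (exit code {p.exitcode}); restarting")
            time.sleep(0.1)            # naive restart backoff

    if __name__ == "__main__":
        inbox = mp.Queue()
        sup = mp.Process(target=supervise, args=("worker-1", inbox))
        sup.start()
        for msg in ["job-1", "boom", "job-2"]:
            inbox.put(msg)
        time.sleep(1)
        sup.terminate()

The crash on "boom" never touches the rest of the system; the supervisor's only job is to restart the dead cell, which is roughly the let-it-crash posture OTP takes.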
If using Claude Code gives you a genuine product velocity advantage, and Anthropic had it exclusively for 7 months, the gap between Claude Code and every competitor should be unbridgeable. Codex would be irrelevant. Instead, people are still actively debating which one is better.
This does not look like robust thinking to me.
Claude Code is good software, but it's not some kind of weird AGI magic that instantly makes you impossible to compete with.
In addition to that, Codex is vibecoded too, so the comparison is a little silly that way. Having a few months' head start doesn't mean a lot when many of the features being added are the result of what people in the wild are doing with the tools. It's not like Claude Code started off with a complete roadmap and the primary barrier was code velocity. The primary barrier is ideas. Everyone is making it up as they go.
Beyond that I can't comment in depth on the post because I mostly skimmed. Lack of sentence case notwithstanding, it's largely LLM-written prose, which I don't love reading.
It's not like… the primary barrier was code velocity. The primary barrier is ideas.
I think that is exactly the point of the post.
So far agents make mistakes that tend to accumulate multiplicatively.
When they make an API that technically works but requires a bit of extra code when using it, the result is more code, which creates more places for duplication, divergence, and bugs, which in turn, like a fractal, cause more not-so-ideal code to be written.
I've had an agent design a struct with an optional id field (necessary for one of the constructors it had written). It then tirelessly wrote all the code twice: the real path for when the id was available, and awkward fallbacks for when it was absent. The fallbacks were obviously complicated and fragile. It infected almost every use of the struct and everything depending on it. The no-id constructor wasn't even used. Half of the codebase could have been easily deleted.
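For anyone who hasn't hit this pattern, here is roughly its shape as a hypothetical Python reconstruction (not the actual code; Record and store are made-up names): the id is only Optional to serve a constructor nobody calls, and every call site pays for that with a fallback branch.

    # Hypothetical reconstruction of the anti-pattern described above.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Record:
        name: str
        id: Optional[int] = None   # Optional only because of an unused "no-id" constructor path

    def store(record: Record, db: dict) -> None:
        if record.id is None:
            # awkward fallback: invent a key, track it separately, hope nothing collides
            db[f"anon-{len(db)}"] = record
        else:
            db[record.id] = record  # the only branch that is ever actually exercised

Delete the unused no-id path, make id a required int, and every duplicated fallback branch like the one in store() can simply be removed; that is the half of the codebase that could have gone away.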
I'm genuinely not sure whether it's fixable with enough "stop digging when you're doing stupid things" instructions, and we're a couple of hacks away from coding being "solved", or whether it's a long-term problem with stochastic parrots that will keep requiring exponentially increasing costs for linear improvements (as long as mistakes compound, compounding growth will get you).
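To put a rough (made-up, back-of-the-envelope) number on the compounding part: if each agent step is acceptable with probability p and later steps build on earlier ones, the chance that all n steps are acceptable is about p^n, which decays exponentially.

    # Back-of-the-envelope illustration of compounding per-step error;
    # the probabilities here are invented, not measured.
    for p in (0.99, 0.95, 0.90):
        for n in (10, 50, 200):
            print(f"p={p}, n={n}: {p ** n:.3f}")
    # At p=0.99 a 200-step chain is clean ~13% of the time; at p=0.90 it is effectively never.

So either the per-step error rate has to fall roughly as fast as the chains get longer, or you pay for it somewhere else.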