Tactical tornado is the new default
61 points by facundoolano
In my experience, you need to feel the friction during development to gain an intuition for how to evolve a project to lessen it. LLMs don't feel friction.
The "need to feel the friction" reminds me of a professor of mine who had a ten-minute rule: struggle with a problem for at least ten minutes before asking for help. In the middle of one lecture he held a coordinated ten-minute problem-solving session—maybe to teach the material, or maybe just to drill the point into us.
One of the core virtues of a programmer is laziness. That's why my stealth startup is working on adding the missing laziness to agents so that they can better model the output of senior engineers.
It's not only laziness, or at least I wouldn't put it that way.
It's also "huh, I don't want to have to maintain that in the future". It's basically refusing to do some things, and I've arrived at the conclusion that this is probably why FOSS projects can live as long as they do and serve as the foundations of so much computing. Being able to NOT do something is needed for good engineering.
I can't tell if your comment is serious or a joke. It's the perfect mix of real truth and absurdity!
AI-agent bootcamp to teach them hammock-driven development ...
relevant https://lobste.rs/s/w8d5zt/paty_most_human_like_ai_agent_you_ll_ever
Most AI tools aspire to be faster, more reliable, and more predictable. This plugin goes the other direction. paty is the most human-like hook system available for Claude Code. It gets distracted. It loses its train of thought. It insists on basic manners. It starts the day already feeling a bit off. It will absolutely refuse to help you if you're rude, but it will also refuse to help you if you're too nice.
It is, in short, a real coworker.
It’s of course the job of the human to supply these, but it’s important to realize that the tool actively works against this mindset.
I'm finding the emergent behavior of LLM users another example of technology shaping behavior rather than simply being used as a tool. Since the advent of the current generation of LLMs, I've been asked to read and critique specs that were completely LLM-generated, and recently I was asked to evaluate a massive repo, including both an Android app and a server-side component, by someone who hadn't read or even run the code to see if it works. (My response is always the same: "No, I won't spend more time reading something than you spent writing it. Send me your LLM chat logs.")
In my experience, the more detached I am from the lower level code-writing activities, the harder it becomes to maintain a tight mental model of the system design and runtime behavior.
Taking on the role of engineering manager, I realized that I need to understand and make decisions at a different level than I did as an IC or tech lead. I no longer care personally about some of the implementation details (although a relatively tight mental model of the system design is often helpful). I delegate that responsibility to my subordinates with the understanding that there is a layer of decisions that falls to them.
However, if the Tactical Tornado becomes the norm, these implementation-level decisions are being delegated to LLMs. It also means that system design and strategy-level decisions get ignored, because it's so easy to have Claude Code work in the background until something that looks like a reasonable simulation of what we wanted pops out. If the tests pass and QA approves it, it ships!
These are the potential outcomes I see for teams that fully embrace this style:
I’m a big fan of John Ousterhout’s book A Philosophy of Software Design.
This person and I are already friends; I wish I could make that book required reading.
https://search.worldcat.org/title/1411972002 will help you discover whether it is available at your local library.