How to think about Gas Town

39 points by k-monk


cflewis

One thing noted here is "rigor". I'm coming to the conclusion that rigor as we know it will die with LLMs.

We (as in software engineers) can't keep talking about long-running agents, sub-agent fleets, agents modifying themselves and writing their own tools at runtime, etc., and fool ourselves that any of this can be verified up front during CI with testing and the like. The dynamic, non-deterministic outcomes prevent it.

I think we need to start running with the idea that we will essentially be yolo'ing things to prod without humans in the loop, and asking what that means for "rigor". For me, what we're looking at is moving a lot of software verification into long-running sandboxes and canaries. Is the agent doing what was asked? Are objectives being met? Are invariants being violated? Then we send it on to prod and start acting with extreme prejudice against the process: as soon as it looks like it's going wrong, the process is aborted and started again.
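
To make that concrete, here's a minimal sketch of the kind of supervisor I mean. Everything here is a hypothetical stand-in: agent.py is whatever your sandboxed agent entry point is, and invariants_hold/objectives_met are whatever checks you actually run against its logs and outputs.

    import subprocess
    import time

    AGENT_CMD = ["python", "agent.py"]  # hypothetical sandboxed agent entry point
    CHECK_INTERVAL = 30                 # seconds between health checks
    MAX_RESTARTS = 5

    def invariants_hold() -> bool:
        # Hypothetical check: scan logs/metrics for violated invariants
        # (unexpected file writes, budget overruns, policy breaches, ...).
        return True

    def objectives_met() -> bool:
        # Hypothetical check: is the agent's observable output still
        # tracking the objective it was given?
        return True

    def supervise() -> None:
        for attempt in range(MAX_RESTARTS + 1):
            proc = subprocess.Popen(AGENT_CMD)  # one attempt = one fresh agent process
            while proc.poll() is None:          # agent still running
                time.sleep(CHECK_INTERVAL)
                if not (invariants_hold() and objectives_met()):
                    proc.kill()                 # extreme prejudice: abort and restart
                    proc.wait()
                    break
            if proc.returncode == 0:
                return                          # agent finished cleanly; promote it
        raise RuntimeError("too many aborted runs; giving up")

    if __name__ == "__main__":
        supervise()

The verification burden moves out of CI and into this loop: you're no longer proving the agent correct up front, you're continuously checking its observable behavior and treating restarts as cheap.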

eminence32

If you think Gas Town is just about agents doing work for you, then I agree that's inevitable (it turns out this is way more useful than AI-powered "tab completion").

But my understanding of Gas Town (from only reading about it; I've not personally used it) is that its defining feature is a full commitment to "vibe coding", where you don't really care whether the AI solves your problem, only that it has done something.

And that feels very far from inevitable to me.