AI tribalism

61 points by nolan


zetashift

I'm getting a bit tired of pro-AI posts that seem aimed only at refuting the "extreme anti-AI" takes on Mastodon or Bluesky. I'm glad one can use a tool as a tool and be more effective as a solo developer or in a small team, but these posts end up feeling (to me at least) dismissive of valid concerns, for example in this article:

And honestly, even if all that doesn’t work, then you could probably just add more agents with different models to fact-check the other models. Inefficient? Certainly. Harming the planet? Maybe. But if it’s cheaper than a developer’s salary, and if it’s “good enough,” then the last half-century of software development suggests it’s bound to happen, regardless of which pearls you clutch.

Why should "bound to happen" be any less of a reason to try to find a responsible way to deal with LLMs? How do we better define "good enough" in software development so that we're not just pushing slop onto others at a crazy rate? The reasoning that LLMs are good because they make a developer productive won't fully convince most people with a negative view of LLMs, because being productive isn't everything to a person, and so arguments like the quoted paragraph come off as dismissive.

I like to compare it with social media in a way: it's great that people can communicate faster and more easily than ever, but that doesn't mean people now suddenly communicate better, and it also introduced its own set of problems.

And since politics long ago devolved into tribalism, that means it’s become tribalism.

Entirely unsure how to interpret this.