Generative AI vegetarianism
28 points by carlana
one way in which i find this analogy falls short is the purity aspect. veganism, for example, asserts that no animal exploitation can be justified in the absence of necessity. with "ai", on the other hand, the issue is a matter of the scale of the system and the purposes for which it is used.
if "ai" remained a niche research project instead of a tool of mass exploitation, plagiarism, surveillance and war, then i suspect you would not see the same resistance to it that we have today
unrelated point: i find it funny that neither this article nor the ones it references are written by actual vegans (or even vegetarians), which i feel does a disservice to the ideas explored by vegan academics about how the position against animal exploitation intersects with the fight against capitalism, patriarchy, colonialism, etc., focusing instead on the more surface-level aspects of the comparison. i'd also like to mention that modern-day vegetarianism is not much of an ethical stance considering how closely the dairy/egg industries are linked to the meat industry. and even if they weren't, they would still be built on the exploitation and abuse of sentient animals. in that same vein, "ai vegetarianism" does little to mitigate or reverse the harms caused by "ai", spinning the situation as a matter of personal choice instead of a political conflict.
You put your finger on exactly what bothered me about this post. Framing structural and political harms as a matter of personal lifestyle is a very late-capitalist move—the same logic that told us to solve climate change by bringing reusable bags to the supermarket.
Individual abstention isn't entirely meaningless, of course—AI companies are still training new models, still exploiting labor, still scraping. But its deterrent effect is severely limited without structural change to back it up. What would actually move the needle? Things like regulations requiring that models trained on public data be returned to the public, or treating redundantly duplicated compute across competing AI labs as the waste it is and pushing to socialize model training at the infrastructure level. Without that kind of political intervention, “AI vegetarianism” remains a personal comfort more than a collective force.
The “deliberately AI-free” label idea is the part I find most telling. That's not a movement; that's a niche market waiting to happen.
I do think there's something worth salvaging in wanting a vocabulary for refusal. But the moment that vocabulary substitutes for actual political engagement, it's doing more harm than good.
Individual abstention
I'll see you and raise you:
The 15 February 2003 Iraq War protests are described as "the largest protest event in human history", and yet they did nothing to stop the invasion of Iraq.
Even mass abstention is severely limited without a leading organised political force. See Jacobins during the French Revolution and Vanguardism in the Leninist sense.
Fair point, but I think vanguardism is doing more work here than the argument can actually support. Lenin's case was built around a specific situation: confronting a state with a monopoly on violence, where workers couldn't arrive at revolutionary consciousness without external guidance. That's a pretty particular set of conditions, not a general theory of social change.
The situation with generative AI differs in at least two ways. The adversary is corporate power, which operates through market dominance, manufactured dependency, and lobbying rather than direct repression. And the “consciousness problem” Lenin was responding to doesn't really apply here. The harms of generative AI are visible enough that most people who encounter them understand them. The gap isn't awareness but institutional levers to act on it.
GPL didn't succeed because a vanguard party deployed it. It succeeded because the license itself functioned as a mechanism of social pressure, adopted individually, validated through legal precedent, normalized through practice. GDPR followed a similar logic. Neither required centralized political leadership in any Leninist sense.
On the Iraq protests: I'd push back a little on reading them purely as failure. They didn't stop the war, but they influenced Canada's non-participation and Turkey's refusal of base access. “Didn't achieve the explicit goal” and “achieved nothing” aren't the same thing.
GPL didn't succeed because a vanguard party deployed it. It succeeded because the license itself functioned as a mechanism of social pressure, adopted individually, validated through legal precedent, normalized through practice. GDPR followed a similar logic. Neither required centralized political leadership in any Leninist sense.
Aren't they rather counterexamples?
GPL succeeded thanks to the FSF: they built great software that was immensely useful to a lot of people at the time, with no alternatives (the GNU project), and they had an unwavering commitment to free software ideals and morals that many did, and today would, consider too political.
If I dare say, "free software" was always more about positive freedom (i.e. "freedom to") whereas "open source" was about negative freedom (i.e. "freedom from"). For example, when asked about GPLv3 Linus Torvalds said:
Emm my argument for liking version 2, and I still think version 2 is a great license, was that, "I give you source code, you give me your changes back, we are even." Right? That's my take on GPL version 2, right, it's that simple.
And version 3 extended that in ways that I personally am really uncomfortable with, namely "I give you source code, that means that if you use that source code, you can't use it on your device unless you follow my rules." And to me that's, that's a violation of everything version 2 stood for. And I understand why the FSF did it because I know what the FSF wants.
For Torvalds it was important that no one, including himself, could tell you what you can and cannot do with his software (i.e. freedom from interference) beyond giving your changes back. For Stallman however, the software wasn't an end in itself but a means to liberating users (i.e. freedom to do things) so he had no problem with dictating through GPLv3 that "you shall not tivoize".
There is no right or wrong, though I'm firmly in the "free software" (i.e. freedom to) camp, and it's different interpretations of what freedom is and/or how it's best achieved. Stanford Encyclopedia of Philosophy goes into more detail if you're interested in the two concepts of liberty.
Similarly, I think GDPR was successful (if we can call deceptive cookie banners that) only because it had the political backing of the European Union (a very centralised political organisation).
Fair point on both cases, and I'll concede it more directly than before: yes, FSF and the EU were organized forces, not just spontaneous individual adoption. The critique lands.
But I think we're actually closer than the framing suggests. My original claim was against Leninist vanguardism specifically, not against organized action in general. FSF is an advocacy organization that built software and wrote a license; the EU is a democratic political institution. Neither operates on the logic of a revolutionary vanguard that must inject consciousness into an otherwise incapable mass. Saying “some degree of organization is necessary” is something I'd fully agree with. Saying “it must take the form of a Leninist vanguard” is what I was pushing back on.
The positive/negative liberty distinction you raise is genuinely useful here, and I think you're right that it maps onto the free software vs. open source split fairly well. For what it's worth, I'm also firmly in the positive liberty camp on this. The reason I find GPLv3's anti-Tivoization clause interesting is precisely because it's an attempt to close the gap between formal freedom (you can see the source) and actual freedom (you can run what you want on your own hardware). The same logic, applied to AI training data, is roughly what I've been thinking about—I wrote about it at some length here if you're curious.
But I think we're actually closer than the framing suggests.
I think so too!
Saying “some degree of organization is necessary” is something I'd fully agree with. Saying “it must take the form of a Leninist vanguard” is what I was pushing back on.
Exactly! I tried to emphasise the former ("some degree of organisation") by listing the Jacobins as another example, but perhaps such examples were more harmful than helpful in conveying my point.
I wrote about it at some length here if you're curious.
I'll read it; thanks!
"Your change achieves nothing, so why do it?"
"Because it helps me keep a clear conscience."
"Well, what good is that?"
If you have to ask, I probably won't be able to get through to you.
I'm not questioning the value of a clear conscience. I'm questioning whether “clear conscience” and “doing something about it” are the same thing, because I think conflating the two is part of the problem.
The worry isn't that personal abstention is worthless. It's that it can function as a release valve: once you've opted out, the moral pressure dissipates, and the harder question of what structural change might actually look like never gets asked. A clear conscience is comfortable precisely because it doesn't require anything beyond the individual.
The “deliberately AI-free” label idea is the part I find most telling. That's not a movement; that's a niche market waiting to happen.
Pretty sure that exists and is called "organic" (i.e. "not artificial").
regulations requiring that models trained on public data be returned to the public
So like, redefine "public" to mean "proprietary"?
The organic analogy proves the point rather than refuting it. Organic food is exactly what a niche market for conscientious consumers looks like, and it hasn't meaningfully restructured agriculture.
On the second: I think you have it backwards. The argument is that models trained on public commons shouldn't be proprietary, not that “public” means “proprietary.” The GPL does something structurally similar: if you build on this commons, you can't close it off. A model trained largely on publicly funded research, open source code, and Wikipedia doesn't obviously deserve to become the exclusive property of a private company. That's not a redefinition of “public”; it's a pretty conventional understanding of what commons means.
in that same vein "ai vegetarianism" does little to mitigate or reverse the harms caused by "ai"
You're right, but the thing is this...
I've been vegetarian -- not vegan -- for 45 years now, and I'm an "AI vegetarian" as well. (Although I don't even use bot-created playlists. Hell, I'm so old, I listen to albums not playlists.)
Many of us feel a deep helplessness in the face of the almost-totally-hateful self-destructiveness of 20th and 21st century technological civilisation. I know I'm not alone.
We do what we can, and sometimes that's just saying "no, I will not do {this thing everyone else does}, and I will keep not doing it. This is my line in the sand."
I know millions love McDonald's. I have never eaten a "Big Mac" in my life and I never will. The mere idea of mechanically-recovered meat profoundly disgusts me.
I know it's a guilty pleasure for many, but it is completely beyond me how it can be a pleasure. I can't imagine an amount of money I'd need to even take a bite.
Welp: I have the same feelings, weaker but there, for mechanically-recovered words, or code, or images. It's disgusting. It's slop.
I can't stop it. I can write about how it's bad, I can urge people not to use it or consume it, but I can't prevent them doing it. All I can do is say, clearly and loudly, "this is bad, this is wrong, don't do it" and not do it myself.
If you think that's inadequate and therefore pointless, I welcome suggestions for what else we veg*ns can do.
I welcome suggestions what else we veg*ns can do.
If you're not already, you can get into anticapitalist and antifascist politics and find practical things to do to build a better world for the people around you.
Joining or starting a workplace union is good. Feeding people is good. Try to find a group like Food Not Bombs. Opposing deportations is good. Getting socialists elected is good. Starting, joining or supporting worker co-ops is good.
All of these things build local power and make your community better able to resist the problems that exploitative technologies and exploitative politics are bringing.
i'm assuming you're familiar with the damage (non-meat) animal agriculture causes, since you've been vegetarian for 45 years
knowing that you show strong disgust toward meat but don't extend the same feelings to the rest of the animal exploitation industry makes it hard for me to offer suggestions, since i don't know what principles you're building that view on or what ethics you and i have in common. i'm not trying to be snarky, but i would rather not assume
The line in the sand is real, and I respect it. I'm not trying to take that away from you.
But you asked what else you can do, and I think that's the right question. The honest answer is: the same thing vegetarians and vegans have done when they wanted to move beyond personal abstention—push for regulation. Not “please be nicer” petitions, but concrete structural demands. Models trained on public data should be returned to the public. The redundant compute burned by competing labs training essentially the same model should be socialized. These aren't utopian asks; they're the kind of intervention that actually changes the math for the companies involved.
Personal refusal is a starting point, not a resting point. The danger is when the clarity of “I won't touch it” becomes a substitute for the messier work of asking who gets to decide how this technology is governed, and actually showing up for that fight.
I hear and sympathise with your arguments, and would like to add a thought.
To me veg*nism and abstention from AI use are not comparable, at least on a rational level: animal suffering is real; an "original sin" tainting all AI-related compute (training on stolen data, wasteful compute, etc.) is not.
Perhaps I'm being naive or utopian, but I think we should not pass up on a weird cultural technology that's sometimes genuinely useful. It will get smaller, it will percolate into basements and communities; it will not be an oligopoly forever.
There are people who are opposed to everything they perceive as "AI" no matter the externalities. (Though I've not yet found someone morally opposed to A*)
Also, there are vegetarians and vegans who are more opposed to economics or farming than to animal husbandry or hunting generally.
It takes all sorts
I recently came up with a similar analogy: vibecoding is fast food. You can cook at home (write the code yourself), you can go to a good restaurant (hire a programmer), or you can buy yourself a hamburger (use AI tools). AI works in the short term, but in the long term it's as unsustainable and unhealthy as eating fast food for every meal. It creates code that nobody fully understands.
Or you can «cook» at home by throwing random grains and a bit of vegetables into a microwave. At least you know whether you're throwing in the same stuff as yesterday, which is more than you can say for a hamburger… Hm, that sounds like running a local LLM and having to actually fix the minor bugs yourself.
It creates code that nobody fully understands.
A person from MS Research was known to say in public that Windows can only be studied as a phenomenon, with natural-science methods, not as an engineered artifact, even from inside MS development teams — and that was decades ago.
Maybe the problem is not what mega-oligopolies do today, but what they are in perpetuity.
I really appreciate how well this post articulates the many reasons one might want to be an AI vegetarian, especially the "accountability sink" issue which I don't see well covered elsewhere.
Funnily enough, I have recently become a literal vegetarian, not by choice - though there are many good reasons to be one, and I'm not mad about it - but because my body doesn't let me keep down most meat products anymore. I have an analogous mental reaction to most generative AI products.