I am disappointed in the AI discourse
163 points by steveklabnik
Yeah I know this place is generally super anti-AI. But I figured it’s dishonest to not also post it here. I’d love to see more nuanced posts on this topic here.
There’s been something bothering me about the discourse too, which I find difficult to put my finger on. I agree it has something to do with being told what to think, from any given angle.
I think it would feel a lot different if we were all experimenting with the technology in a more decentralized manner, following scientific curiosity rather than choosing whether to get on board the narrative shoved down our throats by a small handful of tech giants obviously looking to profiteer, with no anti-monopoly regulation in sight (source: blatantly converting a non-profit to a for-profit with no consequences; general reduction of regulation by current executive administration in the United States).
So if I were to guess I’d say that the push-back to this lacks nuance because it’s effectively political. Do you accept or reject the new AI regime? To say anything positive about AI is to accept, and to signal acceptance to the public. To say anything negative is to reject and to signal that to the public. And with social media lacking nuance, everybody has become a minor politician.
I think it’s become difficult to escape this on the Internet today. It’s easy to have nuanced discussions with friends and even family, but quite difficult to do publicly on the Internet. In conclusion, my hypothesis here is that the divisive political climate has caused people to take sides.
To clarify my personal perspective I can’t stand circle jerking / echo chambers. Gives me the same feeling of disgust to see someone claim something obviously false, regardless of which “side” they’re on.
It’s easy to have nuanced discussions with friends and even family, but quite difficult to do publicly on the Internet
Yep. And AI drags in a lot of other issues that are important to people: questions of identity, potential of threatening their livelihood, big questions of ethics, centralization of power, and environmental impacts. Thus, when we talk about AI, sometimes we’re not just talking about AI.
AI can feel like a manifestation of forces that we already felt pressing down on us, whether that’s Late Stage Capitalism (tm), obsolescence, Big Tech, etc. The fact it seems to have widespread industry backing only exacerbates that sensation. To be clear: I am not trivializing any of those feelings by listing them. I’m arguing that it is difficult to separate AI from the things it attaches itself to in our minds. To me, AI can feel a bit like Moloch.
Hell, I think I have a low-key midlife crisis going on from the identity shift it is forcing. I’ve felt it all pretty much, and it’s still going, 8 mos in (or so). What do you do with the fact that the world seems to be cheering for the downfall of the profession you’ve trained your whole life for? How do you deal with the vague, shapeless grief over the time lost in self-teaching software development? How do I learn to further decouple my professional identity from the personal one in a demanding industry?
There are no answers, only questions to be held for as long as necessary.
I 100% feel the anxiety and fear of AI / LLMs and their ties w/ big tech. Perhaps I’m more able to rationalize this alongside my (growing) usage of LLMs due to coming up pre-Eternal September. I was first online via terminal on modem (VT100 baby!) and usenet and BBS’es around 90/91. It was like nothing else, especially for an introverted, awkward adolescent. Then over the years I saw everything magical get turned into a complete shit show that I barely recognize.
Maybe it’s just a coping strategy, but I try to accept the good that these tech and cultural changes bring, and limit my exposure and support for the bad. The good: LLMs as an ‘always-on’ pair programmer, or a way to learn new languages or concepts, or explore general ideas or history. Hell, searching the web via Claude or any modern LLM is already so much better than Google. All those things feel like magic. It truly feels like a ‘bicycle for the mind’ in a way that Jobs could not have imagined.
The bad: most (not all) AI art. AI-created memes. Deep-fakes. Soulless tech bros using AI to accelerate their wealth and power, ignoring the value of anything beyond a strict definition of ‘intelligence’. Dumb companies forcing their employees to use or sell AI, when it often does not make sense. The idea that you could ‘vibe code’ a product or project and never take the time to read, review, or understand the thing yourself.
I feel okay about paying Anthropic for their products. I don’t want to support OpenAI or MSFT. I continue to explore local LLMs, but it just isn’t there yet for me.
I’ve been programming 30 some years, and I’ve been enjoying it over the past couple years much more than I have in a long time. I’m also pretty freaked out about what AI means for humanity’s future. My country (the US) can’t even decide if we should assist the poor, homeless, disabled, or incarcerated people in a healthy economic state with full employment. God forbid we have an industrial-revolution scale shift in our world, but I think that is what’s coming. And possibly much sooner than we think.
Genuinely one of the harms I care most about from AI right now is the psychological effect it is having on people. There is a very real feeling of despair out there.
In terms of impact on software development careers: The more experience I get with LLMs, the more convinced I am that the software development profession is not under threat at all. Engineers armed with these tools become more employable, because they can deliver massively more value. Non-engineer “vibe coders” won’t be able to deliver a fraction of that.
We’re seeing the cracks start to form in the absurd “now non-engineers can ship production code” idea already.
I do think that AI-assisted development will drop the cost of producing custom software - because smaller teams will be able to achieve more than they could before. My strong intuition is that this will result in massively more demand for custom software - companies that never would have considered going custom in the past will now be able to afford exactly the software they need, and will hire people to build that for them. Pretty much the Jevons paradox applied to software development.
How can AI-assisted development drop the cost of producing custom software, when we are not paying the full price of the technology? OpenAI lost $5 billion on $3.7 billion of revenue in 2024, and I have not found any evidence that the cost of running these models is becoming cheaper.
I am not aware of any AI / LLMs making significantly more money than they cost to develop or run, and that is without paying the owners of copyrighted materials that they have used in their training.
One of the DeepSeek models is claimed to have cost less than $10m to train.
Claude 3.7 Sonnet - until recently Anthropic’s best model and a lot of people’s favorite, cost “a few tens of millions of dollars to train”.
I run an increasingly large collection of genuinely useful coding models on my laptop.
Just because OpenAI are blowing billions of dollars a year on this stuff doesn’t mean it’s as inherently expensive as that would suggest.
Meanwhile, the prices that companies are charging for API access to models have cratered over the last couple of years… and I have credible sources that at least Google and Amazon aren’t running inference at a loss (though obviously that’s not counting training costs, which are astronomical.)
I wrote more about that here: https://simonwillison.net/2024/Dec/31/llms-in-2024/#llm-prices-crashed-thanks-to-competition-and-increased-efficiency
To build on this — the larger (32B+) open-source models are profitable to run with stock vLLM at market prices if you have enough throughput to saturate an H100 / H200, including the cost of renting the GPUs through Runpod. Inference doesn’t lose money at scale. (Getting the traffic is the hard part! Saturating an H100 takes a lot.)
The tiny models (8B and smaller) usually aren’t as easy to run profitably if that’s all you’re running; I suspect they’re being used to generate revenue from spare capacity left over primarily from the larger models… or are being run on CPUs with fast-ish RAM.
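To make the profitability claim above concrete, here is a minimal back-of-envelope sketch. Every number in it is an assumption for illustration (GPU rental rate, aggregate vLLM throughput at saturation, market price per million tokens for a 32B-class model), not a measurement; plug in your own figures.

```python
# Break-even sketch: does a saturated rented H100 serving a ~32B model cover its rent?
# All constants below are illustrative assumptions, not measured values.

gpu_cost_per_hour = 2.50          # assumed H100 rental rate (USD/hour)
tokens_per_second = 2500          # assumed aggregate vLLM throughput at saturation
price_per_million_tokens = 0.60   # assumed market price for a 32B-class model (USD)

tokens_per_hour = tokens_per_second * 3600
revenue_per_hour = tokens_per_hour / 1_000_000 * price_per_million_tokens

print(f"tokens/hour:   {tokens_per_hour:,}")
print(f"revenue/hour:  ${revenue_per_hour:.2f}")
print(f"GPU cost/hour: ${gpu_cost_per_hour:.2f}")
print("profitable at this load" if revenue_per_hour > gpu_cost_per_hour else "losing money")
```

The sketch makes the same point as the comment: the margin only exists if the GPU actually stays saturated; at a fraction of that throughput, the rent eats the revenue.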
The same way the aftermath of the dotcom bubble made ecommerce cheap and accessible for so many more businesses. Sure, pets.com and its ilk were terrible ideas ca. 2000-2001, and many of the people who invested in the ideas took an absolute bath. But as a result of that investment, useful infrastructure got built out, and after the businesses whose expected growth funded that couldn’t return enough on that investment on the time horizon the investors wanted, the infrastructure got sold off for pennies on the dollar. It took years, but that absurd overinvestment resulted in infrastructure becoming so cheap that even, say, a single location pizza shop could set up online ordering and expect to make money off of it.
That, FWIW, is the only feasible payoff I see for the current level of investment in AI. The amount invested in it for the current generation cannot pay for itself, but the firesale strikes me as very likely to have an effect similar to what @simonw predicts wrt the cost of and demand for producing custom software.
Yeah… my dad used to think that his career (painting signs, doing lettering and pinstriping on cars) was not under threat from vinyl sign printers, etc., because - let’s face it, his work was infinitely better than the vinyl signs, etc. It lasted longer, he could do better-looking work, he could do a whole variety of things that those machines simply could not. They only delivered a fraction of the value that he did.
But, “faster, cheaper, more” tends to win. By the end of the 80s he was doing far less work. By the mid-90s he was reduced to doing package delivery.
What LLM/AI lays bare is how badly companies want to devalue human work. It’s really as simple as that. Companies love nothing so much as paying for anything but labor. As a human who stands approximately zero chance to benefit from the fruits of AI and almost certainly will see my work devalued, my feeling of despair is warranted. I’ve seen this picture play out before – so have many other skilled people over the years – it never ends well for workers. Never.
As someone who writes software I’ve dedicated my entire career to automating things. As an open source maintainer I give code away for free which increases other programmers’ productivity - how many programming jobs were destroyed because Django meant that developers didn’t have to spin up request handling, database management, form handling and template management from scratch?
For some reason LLMs feel different from open source in terms of job destruction, but I’ve not been able to articulate why that is.
Open source was largely promoted as a commons maintained by skilled programmers for other skilled programmers to use. LLMs are explicitly promoted, by the people creating them, as a way to destroy jobs - programmers’ jobs first, then everyone else’s.
The problem with the big AI labs is that often their valuations only make sense if their total addressable market is judged to be all salaried work.
This makes any claims they make about job replacement inherently unreliable in my opinion.
Sure - but that might still be part of why it feels different.
During the rise of open source, Bruce Perens didn’t regularly come out and claim that open source would soon render all human work obsolete, and the Debian forums weren’t full of weird people salivating over “population reduction”.
(I don’t know whether the claims they make are in good faith or if they’re just straight-up lying. Judging purely from vibes - as is the way of the times, I guess - it seems to me that Dario Amodei is a true believer, and Sam Altman is a cynical grifter. Of course, neither motivation says anything about whether they’re right or wrong.)
Completely agree: open source was never sold as “let’s put people out of work”, and as far as I can tell the professed values were genuinely held by everyone involved.
If I had to guess, Open Source is seen as a gift. Among detractors, LLMs are often seen as a trap?
I’d frame it a bit differently. Open source is a gift that the developers choose to give. LLMs are created based on what some consider theft (using content as training data without consent), for the profit of those who create them.
I’m curious about the overall effects of AI-integrated workflows however. If I recall correctly, Meta stopped hiring juniors, and current seniors are going to retire at some point, who will take the baton then? I hope we can correct the course, otherwise there’ll be an irreversible loss of knowledge that we can’t recover easily.
I’ve lost sleep over those questions since March 2023. I don’t think I’ve ever been this miserable in my entire life, to be honest - this time, the one thing I’ve always clung to when life got too hard to handle is being destroyed, and I have nothing left to hold on to.
Maybe it’s just for preserving sanity, but my line of thought has generally been: learning to be a good programmer requires a certain amount of intelligence, which means that I could retrain for something else when necessary. Maybe it helps that my education had a detour (three years of studying philosophy), which makes it easier to see that I can do other things and I can enjoy them. From a satisfaction perspective, it will always be possible to do programming without an LLM as a hobby (similar to how people are still writing NES games).
That said, my experience so far has been that LLMs make me faster by doing the boring work, but that a lot of domain expertise is required to steer it in the right direction, and I still end up polishing things by hand. Whether it can do the harder/domain specific work depends a lot on whether we are on an S-curve and if so, where we are on the S-curve.
Problem is, what will you retrain to when intelligence is precisely the thing that is being made economically worthless? Whenever this discussion comes up on HN (which I’ve stopped reading entirely as of a week ago, because I finally realized that it was taking a terrible toll on my mental health), people start talking about the trades. Well, I’m middle-aged, disabled, and not at all in physical shape to do very well as eg. a carpenter or a plumber - and who’d want to hire me over a healthy 21-year-old? I’m fortunate to live in a country that has a fairly robust social safety net - but the economic basis of that is the large, well-paid middle class that pays comparatively high taxes, which is exactly what the “industrial-revolution scale shift” mentioned upthread is promising to wipe out, so a handful of US billionaires can become even richer.
Re programming by hand as a hobby: That is what I keep telling myself, but it feels so meaningless now, that I have a hard time finding motivation. Sure, the journey is its own reward and one must imagine Sisyphus happy … but does that still apply if there’s a ski lift right next to him?
As I said, I haven’t had a full night of restful sleep since March 2023, with the sole exception being when I’d had surgery and had a head full of morphine.
(as a footnote, I have had little success with LLMs at my actual job, where most of the actual programming I do is either fiddly low-level embedded stuff or security functionality, and a lot of the really boring work is already automated away using old-fashioned code generators. But there’s no a priori reason they won’t become good at those things, too, with more time and money.)
There’s a lot of jobs out there that require intelligence and require you to do custom skilled work in the real, physical world, and I think we’re still a very long way from AI-brain robots taking all of those jobs. Think of things like being a plumber, a handyman, an electrician, a mechanic, etc. These are all fields in which intelligence is required and rewarded, and if you can write software and you’re physically able to do these jobs, you can learn the trade skills and practices. In these kinds of jobs, there’s a whole lot of minor variance in every situation you run into daily, enough variance that I think it will be a long time before you can, e.g., tell a TeslaBot to autonomously and reliably do that kind of work.
(and often the ones making all the money in these fields are middle-aged or older. It’s not just a young person’s game).
Sure - and likely as not, a TeslaBot doing that sort of work is going to have to be substantially teleoperated anyway (and I’m not sure how well it’s going to handle working in filthy, dusty and/or greasy environments) … and at least for the time being, it’ll be much more expensive than a human. So some of us nerds can probably reskill to the trades. Learn to plumb!
(Personally, I’m going to be screwed nonetheless - after all, “physically able” was load-bearing in what you just mentioned.)
I think I have a low-key midlife crisis going on
I’m 39 now, and have had several other very large changes in my life happen lately. I’ve wondered if it isn’t a significant factor in this being distressing for me too.
I’m in my mid-40s. I can’t really tell if I’m having a “midlife crisis” or if I’m just unfortunate enough that my midlife has happened to coincide with a rapid succession of crises: Pandemic, a war a couple countries over triggering an energy crisis and a national security crisis, a looming trade war, a world superpower making threats, and AI.
I’ve pretty much had to live in “crisis mode” since 2020; the pandemic completely wiped out what little social life I had, and now the only thing I have going for me is rapidly being made worthless.
Agree on the political thing.
FWIW I think the core issue is that a lot of techies (perhaps Klabnik here included, perhaps not) are realizing that they’re on the wrong side of history for the wrong reasons, and are having trouble reconciling that with the evidence clearly in front of their faces.
The “are you against AI” litmus test is incredibly correlated with political stance in my observation, and like so many other political positions in the last few years it serves to halt further critical thought.
FWIW I think the core issue is that a lot of techies (perhaps Klabnik here included, perhaps not) are realizing that they’re on the wrong side of history for the wrong reasons, and are having trouble reconciling that with the evidence clearly in front of their faces.
What is very funny to me is, I am not sure which “side” you’re referring to as being on the wrong side of history.
Personally? I tried ChatGPT the day it launched. I found it amusing. Sorta useful, sorta bad. I considered myself fairly anti-LLM, or at least, let’s put it this way: i had/have some deep philosophical questions about things like knowledge creation and truth that at least made me skeptical. The environmental stuff… I still need to actually engage with that more.
Decided I should check back in to make sure my opinions were still valid, and to my surprise, things worked a lot better than they did! Obviously I was aware that that’s supposed to be the case, but going from “yeah this can’t do this” to “oh it clearly can do this” was a big jump.
So now I find myself using AI tools and generally enjoying them. Also running hard into limitations. And want to learn more, but am struggling to do that amidst all of the grand claims. Obviously this stuff does work to a degree, but it’s not an instant “writes all my code for me” thing either.
So, I don’t know what side that puts me on, or which side of history will be right or wrong. I’m just trying to learn some stuff.
I’m not sure if this was friendlysock’s intended meaning, but I thought the comment was the most accurate if you took “the wrong side of history” to refer more to their stance on politics in general than their stance on LLMs in particular. LLMs are rightly associated with big tech. Big tech is rightly associated with their CEOs and who they literally stand behind.
Which… isn’t necessarily dispositive of whether or not using them is ethical… certainly I think very few people could claim to have disassociated themselves with the rest of big tech’s products… but it’s a piece of the puzzle.
So, right now, there’s a few things that are weird:
If you’re a communist–which I think Klabnik was, and possibly still is–there’s arguably a weird question about how to treat AI w.r.t. (at least in the strict Marxist sense) the socially necessary labour time component of value. The most common argument I’ve heard kinda-sorta amortizes that value over all the training data, but that seems unfulfilling to me. AI is maybe a perfect realization of the concept of dead labor, but again my Marxist economic theory is weak. All this to say–as good comrades we need to view the AI coding tools with suspicion, but there’s clearly a lot of theory work left to explain why.
If you’re anti-fascist–which I think Klabnik was, and possibly still is–there’s no question that you have to object to anything that helps companies and authoritarian governments. But, again, the same technologies are very useful in being a force-multiplier against those companies and governments, and in rapidly fact-checking their lies and propaganda–and for creating propaganda of your own. It’s also an awkward place to be in because the legal frameworks being used to limit this AI almost all favor incumbents and those same governments…and even more hilariously, pushing back the “accessible” training data to turn-of-the-century copyright means that we’ll have even more conservative and fascist-friendly models.
If you’re broadly progressive–in particular in the sense of supporting the poor and downtrodden–it’s difficult to ignore all of the accessibility benefits of AI for blind folks, deaf folks, folks that benefit from transcription and translation, and so on while still claiming to be true to your roots.
If you’re in support of bringing in new people to the field, removing roadblocks and arcane knowledge silos is something you’d reasonably support, and that’s something that the current AI stuff seems to help with a lot. Having an LLM carefully explain things to you without being biased against you in a way you can’t control (notice that final part, it is load bearing!) is potentially a huge leg up for traditionally underrepresented groups. However, it also might remove the starter jobs those groups need. Further, if the tribal signalling is “you have to hate AI” you have to say it’s a bad thing.
~
I think that the “right side” of history right now is probably more AI for more people with more openness. I also suspect there’s a pile of ways that the “right side” could turn out to be wrong. And yes, as evidenced by the grifters and lazy, there are plenty of ways to be on the “right side” for the “wrong” reasons! We can’t forget that!
I think there are absolutely respectable and intellectually rigorous ways of being on the “wrong side” (e.g., “we must reduce AI as much as we can and impede its spread”), but I also see mostly people being on that side for the “wrong” reasons (e.g., tribal signalling and lack of critical thought).
We’re also only a few years into this actually being something even really worth worrying about, so who knows what the future holds.
If you’re a communist–which I think Klabnik was, and possibly still is
What’s funny is that the answer is basically between a “yes” and “I dunno”, like I am pretty politically disengaged at this point in my life. I am not sure how much my opinions matter. I am certainly somewhere “far left of the dems” on a personal level.
there’s arguably a weird question about how to treat AI w.r.t. (at least in the strict Marxist sense) the socially necessary labour time component of value.
It’s so funny to me that we have a rocky personal history, yet you are the FIRST PERSON I have seen bring this up. Yet it is also a thing that’s been intensely on my mind. It’s on my list to go back and re-familiarize myself with the details, but:
The most common argument I’ve heard kinda-sorta amortizes that value over all the training data, but that seems unfulfilling to me.
I also think that’s not exactly right. My current understanding/recollection is something roughly like “even if you have robots producing things, humans still need to maintain and build the robots somewhere up the causality chain” but this is exactly what i need to go back and re-read.
AI is maybe a perfect realization of the concept of dead labor, but again my Marxist economic theory is weak.
You are at the very least much closer to the issues than a lot of people who are Marxists, so, kudos. I agree that there’s at least a lot of theory work to be done.
If you’re anti-fascist–which I think Klabnik was, and possibly still is
Yep.
there’s no question that you have to
Ehhhh. Or at least, you’re right in presenting this as a basic conundrum, but I don’t think it’s any different than any other technology building. Tool makers always enable some people to do bad things with their tools. That doesn’t mean we should stop producing them. I know for a fact that things I have built have been used to do things I vehemently disagree with. And it’ll keep happening. I don’t think it’s really AI specific.
I think that the “right side” of history right now is probably more AI for more people with more openness.
All this makes sense, thanks.
I am certainly somewhere “far left of the dems” on a personal level.
Cynically, I’d bet a beer that fuckin’ Nixon is left of the current day dems, but that’s a conversation for a different venue.
My current understanding/recollection is something roughly like “even if you have robots producing things, humans still need to maintain and build the robots somewhere up the causality chain”
Right, that’s how I’ve generally seen people get at it–but we’re probably within a generation or two of it being robots all the way down–at which point, my comrades typically say we’re in fully-automated luxury gay space communism and kinda just throw up their hands and segfault.
but I don’t think it’s any different than any other technology building. Tool makers always enable some people to do bad things with their tools. That doesn’t mean we should stop producing them. I know for a fact that things I have built have been used to do things I vehemently disagree with.
The common position among lefty techies I’ve seen is that they seem all too eager to throw out the tools and get really upset with the viewpoint that a tool is a tool and represents no moral statement unto itself. The usual line of argument is “there is no such thing as an apolitical tool”. This may or may not be a valid argument–different, longer discussion there–but it does lead to the curious result that in these cases they are closer to Kaczynski than to thee or me.
EDIT:
Because I don’t think I made this clear…I mentioned your political leanings not as a “gotcha, you evil red!” but as a “here is a user who is a member of a group that I’m seeing having trouble navigating, in an intellectually consistent way, the implications of these technologies”.
I have fewer examples for the conservative/right side because, frankly, I don’t see as much pushback on these tools and (globally) I see even less attempt at a coherent policy position (at least in America) beyond unconditional hero worship and nostalgia.
I’d bet a beer that fuckin’ Nixon is left of the current day dems
I did literally laugh out loud at this one. For sure.
my comrades typically say we’re in fully-automated luxury gay space communism and kinda just throw up their hands and segfault.
Yeah, I’ve also seen some critiques of FALC, I should find those again too…
but it does lead to the curious result that in these cases they are closer to Kaczynski than to thee or me.
Yes. This isn’t exactly new on the left, anprims have always been a thing, but the neo-luddite (in a bad way) tendencies have ended up a bit weird.
Because I don’t think I made this clear…I mentioned your political leanings not as a “gotcha, you evil red!” but as a “here is a user who is a member of a group that I’m seeing having trouble navigating, in an intellectually consistent way, the implications of these technologies”.
It’s all good, I didn’t take it as a slight.
I have fewer examples for the conservative/right side because, frankly, I don’t see as much pushback on these tools and (globally) I see even less attempt at a coherent policy position (at least in America) beyond unconditional hero worship and nostalgia.
Yes, other than, well, I have seen some attempts to react to the Uncle Ted tendencies and try to say “the parties are switching: republicans are the party of tech and progress,” but this is starting to get a bit far afield, so I will leave it there like you did with similar things in your post. Coherence is less of a virtue over there, which is very helpful to them in some ways.
But, again, the same technologies are very useful in being a force-multiplier against those companies and governments, and in rapidly fact-checking their lies and propaganda
…how do you use an LLM to fact-check information? That seems fundamentally unsound.
Use one that’s hooked up to a good search engine and check its references yourself.
I use o4-mini for this all the time. It can run a dozen different searches faster than I can. I still do the fact checking myself but it saves me a bunch of time on figuring out exactly what to search for and then iterating on the options.
This only became viable about two months ago.
How can you be sure to get „real“ facts, when the companies behind the models explicitly try to inject their political agenda?
Even if you check the given results, how can you be sure that nothing was omitted?
To be fair you can say the same for search engines. At some point it all comes back to trust.
Yup, but I don’t see why I would ever trust an LLM-based search engine more than a regular search engine - especially if we’re talking about combating misinformation. Thus I wouldn’t claim that they help you with fact-checking.
/me puts the tinfoil hat on
Relying on a proprietary AI to fact-check disinformation from companies and governments that are closely tied to AI startups seems… unwise. I wouldn’t trust them as my only source of information, even with citations (a few citations could be cherrypicked) - so I would need to figure out exactly what to search for anyways, and then iterate on those options. I think that process itself is fairly important, as you get some more broad context - how much do the different sources agree with each other? When looking at forums, what are the discussions about that topic like? Is this something widely covered, or is it a niche topic? etc.
I’m not saying that as an argument against AI search itself, I just don’t buy the argument that they’re useful against propaganda.
I’m quite concerned that the “AI industry”, which for all the tens of billions spent still doesn’t have a viable business model proved out – they are all losing money at scale – will adopt something like the very successful Google model, where the customers are not consumers but advertisers; or more broadly, those who will pay to influence search results.
there’s arguably a weird question about how to treat AI w.r.t. (at least in the strict Marxist sense) the socially necessary labour time component of value.
Marx’s critique of Luddism is far more appropriate here, IMO:
During the first decade and a half of the nineteenth century, groups known as Luddites laid waste to countless machines in England’s manufacturing districts. This was largely a response to the use of the power loom, and it gave the anti-Jacobin government, made up of such figures as Sidmouth and Castlereagh, a pretext for carrying out the most violent reactionary measures. Workers needed time and experience before they could distinguish between machinery and the capitalist application of it—and thus also learn to shift their attacks from the material means of production themselves to the social form in which those means were being employed. (Capital vol. 1, 2014, 395–396)
I predict that China’s application of AI will have substantially different societal impacts.
If you’re broadly progressive–in particular in the sense of supporting the poor and downtrodden–it’s difficult to ignore all of the accessibility benefits of AI for blind folks, deaf folks, folks that benefit from transcription and translation, and so on while still claiming to be true to your roots.
I don’t agree. It’s not as if there is a consensus among disabled people that, for example, AI-generated image descriptions are a net positive; in fact, there’s a thriving conversation about this on the Fediverse right now, because some blind folks there are extremely unhappy with people switching from describing their own posts to having AI tools do it.
Stop using the abstract concept of disabled people to prop up your nonsense, thanks.
because some blind folks there are extremely unhappy with people switching from describing their own posts to having AI tools do it.
That’s a really interesting conversation to have! It’s fine to acknowledge that without ignoring the benefits, such is good discourse.
Stop using the abstract concept of disabled people to prop up your nonsense, thanks.
I’m legally blind without assistance, friend, and my vision is steadily declining with each year it seems. I have family with hearing damage and deafness. I have a partner that uses machine-aided translation to do their job.
Maybe try a different tack.
I think this is a good example of why individuals cannot speak for a whole community. I apologize for assuming you weren’t a member of the category being discussed, though.
I apologize for assuming you weren’t a member of the category being discussed, though.
You didn’t bother to ask first, or to give wiggle room on the off-chance that I was. You’re part of the problem.
Let’s leave it here, DMs are open if you feel the need.
You didn’t bother to ask first, or to give wiggle room on the off-chance that I was. You’re part of the problem.
If I am, you are, bud. I, too, am in the category of “legally blind without assistance with declining vision” - if I understand you correctly, that means your uncorrected vision is something like 20/200 or P/N -2.5 and getting worse. We both wear thick-ass glasses and someday won’t be able to see even with them. That doesn’t make it unacceptable for me to tell you that you don’t speak for every disabled person on earth.
I, too, am in the category of “legally blind without assistance with declining vision”.
Good thing @friendlysock didn’t insinuate otherwise like you did, due to your assumption that anyone who had lived experience as a fellow blind person couldn’t possibly arrive at the conclusion that there are hard to ignore accessibility benefits of AI for blind folks (he didn’t even say net benefit, which you imputed to him).
That doesn’t make it unacceptable for me to tell you that you don’t speak for every disabled person on earth.
As a small comment from the sidelines: while I appreciate your thorough explanation in your comment, I think the whole notion of “right” or “wrong side of history” is not useful on this topic. Because I doubt that (even in the long run) there will be a clear “right” or “wrong” side on this topic, but rather it will remain a technology with lots of positive and negative effects, and there won’t be an obvious and clear indication which of these sides outweighs the other.
Regrettably, the “right” side of history is the one that’s still around in some number of years. The current push for the “wrong” side of history isn’t a more measured “well maybe let’s not AI all the things” but instead a position that bends towards “we should reject all AI and training and everything”–so, that position loses if any reasonable amount of AI remains in common use in a decade or two. Does that reasoning on my side track?
One of the most fun things about Whig history is that multiple contradictory and even antagonistic narratives can coexist, all feeding on the same decomposing body of evidence. In the absence of a durable (usually meaning effectively enforced) consensus, you might as well get used to cognitive dissonance.
Interestingly, you appear to have interpreted friendlysock’s “wrong side” remark to be about the relatively amoral question of effectiveness – whether the tech “can do stuff”. Nonetheless, that kind of rhetoric is more often used for moral questions.
This might be sufficient explanation. I suppose it’s possible that the OP is only paying attention to “the AI discourse” and somehow not noticing that it’s displaying the same forcefully-polarized patterns as “the discourse” in general.
Perhaps Poe’s Law needs an update, in a world where we can no longer tell gaslighting from sincere opinion translated through one or more filter-bubble barriers.
I’m for AI. But the leaders have to care for the masses rather than orienting around “ideas of wealth” such as yachts and “luxury housing” and building “a billion dollar company.”
Otherwise the society falls apart.
I think it’s become difficult to escape this on the Internet today. It’s easy to have nuanced discussions with friends and even family, but quite difficult to do publicly on the Internet. In conclusion, my hypothesis here is that the divisive political climate has caused people to take sides.
I have no problem having nuanced political discussions with friends on the internet. But more often than not it’s in the smaller communities that I have a lot more influence over. Because the moment a conversation starts to turn political, it’s always immediately shut down, because political discussions are “off topic” or whatever euphemism stands in for “I don’t have the attention to moderate heated people fairly”. Which, I get it, moderating heated people is hard. Especially when it’s a social faux pas to correct “tone” instead of content… when really all you want to do is turn down the heat if a political discussion becomes heated. The easy way out is to say, “stop talking politics, come back never”.
lobste.rs has the good rule, let them be wrong. It’s unfortunate that no online communities have the social strength to have “let them be angry”, which is probably why your nuanced discussions with friends are so much more productive: because you understand them (each other?) better, you’re able to let them be angry.
I agree with your other points, and appreciate you putting them into better words than I would have :)
I mean, we try our best. All things are balance.
I regret not writing other communities. Because lobsters is the first, [perhaps only] place where tone is the only thing shepherded. My intent was to hold it as the example of how it actually can work, and the model I’m trying to copy and learn from.
I think how such discussions can work is kind-of an open research problem. Everyone’s experimenting with it, nobody has all the answers.
Hey, that’s just politics itself. And most humans orient around some idea where going too far out of what they know sets off mental restraints.
You’re probably speaking as someone who cares about what is, but politics rarely cares about the truth and wrong ideas can exist for a long period of time, like State-directed farms. By wrong, I mean ideas that are actively harmful to people, not just something they don’t like emotionally.
When someone creates words like ‘vibe code’ they choose to enforce a social-emotional rather than abstract-material conception of the world. People who want to be close to those with those ideas follow them, use their vocabulary, and signal them as part of this “pyramid.”
San Francisco is a pyramid: you might say things like optimism, talk in a slightly gay sounding voice (this is the appropriate description because self-identifying gays speak like that), believe in future accelerationism, and so on. Within each country are different factions that orient around specific ideas, such as the Democratic or Trump idea world, or expensive clothes in finance, and in some way people are outward manifestations of these ideas battling for control of the country.
Mentally speaking, I don’t know how many are self-aware that they are doing so and able to live in multiple worlds. I’ve also experienced some people who are as positive as possible to the point where pointing out any flaw is uncomfortable to them, or having any different belief and nuance is hard. It’s not that I’m not interested in what others have to say; just that they tend to be the “base” of some other external idea and I’ve already heard it many times before.
I’m loosely able to accept the possibility of “human pig hybrids,” because real science requires that open-mindedness. https://www.macroevolution.net/hybrid-hypothesis-section-6.html
But some doctors who identify themselves as part of certain patterns of genetic ideas would say it is blasphemy. You can say God is fake all you want because it is no longer an organizing pattern, but go around saying the US Constitution is fake and just a social pattern of ideas while in the USA is blasphemy and would get you something’d. Maybe not arrested but not something good. So religion is just the organizing pattern of the leaders, diffuses down to the masses, and for the leaders useful and for the followers absolute truth.
I have some more sources and personal writing/thinking about this that I can share if you’d like.
Nuance for the topic died when the corporations behind these unethical tools decided to be, well, unethical. There’s really no reason to give them the time of day due to how unethical the tools are.
I don’t care if the tools make me a 100x programmer, the tools are still unethical, so I won’t use them and keep avoiding them.
I do not do the whole “ethics aside” thing because without ethics there is nothing useful, just hurtful.
edit: not to forget the double standards, where these same companies would hang us for downloading their copyrighted material without permission, but they download everyone else’s copyrighted material without remorse. Rules for thee, not for me. I have zero interest in this kind of circus and decide to just stay away from it.
Claiming we can do ethics afterwards is also a fiction. It’s not especially different to Uber or Airbnb: commit ethical and legal violations as hard as possible in order to capture as much market and amass as much power as possible, so that by the time people wake up to it and try to stop it, it will be too hard to hold them to account and unwind the damage.
I am not making an “ethics aside” argument. I am saying that “fancy autocomplete” and “can replace a human’s labor” deserve different ethical considerations. I am also not saying that is the only vector to talk about ethics regarding LLMs, only using it as one example.
For example, you haven’t said why you feel these things are unethical, so there’s no discussion to be had here: just a proclamation that this is bad, full stop. It’s not dialogue, it’s grandstanding.
Entertaining the hypothetical upsides of a technology in isolation from its downsides is intellectually dishonest, plain and simple. It is perhaps pleasant to imagine a world where LLMs are built from a corpus of documents which were all sourced above-board, where all potential users are well-informed as to their failure modes and deficiencies, where LLM output is always explicitly identified to human readers, where system prompts are transparent, user sessions aren’t harvested as free QA, websites aren’t scraped without permission, where training and executing these models consumes only modest, renewable resources, and so on, but this is not the world we exist in. By brushing aside these very real negative externalities, any waffling about “nuance” is just rationalizing your own desire to engage with GenAI tools anyway. Centrism is not the enlightened stance it claims to be.
Entertaining the hypothetical upsides of a technology in isolation from its downsides is intellectually dishonest, plain and simple.
That is also not what I am saying. I am saying that you have to know what something is before you can talk about the ethics of the thing. These two things are inescapably intertwined. And you have to bootstrap the discussion somewhere.
It is perhaps pleasant to imagine a world where LLMs are built from a corpus of documents which were all sourced above-board,
You are already pre-assuming an ethical position that not everyone shares. That’s why discussion is valuable.
By brushing aside these very real negative externalities,
I am deliberately saying that I want more talk about them, not brushing them aside.
I am deliberately saying that I want more talk about them, not brushing them aside.
They’ve been talked about a lot, and some people have reached firm conclusions about their ethical and political stance on AI tools. Hours ago a friend told me they might give notice due to a management directive to do something AI-related. Why should they or similarly situated people have to do more Discourse about it? I think the briefing on the issues has been more than adequate.
I am not saying others should be forced to Discourse. I am saying I would like to participate in some discourse. There are also discourses that I do not want to participate in, and I also have some firm conclusions too.
Do you only want to have discourse with other people who haven’t made up their mind? You’re a public figure and I’d guess you have to have read any number of arguments that the LLM fad is somewhere between pointless and inhumane by now. Posting this and syndicating it to lobsters is coming across as a “fite me bro” provocation, requesting that commentators who take the opposite side of the Literal Worst People On Earth on this subject approach you personally (or at least your input streams) with better reasons.
Do you only want to have discourse with other people who haven’t made up their mind?
No.
You’re a public figure and I’d guess you have to have read any number of arguments that the LLM fad is somewhere between pointless and inhumane by now.
I have read a lot, but what I’ve found is more heat than light. In both directions. There are some things I have made my mind up about, and some things that I am less sure about. A lot of that unsureness is due to the heat.
I am mostly trying to have a meta-conversation here, but I’ll move to the concrete, because I think that’s helpful: I am completely unsure about the environmental impact of these tools. I have read a lot of very extreme claims that feel on their surface wrong to me, and I have read some claims that feel like they are based on methodological issues. But I also have suspicion that people have an incentive to try and paint them as being more eco-friendly than they maybe are. Furthermore, I am far from an expert on electricity generation, usage, or impact. And even beyond that, I am like, probably in the top 1% globally of airplane travelers. I know I was in the top 1% of Delta customers several years running, pre-pandemic. I also eat a lot of meat. Is concern about the environmental impact of LLMs legitimate, or convenient, for me, personally? What is its actual relative impact to those activities? I rarely think about the environmental impacts of those two things. Am I being a hypocrite? Maybe? Probably? Can you even care about everything?
That is just one facet of an incredibly complex set of questions around these tools. And I chose it because I think it does a good job of illustrating things I was gesturing at, like, the interplay of what the thing is vs the ethics of the thing: an LLM that burns down the rainforest is a very different thing ethically than one that uses one AA battery a month. etc etc.
Posting this and syndicating it to lobsters is coming across as a “fite me bro” provocation,
I have had a long and contentious relationship with this community, but I figure if I’m posting it to my blog, I should also post it to places that I participate in.
Can you even care about everything?
This is a much more interesting question than anything to do with LLMs! The argument of the utilitarians and their assorted consequentialist peers and mutant offspring is that you can; I tend to believe that you can’t and that consequentialism is nonsense. Cf. e.g. generally Bernard Williams, Ethics and the Limits of Philosophy, Cambridge: Harvard University Press, 1986.
I agree that it is, broadly. I also think that this is one of the reasons why this stuff ends up being hard, “rust vs go” does not involve “consequentialism is nonsense”.
One of the reasons that I’m more focused on ability right now is that I need to bone up on my ethics for these kinds of more serious discussions on the ethical side, it’s been a while and I should revisit them.
one of the reasons why this stuff ends up being hard, “rust vs go” does not involve “consequentialism is nonsense”.
I have to give this a content-free “lmao yeah” thank you
If you see this, and have the energy, please DM me and let me know where you go.
I agree completely, and this kind of mentality is what drove me from the orange site to lobste.rs back in 2012!
Once a post can be taken as a ‘provocation’, it’s clear that the party is very much over.
I am saying that you have to know what something is before you can talk about the ethics of the thing.
I don’t have to know what cotton shirts, coffee, diamond rings, or tantalum-based pigments are to know slavery is bad.
I don’t think LLMs or GenAI are as unethical as most people seem to believe… but the idea that you have to know what something is, to know that part of the process is fundamentally and unsalvageably unethical, simply isn’t true.
I think “part of the process” is an important phrase in your last statement. There are at least two issues in question: whether part of the process of building an LLM is unethical, and whether the good aspects of LLMs make up for the bad ones.
I agree that if you know that LLMs put some writers and artists out of work, and that those people suffer from resulting poverty, then you can already say that part of the process of building an LLM is unethical. This answers the first question.
However, I disagree that at that point you can call it “unsalvageably” unethical. The second question remains: do the good aspects of LLMs make up for – “salvage” – the bad aspects? To answer that, you do have to understand what LLMs are. For example, hypothetically, if you learned that LLMs would prevent war, cure cancer, and bring about a post-scarcity society so the writers and artists had all the material goods they wanted, you might think that building an LLM is worth it. To predict whether that will be the case, you have to understand what an LLM is.
However, I disagree that at that point you can call it “unsalvageably” unethical. The second question remains: do the good aspects of LLMs make up for – “salvage” – the bad aspects?
Oh, I was talking about the idea of slavery being unsalvageable. In the hypothetical where AI was created from slavery, or really anything that results in or encourages dehumanization of individuals, it would be unsalvageable, and you’d need to know nothing else about it to take the position that it’s unethical. Again, not talking about using, or deploying an LLM. Just the decisions and actions involved in the primary training of the base model itself can be ethically problematic.
Is there room for debate in that position? Yes, there is the argument that the ends justify the means, and then you’d need to talk about the ends. But importantly you’d also need to be right about the ends. Guessing wouldn’t be permissible in such a case.
I don’t buy that argument. The ends can only justify the means in specific and narrow cases, where the means are less bad than the ends.
For generalities, the ends can never justify the means. Let’s say that LLMs granted a huge benefit to humanity. But the cost was [something horrible]. There are plenty of arguments that you could construct where the ends might justify that horrible thing. But, I can invent a plausible world, where humanity is able to enjoy that same huge benefit but without that [something horrible]. The problem is more about the incentives created by allowing that kind of system. It encourages humans to dehumanize others “for the greater good”. If you can build a world without that, you should.
But there are plenty of things that, when added to an LLM, or any other thing or process, taint that whole thing or process, such that you don’t need to know anything else about it to declare it unethical. That is to say, in response to the assertion
you have to know what something is before you can talk about the ethics of the thing.
I disagree, because you don’t need to know anything about how an LLM works, to say, it’s wrong to do [horrible thing] to humans to create an LLM. You can and should say I don’t know what makes an LLM, and don’t know how LLMs work, but there’s no way to make [horrible thing] ethical.
I still don’t think LLMs are unsalvageably unethical. I don’t even think they’re as unethical as most people do. I just disagree that you have to try it to find out if you don’t like it.
LLMs have been replicated outside the big tech, so I think they can be considered as a technology separately from their current implementation (same as Microsoft being unethical doesn’t make computers unethical). Also I think there’s more nuance to the problems of the current implementation.
Copyright allows remixing. You can use GenAI to simply plagiarise some content (and I think “generate image in the style of (living artist)” is unethical), but you can also use LLMs in transformative ways that are way more than just laundering works. We don’t want to ask Nintendo for permission to write Luigi fanfic, but discourse around LLMs assumes a copyright maximalist perspective.
Energy cost of an individual person using an LLM is comparable to playing a AAA game. I don’t have a problem with that.
There are plenty of other environmental costs, and unfortunately LLM companies aren’t transparent about that, and the ethics of that IMHO depend on the details. For example, Deepseek has been guesstimated to cost $6M to train, which isn’t that much on the planetary scale even if they spent it all on coal. I’ve estimated they’ve used a similar amount of compute as rendering a Pixar movie. If that’s true, I don’t find that problematic.
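For what it’s worth, here is a rough sketch of how that $6M guesstimate could be converted into energy terms. The GPU-hour price, per-GPU power draw, and datacenter overhead below are all assumptions for illustration, not reported figures.

```python
# Convert the "$6M to train" guesstimate into implied GPU-hours and energy.
# Every constant is an assumption for illustration, not a reported number.

training_budget_usd = 6_000_000   # guesstimated training cost from the comment above
gpu_hour_price_usd = 2.0          # assumed cloud price per GPU-hour
gpu_power_kw = 0.7                # assumed per-GPU draw under load (kW)
datacenter_pue = 1.2              # assumed power usage effectiveness overhead

gpu_hours = training_budget_usd / gpu_hour_price_usd
energy_mwh = gpu_hours * gpu_power_kw * datacenter_pue / 1000

print(f"implied GPU-hours: {gpu_hours:,.0f}")
print(f"implied energy:    {energy_mwh:,.0f} MWh")
```

Whether the resulting figure counts as “a lot” depends entirely on what you compare it against, which is exactly where the details the comment asks for would matter.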
[…] but discourse around LLMs assumes a copyright maximalist perspective.
My main argument against LLMs is based on copyright, but I also don’t really like copyright laws in the first place. Those views aren’t contradictory. In short, it goes like this:
The discussion about copyright has already been done by major projects, for which it was an important matter. Even though I don’t care about copyright that much, I can piggyback my ethical argument off it, since the reasoning is very similar.
We don’t want to ask Nintendo for permission to write Luigi fanfic
With the view I’ve presented I think this is an unfair comparison. Fanfiction is (usually) written from scratch. It’s based in an existing universe, but you came up with all the sentences yourself. LLMs, however, cobble together stuff from unknown sources.
If you trained an LLM on Nintendo-published text only, and credited Nintendo… I guess I’d be fine with it? I’d be fine with a Markov chain trained on those works, and an LLM feels similar enough in this scenario - fanfiction wouldn’t really harm them.
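Since the comparison above leans on it, here is a minimal word-level Markov chain text generator, to make concrete what “trained on those works” means for that kind of model: it can only ever chain together word transitions it actually observed in the source text. The toy corpus is just a placeholder.

```python
# Minimal word-level Markov chain: each word maps to the words seen right after it.
import random
from collections import defaultdict

def train(text: str) -> dict[str, list[str]]:
    """Record, for every word, which words immediately follow it in the corpus."""
    words = text.split()
    table = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        table[current].append(nxt)
    return table

def generate(table: dict[str, list[str]], start: str, length: int = 20) -> str:
    """Walk the table from a start word, picking a random observed successor each step."""
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Placeholder corpus; in the scenario above it would be Nintendo-published text.
corpus = "it's-a me Mario let's-a go Luigi let's-a go Mario"
model = train(corpus)
print(generate(model, "let's-a"))
```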
For example, you haven’t said why you feel these things are unethical, so there’s no discussion to be had here: just a proclamation that this is bad, full stop. It’s not dialogue, it’s grandstanding.
Well there’s really no dialogue worth having, these topics are hashed over hundreds of times already.
But anyway, here are some of the ethical concerns:
I’m sure there’s even more.
And yes, you could create everything from scratch on your own devices, from your own data. But we all know this is like 0.01% of the “AI” tool userbase.
I do not need to share my justifications further and you’re free to ignore them, in fact I know you will. I won’t waste any more of my time on this topic.
Every single one of those “ethical concerns” applies to the humans I’ve worked with in this field.
I do not need to share my justifications further and you’re free to ignore them, in fact I know you will. I won’t waste any more of my time on this topic.
This is not the spirit we should be having if we want to see any sort of useful discussion on this site.
This is not the spirit we should be having if we want to see any sort of useful discussion on this site.
Eh, you’re right, I am very tired of this topic and it was a mistake to comment on it at all. I would delete my comments and go back to living in blissful ignorance, but it is what it is.
I blame my burnout for me being like this, apologies.
I’m glad for your comments. Thank you for making them. It’s also totally OK to be done with the topic, as far as I’m concerned.
I often feel like I am an asshole for even voicing any opinions, heh. Thanks for the kind words.
I try to do better as well. I am very emotional about this topic since I’ve seen my artistic friends affected by it, various FOSS projects affected by it, etc.
So it’s hard to not go full ham. I am merely human after all. Doesn’t mean I shouldn’t try though.
Well there’s really no dialogue worth having, these topics are hashed over hundreds of times already.
I don’t think anyone should be forced into a dialog they don’t want to have. I am saying that I want to have some. That doesn’t mean you have to.
Thank you for the list! I already think this is a way better post than the last one, because it actually says something.
you’re free to ignore them, in fact I know you will.
:/ not this part though. I was going to actually respond since you did me the favor of engaging, but also
I won’t waste any more of my time on this topic.
As I said, not trying to force anyone into something, so no worries.
I agree it was bad form from me, apologies. My excuse is that I am a burnt-out FOSS dev who has had to endure all this stuff for a while now, and I guess all my emotions turned into this mush of text that I made you read. :P
Like I said in another comment here: I try to do better as well. I am very emotional about this topic since I’ve seen my artistic friends affected by it, various FOSS projects affected by it, etc. And it’s all been negative. I have not seen any positives. And I just can’t be convinced otherwise.
I think I’ve grown just to dislike technology: Whenever I see a new thing, I go “oh no what is it now” instead of getting interested and excited. I immediately think about how it’s going to be used to make people’s lives worse, because that’s all I see now from the tech industry in general. At least FOSS is my refuge, where I can feel like I’m doing something good while staying interested in tech.
It’s chill! Trust me, I understand burnout.
I think I’ve grown just to dislike technology: Whenever I see a new thing, I go “oh no what is it now” instead of getting interested and excited.
Yeah, sounds absolutely like burnout. I think this is an understandable impulse, but also, you’ll end up missing out on good stuff this way.
Tbh I think it’s fine to wait. Nobody needs to be an early adopter. The good will be sorted from the bad eventually and you can pick it up (or not) at your leisure.
I agree. I would gladly wait quietly for all the trash “AI” to be over and done, but since it’s affecting me and my friends, directly and indirectly, it’s really hard to do so.
I also think it’s fine to wait! It’s the “it’s bad” part that’s corrosive. That is, to make a bad analogy, Option<Opinion> is IMHO better than “impl Default for Opinion” returning Opinion::ItSucks
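To spell the analogy out as a compilable sketch (the type and names are made up purely for illustration):

```rust
// Purely illustrative; not from the original post.
#[derive(Debug)]
enum Opinion {
    ItSucks,
    ItsFine,
}

// "Waiting" modeled as Option: no opinion at all until you've actually looked.
fn opinion_after_waiting(evidence_seen: bool) -> Option<Opinion> {
    if evidence_seen {
        Some(Opinion::ItsFine)
    } else {
        None
    }
}

// The "corrosive" version: in the absence of any information, the answer is
// already ItSucks.
impl Default for Opinion {
    fn default() -> Self {
        Opinion::ItSucks
    }
}

fn main() {
    println!("{:?}", opinion_after_waiting(false)); // None: haven't looked yet
    println!("{:?}", Opinion::default());           // ItSucks: pre-baked verdict
}
```

That is: withholding judgement is a None, not a default verdict.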
I was going to actually respond since you did me the favor of engaging
I for one would be interested in your responses to those specific points, and I doubt I’m alone in that.
Sure thing.
These tools add a lot of resource waste for output that is barely of any use for most people
Right off the bat, I will admit to being annoyed. Largely because this is coming in very hot, and is speaking in broad, general terms. It is barely an ethical problem, and more just a derogatory statement. That is, I feel like I have to almost entirely re-word it to get to an actual ethical issue, something like “I don’t believe it is ethical to waste resources on trivial things, and LLMs both waste a lot of resources and provide little value.” That first clause is implied here rather than stated, which makes me less sure I’m engaging with the author’s point.
To actually engage with this statement, I would like to see evidence of both halves before this is something I could agree with. “barely of any use” feels like it denies my reality. Maybe I am not “most people.” How is usefulness determined? How do we know what it is for most people? Furthermore, how many resources are actually wasted? The details matter a lot, to me anyway.
I do think that it is at least more concrete than “thing bad.”
These tools do not know right from wrong and generate believable output, which most non-techies do not realise is false
Sort of again, not really framed as an ethical problem. This one is harder though: is the problem with any tool that is fallible? What is the degree of fallibility that’s acceptable? Is the real issue that (supposedly) uninformed people are being presented something in a certain way? These are guesses as to what the poster is taking issue with. I’d prefer to know for sure.
Most programmers (myself included) tend towards very black and white thinking. I find that non-techies are far more capable of dealing with nuance than we are. I used to feel sort of similar to the poster, and then I realized a few things: first of all, I’m kind of being a jerk by assuming that programmers have some sort of special access to truth. We do not. Furthermore, in my experience, a lot of people’s interactions with technology is already a magic box that sometimes works and sometimes doesn’t. They’re not fazed by the idea that they may get back something that’s wrong. That’s either explicitly something they can adjust for, or oops, they were wrong for a bit, and it’ll get fixed later. I hesitate to bang on the search example, but like, it’s not as if doing a web search, reading a bunch of stuff, and synthesizing the results is an infallible process when a person does it. They can miss things, misunderstand sources, or just plain make logical mistakes. Yet we all somehow still survive, and tools are useful, even if they’re slightly wrong sometimes.
But again, since there’s not a lot of specificity, maybe this is entirely unresponsive to their concerns. Maybe their concern is that users aren’t actively informed well enough of the possibility of fallibility. Or maybe they’re a hardliner around fallibility, and any tool that’s fallible is considered unethical to use. I’m not sure how tenable that position actually is, even a hammer breaks sometimes.
These tools rely on your feel of inadequacy to keep you hooked: “I don’t know how to do this, maybe the word salad machine does”
Also not really phrased as an ethics concern. I do not feel inadequate, or seek to resolve that feeling through requesting information from ChatGPT. If my therapist is somehow reading this, she does not have to worry that I’ll be replacing her with a robot any time soon. Jokes aside (well I’m not joking that she isn’t being replaced, to be clear), if we try to steelman this, it’s not clear to me how LLMs rely on this. You could say this about any tool that gives you information: Wikipedia relies on you feeling inadequate to keep you hooked. Not personally convincing to me.
And “word salad machine” is again, more invective, which is distracting from trying to engage seriously and in good faith.
These tools use copyrighted material without the permission of the copyright owners
These tools ignore licenses
Hey, now here are some actual meat and potatoes! These are actual ethical concerns, presented as such, and without invective.
I could probably write endlessly about this, to be honest, and this comment is already getting long. But since it’s the good ones, I feel like I should reward that with something, so I’ll say some more.
First of all, I also happen to think that they’re some of the more serious issues that I’ve come across when discussing the ethics of these tools. They are real and meaningful. However. For me personally… I’m from that “information wants to be free” vintage. I am much closer to an IP abolitionist than an IP respecter. I am also sympathetic to concerns about the livelihoods of knowledge workers, though, which is the only reason I’m not 100% on that side.
This is also somewhat of an area where I am supposedly on the affected side: if you go to https://www.theatlantic.com/technology/archive/2025/03/search-libgen-data-set/682094/ and type my name, you’ll find that Meta used works of mine that are copyrighted without my permission. However… I am fine with this, personally. I am here to build a commons, not to try and hold my works hostage for money. But see also about being understanding and sympathetic to the fact that we still need to make money to eat in our current society. I have made money from those works. I’m not sure Meta using them has stolen any money from me.
There’s also a bit of nuance here too: IP law does provide the idea of fair use. Conceptually, we as a society allow people to have a monopoly on certain creative works for a limited time, and the idea is that by doing this, we get more creative works. We also recognize that combining works together can count as a new work, but it doesn’t always. I can see how some people would argue that training an LLM on a copyrighted work is a fair use violation, but I can also see how this could be considered a derivative work. I am not a lawyer, so I don’t claim to know the answer here, and I don’t think anyone will until we get some settled case law. All I’m saying is that I agree that this is a problem, but I’m not sure that the answer is known, for me at least. We’ll see.
These tools use information gathering tools (bots) to spam various websites to get their data, and DDOS those sites at the same time
This is also pretty good. I do agree that spam in general is a problem. I am not sure it is specific to LLMs. I also agree that avoiding things like robots.txt feels pretty unethical to me. I haven’t actually heard the latest on this particular front, I am vaguely aware that people say that traffic has gone up thanks to LLMs, but I need to investigate the details on this further to form an opinion more serious than “yeah sounds bad.” Gonna put it on my list!
Thanks for the thorough response.
So, prefixing all of this with the caveat that I am, of course, not the one who formulated that list of issues:
These tools add a lot of resource waste for output that is barely of any use for most people
I agree that this is the weakest one.
On the subject of usefulness, I find that LLMs are sometimes useful and sometimes entertaining, though I’ve been very tentative about using them in my work so far.
On resource waste, this of course is related to the environmental issue, which you discussed elsewhere in the thread. I think Andy Masley’s analysis is well reasoned.
These tools do not know right from wrong and generate believable output, which most non-techies do not realise is false
Furthermore, in my experience, a lot of people’s interactions with technology is already a magic box that sometimes works and sometimes doesn’t.
And this is something we should be trying to change, not raise to a new level. I find that the current heavy promotion of AI makes me want to turn around and run the other way on this point, in terms of writing software that’s reliable, deterministic, etc.
These tools rely on your feel of inadequacy to keep you hooked: “I don’t know how to do this, maybe the word salad machine does”
The “word salad machine” part, while typical of the invective in this discourse, is, I suppose, referring to the fact that these models are opaque statistical text generators. So to the extent that we depend on tools to help us (which is unavoidable, of course), perhaps those tools should be understandable, tying back to the previous point.
These tools use copyrighted material without the permission of the copyright owners
These tools ignore licenses
As you say, now we come to the strongest criticism.
I’m inclined to question copyright, too. But I think Drew DeVault’s position on this, as expressed in this Hacker News comment from 2022, makes sense, and applies to art as well as software. I’ll partially quote it:
I am a copyright abolitionist and a Copilot critic, and this worldview is not dissonant. An ideal world would not have copyright and would require all software to be free software, but we live in a world where copyright exists. FOSS projects throwing up their hands and saying “forget the licenses” is counter-productive because the commercial world will not throw up their hands and say “forget copyright, do whatever you want with our code”. But, at the same time, they will gladly consume FOSS work into their products. The war on copyright has not yet been won, so the weapons cannot be set aside.
On a more practical note, I worry that if I use LLM-based code generation in an open-source project, that code may be tainted in ways that would hurt its adoption. The mere fact of being LLM-generated would taint it in some people’s view. Beyond that, the code might actually contain identifiable pieces that have been plagiarized without attribution – something I wouldn’t do if I were copying such code from other open-source projects myself.
re: Drew’s comment
I am a copyright abolitionist and a Copilot critic, and this worldview is not dissonant. An ideal world would not have copyright and would require all software to be free software, but we live in a world where copyright exists. FOSS projects throwing up their hands and saying “forget the licenses” is counter-productive because the commercial world will not throw up their hands and say “forget copyright, do whatever you want with our code”. But, at the same time, they will gladly consume FOSS work into their products. The war on copyright has not yet been won, so the weapons cannot be set aside.
and @steveklabnik’s
For me personally… I’m from that “information wants to be free” vintage. I am much closer to an IP abolitionist than an IP respecter.
I come from a similar perspective, where I’d want essentially the fruits of knowledge work (yes, defining this gets super tricky :) ) to be completely open and free to the populace. However, I fall more towards Drew in this case, in that the world that we live in is the world that we live in, and in that context, we have not set aside the weapons of licenses and such and must respect them.
Moreover, I don’t think the current scraping paradigm gets us closer to IP abolition. What’s happening now is essentially giving license to certain wealthy, large companies to skip copyright altogether, and then resell that access to information to people after it’s been potentially chopped and screwed by whatever chat interface you get it from.
imo that’s not IP abolition, nor a step closer to a world of completely free information. That’s just more consolidation of power and knowledge in the hands of tech companies, a world where we’re basically saying “these big companies don’t need to play by the copyright rules others do.” And then they get to sell it back to us, of course.
I think Andy Masley’s analysis is well reasoned.
That article made me angry right at the start.
“No, using a chatbot is not a climate problem. Training the models for the chatbots is a climate problem, but if we ignore that, everything is ok.”
That’s disingenuous.
I don’t see anywhere the article says that the energy used to train chatbot models causes a climate problem or where it suggests ignoring an acknowledged climate problem. Rather, the article takes care to include a section “Training an AI model uses too much energy” that analyzes the cost of ChatGPT’s usage vs. the cost of its training. It concludes “For any popular model I’ve looked at, once the training cost was amortized it shrunk to a small portion of the overall energy cost of a prompt.”
Perhaps you were confused by the phrase in the intro “This post is not about the broader climate impacts of AI beyond chatbots”. As the footnote after that phrase explains, the emphasis in that phrase is not “climate” – it’s “chatbots”. Later in the intro, the author clarifies that their analysis applies to chatbots and AI image generators, but not to AI video generators, which use more energy.
Yes, but then there’s this:
Trying to figure out how much energy the average ChatGPT search uses is extremely difficult, because we’re dividing one very large uncertain number (total prompts) by another (total energy used). How then should we think about ChatGPT’s energy demands when we know almost nothing certain about it?
And this:
For any popular model I’ve looked at, once the training cost was amortized it shrunk to a small portion of the overall energy cost of a prompt.
So he’s guessing, but if he guesses a large enough number of prompts, the energy per prompt becomes negligible?
Why would companies stop talking about this? Here’s a tidbit from another article, “How much electricity does AI consume?”:
The challenge of making up-to-date estimates, says Sasha Luccioni, a researcher at French-American AI firm Hugging Face, is that companies have become more secretive as AI has become profitable. Go back just a few years and firms like OpenAI would publish details of their training regimes — what hardware and for how long. But the same information simply doesn’t exist for the latest models, like ChatGPT and GPT-4, says Luccioni.
(That’s from last year but AFAIK the situation hasn’t changed in the meantime.)
I live in Denmark, where politicians and industry interest groups are currently trying to get the EU to place one of the five proposed “AI Gigafactory” data centres. By the proposed specs, it is expected to consume as much power as a full third of the entire country’s population (and yes, they’re open that that will mean much more expensive power for us meatbags).
So, prefixing all of this with the caveat that I am, of course, not the one who formulated that list of issues:
For sure!
I think Andy Masley’s analysis is well reasoned.
Thank you! Added it to my list.
And this is something we should be trying to change, not raise to a new level.
I fundamentally disagree. Abstraction is the thing, all the way down. You cannot learn every thing about every thing you interact with. It’s not that tech is special here, it’s that it is our specialty, and so we happen to know it well. There are always things we have to access outside of our specialties that end up being, on some level, magic. This isn’t a good or bad thing, it’s just a thing.
I find that the current heavy promotion of AI makes me want to turn around and run the other way on this point,
I understand knee-jerk contrarianism, but it also feels like a trap. It feels good, but you also miss out on a lot.
perhaps those tools should be understandable
They are as understandable as an RNG is. Should we never use an RNG because we don’t know what number is going to come out next?
Drew
I very often disagree with Drew, but I don’t mind this position. Just not one I’d take personally. I don’t think licenses are a good way to achieve software freedom anymore. I also don’t inherently believe that software freedom is in opposition to corporate use of software, though.
I worry that if I use LLM-based code generation in an open-source project, that code may be tainted in ways that would hurt its adoption. The mere fact of being LLM-generated would taint it in some people’s view.
This is why a lot of people who use LLMs just straight-up don’t talk about it.
I disagree that we understand these LLM systems as much as an RNG. Their capabilities are surprising, even to their creators. We know the fundamentals of how they are built and what they do at a low level, of course, but we cannot explain why they can do what they do.
As much as I know that online discourse is mostly pointless these days, and you could argue it kinda has been since UUCP days, I just have to speak my piece here.
Full disclosure: I’ve been disappointed with online interactions with you before - when I tweeted at you many moons ago saying “hey, maybe the hammer and sickle avatar might be off-putting to some people given what the USSR did”, your reply was simply “that’s dumb”. But anyway, let me start off by saying that ethics is dead - the powerful need by many to eat or afford rent has killed it. People are somewhat aware that “they’re going to hell”, but this is the age of nihilism and everyone be out there looking out for #1, so yeah, I think your position is not unique.
To address specific topics:
Maybe I am not “most people.” How is usefulness determined? How do we know what it is for most people? Furthermore, how many resources are actually wasted? The details matter a lot, to me anyway.
This is a cop-out. If you place yourself as the “who am I to say” guy, you’ll always be cool with whatever happens because who were you to say that it shouldn’t? Who am I to say that Russia didn’t have territorial rights in Ukraine? Who am I to say that women dying of sepsis because of anti-abortion laws is wrong? Who am I to say if euthanasia is a right? You’re a member of a society, and by direct or representative action you are supposed to have a say in a bunch of things. Excusing yourself of them is a parlor trick of convenience.
What is the degree of fallibility that’s acceptable? Is the real issue that (supposedly) uninformed people are being presented something in a certain way? These are guesses as to what the poster is taking issue with. I’d prefer to know for sure.
Certainty is a luxury we can rarely afford in most tough calls for society. Again, sleeping better at night by placing the burden of 100% strict definition unto others.
And “word salad machine” is again, more invective, which is distracting from trying to engage seriously and in good faith.
“You see, I can only engage with this if people are nice and proper and I can’t in good conscience engage with people that might be frustrated about debating this for the 100th time with people like me that keep telling them they don’t see anything definitively wrong with it”
For me personally… I’m from that “information wants to be free” vintage. I am much closer to an IP abolitionist than an IP respecter
you’ll find that Meta used works of mine that are copyrighted without my permission. However… I am fine with this, personally. I am here to build a commons, not to try and hold my works hostage for money.
“Information wants to be free” was always meant to be a two-way street. If you can’t see the obvious problem here - Meta is taking YOUR IP for free and MAKING MONEY OFF IT and you’re OK with it - then that makes you either a chump or clueless. And they’re doing it to millions of others. I mean, the fact you personally are OK with it just makes me think you’d be at ease doing it to others too.
Even if these tools were free, the enormous environmental impact still makes them shitty. Are we really to believe that tech companies looking to hire NUCLEAR REACTORS to power data centers is just business as usual? That POSTing text to a 500W GPU across the planet to get a response for every other line of code (potentially) is not going to be a big deal? Do you have no inclination to speculate a bit on it? Does it have to be laid out on a peer reviewed journal until you can allow yourself to do so? That’s just odd.
And as far as “I don’t understand how you can have an opinion on the ethics of something you do not understand” - I don’t need to understand what potential uses a tool has if I know beforehand that, say, a 1000 kids were killed to create it. It doesn’t matter what LLMs can do, there’s no question that taking copyrighted material to build a tool to make you money with zero compensation for the works that enabled said tool to exist is wrong. They might be useful, but so was Dr Mengele’s research on how much water a human body contains - just look past how he got that info.
I am not a lawyer, so I don’t claim to know the answer here, and I don’t think anyone will until we get some settled case law
Yes, because we all know how the justice system works 100% flawlessly and any decision will not be affected by selecting a particular court in Delaware or whatever. I can’t even.
It’s very hard for me not to think of your position on any of this as either willful cluelessness or just trolling. You’re not alone, though - I had a presentation at work about an AI tool and the guy presenting it said that it raised interesting ethical questions, before promptly moving on to the next slide about its monetization strategy and not engaging with any of them.
People are upset about this because we have to use these tools at work to remain on the labor market, knowing full well their existence is predicated on illegality. It’s like everyone showed up to work one day on some new drug created from the tears of children or whatever and you have to take it too or be left behind.
This is a cop-out.
I think you’ve misunderstood what I’m saying. I am not saying “there is no way to determine what useful means,” I am saying “without understanding what you think useful means, I do not know how to engage with this claim.” I am not a nihilist.
Certainty is a luxury we can rarely afford in most tough calls for society.
Same thing here. I am asking for more information, not taking a position.
“You see, I can only engage with this if people are nice and proper
No, I am saying it is exhausting to deal with people flinging mud. I am not saying that they shouldn’t fling mud, or that flinging mud is morally wrong or unacceptable. I am saying in this context having a discussion works better when you don’t include insults alongside facts.
Meta is taking YOUR IP for free and MAKING MONEY OFF IT and you’re OK with it. That makes you either a chump or clueless.
No, it means that I think that it’s okay to benefit others even if I theoretically lose out on some of the benefit myself. Super strong IP protection benefits large corporations significantly more than individuals.
the enormous environmental impact still makes them shitty
See elsewhere for my uncertainty on this point. You might be right! I legitimately don’t know.
Yes, because we all know how the justice system works 100% flawlessly and any decision will not be affected by selecting a particular court in Delaware or whatever. I can’t even.
I did not claim that. But I think that saying “it’s not clear this is illegal” is a fine response to “this is illegal.”
Anyway, have a good day. As you said, you are not interested in dialogue, and so I’m not going to try to make you.
Your justifications fall into the same kind of discourse that @steveklabnik bemoaned.
Of course you’d think that someone more open-minded would ignore them: the only people I can see agreeing they are sufficient evidence are people who would never touch AI and psychopaths who want to make a quick buck.
Your first bullet, for example:
These tools add a lot of resource waste for output that is barely of any use for most people
has already filtered your audience to people who believe these tools are useless and wasteful, claims which I have seen contested many times.
Here’s what I think has happened. You have read things and perhaps tried it firsthand and come to the belief that so-called AI is unethical. Belief is a fuzzy thing and you can’t be expected to have perfect recall on all of which convinced you. Your list of justifications are things you believe to be true.
I don’t like that. If you’re so sure about something that you’ll list it as fact, you better be able to back it up. Claims are not evidence or sufficient justification to anyone who doesn’t believe them already.
There’s a fine line between saying “I have been paying attention to AI and I would never use it”, which is an opinion you are justified in holding, versus saying “There is no dialogue worth having: the facts are there, AI is unethical. Here is why”, which is absolutely grandstanding. I mean you’re allowed to grandstand too.
The “these tools” statements might refer to virtually any new technology until the disruption settles down. Examples: internet + piracy, internet + spam, internet + youtube + no attention left/feeling hooked, archival sites trying to do the right thing but legally are not (completely) protected speech.
How is this different than any other flavor of FUD?
Ethics are worth discussing before, during, and after. Dismissing said discussion as premature is a disservice if dismissing it means deliberately overlooking the externalities brought forth by said tool.
internet + piracy, internet + spam, internet + youtube + no attention left/feeling hooked, archival sites trying to do the right thing but legally are not (completely) protected speech.
You’re right and it shows how bad we have allowed things to get, how awful the tech space is. That’s why I try to avoid a lot of internet (even youtube) these days, and donate money to archive.org. Again, like I said elsewhere, choose your battles.
This is one more thing that I can now actively try to discourage, to keep things from getting worse. I know it won’t actually do shit, but I can’t just stand still either.
Anyway, I will now hide this thread and ignore it, so no need to waste your time to talk to me.
I think you’re woefully missing the point here.
First of all you’re starting out with an ethical stance: dialogue is always good. This axiom has been exploited by bad-faith actors time and time again throughout history.
When you have one side saying that asking for permission to copy artists’ work would kill the AI industry, you’re already dealing with one party that is acting in bad faith. Are artists acting in bad faith towards the AI industry when they make art? Of course not: artists have been making art for far longer than the AI industry has existed. Are AI companies acting in bad faith when they copy artists’ work in bulk without their permission? Of course they are, it’s why they are using their largess to legitimize their actions. The AI industry needs to copy the work of artists without remuneration; artists do not need the AI industry to produce art.
It’s not dialogue, it’s grandstanding.
Have you considered that elevating dialogue above legitimate criticism of technology (and not for technical reasons but for sociological reasons) is itself a form of grandstanding? From your post:
I’m not linking to the post because I don’t want to continue to pile on to this person, because it’s not really about them. It’s about the state of the discourse around AI.
You could have omitted that sentence entirely, but you chose to put that in there. How is that not grandstanding?
What is breaking my brain a little bit is that all of the discussion online around AI is so incredibly polarized.
Indeed, sometimes things are polarized, and sometimes one side is wrong! If one side of an argument is, without provocation, causing harm to another side, for material gain, guess what: that’s polarized, and one side is at fault! This religious dedication to maintaining decorum with people causing harm exists only to serve the more-powerful party.
I am not making an “ethics aside” argument.
from your post:
For me, capabilities precede the ethical dimension, because the capabilities inform what is and isn’t ethical.
so you absolutely ARE saying “the ends justify the means”. You absolutely ARE saying that it’s ok to commit harm so long as the juice was worth the squeeze. The idea that you are neutral here is frankly ridiculous. You are not neutral, you are squarely on the side of the AI industry here.
First of all you’re starting out with an ethical stance: dialogue is always good.
No, I am not. I am saying “I want to engage in some dialogue to figure out my own feelings and opinions.” That does not mean that I think that it is always good.
so you absolutely ARE saying “the ends justify the means”. You absolutely ARE saying that it’s ok to commit harm so long as the juice was worth the squeeze.
No, I am not. At all. I am saying that you have to know what something is before you can figure out the ethics of it. Phrased the other way, I don’t understand how you can have an opinion on the ethics of something you do not understand.
I am saying that you have to know what something is before you can figure out the ethics of it.
You seem to have drawn a line around what the AI industry is that is limited to “how an LLM is implemented and what I get out of it when I use it”. This does not include (it perhaps willfully excludes) the economics of how it is trained, how the controlling parties decide which content to include (and exclude) in training, how the controlling parties obtain that content, the quality of the input content, who does and does not have the capital to train these models, the ability for centrally controlled LLMs to be behind hidden prompts that can be altered for personal and political gain (e.g. grok’s recent “kill the boer” thing), the effects of LLM output on the internet information ecosystem, etc. Are these not aspects of what the AI industry is?
I am saying that you have to know what something is before you can figure out the ethics of it.
That probably seems logical to you, but I don’t think that’s how people usually do it. We typically evolve our moral positions along with our epistemologies simultaneously, as we encounter the world.
Do you really think that scraps doesn’t understand what an LLM is, and is thus incompetent to have moral opinions about the issues? Or is this really about that rando in your socials who claimed that “ChatGPT is not a search engine”? Because those are two very different kinds of speech acts.
I don’t think that’s how people usually do it. We typically evolve our moral positions along with our epistemologies simultaneously, as we encounter the world.
I also don’t believe that beliefs should be fixed! I described updating my own beliefs in another thread.
Do you really think that scraps doesn’t understand what an LLM is, and is thus incompetent to have moral opinions about the issues?
I don’t know scraps. All I know is that they attributed a bunch of positions to me that I do not hold, something I tried pretty hard to get across in the post. This comment chain started with me saying that I need to figure out the capabilities of something, and the post says “for me,” not making any statements about other people.
Or is this really about
This is really about me trying to be like “I want to talk about this” and then people going “how dare you ignore the ethics component” and then me hastily trying to say “I am not trying to ignore the ethics component” and then people saying “why do you hate the ethics component” in response to what I wrote.
With a side of “I am trying to figure out why some people who are smart say that this thing is 100% useless while other people who are smart say that it is very useful while my own experiences are also somehow wildly divergent, but mostly on the side of ‘useful.’”
I definitely didn’t intend to accuse you of requiring fixed beliefs. My remark was about your preferred logical order, roughly “know what it is before you judge it”, which I don’t think is realistic. I predict that as long as you require the subject to be fully defined before any moral judgements related to it are appropriate, you will continue to experience much talking-past and “more heat than light”.
It’s interesting to me that people here are generally making ethical or moral claims, but you seem persistently disinterested in engaging with the substance of those. You’re much more interested in the effectiveness or “usefulness” of the technology, what its capabilities are. Perhaps you want to be completely sure that you know sufficiently what this evolving thing is, in some fixed and static way, before you’re comfortable considering the repercussions. Others do not require of themselves this same sequential ordering, but that doesn’t by itself make their opinions ill-informed!
My remark was about your preferred logical order, roughly “know what it is before you judge it”, which I don’t think is realistic. I predict that as long as you require the subject to be fully defined before any moral judgements related to it are appropriate, you will continue to experience much talking-past and “more heat than light”.
Is this really that unrealistic? If so, I feel very out-of-touch. How can I form a moral judgement about something if I don’t understand it? How can I form any sort of judgement in the case? Any opinion I would have would be completely uninformed and based on essentially nothing at all.
How would you even begin to form a moral judgement without understanding something? Where can that judgement even come from?
How can I form a moral judgement about something if I don’t understand it?
it depends a lot on what you mean by “understand it” and where you draw the boundaries of what “it is”. For example, mustard gas was used in WW1 but is now rarely used (never used because it’s banned? not sure here). Does one need an advanced chemistry understanding to form a moral judgement about mustard gas? Is what mustard gas does to the human body not a facet of what “it is”?
What a system does is not only a part of what it is, you could argue it’s the entirety of what it is. Programmers should know this, we use it all the time! If you have some object in your codebase named mysql, it’s probably a thing that can parse MySQL queries and store data according to MySQL’s semantics. But it could just as well be MariaDB and you wouldn’t care, because you don’t care about the system’s identity or internals, you care about what it does. It actually is what it does. We use this property in programming all the time, we just call it Separation of Concerns.
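A tiny sketch of that separation-of-concerns point, with invented names just for illustration - the caller is written against behavior, so swapping the implementation is invisible to it:

```rust
// Invented names, just to illustrate "it is what it does".

/// The behavior we actually care about.
trait Datastore {
    fn execute(&self, query: &str) -> Vec<String>;
}

struct MySql;
struct MariaDb;

impl Datastore for MySql {
    fn execute(&self, query: &str) -> Vec<String> {
        vec![format!("mysql ran: {query}")]
    }
}

impl Datastore for MariaDb {
    fn execute(&self, query: &str) -> Vec<String> {
        vec![format!("mariadb ran: {query}")]
    }
}

/// Caller code depends on the behavior, not the identity or internals.
fn list_users(db: &dyn Datastore) -> Vec<String> {
    db.execute("SELECT name FROM users")
}

fn main() {
    println!("{:?}", list_users(&MySql));
    println!("{:?}", list_users(&MariaDb));
}
```

Swap MariaDb in for MySql and list_users neither knows nor cares.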
I find both of those analogies very difficult to compare to the current topic.
Firstly, with mustard gas, this is something taught in schools precisely because it is an excellent example of why chemical warfare is banned. We learned Dulce et Decorum Est, we looked at the John Singer Sargent painting: it is precisely because of these examples of communication that we, who grew up generations after the First World War, are able to understand the horrors of mustard gas. If our lessons had just told us that mustard gas was used and later banned, how could we have formed any sort of judgement without understanding what effects the gas had? It might just have been an aerosolised version of the classic relish!
Secondly, with MySQL and MariaDB, I agree with you that separation of concerns is a useful tool, but I’m not really sure how it’s related to either AI, or to morality based on knowledge, understanding, and experience.
you can form a moral judgement based on the effects of a thing’s existence, even if you don’t understand its internal nature.
True, but even that is often lacking in the discourse about LLMs; there are a lot of people who are making broad moral claims about them without engaging in the reality of what they are capable of at present.
not refuting that, I’m answering this:
How can I form a moral judgement about something if I don’t understand it?
Sure, I agree with that. “Understand” in the original context was about understanding the effects and capabilities of AI, not necessarily understanding how it works internally. Although insofar as understanding the internal mechanics of LLMs can help us understand their effects, that can help us to form moral judgements as well.
You’re just being absolutist. I’m saying that we typically make (and update) our moral judgements along with our understanding. Even a simplistic understanding can ground a strong judgement. Reserving or suspending judgement is a skill, and a difficult thing to truly do well. It’s not what most of us are doing most of the time.
I’m not being absolutist here, I have a variety of levels of understanding and a variety of levels of ethical opinion to match those. That seems like the healthy approach to me. For example, I have (I think) a good understanding of how dangerous it is for a small set of billionaires to have control over most of the AI research that is going on, and therefore I have a strong moral opinion that that is bad. But I have a very limited understanding of how significant the climate impact of AI is, particularly in the medium to long term, so I try and avoid expressing ethical opinions on that subject.
But what’s important to me is that my morality and ethical decision-making are grounded in an understanding of the issues. So if there’s a subject that I don’t understand, how can I form or express a strong moral judgement about it?
Reserving or suspending judgement is a skill, and a difficult thing to truly do well. It’s not what most of us are doing most of the time.
This is again a very surprising sentiment to me. I largely agree with the first part - it’s very easy to leap to conclusions, or assume more knowledge than I actually have, or whatever else. But surely in that case, my goal should be to practice and get better at reserving judgement, to learn this as a skill and try to understand my own opinion-forming processes better?
So if there’s a subject that I don’t understand, how can I form or express a strong moral judgement about it?
You have, apparently, great confidence in your own assessments of the strengths of your own understanding. Do you have the same degree of confidence in your assessments of the strengths of other people’s understandings? We’re talking about “discourse” here, right?
my goal should be to practice and get better at reserving judgement
By all means, practice caution and all such forms of self-improvement… on yourself. Just don’t expect to be able to thereby explain the attitudes and behaviors of others!
Well this comes back to what Steve was saying in the original post, right? A lot of the “discourse” that he is seeing doesn’t seem to get basic facts right about AI. And if that’s the case, how can those people express useful opinions on AI? And certainly that matches my own experiences of reading about AI: many of the strongest and most aggressive opinions are the ones least backed up by evidence, and are often actively contradictory to my own experiences. This makes me struggle to trust those opinions.
For example, I’ve seen a lot of criticism of AI that suggests it isn’t even useful, because it produces such bad code and hallucinates so much. But I remember using Copilot on Koka code a while back when it first came out, and it was pretty impressive in terms of being able to make meaningful suggestions in a language with basically no sample code in the wild, and I’ve used ChatGPT to make queries and got mostly useful information that has been very easy to verify. On the other hand, I’ve seen a lot of praise of AI that suggests that it will be the main way we interact with code in just a handful of years, and yet I personally have ended up disabling Copilot in my editor because it’s not useful enough for me, and I’ve seen projects written with these Cursor-like tools and they’ve been completely useless.
So for any theory of AI to pass the sniff test for me, it needs to be able to explain all these experiences. And like I said, most of the extreme discussion around this just doesn’t do that, and presents a view of AI that doesn’t match my own experiences. How can I trust those opinions?
There are a variety of discourses to be had. If you’re only interested in “how well does it work” and relatedly “how can this help me as a programmer” then you’ll be frustrated by opinions that are oriented towards ethical considerations, regardless of how well-informed they are. But hopefully, as a human being and not merely a programmer, you’ll also be able to see that the tool-quality discourse is a very narrow one, considering the ethical, economic, and political issues involved in the technology.
I think silby pretty well nailed it in this comment.
I’ve mentioned a number of other ethical considerations in other comments in this thread, so I don’t think this is all about “how well does this work”. But many of the moral considerations I see talk about it either working amazingly well, or not working at all as cornerstones of the moral argument, and that makes no sense to me, which is why I brought it up.
EDIT: To be specific, I think “all modern AI implementations are based on theft” or “AI is too high a contributor to the climate crisis” are useful moral positions. I think reasonable people can still disagree about the specifics or implications of those positions, but they make sense to me. But positions along the lines of “AI doesn’t work” or “AI tools produce bad code”, which is what I see way more often, just don’t make sense to me.
Yeah. Like Steve said in their writeup, YMMV quite often. I’ve seen it develop a perfect docker setup for Kafka, which is something I hate doing properly. I’ve seen it develop SQL that taught me to do things I didn’t know were possible with SQL, and I’ve seen it implement code that generates this SQL with an AST tool written by me.
Then again, it fails really badly with certain tasks. Today I was spending my remaining Zed credits trying to make a high level Kafka consumer with native Rust based on a few lower level libraries. It started inventing really weird things, going in circles and just sucking the last credits until I pressed stop.
It depends on which version of an AI response you get for you to form your feelings about its quality.
I predict that as long as you require the subject to be fully defined
Hm, maybe there’s a disconnect here too. I don’t think it has to be 100%. I do think it should be more than zero.
It’s interesting to me that people here are generally making ethical or moral claims, but you seem persistently disinterested in engaging with the substance of those.
That’s because this post is about the meta level: it is about the ways we have these discussions. That alone is a hard enough conversation before we dive more into the details of the actual concrete specifics. I do want to talk about that stuff too, it just feels like something later. Same as with getting into the specifics of the actual capabilities.
However, I did just write a VERY lengthy reply trying to actually engage with one of those comments, since someone asked me to.
Perhaps you want to be completely sure that you know sufficiently what this evolving thing is, in some fixed and static way, before you’re comfortable considering the repercussions.
Sort of. It’s more that I don’t think you can come to productive answers without agreeing on some basic things, a shared reality. And I am struggling to find any sort of shared reality in the broader discourse on the topic. So my inclination is to try to figure out what that is, and why there’s a disconnect. Otherwise, it’s very likely that people just talk past each other.
Others do not require of themselves this same sequential ordering, but that doesn’t by itself make their opinions ill-informed!
Right, this is why I said “But I also know reasonable people disagree with me on that” in the post. I fully agree.
it just feels like something later
Indeed. That’s what I’m trying to interrogate. I don’t think everyone shares your feeling, or that it makes a good filter for who’s invited to the “good” discourse.
I did just write a VERY lengthy reply trying to actually engage with one of those comments
This one? I read it! And I think it’s rather more valuable than your original post. I would encourage more of that sort of thing: measured, good-faith engagement with the substance of the opposing claims. I think it will get you (and all of us!) further than the staged meta-level-first approach. Any shared reality is to be found there. You can see this entire thread is already full of polarized meta-level speculation, without much traction.
I don’t understand why leaded gasoline was useful. Should I therefore not have an ethical point of view about whether it should be banned?
Understanding exactly in which ways something is useful, and to whom, and how that affects the flow of money, will certainly empower one to better attempt to get it banned successfully. And yes it will also influence the ethical point of view. For instance, it’s important to understand why fluoride in tap water is useful before deciding whether to ban it.
Is it really such a strange concept that basing one’s ethics on reality is better than the alternative?
Well, “whether it should be banned” and “whether I should ban it” are quite different things! (e: possibly unless you are a Kantian?) Also, as an ethical agent I can have commitments that are based on incomplete understanding that are nevertheless still based in reality. In the instant examples of leaded gasoline and fluoridated water, my understanding is fairly dim in most dimensions, but I don’t suspect that my own points of view on which additives to which liquids are good and bad are deluded.
My impression is that leaded gasoline is a very clear-cut example: the benefits are largely irrelevant for modern cars, and the dangers are very clear and real and well-understood. There’s not much moral debate to be had at this point.
I wonder if a better example here is nuclear power. I understand very loosely that some nuclear waste will never be fully safe for generations to come. That seems bad to me. I also, however, understand that nuclear power plants can produce lots of power with very little danger to the environment. That seems good to me. I am vaguely aware that there are types of reactor that can recycle old waste, and that these reactors could therefore reduce the problem of waste significantly, but I’m also vaguely aware that nuclear reactors are expensive and slow to build, and maybe other sources of renewable energy would be more cost-effective.
That’s basically the sum total of my knowledge on nuclear power, and it is absolutely not enough to form a moral judgement. I have a very loose suspicion that if we’d gone all-in on nuclear power twenty or thirty years ago, that would have been a good thing, and an even looser suspicion that right now there are better options anyway, but I’m not certain about that. I can’t tell you what the right decision is for nuclear power because I’m simply not informed enough.
What should I do instead? How else could I form an ethical point of view if it’s very clear to me that I have only a minimal understanding of the subject at hand?
I think the point of a virtue-ethicist view of ethics, which I loosely ascribe to based on serious but limited reading, is that you likely do have an individual ethical point of view as to whether nuclear power is good or bad, based on your minimal understanding, and that (here’s the important part) you don’t have to defend that point of view as a categorical imperative or as a utility-maximizing function. Most of your value judgments in ordinary life aren’t subject to the scrutiny of a Kantian or Utilitarian thought experiment, and as a human you do have and are entitled to those value judgments. If in turn what you value is withholding judgment over things you don’t find yourself equipped to judge, and if someone asked you what you think about nuclear power and you say “I don’t know” that’s also an ethical judgment complete in itself.
And on the other hand, other agents will render their own judgments based on their own values and the information or understanding they have. The point is that people observably do this all the time. I don’t mean to glibly resort to moral relativism around LLMs or anything, just that agents don’t just have different ethical judgments, they also have different grounds for those judgments and different experiences of forming those judgments.
Virtue ethics sort of feels like the “secret third thing” to go alongside Kantian and consequentialist theories, though I’m sure there’s actually many more secret things in the literature.
First of all, you should do whatever you want to do.
I don’t understand the connection between the two sentences. I feel like you’re trying to split “what the thing is” into “its benefits” vs “its harms.” I’m not claiming that a good ethical discussion is based only on benefits. I’d argue it’s much more mainstream to focus on harms, though I prefer to try and understand both. Chemotherapy is also very harmful to people, but has tremendous benefits, so we still administer it in some circumstances.
I’m saying “should I” in the rhetorical sense of an ethical agent in a thought experiment. You may take it for granted that I will do whatever I want to do.
The point of my introduction of leaded gasoline is to take a thing an agent might reasonably, credibly believe to be bad with only a shallow understanding of what the benefits and harms of leaded gasoline are, and to whom. I suspect that if I posted a hot take like “Leaded gasoline isn’t even that useful and considering all the lead poisoning I don’t think we should have leaded gasoline around”, you wouldn’t find it a provocation, even if it’s clear I don’t know what the pros of leaded gasoline are or even why lead poisoning is bad. I contend that the anonymous poster you are reacting to made a similar kind of post, with a similar foundation, and that even if you went and told them “actually, ChatGPT can search the web”, it wouldn’t be unreasonable for them to say “well, whatever.”
You seem to be making a claim, or at least expressing a feeling, about what kind of knowledge is a foundation for a competent ethical judgments. I’m trying to understand why you say “I don’t understand how you can have an opinion on the ethics of something you do not understand.” What degree of “understanding” are you supposing your interlocutors need to have an ethical commitment having to do with the training, use, or marketing of LLMs? My mental model of an LLM is “expensive Markov chain text generator, not really worth engaging with”. Is this gap in understanding so severe that you think my related ethical view of LLMs is perhaps not just dissimilar to your own but incomprehensible?
I’m saying “should I” in the rhetorical sense of an ethical agent in a thought experiment. You may take it for granted that I will do whatever I want to do.
Cool, I just want to make it clear that I’m not trying to make moral judgements on others here.
What degree of “understanding” are you supposing your interlocutors need to have an ethical commitment having to do with the training, use, or marketing of LLMs?
I don’t think it’s 100%, I myself do not have a 100% understanding of this stuff yet, and I’m also not sure anyone can ever fully understand anything.
I said this in another reply just now, but it’s more about some level of shared understanding or agreement on what the thing is or does.
My mental model of an LLM is “expensive Markov chain text generator, not really worth engaging with”. Is this gap in understanding so severe that you think my related ethical view of LLMs is perhaps not just dissimilar to your own but incomprehensible?
I think it really depends on the details. I do think this is a great example though, because it’s lossy. That is, I both love and hate describing LLMs this way. I think for a lot of things, this is a shared enough understanding to be productive, but maybe on some others, a bit more separate. For example, I used to believe that LLMs cannot, definitionally, produce knowledge. I am a little more shaky in this belief these days, and need to spend more time with it to know what I feel about it. I might return to that belief or I might abandon it. We’ll see. But I think that it requires a bit more understanding than “markov chain text generator” to meaningfully engage with. That’s not an ethical question (directly anyway…), but more of on the broader topic of “having real, nuanced discussions about what this stuff is and what it means.”
But I think that it requires a bit more understanding than “markov chain text generator” to meaningfully engage with.
I think this is defensible as far as your attempt to cultivate a particular kind of discourse goes. You want to talk about the technology with other technologists who find it interesting to talk about. This would necessarily exclude me, because I don’t find LLMs interesting enough to refine my above-referenced mental model of what they are. (Which is fine.) (True fact about real me rather than a stylized fact about a rhetorical me: I haven’t used ChatGPT even once, not so much b/c of any ethical rule but because I’m not interested in doing so.) But what I think you’re reacting to is a discursive universe which (because of the combined effects of US politics, capital flows, and posting) is not even remotely trying to have that kind of conversation. It seems like you waded into it hoping to establish a calm discursive center amongst the rhetoric that reaches your feed. But you don’t want to be in that center, neither pros nor antis are actually arguing with you. And when you posit an equivalence between two sides where one side is (let’s assume for the sake of argument) aligned with labor, and the other side with capital, gestures like yours will get people (in these comments!) to decide you’ve aligned yourself with capital on this one. I don’t think you really needed to introduce the both-sides of it at all.
To argue by (rather weak) analogy again, if you were following both a bunch of pharmaceutical researchers and a bunch of anti-animal research activists for some reason and you announced you wanted to have a nuanced conversation about innovations in Rhesus macaque models for human disease, it would look disingenuous to complain that the animal rights activists you follow don’t know enough about dose-response experimental design when they complain that monkey trials are unethical. Or something.
where one side is (let’s assume for the sake of argument) aligned with labor, and the other side with capital
I think that’s literally how some anti-AI folks understand the AI argument, with those two sides corresponding to anti-AI and pro-AI respectively. That makes sense to me; after all, AI boosters are trying to exploit and then replace labor. So, in that framing, is it not better to stand in solidarity with labor, even if it costs us something, even if we miss out on some benefits that these tools could provide us?
Standing with labor in this case means, specifically, standing for toil and standing against the ability to find solutions that are more globally optimal. This is, incidentally, part of why manufacturing and shipping in the US are in the state they’re in.
Labor isn’t inherently good, it’s just a different set of actors looking for a handout.
It seems like you waded into it hoping to establish a calm discursive center amongst the rhetoric that reaches your feed.
Kinda! It’s also a bit of… flailing around? The opposite of calm, in some ways. Working through some stuff. Writing helps clarify thinking. I also like having a record of what I’m thinking about. Just kinda really felt like I needed to yell this somewhere, get it out of my system.
Cultivating the discourse is a thing too, but I’m not 100% sure that this is the right way to do it, just kinda felt like I had to get it done.
And thanks for the analogy, yeah, I think you might be right. (And thanks for all of your other thoughtful replies today.)
Chemotherapy is also very harmful to people, but has tremendous benefits, so we still administer it in some circumstances.
something of a false equivalence. Let’s use AI art as an example:
so it’s materially different. Using AI tech to find cures for cancer? Yeah that’s good. That’s generally not what people are complaining about, though.
I am not trying to say that chemotherapy is an equivalent for AI. I was just trying to find something that has both harms and benefits, where the drawbacks are worth it. That doesn’t mean everything with drawbacks is worth it. And even talking about this as an equivalence, as you’ve done, requires understanding both the good and the bad.
I think we have better analogies for AI, and we even have a very pertinent term to talk about this specific kind of drawback - one that affects not those benefiting from the product but various uninvolved parties or even society at large: Externality.
So here’s my version: The big tech AI companies are running a chemical plant upstream on the river and dumping their toxic waste in there. They face no consequences for this due to their immense wealth and political influence. Some of us are buying and enjoying the products they make there while others choose to boycott them. But we all suffer from the toxic river water.
Can you see why the debate would be polarized here?
Externalities are extremely common, though, and are not something unique to AI. I guess what I’m saying is, in those posts you assert externalities are bad, and I’d agree; it’s basically definitional.
But we weren’t even talking about AI in this subthread, we were talking about understanding and moral judgement. It’s not even clear to me which thing you’re saying about AI is an externality.
this feels like you’re using a whataboutism to dodge the argument being made instead of engaging with it.
I didn’t make an argument that chemo is like AI, so I’m not really interested in defending it. That’s not whataboutism.
the point is no more about chemo than it is about leaded gasoline. It was never about either of those things specifically; it was about the fact that harms done to the user and harms done to a bystander are different.
Externalities are extremely common, though, and are not something unique to AI.
Nobody is arguing that externalities are unique to AI, they’re arguing that the harms outweigh the benefits.
Okay. I believe externalities exist, for sure. I also don’t personally know the degree to which they exist. That’s why I’m trying to learn about this stuff.
Not to mention that using LLMs to cure cancer is, despite constant repetition, a frankly absurd extrapolation of the capabilities of the technology, about as realistic as using Lego to land on the moon.
I hate to do this, but probably the reason that I came up with that example is because Friday is the anniversary of my father’s death, and I’ve been thinking about it a lot lately.
I certainly don’t mean to suggest that we should use LLMs to cure cancer, or that I think that will happen any time soon, if ever.
people presented with the choice to donate their imaging or diagnostics to help train a model would probably consent, and their data being in the dataset would be to their direct benefit if it improves the quality of treatments available to them. Saying “use an LLM to cure cancer” feels … not unreasonable to me, inasmuch as you also use pen and paper, calculators, etc; it is one tool among many. I don’t think (most) people are earnestly making the argument that an LLM can autonomously find a cure for cancer from first principles.
First of all, you’re starting out with an ethical stance: dialogue is always good. This axiom has been exploited by bad-faith actors time and time again throughout history.
So what? This isn’t “nazis deserve free speech too” and even if it were, saying it has been “exploited” is nonsense - I look around at the world that refuses to engage with nazis and I see a lot of nazis. Pretty sure the “deplatform everyone I don’t like” approach has failed miserably on all fronts at this point.
Anyway, saying “I think that the bar for dialogue on AI is too low” is not the same as “dialogue is always good”.
When you have one side saying that asking for permission to copy artists’ work would kill the AI industry, you’re already dealing with one party that is acting in bad faith.
This is creating a false dichotomy. It’s like saying that we can’t teach people critical race theory because “one side of the people who talk about race are nazis”. There are many sides. Discussing AI doesn’t involve taking the position that aligns with one specific entity.
so you absolutely ARE saying “the ends justify the means”.
So what? I mean, first of all, no they aren’t; they’re saying that knowledge precedes moral judgment. Second of all, so what if they were? Other than it being a meme, it’s not like consequentialism is so vastly out of whack with societal ethical judgments. As opposed to what, natural law theory? What’s the ethical framework you’re holding up as being so obviously better than one that involves understanding outcomes?
You are not neutral, you are squarely on the side of the AI industry here.
I think I’m going to delete my account, honestly. This level of discussion is genuinely disgusting to me. This high-roading, moralizing, rhetorical garbage is a far bigger problem in the world than the people looking to discuss topics that you don’t like.
delete my account
A pity.
This high-roading, moralizing, rhetorical garbage is a far bigger problem in the world than the people looking to discuss topics that you don’t like.
I agree.
Jeez fricking cool down, dude.
I can train my small model to do something useful and you are not entitled to complain about it and if you do, you are the asshole.
So your beef is with capitalism and capitalism only. Stop yelling at fellow techies who have zero stake in “AI” “success”.
As for copyright, I would prefer it abolished. People routinely copy art. I mean we frickin encourage kids to draw Elsa and Olaf and hang it on the fridge.
If they enjoy it sufficiently, they then proceed to copy whatever they can get their hands on to eventually “develop their ‘unique’ style reminiscent of …”, so clearly the laws are currently completely out of touch with reality.
Which reminds me, our community actually demands mimicking the style of other project contributors for consistency. In the course of making a copyrighted work. Which some claim is a work of art on the grounds that creativity is required.
Also, ends do justify the means as long as your “ends” include a policy on “proper means”. For example, making some AGI do all the busywork and giving humans UBI to steer its productive capacity would be OK policy by me. I.e. artists losing jobs would be a win as long as they continue being fed and housed and can freely socialize. Bad policy would be the AGI doing all the busywork for Musk while the rest starve to death. So again, your beef is with capitalism.
Stop blaming universal approximators.
I can train my small model to do something useful and you are not entitled to complain about it and if you do, you are the asshole.
that’s a straw man, nobody is complaining about that and you know it.
artists losing jobs would be a win as long as they continue being fed and housed and can freely socialize
If people had all their needs met, artists would still make art.
If people had all their needs met, artists would still make art.
Would any new artists even arise, though? Who’s going to go through all that struggle and effort when the Make Art Button is right there?
Who’s going to go through all that struggle and effort when the Make Art Button is right there?
You are confusing “consuming content” with “making art”. Lots of people actually enjoy making art.
That’s not what I mean. I enjoy drawing and sketching myself. But the reason I started drawing when I was a kid was that that was the only way to get things out of my head and turn them into pictures. I neglected it for many years, but eventually started again as an adult - and realized that a lot of old practice I thought I had forgotten kinda just came flowing back out of the hindbrain the moment I took a pencil to paper again. So did the feeling of flow and focus.
But I can’t say for sure if I would have ever reached for a pencil in the first place if AI image generators existed when I was a small child.
(and now, as an adult, AI has still completely demotivated me from many of the other kinds of creative expression I used to enjoy and find meaning in.)
when someone says “lots of people like $thing”, and you don’t like $thing, it’s not an attack on you; you don’t need to assert that because you don’t like $thing, nobody likes $thing.
We might be misunderstanding each other.
I’m not saying that nobody likes to make art - this would be a ridiculous thing to say, especially because I like to make art. I’m saying that I’m concerned that a world with ubiquitous AI image generators is a world where it’s harder to discover that you enjoy making art in the first place. I’m not sure I’d have realized that I like drawing if I’d never picked up a pencil, after all.
(FWIW: I’ve used an AI image generator precisely once in my life, and never intend to do so again. It felt, in Hayao Miyazaki’s words, like an insult to life itself.)
The struggle and effort is the point.
I agree. I’m not advertising the Make Art Button; I’m saying I’m concerned it has a chilling effect on discovering the joy in the struggle and effort in the first place.
I see. Maybe I just have some sort of vague faith in the creative spirit of people. That also may be misguided.
I think it’s worth being specific about the ethical concerns:
The purpose of AI is deregulation. It’s the only way investors can make a return. The rest is marketing for a multi-billion dollar industry.
That surely can’t be true. People already run open-weight LLMs locally for real-world use cases, so the value they obtain is already greater than the costs. The value-to-cost margin can only improve from here.
Ever wear clothes that you bought from a store? Or use a computer with any lithium in it?
I respect the decisions of people who refuse to engage with LLMs because they disagree with the ethics of how they are created, over copyright, energy usage or treatment of the people who help train them.
There’s nothing wrong with being selective about your ethics, but it’s worth noting that it’s not as simple as saying “LLMs are created unethically, therefore obviously I won’t give them the time of day.” I think it can be more nuanced than that!
Ever wear clothes that you bought from a store? Or use a computer with any lithium in it?
something of a false equivalence. While it would be very difficult to not wear clothes, it is comparatively very easy to not use ChatGPT.
And if it does become difficult to not use LLMs, e.g. on the job, then that will be because of changing norms, which we can still fight.
While it would be very difficult to not wear clothes, it is comparatively very easy to not use ChatGPT.
Meh, perhaps for those examples, but one doesn’t have to go far for a more to-the-point one: ever use a computer powered by electricity supplied by the US grid that affects the climate? Or eat beef or chicken? Or wear leather shoes?
Perhaps the easiest: ever use a petroleum-based product?
Ever watch Netflix or Hulu, Disney or ESPN? How much energy went into supplying those photons?
In any case, it doesn’t matter what one person does, they can’t save the world and they can’t kill it.
a lot of weird comparisons in here. I’m a professional computer programmer; I can’t not use a computer. You can’t really be a part of society without interacting with a computer at all. You can’t really interact with society by, ::checks notes::, not using electricity, which you are positing as a realistic option and comparing to not wanting to use AI products. It is deeply, utterly ridiculous to put “don’t use electricity” side-by-side with “don’t use ChatGPT”. The beef and chicken thing is a weird example too; loads of people are vegetarian or don’t eat beef or chicken. Wearing leather shoes is also easy to avoid. A lot of these things are avoidable, and people do avoid them! And that’s exactly what people are saying about these AI products: they want to avoid them, but software companies keep putting them into products against people’s will.
I wonder if this “you don’t like AI, yet you live in society? Curious” tripe is an example of the sophisticated discourse many here are clamoring for.
I think a lot of the arguments boil down to “other people do not have the right to make me uncomfortable by asking me to examine the consequences of my actions”, and that’s basically the totality of it.
The bad argument that your quote would refute:
You say you dislike some aspect of society, and that I should make sacrifices to change it? But you participate in society, thereby indirectly supporting that thing. Therefore, you are lying when you say you dislike that thing, or when you say you think it’s worth making sacrifices to change.
But Simon did not make that argument. He made a more nuanced one, which, at the same level of abstraction as the above, I’d phrase like this:
You say you dislike some aspect of society, and that I should make sacrifices to change it? But I participate in society, thereby indirectly supporting various other aspects of society that I dislike. I (and many others, perhaps not including you) have already given up on making sacrifices to change every single aspect of society that I dislike. Therefore, even if you convince me that I should dislike another aspect of society, that won’t necessarily convince me (or those others) that I should make sacrifices to change this particular one.
I think this argument is valid. Bringing it back to the specific issue, it implies that even if we know for sure that LLMs are related to some in-and-of-itself unethical behavior, more analysis is needed to conclude that LLMs shouldn’t be used. By analogy, many people eat meat even though they know it’s related to the in-and-of-itself unethical behavior of killing animals.
Of course, that’s not the end of the discussion. For those who do not already abstain from LLM usage, that additional analysis could explain:
This is just Mister Gotcha, as far as I can tell. Yes, I participate in society.
The deleted sibling comment said the same thing (by quoting from the comic). As I wrote in my reply to that comment, I disagree. The argument in the comment you replied to is different from the argument Mister Gotcha is implied to be making.
It’s also strange how, when you point out that these AIs were made with slave labor labelling data, use stolen data, allow for all kinds of abuses, and carry big environmental risks, some pro-AI people will say “oh, it must be because they’re threatened that they’re against it”.
It just isn’t very interesting to me to discuss the merits of these AIs when they’re made this way.
the tools are still unethical
The tools don’t have to be in any way unethical. So that can go out the window.
This is an interesting question for me, because in principle, I agree. I think that, for example, embedding-based search is a really good idea, and really useful. I absolutely believe you could build LLMs, diffusion models, and all the other stuff we refer to as “AI” these days without massive negative externalities.
But… where are the teams doing that? Who’s building small, energy-efficient models, trained using renewable energy, based on public-domain or opt-in datasets, without destructive scraping methodologies, with an emphasis placed on eliminating, or at least quantifying, bias present in the dataset and the model’s output? Certainly not Microsoft or OpenAI.
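(For concreteness, by embedding-based search I mean roughly: turn texts into vectors and rank them by similarity. A toy Python sketch, with a fake bag-of-words “embedding” standing in for a real model, just to show the shape of the idea:

    import math
    from collections import Counter

    def embed(text):
        # Toy "embedding": a bag-of-words count vector. A real system would
        # call an actual embedding model here.
        return Counter(text.lower().split())

    def cosine(a, b):
        # Cosine similarity between two sparse count vectors.
        dot = sum(a[k] * b.get(k, 0) for k in a)
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    docs = [
        "how to bake sourdough bread",
        "rust borrow checker errors",
        "training small language models",
    ]
    query = embed("baking fresh bread at home")
    print(max(docs, key=lambda d: cosine(query, embed(d))))  # the bread doc wins

A real deployment would use an actual embedding model rather than word counts, but the shape of the idea is the same.)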
I think every cultural area should pool all its resources and all its content and train one huge base model using solar power and then give that model away for free for anybody to use or do with as they please. License this model in an open way and sue the other ones into oblivion for copyright infringement.
It all needs to become open source and commons based.
I think that’s a great idea.
I hope you recognize that there is almost nothing that could be further from what is happening in the AI industry than what you have described, because what you are describing is communism.
This suggests an inverse of Godwin’s Law, which is that every thread, if it goes on long enough, eventually has someone independently conceive of communism.
Lots of innovation came through unethical processes, and that does not prevent us from using it today. Many products descend from slave labor or human rights abuses. You would have to reject a lot of modern medicine if you wanted to reject every innovation that at some point in its history involved horrific things on a scale much worse than abstract IP theft.
This comparison is specious in at least the trivial sense that the people responsible for, say, New World chattel slavery, are long dead and cannot be held to account, whereas the people responsible for OpenAI are alive and doing the bad thing we don’t like on an ongoing basis, and could be made to stop.
The people responsible for the environmental and human rights abuses of coltan mining, which underpins a lot of modern electronics, are very much still alive.
Of course, it’s much easier to not use ChatGPT than it is to not use computers, but that doesn’t make the behavior less unethical, it just means that we’re willing to accept it.
“The bad thing” in the case of OpenAI is the disregard for copyright. I remember a time when a younger me and others were clamoring for the abolishment of copyright. I’m still quite fond of the idea of greatly restricting how long copyrights last.
That is one externality, it is not the sum of externalities, nor is it necessarily the most important externality for many people.
etc. There are many, many externalities to consider.
My chief gripe is simply that it sucks. I’m tired of reviewing terrible PRs and having to explain to my parents that something they saw on the internet isn’t real.
Copyrights are too long though.
“the Pyramids are cool though” isn’t a particularly compelling defense of human slavery.
I hate to nitpick on your wider point, but more recent research has shown that an enslaved workforce wasn’t the primary means of building the pyramids.
See for example the long quotes from this post https://www.lawyersgunsmoneyblog.com/2025/05/building-the-pyramids.
ok.
I hate to nitpick on your wider point
you don’t, though, which is why you posted that. “I hate to nitpick” is a stated value: a thing you say and likely believe that you value, but your actions directly contradict this. Through your actions, you demonstrate your revealed values: maybe you actually enjoy nitpicking but you identify as someone who doesn’t, or you value informing others more than you value not nitpicking, or you value being right, or you value being contradictory. I’m not sure which. The point here is not finding a very precise description of your revealed values, but only to demonstrate that stated and revealed values differ.
Why am I bothering with all this? This misalignment between stated and revealed values is, in a greater sense, the whole ballgame of this 200+ comment discussion, because programmers contemplating ChatGPT/OpenAI/Anthropic are forced to encounter the dissonance between their own stated and revealed values.
No, I simply meant that using the trope that slaves built the pyramids as an example of a great product being made by slave labor is incorrect, according to the latest research.
I’m not arguing with your wider point, namely that “we should accept something made by immoral means because we used to do so for other things we value” is not a coherent position.
How do you feel about Intel’s icc? And its ethics regarding optimizing for non-Intel CPUs? Would it turn you off from using such toolchains?
It’s a battle I have zero chance of fighting, since I do not understand it, so I do not even start or join that battle.
Choose your battles. I can’t fix everything with my choices. Hell, I know my choices don’t fix anything: I do it for myself in the end.
I get the anti-AI sentiment, even if I don’t think statements like “AI is useless” are accurate or helpful. I think the sentiment is coming from a place of wanting AI to fail commercially, so that:
The harms above are not caused by AI itself but rather the hype surrounding it. I badly want to see the bubble burst, so we can get to a somewhat common understanding of what the tech is capable of and what it costs to run. Once that happens I’d expect the discourse around AI to improve.
Sadly, the conversation was poisoned from (more or less) the start. It’s impossible to ignore the massive, sustained campaign that big tech has been pushing for the last few years. While it’s nice to think that people can rise above it and think about it objectively, I’m not surprised that people are actively upset. People are generally people, not robots.
I suspect nuanced conversation will be difficult until the hype cycle dies down a lot. At the moment it feels a bit like when Google were pushing G+ into all their products, except this time it’s industry-wide.
Yeah, if one side is pouring hundreds of billions of dollars into something and employing all your friends, and the other side is like, a subset of free software people & artists & curmudgeons online, who does positioning yourself above the fray from some imagined objective viewpoint really support? Like really. You’re in the game, you live in society.
I suspect nuanced conversation will be difficult until the hype cycle dies down a lot.
Right. I can’t air an opinion in the middle of a hype bubble in good conscience without feeling like I’m just contributing to the noise (and competing with the people with louder voices).
TBH, I am disappointed in discourse in general (not the software, software is lovely!), for much the same reason. Many things end up being about which side you identify with, and only superficially about the actual facts on the ground. And I feel like Rust vs Go were the same basically, just to a lesser degree? Had to write https://matklad.github.io/2020/09/20/why-not-rust.html back in the day.
I tend to treat that as a fact of (human) nature: not much I can do in general about that, except actually noticing these tendencies in myself and others and trying to steer towards more signal on the margin.
One of the few pieces of advice from PG that I think is worth taking to heart is “keep your identity small”.
And I feel like Rust vs Go were the same basically, just to a lesser degree?
Yes, like, to be clear, I don’t think these discussions are often better, but for some reason, they bother me less. It’s easier to be like “yeah that’s just some bad arguments” and ignore it, whereas this stuff is like, actually bothering me.