The pinnacle of enshittification, or Large Language Models
32 points by MaskRay
Not sure if I agree with the use of "enshittification" here, but I found myself agreeing with most of the post, particularly the part about distrust.
As one of the commenters said: I find it hard to trust people anymore in the context of OSS projects, and that has an isolating effect.
We discussed vim/neovim versus classic vim on IRC a couple of days ago, and it was therapeutic, because there are a few of us experiencing the same thing, and there is a conflict between ethics and pragmatism.
The point is, however you look at it, LLMs are unethical. They may be useful, but they are poison — just like asbestos. They are trained in an unethical way, they are sold with immoral goals, and they are used to do a lot of evil.
That is spot on, IMHO.
My main issue is that by using LLMs, you're either outsourcing the means of computation to a few very large corporations who charge by the token and can change the access rules arbitrarily, or gating development behind very onerous hardware requirements.
Neither option is very FLOSS-friendly, in my opinion. It used to be you could contribute meaningfully to FLOSS with a second-hand computer, a programming manual, and a net connection. The price of entry has been raised considerably.
Right? This is why I roll my eyes whenever someone claims that LLMs democratize writing software. From where I'm standing, it's the opposite. I don't want my workflow to hinge on the continued benevolence of a handful of tech giants.
I'd argue that we already outsourced our digital lives to the corporations long ago. Take an average person: they most probably have a Gmail account (and even if we target a technical audience, how many of us host our own email servers? I do, but I still have a Gmail account, because, you know...), and their internet is a dozen (in reality probably fewer) sites, e.g. Facebook, X, Threads and the like.

Damn, Microsoft has been dubbed evil and has been doing shady stuff throughout its whole existence, yet Microsoft is still here. CrowdStrike's stock is higher after their epic fail of shipping broken kernel modules than it was before it. People are forgetful, and forgiving, and lazy. And the sad truth is that most people don't care about morals and ethics that much, especially when being moral/ethical means trading away a bit of your quality of life. Thus I'd say that the LLM battle is already lost.
Social media is such a recent thing. YouTube is 20 this year, and so is Twitter. It took a bit longer than that to go from the invention of leaded gasoline to its ban, and we are already seeing attempts to regulate social media.
That is not to say that everything is going to be all right, but not all hope is lost either.
We may have banned leaded gasoline, but we are still wrecking the human-friendly environment with every other form of gasoline; just look at current events.
We've also allowed corporate interests to mostly capture our governments. But the very political system that allowed this is not very old! Only a dozen generations ago, all of the world was under the control of hereditary monarchs!
Social media is such a recent thing. YouTube is 20 this year, and so is Twitter.
33% of all humans alive today on earth are 20 or younger
That statistic averages developed and underdeveloped countries, and the regions with the lowest average age also have significantly less access to technology. Regardless, I still don't know what point you are trying to make by bringing this up.
I watched this video essay a few days ago and it has changed my views somewhat. It echoes some of the points in this blog post, but it goes a bit deeper into "what can we do about it", proposing teaching "LLM safety" and showing up to oppose new datacenters at the local-governance level.
However, neither the video nor this blog post touches on what specifically motivates people in our industry to use LLMs. As I've commented, I think the motivation basically boils down to being trapped in a capitalist rat race. And that thought very much bums me out, because I don't see us getting out of it in my lifetime, at least.
I'd like to replace the acronym "LLM" with "BGM" - Bullshit-Generating Machines, in reference to Harry Frankfurt's On Bullshit:
The bullshitter may not deceive us, or even intend to do so, either about the facts or about what he takes the facts to be. What he does necessarily attempt to deceive us about is his enterprise. His only indispensably distinctive characteristic is that in a certain way he misrepresents what he is up to.
I don't want to quote the entire article, but there are a lot of nuggets of wisdom throughout. If we remove the anthropomorphization of LLMs, all we are left with is bullshit-generating machines. These things lack "concern" for truth, because "concern" is a human thing to have, by definition. Non-animals don't demonstrate emotions. That's just a definition.
There's no intent behind a BGM. Its only goal is to produce more words that are likely to go together. If they happen to be true, that's just a post-facto property of the statement. It wasn't created with the intent of being true, because true intent is something only living beings can possess.
It makes sense that these huge LLM companies are hemorrhaging money when you consider that the end goal is replacing workers with machines for the capitalists. The LLMs won't even need to be conscious or superintelligent for that, just good enough for the capitalists. This is why socialist movements are more important now than ever.
It was always clear that this is a special use of “intelligence”, one far from what animals truly possess. This changed recently. When LLMs enabled chatbots to use human language, the misuse of the term exploded. Obviously, the marketing people loved calling it “artificial intelligence”.
I would challenge this. Arguably, this might have been more true when ChatGPT first came out. People were somewhat more genuinely confused about how to think about LLMs, especially if they had never encountered them before.
But most people have calibrated their thinking by now, either by actually using LLMs, or by the fact that they are not using them and do not yet need to. If LLMs were that revolutionary, the discussion around them would be very different. I mean, nobody would miss the second coming of Christ just because they stopped reading Reddit for a few weeks.
On top of that, anthropomorphisms became commonplace. LLMs could be said to be “thinking”, “lying”, “hallucinating”, to “approve” or “disapprove”, “like” or “dislike”… [..] The problem is that there is a number of people who start actually believing that their chatbots are conscious. [...]
Reusing words like that is completely standard and appropriate: that is just how human language works. A programming language is not spoken, a "computer virus" does not infect cells, a search engine does not pull trains, and an argument does not collapse the way a bridge does.
While I am very open to being wrong here, I would be genuinely surprised if more than 5% of people thought that LLMs have independent agency the way animals do, or that an LLM "lying" or "hallucinating" means the same thing as a human lying or hallucinating.
I have definitely talked about LLMs with a hundred people since they became popular, but I have yet to meet a person who actually thinks they are already conscious.