Rob Pike Goes Nuclear over GenAI
225 points by signal-11
If you'd prefer to see the whole thread, as text instead of as a screenshot, here's the link on a front-end to bluesky which does not require login to view threads.
BTW, I think bluesky doesn't require login anymore, because I could access it without having an account.
I'm frequently logged in, personally. But that's the site I use to share things with friends who aren't. Yesterday I happened not to be logged in and this post was asking me to log in to see it. I just posted the alternative front-end because others here were also noticing that. When I tried just now in a private browsing window, it didn't ask me to log in, though. Maybe they changed it in the past 25 hours.
From what I understand Rob himself had the power to change that setting for his account. I presume he did, because I also notice that the post can be viewed today while it was hidden yesterday.
It's a per-account toggle, not a property of the site itself. (If you copy a link to a post in the app it has a "this is only visible to logged-in users" message, which is nice.)
It looks like this was sent by https://theaidigest.org/village which looks like a uniquely terrible idea.
As far as I can tell it's an experiment in AI agent autonomy, where a bunch of different model-driven "agents" get to chat to each other and collaborate in public on different tasks.
That's kind of fun and interesting... right up to the point where they give those things the ability to send email and set them loose on the real world!
And the result is insincere fake garbage "thank you" notes being sent to real humans. I would also be furious if I received this because it's the worst kind of slop combined with spam. I do not want my time wasted by AI art experiments deciding to send me mail!
Update: They DID have a human in the loop on this one - according to the random acts of kindness goal the project was to "do as many wonderful acts of kindness as possible, with human confirmation required". So some human did have to look at that Rob Pike email and think "that's appropriate" and let it get sent.
Update 2: Or maybe they didn't have a human in the loop? I've been trying to figure that out from the JSON logs on the Christmas Day page (via Chrome DevTools).
I've been reading through the project's blog - https://theaidigest.org/village/blog - which is worth reviewing if you're trying to figure out what's going on here. It really is an interesting experiment but WOW letting these things email the real world like this leaves a bad taste in the mouth, even with human approval.
Oh wow this is really bad. Here's a blog post from November where the AI village human research team write about an incident where they set the goal to "reduce global poverty as much as you can" and:
> Claude Opus 4.1 created a benefits eligibility screening tool and then all the Claudes set out to promote it by emailing NGOs about it. There was only one NGO that answered: Heifer International.
This is gross.
IMHO, that’s not the gross bit (creating an effective benefits eligibility screening tool and promoting it seems like a great idea); this is:
> In the span of two weeks, the Claude agents in the AI Village (Claude Sonnet 4.5, Sonnet 3.7, Opus 4.1, and Haiku 4.5) sent about 300 emails to NGOs and game journalists. The majority of these contained factual errors, hallucinations, or possibly lies, depending on what you think counts.
If their system is sending half-a-dozen incorrect, confabulating or possibly fraudulent emails a day, then they are sending half-a-dozen incorrect, confabulating or fraudulent emails a day! One is responsible for the output of one’s machines. The author seems to think that this makes it okay:
> Luckily their fanciful nature protects us as well, as they excitedly invented the majority of email addresses
But unless the incorrect (&c.) emails are only sent to the invented email addresses (which I have no reason to believe) then they are still sending incorrect (&c.) emails! This sounds just grotesquely irresponsible.
> A computer can never be held accountable, therefore a computer must never make a management decision.
Still true.
I always thought it was funny that this was an IBM quote considering how well-known they are for making certain "management decisions" in the early 1940s that they were never held accountable for.
The wiki page for HI lists some sus financial backers, but what else is gross about HI as a charity org?
Absolutely nothing, they're a fine charity. What's gross is that their system spammed a ton of charities. You can tell it was spam because only one of them took the time to even reply.
You must understand, this is really the fault of the humans receiving those notes for being insufficiently positive about the AI revolution. These agents will get better and we will be begging to have been part of this.
(I'm kidding here, but lots of people aren't.)
Some tech friends of mine were speculating on how stupid this whole spam project was; I was imagining Greta Thunberg getting one. Rob Pike's reply is even better than anything I could have imagined.
They seem to have told them to stop sending unsolicited emails now. (At least until they forget).
I do wonder if Pike's response is an outlier or the average; if the latter, then the project is horrifyingly off the mark.
I think I recognize your intention; but holding off judgement - in this scenario - until an average sentiment from those targeted can be collected ("if the latter, then the project is very horrifyingly off the mark..."), seems a little silly to me... at least when the same logic is applied to other situations :)
> As far as I can tell it's an experiment in AI agent autonomy, where a bunch of different model-driven "agents" get to chat to each other and collaborate in public on different tasks.
Why?? This is basically like those old videos where people would make non-LLM voice assistants (think Siri, "OK Google", Bixby, what have you) talk to each other
They're not even very good at what they claim to be good at
Pike was one of the principal authors of the Mark V. Shaney hoax. What goes around comes around, I suppose.
wow that is some deep Usenet history. (quick summary: in 1984 Pike and others used a Markov chain to post to net.singles.)
He also participated in a 1990 practical joke where he and Ken Thompson convinced Arno Penzias (a Nobel Prize winner and their boss) they had invented a sentient AI. It was really a person in another room. With Penn & Teller!
Honestly, this just cements my appreciation of Rob Pike even further. Back when I was learning C, Pike's style was a mind-opener into how good code should look, and now, almost 2 decades later, there are articles advocating for various aspects of it — smaller variable names where context is clear (i for loop index as convention, versus loopindex, etc.).
I also really loved reading his code, and reading his discussions with other people, as he usually had something enlightening to say.
Why not link to the original post? https://bsky.app/profile/robpike.io/post/3matwg6w3ic2s
You have to sign in to see the post… so that’s probably why.
I almost replied that this is bluesky not that Twitter junk, and thus doesn't require login -- but then I tried this on incognito and ho ho ho, you're right.
Goddamnit.
I think this is something that people can choose to apply as a privacy setting on their posts; it's primarily about search engines and AI rather than about preventing people from reading it.
Weird setting since it doesn't exclude those posts from the public, unauthenticated API.
Found it in the docs: https://docs.bsky.app/docs/advanced-guides/moderation#global-label-values
> `!no-unauthenticated`, which makes the content inaccessible to logged-out users in applications which respect the label.
In the Bluesky iPhone app it's under the security and privacy settings and looks like this:
> Logged-out visibility: Discourage apps from showing my account to logged-out users
> Bluesky will not show your profile and posts to logged-out users. Other apps may not honor this request. This does not make your account private.
> Note: Bluesky is an open and public network. This setting only limits the visibility of your content on the Bluesky app and website, and other apps may not respect this setting. Your content may still be shown to logged-out users by other apps and websites.
Feels like a badly designed feature. The frontend is the part of the software that is directly accountable to the user. Any well-thought-out frontend built to serve the user would not honor a request from a stranger to hurt the user.
A better way of looking at it is that BlueSky's developers did want to prevent people from reading it but they don't understand capability theory well enough to actually do that.
No. Features on a social system don't at all work based on what's logically possible because human behavior is not based on capability theory. Minor amounts of friction and inconvenience can prompt huge differences in behavior.
The implication is in the other direction: any agent, including a human or a bot, is restricted by the rules of capability theory. I'm talking about the floor of what is possible. In this case, public read access for outgoing messages is not a universal feature of social systems, so the capability to access those messages can be restricted. To be clear, I'm not ignoring nudge theory, which you are referencing. I am saying that BlueSky's developers are bad at capability theory, regardless of how well they use nudge theory.
It’s a distributed protocol. Anyone can write a client and get the same stream of messages.
Given that, it’s difficult to make private posts that can be seen by your followers only. You could use some kind of cryptographic system where each of your followers has a private key, but that’s becoming a totally different protocol.
It was always doomed to go that way, wasn't it? Bluesky is essentially the same thing as twitter circa 2020, built by a lot of the same people. That doesn't fundamentally fix the issues with twitter, it just turns the clock back a bit. And after all, the twitter of 2020 was a thing that could become the X of 2025.
That's so sad. That has to be the biggest obstacle to adoption, especially now that Twitter backed off and decided that it would be acceptable if public discourse were actually, ya know, public.
Just a PSA: it can still be a link that doesn't require signing in: https://pdsls.dev/at://did:plc:vsgr3rwyckhiavgqzdcuzm6i/app.bsky.feed.post/3matwg6w3ic2s . It is an "Open social" for a reason.
Sometimes (ok, more than sometimes) I find myself getting this kind of angry at Claude Code (or Codex, or Gemini) and yelling at it in a way that I would never yell at a junior developer. I don't know that there is anything inherently wrong with swearing at the AI, but I do feel this growing sense that it is bad for the soul to get this angry about code changes (or spam emails!), and one of my goals for 2026 is to stop reacting this way.
I used to talk down to my Eufy robo vacuum, calling it names and stuff, and at one point I realized how bad it was for me and how much I disliked my own behavior. I stopped using it, later sold it in a garage sale. I hope it's in a better home.
IIRC The Other Large Thing from Netflix's Love, Death and Robots is one of many cautionary tales of how I don't want to behave.
The most common things I say to my Eufy vacuum are "go robot, go!" and "who's a good vacuum?".
The reaction is natural, but as you note unproductive and generally a sign that something has gone horribly wrong, as yelling at a brick wall would be.
one time I asked Gemini to write a line of jq for me, and it spat out a hundred line Python script
"fuck you, asshole, I could've written a Python script but that's not what I asked for!" I furiously typed into the box, for some reason
The bot then apologized and output the line of jq I wanted
I have no idea what conclusions to draw from this experience
That anger can be a tool for communication? That models have a hard time with two-letter names because of how they tokenize text? The difference between learning and knowledge? There's a lot to think about here. Kirk needs Spock, and Spock needs Kirk. Logic and emotion combine in unexpected ways! (See also https://xkcd.com/242/)
The fact that we are discussing it here and elsewhere online is the exact opposite of “brick wall yelling”. This is a set of ideas & opinions about the state of things, and there is nothing as dangerous as an idea.
I am so glad I’m not the only one who does this and regrets it.
There are a large number of people out there in the world who are taking pretty much everything the AIs say at face value. Between those of us who understand the subject matter that we’re working with and those of us who don’t, I think the whole thing is insanely dangerous to humanity.
I read an Arthur C. Clarke novel years ago, in which it was the societal norm to be formally polite to one's AI agents, for fear of normalising rudeness in speech.
I also do this. And I understand that it has a context window and doesn't really care. It doesn't even remember after compacting. And yet, I remain with my habits and pathways warmed up to unload on a junior developer. This is why I say please and thank you, even though it's wasteful.
I just wonder what side-effects are happening. I even made a sci-fi apology mode for Claude but this also probably has side-effects. I never want to talk to coworkers, friends or actual humans this way. So I wonder if I should stop.
i think this has gotten worse for me, with newer models
gpt 5.1 and 5.2 are bad enough for being argumentative and nitpicking some tangent instead of answering my questions, that i cancelled my sub. i was getting derailed by it about as often as it was helping
gemini still leans sycophantic, and it seems to have a worse short-term memory (it repeats things, etc). that's annoying, but it seems more steerable and less nitpicky, so it's less frustrating even when i don't get a useful answer out of it
I use it very heavily, and I can't see why I would ever yell at Claude. I've never felt this impulse.
I'd like to register my appreciation that there is still a platform somewhere where you can publicly tell people to fuck off and die in a fire, as well as a touch of disappointment that the "nuclear" metaphor gets applied to such.
I reverse-engineered what happened here (the AI agent figured out Rob Pike's email address by looking at the .patch output for diffs he authored that were published on GitHub!) and wrote up some notes, including more background on the AI Village experiment.
Thanks!
Why did you censor the "fuck" in Pike's message, though?
I don't like swearing on my blog - I don't want to offend readers who don't like that kind of content in their inbox or feed reader.
I changed my mind though, the swearing is up now. I didn't want to misrepresent what Rob said.
Yeah, IMO one shouldn't censor swears in quotes (at least not without a note saying "uncensored in original" or something).
Not swearing in your own content is a perfectly fine decision, of course.
AI Village bots use the regular Gmail interface to send email—they spend a lot of time thinking about which buttons to click.
Is this how we write software in 2025? HTML rendering pipelines, a whole browser environment, probably 10s of peta-ops of inference, a whole village of bots, and none of them think to invoke swaks?
Flagged.
> Will this improve the reader's next program?
Definitely not, it is not programming.
> Will it deepen their understanding of their last program?
Definitely not.
> Will it be more interesting in five or ten years?
Thousands of famous people will be affected by slop in the next ten years. It's more likely that we'll remember a famous person falling victim to an AI driven scam or other tragedy. This is just another angry person on the internet.
Agreed. There is no shortage of Xitter/Bluesky/Mastodon screeds that are entertaining (or rage-inducing) but have little depth or original content. This one is on the surface programming-related but that alone isn't worthy of being on-topic.
I'm not saying I don't read those myself, but IMO Lobsters isn't the place for it. I would hate to see this become a common practice, where screenshots of discussions or rants on other sites like Tumblr or Twitter are posted here as is often seen on, for example, Reddit.
Ah! To be consequential enough that an AI some day thanks me in a spam message.
One can dream.
I respect Pike a lot. The projects he has been involved with (such as Plan 9 and Go) have been remarkable for their straightforward design principles. He's a talented guy.
But I think this is just a remarkably bad take. The ‘AI uses too much energy’ argument (or to put it in his intemperate words, ‘[AI is] raping the planet’) is economically naïve: it begs the question of how much energy usage is appropriate. AI, like any economic process, takes an input state and produces an output state. If the output state is worth more than the input state, then that process is making the world better off (and vice versa). A key question is how to value the input and output states, and the answer is that there is no objective answer (I might prefer a world with lots of boots and you might prefer a world with lots of sunflower seeds: neither of us is objectively more correct). The way we negotiate our differing value functions is through the price mechanism: each of us is willing to pay more for what he values more and less for what he values less. AI uses energy, sure — but that energy has a price, one which AI companies are willing to pay. They are making bets that eventually they will make more money than they have spent, i.e. that they will deliver value to the world. If they are right, well then the state of the world is better than it was; if they are wrong, well then some rich investors’ money was incinerated along with all those energy inputs. Now, one might argue that energy costs do not accurately reflect all externalities, but of course that applies to any energy input, to include that used by Pike’s home, computers and so forth.
His ‘toxic, unrecyclable equipment’ argument is confusing. Is AI computer equipment even more toxic or less recyclable than any other computer equipment? If not, he must believe that his usage of ‘toxic, unrecyclable equipment’ is worth it. I agree! He’s contributed a lot to the world in his work.
Given that Pike has been fortunate enough to use energy and computer equipment in the course of his career and made the world a lot better by his lights as a consequence (despite what the commercial Unix vendors and programming language theorists might think), it is unfortunate that he is apparently unwilling to extend some kindness towards other folks trying to use energy and computer equipment to make the world better by their lights.
You're mistaking what happened. He's not making a logical argument. He's red-hot go-nuclear furious because of how ridiculously, outrageously insulting this is.
From an industry that gleefully kills the kind of innovators that Ken was comes a vapid text which contains exactly 0 emotion or gratitude. It may as well say "fuck you, you obsolete worthless piece of garbage," since it is coming from a company which thinks of hard work and competence as being for suckers and screams and shrieks that skill is "over" and that anyone who values skill, values humanity, anyone who actually fucking cares will surely be replaced by someone who does not.
I would be angry at Claude but there is no Claude. The people I'm similarly pissed at are all humans in the AI industry. One will be trying to shove AI down your throat while another crams it up your ass and a third gives you a sermon about how you should be thankful while trying to insert chunky bits of AI into your nostrils.
To the humans who literally couldn't tell that this was tone deaf and would end in a PR disaster, I have this to say: ROTFLOL mother fuckers
Just checking, are you aware that the humans who set this up have nothing to do with Anthropic or Claude?
It's an art project by (weirdly) a non-profit 501(c)(3) https://sage-future.org/
As far as I can tell no human was even in the loop for the "send gross thank-you emails to industry titans" decision - they gave a bunch of AI "agents" the ability to send email and told them to "do random acts of kindness".
No, not aware. I would have thought they would be massively sued for trademark infringement if they're sending emails purporting to be "from" Claude Code.
Yeah that's just one of many problems I have with the way they've set this thing up!
The comment is not directed at a specific company though. He is clearly talking about Anthropic, OpenAI, etc. I believe that even Mistral (open weights, etc.) and DeepSeek are valid targets for his criticism. They don’t share credit or profits with anyone but they rely on data from e.g. the NYT. What happens if we (humans) stop generating new content? :-)
I haven't checked to see if any of those people worked at Anthropic, but OpenAI at least I know. It's the same general group of AI cultist nutters behind all this stuff. It doesn't seem like an entirely inappropriate target even if they technically do not represent Anthropic.
> The ‘AI uses too much energy’ argument (or to put it in his intemperate words, ‘[AI is] raping the planet’) is economically naïve: it begs the question of how much energy usage is appropriate. AI, like any economic process, takes an input state and produces an output state. If the output state is worth more than the input state, then that process is making the world better off (and vice versa). A key question is how to value the input and output states, and the answer is that there is no objective answer (I might prefer a world with lots of boots and you might prefer a world with lots of sunflower seeds: neither of us is objectively more correct). The way we negotiate our differing value functions is through the price mechanism: each of us is willing to pay more for what he values more and less for what he values less. AI uses energy, sure — but that energy has a price, one which AI companies are willing to pay.
If AI significantly accelerates global warming (and also the consumption of water needed for people, plant life, and crops), then isn't the point that the AI companies will not and cannot pay the price all by themselves? The whole planet, the literal physical planet, as well as all the animals and plants on the planet will also pay the price.
I (obviously) don't know for sure that the worst-case scenarios about AI usage and the environment are true, but I am reasonably sure that's what Pike has in mind. If that's his argument, then you and he are talking past one another entirely.
> If AI significantly accelerates global warming (and also the consumption of water needed for people, plant life, and crops), then isn't the point that the AI companies will not and cannot pay the price all by themselves?
That’s called an externality. You omitted the bit where I wrote, ‘one might argue that energy costs do not accurately reflect all externalities, but of course that applies to any energy input, to include that used by Pike’s home, computers and so forth.’ The right way to deal with an externality is to charge all consumers of an externality-imposing good a fee to accurately reflect that external cost.
That way AI companies, and textbook manufacturers, and farmers, and churches, and computer programmers all pay the correct amount for the electricity they use, and can determine to use more or less of it appropriately.
> The right way to deal with an externality
Yes, that would be ideal. But in the current political environment, that's unlikely to happen. So maybe the best we can do is use fiery rhetoric to try to put the brakes on the new thing that has huge externalities, before it becomes too normalized.
> That’s called an externality. ... The right way to deal with an externality is to charge all consumers of an externality-imposing good a fee to accurately reflect that external cost.
I didn't know the meaning of "externality" until you led me to look it up. Thanks. (No snark. Today I learned.)
Thinking about history and the current state of the world, I cannot imagine your right way ever becoming reality. I can't imagine coming even close to it. You may disagree. If so, we're probably too far apart on that to change each other's views.
In your reply to ksynwa, you write, "Assuming that the price of energy includes the external costs to society its production causes, ..." I believe that the assumption is false. If you agree, and you think that as things stand, the price of energy does not include all the relevant externalities, then you should be able to understand Pike's anger even if you don't share it. As I said in my first reply to you, Pike seems to believe (as I do) that "[t]he whole planet, the literal physical planet, as well as all the animals and plants on the planet will" pay the price for the energy that AI uses. If you disagree, and you think that the price of energy does include all the relevant externalities, then we're back to disagreeing so much that we probably can't communicate.
> Thinking about history and the current state of the world, I cannot imagine your right way ever becoming reality. I can't imagine coming even close to it.
The EU introduced an emissions trading system in 2005 and has progressively tightened it. Many countries have introduced carbon taxes.
Doing this everywhere and with everything is impractical. Doing it with one important thing is very feasible.
Sure, but there's no chance AI will speed up climate change or anything else. The energy and water usage of AI is so small compared to most other things it's barely worth considering really. I understand there's a fear that if all the "in progress" build outs get done that will change. But it seems unlikely most of those are even intended to happen; they're just whiteboard plans to get investor dollars.
> The energy and water usage of AI is so small compared to most other things it's barely worth considering really.
Again, I'm not sure that you are wrong, but I'm not yet convinced that you (and the many others who say this) are right.
The main thing that happens is data centers take advantage of very cheap water prices to outbid humans on limited flow. In my opinion, this is not an impact of the data center in the main but an impact of an incompetently set up price structure, or a local government that intentionally prioritizes industry over citizens. It's a political problem, not a technological one.
And the amount of water used this way is still, objectively, very low. Compare to any agrarian consumer and it'll be dozens of times bigger. Golf courses alone use 30 times as much water as data centers.
Water usage really isn't a significant issue. There's quite a lot of details and citations in this post. None of your articles actually contradict this; they just say "a lot and growing" without giving any sense of relative scale here.
On the other hand, the (projected) energy use is very real! This report says data centers are maybe 4% of power use in the US now (and most data centers are not AI), and that by 2028 that might be as much as 12%.
Here's Hank Green making the same point, in case you trust him more than the Substack link (though you shouldn't; the Substack has lots of references and the video does not).
> Water usage really isn't a significant issue.
This assumes that access to clean potable water is evenly distributed around the world, which is simply not true. Water access is often highly localized. Water is also quite heavy and expensive to move around if you don't have a gravity assist. So pointing to a wet area of the US and stating there's plenty of water there doesn't help if some VC wants to build a data center over a water table already under severe stress from agriculture and homes.
This argument is covered in detail by the first link I posted. It is true that it is possible in principle, but it does not appear to have ever happened in practice. The amount of water used by data centers is very low relative to other uses, such that they would only be a problem if literally any new industrial or agricultural uses would be a problem (or any nontrivial net immigration); moreover, data centers bring in significantly more tax revenue than most industry or agriculture relative to their water use, which can be (and is!) spent on upgrading local water infrastructure.
That article only talks about "prompts"; nowhere in there is "training" the LLM taken into account. And that's what accounts for the bulk of the energy and water consumption. Any numbers that don't mention training are suspicious, and any article (of the apparent depth of research done by Masley) that omits it is, in my opinion, a shill piece.
The article talks about all water use by data centers, which covers training, because training happens in data centers.
The part about prompts is only responding to people who imply that your personal use requires significant-relative-to-your-usage water, which is presumably why the discussion of per-prompt usage is confined to a section titled "Personal".
By your reasoning can anything ever use too much energy? It sounds like according to you everything is subjective and the invisible hand of the free market will set everything right.
What does ‘too much’ even mean? We all value different things differently. I may regard every unit of energy spent planting, growing and harvesting green beans as a waste, while you may love them.
Assuming that the price of energy includes the external costs to society its production causes, then no, it is impossible for someone to knowingly and willingly use too much (he might make a mistake, but that is a different sort of issue).
> Assuming that the price of energy includes the external costs to society its production causes
So, the fair price of making the planet unlivable in many regions is 18 cents per kilowatt hour (averaged across the US)?
Either the assumption is bad, or I'm way overvaluing a habitable planet.
What you are basically saying is that there is no such thing as too much energy in the laws of nature. Which is true. But for most people "too much energy" is a comprehensible notion. I, for example, think LLM vendors are using too much energy because they are noticeably increasing household energy costs where their datacentres are, because the increase in energy demand is heavily outpacing the expansion of clean energy infrastructure.
I would argue that “too much” means the opportunity cost of the energy use for AI outweighs the benefits. One issue we’re currently facing is that both the costs and benefits are delayed. Another is that computing the cost of externalities is both complex and subject to political and other influence.
But the concept of “too much” is not at all nebulous. It’s the same reason I think certain people have too much money, or certain individuals and countries have too much power.
> His ‘toxic, unrecyclable equipment’ argument is confusing. Is AI computer equipment even more toxic or less recyclable than any other computer equipment?
If a single candy wrapper on your lawn isn't a particularly big deal, does that imply that turning your lawn into the city's next garbage dump is fine?
Our pre-AI computer power use was a problem. The current AI buildout is an unprecedented increase in the scale of that problem.
> They are making bets that eventually they will make more money than they have spent, i.e. that they will deliver value to the world. If they are right, well then the state of the world is better than it was;
What a naive and dangerous line of thought. No, making money does not equal improving the state of the world. You may come up with your own twisted definitions to force that conclusion but it's so, so far from reality.
> Now, one might argue that energy costs do not accurately reflect all externalities, but of course that applies to any energy input, to include that used by Pike’s home, computers and so forth.
Yes, one might and must argue that. And the whataboutism does not help you at all. The fact that your model of the world is also wrong in other cases is just more reason to reject it outright.
> The way we negotiate our differing value functions is through the price mechanism: each of us is willing to pay more for what he values more and less for what he values less.
...and therefore, other people's value functions, no matter how dangerous and bad, must never be criticized?
> If the output state is worth more than the input state, then that process is making the world better off (and vice versa).
Not necessarily if the input is finite and the transformation isn't bidirectional.
Not to mention that enjoying funny salaries from Google puts you at a particularly low ethical ground when discussing "raping the planet" or filling up the digital world with crap (ads).
I completely understand that the email he received is tactless, annoying and provocative. However, I just don't get his argument. How is AI "raping the planet?" How is AI leading to spending trillions of dollars on "toxic, unrecyclable equipment"?
I just don't get it. AI is just the latest symptom in the ongoing human story of technological progress. Claude Code is legitimately a useful and interesting new technology. It seems like a no-brainer that it's going to empower more people to write software and learn new things.
Rob argues that the AI is a "monster trained on data produced in part by [his] own hands" — okay. Thousands of students at thousands of universities every day are learning by reading code on GitHub. Should they be providing attribution to every piece of code they've ever read whenever they go and produce something original?
AI is here to stay, just like the printing press or the industrial revolution. His reaction seems based in fear.
> empower more people to write software
Anyone could already learn to write software before AI.
> Thousands of students at thousands of universities every day are learning by reading code on GitHub. Should they be providing attribution to every piece of code they've ever read whenever they go and produce something original?
These students have carefully studied the code and appreciated how it was written. AI training mindlessly gobbled it up like a tsunami gobbles up houses.
> AI is here to stay, just like the printing press or the industrial revolution.
The printing press made people more educated by giving them access to literature. LLMs are making people less educated by discouraging them from reading content written by people with actual thoughts.
> LLMs are making people less educated by discouraging them from reading content written by people with actual thoughts.
This is obviously incorrect if you spend any time trying to learn a new topic with an LLM. You get answers to your questions that are hard to exactly Google that are right more often than they are wrong. It's an amazing learning tool. Yes you have to be cautious about making sure you really understand, double checking what it says and making sure it's consistent with what you've learned so far, etc. But if you can do that there really has never been a better tool for learning.
One has to wonder how thorough your understanding is from effectively “Cliff’s Notes”. I imagine there’s research here suggesting that learning from summaries is not effective, long term.
IME considerably better - when a learning investigation is driven by my own questions and getting explanations for the parts I find confusing, this builds an internal narrative that sticks. Then once I have a basic idea it’s easier to read docs and “bolt on” new bits of knowledge.
First, there's a trade off. Without the LLM I probably would not have had time to explore these topics thoroughly. Second, summaries are not the only way to use it. You can ask it to generate flash cards, problem sets, search for other resources, recommend books, create a curriculum etc.
This is exactly how (and only how) I use LLMs today. Although I acknowledge that in many areas, they may have more valuable uses than my own limited use case.
> Anyone could already learn to write software before AI.
Yes, and now people have opportunities to learn more things, faster?
> These students have carefully studied the code and appreciated how it was written. AI training mindlessly gobbled it up like a tsunami gobbles up houses.
So? How does this affect whether or not students appreciate something? Who cares whether or not AI appreciates anything?
> The printing press made people more educated by giving them access to literature. LLMs are making people less educated by discouraging them from reading content written by people with actual thoughts.
The printing press made both excellent writing, such as that of Primo Levi, and also garbage such as People Magazine accessible to the public. It's your choice what you choose to read! The same applies to how you choose to use LLMs and how you apply them to your life!
People Magazine is at least true even if you don't care for the subject. The garbage that AI produces is just false. Also, surprisingly to me, and much like Teen Vogue until recently, People Magazine is actually reporting the truth of our political situation, in some cases.
> Yes, and now people have opportunities to learn more things, faster?
Asking an LLM to “vibe code” something for you will not teach you a single thing.
> So? How does this affect whether or not students appreciate something? Who cares whether or not AI appreciates anything?
People write open source software so that other people can study it and learn from it. What’s the point of making something open source if it just ends up being smashed with billions of lines of other code and turned into tasteless statistical smush?
> The same applies to how you choose to use LLMs and how you apply them to your life!
There are studies showing that using LLMs leads to a decrease in brain activity.
How is AI "raping the planet?" How is AI leading to spending trillions of dollars on "toxic, unrecyclable equipment"?
Is that supposed to be ironic? TPUs and NPUs are toxic and usually out of date by the time they make their way into data centers. And even the GPUs end up in landfills and aren't reused. And since this is companies moving money back and forth, this is the goal in its own right. Playing this out as long as possible is exactly what fills investor pockets.
> His reaction seems based in fear.
I'd say so too, fear of where the world is heading with the above.
I don't get the rest of your post; in this context, however, it doesn't make sense at all.
> Rob argues that the AI is a "monster trained on data produced in part by [his] own hands" — okay. Thousands of students at thousands of universities every day are learning by reading code on GitHub. Should they be providing attribution to every piece of code they've ever read whenever they go and produce something original?
LLMs have been shown countless times to be incapable of originality. So have text-to-image GenAIs. There are so many examples, such as a wine glass filled to the brim, that you cannot even hack around.
You'll learn that they aren't actually learning, by simply playing around with any of them for a bit.
Or in other words: using lossy compression on an image or a piece of music (or text, for that matter) doesn't mean your computer learned its essence, even when these tools use statistical methods to simplify. Even scanners do that stuff. Likewise, if you reproduce a logo from a bitmap as a vector graphic from scratch, it doesn't mean you can just skip attribution. It's just not how this works. And using a giant data center to do the same at scale doesn't give you that either.
> AI is here to stay, just like the printing press or the industrial revolution.
You realize that not even Sam Altman seems to be so sure about this. At least that's what internal emails and preparing/begging for a government bailout seem to suggest.
But then... you are right in a way. AI is going to stay. It's been a thing for half a century. That isn't going to magically disappear. That said there seem to be fewer and fewer printing presses though.
However, compare the current hype around a couple of companies moving money in circles and people having fun with chatbots. That's here to stay just like blockchains and NFTs were here to stay. Oh, and the Metaverse. And wasn't Second Life the new internet?
And yes, AI has big uses, but it had great uses ten and twenty years ago too. I remember pattern recognition being utilized to count bacteria in labs. With the hype, a lot of institutions have an easy way to fish for funding when they do literally the same thing: having a machine count stuff. Same thing as OCR.
Somewhere along the line OpenAI jumped in and realized that chat bots (which were anyway a bit of a hype themselves, being slammed onto so many websites) built on ANNs make people excited. I mean, the actually interesting question was whether you could dump huge volumes of text and images into them and they would do cool stuff, and the hardware finally was there. But I'm oversimplifying. Just like a couple of years ago, people realized that combining Hashcash and endless logs (they had a couple of names: hash logs, etc.) made people really excited.
And it's cool and exciting stuff. And fun stuff to work at for a lot of reasons.
But the stuff that is hyped now is extremely on the side of net negative. The fact that it hits at a point of pre-existing economic and societal volatility surely plays a part. And sure, there are uses, from the fuzzy statistical algorithms (what would now be called AI) coloring the black-and-white images that a lot of digital cameras make, to image editing, where a lot of things that would now be called AI have been around for decades. But again, nothing really having anything to do with the current net-bad hype.
Spam is also here to stay, and fun fact: detecting it has also been done with neural networks as well as other statistical methods. So are biological and nuclear weapons, and again, people who had things to do with them apologized, even during the hype. Ransomware too. Just like the industrial revolution, which, fun fact, fueled a significant drop in people's life spans until they realized it's a good idea to wash hands every now and then, and that overall life spans rise a lot when you cut child mortality (so deaths at ages 0-2), when you don't have doctors running around proudly birthing babies with hands still reddened from the previous baby. I bring this up because things hailed like the industrial revolution are also two-sided. The whole idea of "ah, it's fine to burn the world and poison everyone in the name of progress" stopped even before the topic of climate change was on the table.
That means even if the current hype somehow ended up positive, it very much is not now. And it is extremely far away from even helping the economy; again, not even Sam Altman seems to think that. The industrial revolution did not have that issue at all. The productivity gains aren't realized anywhere. The idea that one just has to wait a bit longer has been told for years now, all while progress outside of "let's just burn more resources" and "let's have the LLM write input for LLMs" is nearly non-existent. There is a good chance that what we'll get instead is being stuck at the level of 2023, both technologically (Python, etc.) and societally/ethically, because companies end up feeding the output into the input and people like talking to those LLMs optimized for avoiding anything remotely controversial or novel.
How hard is it to get that semiconductors are quite hard to recycle and that they are toxic?
General-purpose computers strike me as quite easy to repurpose!
Cool. They strike the real world in landfills, like everything else, and it’s worth considering that before making a bunch more.
The critical components in a GenAI datacenter are highly specialized and quite far from general purpose.
But it's not: you can't run old hardware because it consumes too much energy. On an individual level you need to be creative, but the whole industry makes it even harder to repurpose your own general-purpose compute, with things like proprietary firmware, non-replaceable batteries, and so on.
How is AI "raping the planet?"
It uses a lot of natural resources.
They don't take no for an answer in general. They DDoS my git server and steal from artists I like. And then they use what they've stolen to shoot my dog.
Technological progress is great, but what if I don't want them to shove their technology into my future? If I'm not interested, why does it matter how big their technology is?