My AI Skeptic Friends Are All Nuts
41 points by jtdowney
It’s a bit of a tone shift from
Fly.io builds on the work of a tremendous open source infrastructure community. We want open source authors to benefit from their work because we think a healthy, thriving open source ecosystem will help us build better products.
To the fine article:
Meanwhile, software developers spot code fragments seemingly lifted from public repositories on Github and lose their shit. What about the licensing? If you’re a lawyer, I defer. But if you’re a software developer playing this card? Cut me a little slack as I ask you to shove this concern up your ass.
A somewhat bold move when your primary customers are software developers.
Well, seems like the market has shifted and “their best customers are now robots”…
Thanks for that link. Both the content and tone of the fine article make more sense to me when viewed partly as marketing their pivot toward being an AI-tool-chain start-up, especially if it is part of “pre-IPO talk up the company” chatter to help get Wall Street analysts thinking of them as being on the current hype train. Not sure what their costs or income are besides investor money, but they do have well-known investors. And who knows, maybe instead, like most start-ups, they’re circling the drain about to go down, and this is more like a Hail Mary?
I think you should have quoted at least the very next sentence for intellectual honesty (but that’s the only criticism I’ll direct to your criticism of the piece, the rest of my criticisms are to this part of the piece dealing with IP rights):
Cut me a little slack as I ask you to shove this concern up your ass. No profession has demonstrated more contempt for intellectual property.
They might have a point, there, except:
The median dev thinks Star Wars and Daft Punk are a public commons.
are the people complaining about IP violations median devs? (Also, believing Daft Punk is public commons is quite in line with being angry about big corporations violating GPL licenses, for example.)
The great cultural project of developers has been opposing any protection that might inconvenience a monetizable media-sharing site. When they fail at policy, they route around it with coercion. They stand up global-scale piracy networks and sneer at anybody who so much as tries to preserve a new-release window for a TV show.
I mean, is it not possible that non-devs have similar beliefs but don’t have the technical skills required to partake in, for example, removing the protections added in by… you know… other devs? How’s a writer gonna do that? Were there not lawyers who defended The Pirate Bay in court?
I’m not saying lawyers or writers have the same ‘information is free’ attitude that characterises the hacker ethos that a lot of devs have, I’m saying the author of this piece needs to support their assertion.
Call any of this out if you want to watch a TED talk about how hard it is to stream The Expanse on LibreWolf.
There’s a difference between antipathy towards DRM vs. antipathy towards IP legal rights.
Yeah, we get it. You don’t believe in IPR. Then shut the fuck up about IPR. Reap the whirlwind.
Well, no, that doesn’t follow. If we didn’t have IP rights, all public code would essentially be libre. Consider people who use GPL licenses to enforce a lack of IP rights on derivative works by using the framework of IP rights, and who are angry at the very same large corporations, sticklers for IP rights on their [the large corporations’] own works, for (in the GPL licensors’ judgment) breaking the IP rights of GPL works with AI crawlers. They are not angry about their IP rights not being protected; they’re angry about the double standard.
Personally, I’m an MIT/BSD license guy, so this doesn’t bother me much per se [1], but I am angry about that aspect of this whole AI hype on behalf of the GPL people because of the same double standard.
If there was no double standard from massive corporations, I literally wouldn’t care about GPL code, whether from individuals or corporations putting out new GPL projects or GPL contributions, being used for this purpose, for the same reason I don’t give a crap about movie piracy (at least on the side of the pirate consumers; I do have a problem with, e.g., pirate producers who share creative works they agreed not to share when they bought them, if they did indeed sign such agreements).
[1]: Well, apart from the whole giving-credit thing, but I honestly don’t even know about that. If these AI systems become indistinguishable from humans, what’s the difference between humans and AIs learning to code from MIT/BSD codebases and then producing ‘their own’ stuff without crediting their teachers, so to speak? AFAIK it’s not illegal for humans to learn to code from such codebases and then write their own stuff without credit, and either way it seems intuitively fine to me.
Looking at Bluesky, it seems like Thomas’s entire thought process is “I get downvoted on HN when I say media piracy is bad, therefore it’s ok if I engage in plagiarism and no software developer is allowed to criticize me on this”.
Yeah, he doesn’t distinguish between plagiarism vs. piracy or antipathy towards DRM vs. antipathy towards the concept of intellectual property as a whole or make other distinctions.
I mean, is it not possible that non-devs have similar beliefs
Non-devs can have even more strange beliefs! I know of someone who doesn’t like the idea of music piracy, and in fact, supported Metallica’s lawsuit against Napster, but has no problem sharing a Netflix account with someone, to the degree of using a VPN to access it from another state.
Interesting. Sounds strange on the surface, but there may be an internal consistency if you probed them.
For example, would they be okay with sharing a Netflix account with thousands of strangers? And would they be okay with sharing MP3s ripped from legally bought CDs with a few friends online?
Yeah, I am not sure! I switched from “inconsistent” to “strange” exactly because I didn’t want to interrogate them about this further in the moment.
It’s not clear to me what this post is trying to add other than provocation.
Similarly, it’s not clear to me why provocation is necessary.
If we consider the argument one primarily about efficacy, then I don’t know what one gains by being so insistent that these tools work. Why not just let things play out? Surely, it won’t take us beyond the end of the decade to see indisputable evidence emerge to corroborate these claims. If the skeptics turn out to be wrong, then the early movers deserve all the advantages they enjoy from having a head start.
Much like the other post, this one just seems to add more noise. There aren’t any novel ideas or concepts presented here. And, frankly, neither post makes a strong enough argument to justify the brashness. Both are weak writing… from well-known authors.
It’s not clear to me what this post is trying to add other than provocation.
Nothing, that’s the point of the article.
Similarly, it’s not clear to me why provocation is necessary.
Because when you can’t bang on the law and you can’t bang on the facts, you must bang on the desk. The author likes LLMs; they make him feel like a better programmer. So he’s loudly proclaiming that it’s everyone else who’s wrong, burying his head in the sand otherwise.
This wasn’t written for you, this was written for him.
Personally, I would be interested in seeing how these tools may be used more productively.
I am a daily user of LLMs (for tasks related to human language), but have struggled to find ways to use them effectively for programming languages.
Just a few days ago, I tried to ask an LLM some simple questions about the Janet programming language, and it just wasted my time going in circles. I would have been better off searching through the documentation to find the answer (and that’s what I ended up having to do anyway). I suspect no amount of better prompting would have gotten me the right answer, but maybe better prompting would have directed the LLM to tell me that it was incapable of giving me an answer (or maybe feeding or fine-tuning the LLM on documentation may have improved its accuracy). I don’t know.
Just the other day, I tried to ask an LLM for the sequence of command-line commands necessary to accomplish a simple task using tpm2-tools. It must have been the case that the documentation for this library was within the training set, since the answers cited the names of actual commands. I assumed that, as a result, the LLM would be able to synthesise the sequence of commands I wanted from the various examples scattered throughout the many man pages. Instead, it gave me obviously confabulated answers and, when confronted, repeatedly assured me that the next answer would be correct, only to go round-and-round in circles.
Some weeks prior, I was teaching a class on SQL. In our sessions, we discussed approaches in SQL such as the use of window functions or the use of recursive common table expressions. The attendees were given practice problems that posed questions about a data set, where the answers required the use of these features. I asked an LLM to answer all of the practice problems, and included these answers verbatim in the answer guide provided to attendees. Every query the LLM wrote was correct, except the question the LLM answered wasn’t always the right question, and sometimes there were small mistakes borne out of largely reasonable assumptions.
I thought it would be valuable to include the LLM answers, since I know that this is a tool that people writing SQL genuinely rely on, and it ended up spurring great discussion about how one situates an analytical tool like SQL into their actual work. For example, “which employee has the highest salary” seems to be hit-or-miss as to whether the LLM answers with

    select name, salary from employees order by salary desc limit 1

or whether it will write a query that handles ties (or suggest a follow-up in which it writes such a query). From this perspective, even with access to an LLM to write these queries, the attendees of our SQL course may still be expected to understand how to translate a business problem into an analytical query and determine whether said analytical query appropriately satisfies said business problem. (Similarly, I discovered that I could correct the small errors in some of the LLM queries quite quickly, while attendees struggled to spot them. It turns out perhaps there’s still value in knowing what you’re doing?)
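As an illustration of the tie problem, here is a small self-contained sketch (Python with sqlite3 and made-up data; the window function needs SQLite 3.25 or newer):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE employees (name TEXT, salary INTEGER);
        INSERT INTO employees VALUES ('Ada', 120000), ('Grace', 120000), ('Linus', 90000);
    """)

    # The query the LLM tends to reach for: with a tie, one top earner is silently dropped
    # (which one is arbitrary, since the order among ties is unspecified).
    print(conn.execute(
        "SELECT name, salary FROM employees ORDER BY salary DESC LIMIT 1"
    ).fetchall())

    # A tie-aware version using a window function: every top earner is returned.
    print(conn.execute("""
        SELECT name, salary
        FROM (SELECT name, salary, RANK() OVER (ORDER BY salary DESC) AS rnk
              FROM employees)
        WHERE rnk = 1
    """).fetchall())

    conn.close()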
I would hope that even the author of this and the other post would be able to find some sympathy for my skepticism of these tools. I’m sure I am not using these tools very well, and I’m eager to understand what I’m doing wrong.
But… I don’t really need a tool that can solve problems that I can already solve, especially if they can’t even solve these problems to a particularly high standard.
In each case, including a bunch (as in thousands or tens of thousands of tokens) of the actual documentation in the prompt would likely improve your results a lot. I use a CLI that lets me pipe text to an LLM through stdin. It’s better to think of them as processing text than as answering a question (a question is merely a very short text that is processed by answering it). The chat UIs are misleading on this.
You are correct that the model itself only has a limited representation of its training data. Asking questions directly, especially about less popular tools, is not going to work well unless you’re using a tool with web search that can look up the documentation. But you can skip the search bit if you already know what body of text is relevant. They’re very good at extracting answers from text.
Here’s a blog post about a short script I wrote that uses LLMs to pull answers from a directory of text files: https://crespo.business/posts/llm-only-rag/
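For illustration, a rough sketch of that pattern (Python; the "llm" command is a placeholder for whichever stdin-reading CLI you actually use, and the directory and question are made up):

    import pathlib
    import subprocess

    docs_dir = pathlib.Path("docs")  # e.g. a dump of the relevant man pages or guides
    question = "Which commands do I chain together to seal a secret against PCR values?"

    # Stuff the actual documentation (thousands of tokens of it) into the prompt.
    context = "\n\n".join(p.read_text() for p in sorted(docs_dir.glob("*.txt")))
    prompt = f"{context}\n\nUsing only the documentation above, answer:\n{question}"

    # Pipe the whole thing to an LLM CLI via stdin; substitute your own tool here.
    result = subprocess.run(["llm"], input=prompt, capture_output=True, text=True)
    print(result.stdout)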
Well, ONE thing that the post added was to reach someone (me) who did not share the same understanding of HOW LLM-based tools are effectively used in programming.
I’m a full-time serious programmer, but the company I work for does not permit experimenting with LLMs for development other than the company-approved tooling used in the company-approved manner. (Feel free to laugh – it’s big-corp thinking: what if something caused us to get in legal or regulatory hot water?) And I haven’t done the kind of agentic AI usage that the author describes in this article.
So most of my doubts about the limits of AI for coding at the present moment DID fit EXACTLY into the myths that Thomas (the author) is trying to bust: “it’s little more than really smart code-completion”, or “this will work on straightforward stuff, but not for the bits that require creative problem solving”. Thomas painted a picture of a very particular way to use these tools which made sense to me.
You seem a bit put off by the “provocative” writing style… while I probably wouldn’t have chosen that particular style, I didn’t find it off-putting or difficult to understand, just quirky. And it helped make the reading interesting. (It is, for what it is worth, a choice of writing style I would only expect from a human, not an LLM.)
it’s big-corp thinking: what if something caused us to get in legal or regulatory hot water?
I’ve suffered through pointless bigco red tape too, but… I think we should normalize the idea that companies of all sizes should try to follow the law, and that it’s bad when they don’t! Society benefits when companies pause more often to ask, “would it be illegal to do the thing we were about to do?” The Overton window seems to be veering toward the idea that companies are allowed to do anything they are able to do, but let us not take that as a foregone conclusion.
That’s a really good point. I appreciate the fact that my company is trying hard to “do the right thing”. I’m not sure if their current policy perfectly nails that point, but you are right: at least they are trying.
I am happy to answer questions about that, if you have any.
glad I’m not the only one feeling like the discourse has gotten dramatically worse since his blog post made the rounds
It definitely has, and our hard-working moderators aren’t helping either. It’s been quite interesting to watch.
You’re not the only one. It feels like it has had the opposite effect of what I was trying to do. That’s on me, I guess.
I felt disappointment reading the article.
I used to look forward to Fly’s and tptacek’s writing, which I lauded for its technical depth and transparency. In contrast, this article is a remarkable shift in tone, intent, and rhetoric. So much so that I actually double checked to make sure I was in fact looking at Fly’s blog.
For what it is worth, I’m not saying this is not a subject I’d expect in their blog. I just expected that this would have been written differently in terms of argument and form. As is, it feels out of place in Fly’s blog.
For what it is worth, I’m not saying this is not a subject I’d expect in their blog. I just expected that this would have been written differently in terms of argument and form. As is, it feels out of place in Fly’s blog.
For what it’s worth, the author appears to agree that it’s out of place on the company blog.
I’d guess that there’s something of a compromise here, where this is the piece (rant) he wanted to write, and the CEO wanted a piece on the topic on the blog, and thus this happened where a piece with a non-corporate tone was published by the corporation.
Difficult to get past the 13yo-trying-to-be-cool tone of voice. Surprised that such a piece ends up in a corporate blog, especially with the rage bait. I will think twice before considering fly.io for a project.
In fact, Thomas admitted to not caring about convincing anyone on Bluesky, and that this post is purely an angry rant, but the CEO of Fly.io insisted that they put it on the company blog rather than on Thomas’s personal blog.
Is it just me or did CEOs seem to have more (at least superficial) decorum in the past?
CEO seems way more of an identity than a title these days, thus necessitating more signaling that they “get it.”
“Do you like fine Japanese woodworking? All hand tools and sashimono joinery? Me too. Do it on your own time.”
Somewhere between garbage code and perfect code is acceptable code. Despite LLMs tending towards the average, I would like more good code in my life. All trends seem to be pointing to software getting much worse.
This article appears to ignore most externalities of widespread coding assistant usage, including deskilling people.
I expect better from the author, but I’m not surprised.
The code I write with LLMs is often better than the code I write without them. LLMs are less likely to ignore an edge-case or fail to catch an exception by name. The tests I write with LLMs are often better because they counteract my natural laziness.
I think it is fine that you know your limits like this and are not afraid to say it. I know that you also have posted about how you have to verify the outputs of all of the code that you produce with these models. I think that is untenable for industry, however. I don’t think we should promote the idea of replacing expertise, because it is also not possible.
It’s weird to read that as someone who barely uses LLMs. From time to time I try out using them for math, and the ones I was using (whatever’s on duck.ai) tend to hallucinate proofs that only look valid, but that do indeed fall apart on edge cases.
It’s been a while since I last tried this, they almost definitely improved, and math isn’t programming - but I just… don’t see myself trusting them here either? If you already know all the edge cases and you don’t handle them out of laziness, then I suppose this can indeed help. Is that what you’re referring to?
You may have stumbled across one of their great weaknesses: LLMs have been notoriously bad at math for most of their existence. They can’t even handle long multiplication.
This has supposedly changed in the past six months with the rise of the so-called “reasoning” models - o3, R1, Claude 4, Qwen 3. I’m not enough of a mathematician to have formed an opinion on how well this actually works though.
Code is odd because while it feels like it should work like math, it turned out to be more of a language problem. Plus the amount of training data they have for code means that they have a lot of useful examples they can imitate!
This has supposedly changed in the past six months
And (as you know) with the rise of MCPs, which let LLMs invoke tools that do the things the models are bad at, and do them well.
This article appears to ignore most externalities of widespread coding assistant usage, including deskilling people.
I interpret the author’s comment about woodworking to be addressing the risk of deskilling: namely, the skill is not as important anymore. To be clear, I enjoy writing and using good software, so it would be sad if that came to fruition.
This isn’t to say there isn’t any space for quality software: consider cryptographic libraries, pacemakers and nuclear control systems.
It pains me to write this (as I’m personally bearish on LLMs) but I recall arguments in the late 80s/early 90s about writing applications in assembly vs. a higher level language (like C). The compilers weren’t good enough; they deskill people in knowing how the computer works; it’s an important skill to know. These days, knowing assembly is a niche skill.
I’m going to go find a good niche for the next decade or so, but besides that: I think this is different from “knowing assembly”. This is about not knowing anything. You can see it in the students that can’t complete coursework without leaning on LLMs.
I am really worried for the current and next generation of junior engineers.
You can see it in the students that can’t complete coursework without leaning on LLMs.
Said Socrates about his students, but about books! How do you learn anything when you don’t have to memorize it? You’re relying on thoughts of others, not your own thoughts! Or so his argument ran. Again, I hate saying this as it goes against my wish for this to be different this time.
This ain’t books, this is putting books in a woodchipper and hoping to get new books out the other side.
If we spend less time writing code, maybe we’ll spend more time writing specifications (“guardrails”).
A dependently typed language like Lean can express and statically check contracts that implementations have to uphold. A model trained on Lean functions and proofs will have its slop rejected by the compiler, until it golfs its way to correct code.
Then the programmer’s job is to write theorems. (Or to align the model’s suggested theorems with the actual requirements.)
If this actually works, it probably won’t happen in the typical web dev project. But it could bring down the cost of formal verification enough to make it viable for more than just cryptography and nuclear control systems.
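To make the idea concrete, here is a toy sketch in Lean 4 (my own example under the parent’s assumptions, using only core Lean): the theorem is the “guardrail”, and any implementation a model proposes has to get past the checker.

    -- A toy "specification as guardrail" example in Lean 4.
    -- The implementation could come from a model; the theorem is what it must uphold.
    def myMax (a b : Nat) : Nat :=
      if a ≤ b then b else a

    -- The checker rejects any rewrite of myMax for which this proof no longer goes through.
    theorem myMax_ge_left (a b : Nat) : a ≤ myMax a b := by
      unfold myMax
      split
      · assumption
      · exact Nat.le_refl a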
I’ve been saying for a while that while AI is good at understanding natural language, it could be even better at understanding a more focused subset, specifically targeted at software specification.
I had a course at university titled “Formal Software Specification”, it was teaching a specification format/language somewhere between UML diagrams and pseudo-code. The course was boring as hell but it might be useful in the end.
AI is good at understanding natural language
No. It isn’t. This is a lie, although one so widely copied that I am not accusing or blaming you for repeating what is given as truth.
It is not good at understanding. Anything, including natural language. It cannot understand because it cannot think.
It is good at taking in natural language and producing something that convincingly looks like natural language – or code or images – as output.
It generates output by remixing input and if that input looks good the output can look good too. But it does not understand anything at all.
This is a vitally (as in life-and-death) important difference.
Okay, my bad for not being good enough myself at natural language. I meant it as “processing natural language” but wrote it in a more casual tone.
+1, and even lighter formal methods, like property testing (e.g. QuickCheck).
Only on rare occasions do I end up actually writing property tests or fuzz tests, but as you say, they’re potentially useful as a guardrail, and could be input into LLM code generation for the implementation.
As an aside, one of the few times I have reached for an LLM’s assistance for coding was for playing with Alloy.
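As a sketch of that guardrail idea, here is a QuickCheck-style property test using Python’s hypothesis library (the function and properties are invented for the example); the same properties could constrain LLM-generated implementations just as well as hand-written ones:

    from hypothesis import given, strategies as st

    # Toy implementation; imagine it was generated rather than hand-written.
    def dedupe_sorted(xs: list[int]) -> list[int]:
        return sorted(set(xs))

    # Properties checked against many randomly generated inputs (run with pytest).
    @given(st.lists(st.integers()))
    def test_output_is_sorted_and_unique(xs):
        out = dedupe_sorted(xs)
        assert out == sorted(out)
        assert len(out) == len(set(out))

    @given(st.lists(st.integers()))
    def test_preserves_elements(xs):
        assert set(dedupe_sorted(xs)) == set(xs)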
Sure, but if all that was available was Ikea flatpack made out of particle board, I’d be pretty upset.
I’d like to think I still have a career in software outside of pacemakers and cryptographic libraries.
I would be sad too.
If LLM code assistants became as good as some are suggesting, I think the options are to either use them, to keep your advantage, or to not use them, and face job competition from lower skilled people that do use them.
I see this as similar to what Unix admins have experienced with the rise of cloud providers.
I honestly think we’re on the path to good code, but perhaps a couple of iterations away. Not related to coding, but the AI video of Will Smith eating spaghetti is a great example of where we were and where we are today. It’s only getting better.
The article does nothing to convince me of anything other than that being a ruthless asshole can work for some people, but I knew that already.
People keep claiming extreme productivity while on ketamine - I mean while coding with an AI agent, but the claims are rarely backed up, and I can’t help but be extremely skeptical when such claims are also beneficial to an entire industry which has invested enormous amounts of money into these agents. Show me the money, Lebowski. When I have seen people rely on coding agents, the results have not impressed me, and the productivity, once you factor in time spent prompting and then correcting and dealing with the results, does not seem higher to me. Why are you trying to shortcut away the part where you learn what the actual problem you want to solve is?
There is one part that I can buy, and that is using the LLM for initial log analysis - I can see how that might actually work. I also don’t think it’s worth all the negative externalities.
The negative externalities are completely ignored or pooh-poohed. It’s easy and lazy to claim that progress always crushes people beneath its wheel and so we should just let it happen, but it’s also historically ignorant. Most of the good things we have in modern society came about because common people rose up against that wheel and refused to just be crushed: shorter working hours, worker protection laws, universal voting rights, the end of slavery, and so on. What we are seeing in the world today is those gains being rolled back in the name of extractive efficiency, and this article just stinks of that attitude, of a rich asshole who is annoyed at his “friends” who are not happy about feeding the poor into the grinder. LLMs are not economically viable without extreme levels of economic inequality, exploitation and disregard for the rights of those around you.
All of that aside, I still don’t think they actually work that well. This article doesn’t convince me.
The article does nothing to convince me of anything other than that being a ruthless asshole can work for some people, but I knew that already.
Indeed, being a ruthless asshole is the only thing that is really rewarded in our society. You can boil the article down to “If you’re not being a ruthless asshole all the time, you are leaving money on the table.”
LLMs are not economically viable without extreme levels of economic inequality, exploitation and disregard for the rights of those around you.
Don’t give in to the hype by polarizing to the other extreme. We run massive numbers of computers for all sorts of things already, consuming lots of water and electricity. There is a lot of hype right now and a bubble, but it is not some special climate killer. The bubble will pop just as they always do, and what we will be left with is an equilibrium between resources spent and how much customers are willing to pay, just like any other business.
I’m not talking about power consumption alone, although I think it is a massive problem that we are worsening the climate crisis to power technology whose main use case is to generate spam.
I’m talking about human exploitation: https://www.cbsnews.com/news/labelers-training-ai-say-theyre-overworked-underpaid-and-exploited-60-minutes-transcript/
I’m talking about ignoring copyright and training on illicitly acquired data sets, something that is only possible when you are big enough to be above the law: Only possible thanks to systemic, societal inequality.
I don’t think it relies on that exploitation at all. If those workers weren’t available they’d be paying the next most expensive ones instead and it would still get done. I don’t see any evidence that the amount of money you’d need to pay non-exploited labelers is such that it would make LLMs impractical. The internet is filled with tons of labeled data already. Would you say the same thing about clothes in the developed world, the only sustainable path is nudity because garment manufacturing is disproportionately done by poor countries? You can argue LLMs would cost more, but they would likely still exist.
Re: copyright I think it will be interesting to see how it plays out in the courts but early indications are they are in the clear for a lot (but not all) of it. Not for illegal acquisition, but for fair use. But even if they lose that entirely I think you’re still mistaken – every publisher and social media site is going to want the extra revenue stream of licensing their data for training, unless they are making their own competing bot, and they will have terms of service that let them do it. It’s not going away, it’s here to stay.
Well, as to the first part I simply disagree, I don’t think a company like OpenAI could exist without the extreme wealth concentration we have today.
When it comes to copyright and whether they’ll get away with it, I am completely certain that they will. I can’t even imagine what not getting away with it would look like.
Would you say the same thing about clothes in the developed world, the only sustainable path is nudity because garment manufacturing is disproportionately done by poor countries?
Um. Buying fewer, more equitably-produced clothes is an option, you know. But also, there is no ethical consumption under capitalism, so I am forced to agree that nudity is more sustainable.
Where I live, wearing clothes is necessary for survival. Fortunately, using “AI” is not. Therefore I can and do choose not to use “AI” tools due to the ethical issues.
I think you are clearly making a false equivalence and I feel that your post is made in bad faith.
Buying fewer, more equitably-produced clothes is an option, you know
That’s exactly my point! The fact that garment workers are currently underpaid doesn’t say anything about whether everyone wearing clothes is sustainable. Pointing out that the workers doing the labeling are underpaid doesn’t say anything about whether LLMs are sustainable, just whether they should be more expensive so that those workers can have better wages.
every publisher and social media site is going to want the extra revenue stream of licensing their data for training,
(my emphasis)
Right. “Their data”. Not data/content provided to them by their users. Just an extra layer on the enshittification cake.
There’s an expectation that the data will just continue to be accessible, free, and abundant, like it has been up to now. There’s absolutely no conception of the risk that users will stop providing high quality original data, especially as the platforms themselves are encouraging (almost forcing) the use of LLM tools to author stuff. Notably absent is any sort of deal for users, other than getting more LLM-generated ads and content in their feeds.
There’s no risk because as long as people are willing to pay for the models eventually the companies will just license the data and the companies generating it (or who have users who are generating it) will get paid, so they will want to do it. That’s the point. From a business perspective, there is no sustainability problem, and the precedent so far is users have been happy to sign over their data in exchange for services (this is Google’s entire business model, it’s pretty well proven).
There’s no risk because as long as people are willing to pay for the models
You mean as long as VCs are willing to subsidize them. OpenAI is losing money on every query. If we’re heading into a recession, one of the first things people will look at cutting is the $20 a month to generate custom fanfic.
From a business perspective, there is no sustainability problem
Business Insider is losing massive amounts of revenue because Google is sending them less traffic, because Google’s LLMs are summarizing the content instead of passing through the search. The shortfall will be met by firing humans and using other LLMs to generate content.
From an LLM company’s perspective, stuff like news is gold because someone has generally done the footwork of ensuring some sort of accuracy (in this case they’re acting as oracles for truth). Mix more and more confabulation into the source, and they will get less and less quality data.
so far is users have been happy to sign over their data in exchange for services
My emphasis.
Seems like the article doesn’t actually attempt to refute the “plagiarism” complaint.
Right, because you can’t really refute that. If your argument against LLMs is that they were trained on vast amounts of copyrighted data without the consent of the authors then yes, that’s true.
…so if LLM users agree with this argument, why keep on using them? Because everyone else does? The ends justify the means?
Maybe we don’t agree with copyright and intellectual property rights?
I strongly believe knowledge, art, and culture are meant to be freely shared. Entirely, totally, 100% freely. This is one of the most important aspects of society, one that in the long run benefits everyone.
I don’t agree with preserving Disney’s copyrights. I think corporations have abused the social contract to extend their monopoly, freedom and liberty is dependent upon sapping away their power. Money should not buy a monopoly on an idea.
I’m an artist. I create music and I play the music composed by other people on my instrument. I pay talented people to play their music live. I’ll continue to pay artists to create things. I also like generative arts and some creative exploration of latent spaces aided by computers. I don’t think using an AI to help explore that space is that much different in kind, only in degree. I think the human perspective will have to be radically opposed to AI outputs to speak to people, so I don’t fear AI art. I don’t give a shit about “art” used in communication by corporations; it was and always has been meaningless slop at best, or propaganda.
However, in the current situation with LLMs, it’s the corporations that get to use copyrighted content for free, while you as a user don’t. Disney will still sue you if you share its IP. So it is a lose-lose situation for users, where they don’t get to claim copyright on their work while giant conglomerates do.
While I am generally in favor of vastly reforming copyright laws, LLMs do nothing to further the cause you stand for and actively make it worse.
For me, it’s rather less about ideology or philosophy and more about legality. If you “don’t agree” with the law, you can choose to ignore it or perhaps subvert it. Those risks are yours to take. But when you defend powerful and wealthy corporations taking a similar attitude toward inconvenient obstructions in their desired capital flows… well, I’m less sympathetic. They’re more likely to get away with it, but the stakes are higher, and the potential collateral damage is greater as well.
If we’re talking about software, it’s not about Disney’s copyrights, but licenses. If your code assistance tool coughs up a chunk of recognizable but unattributed GPL-licensed code into your project, putting you at legal risk, is that a risk that you can pass on to the tool maker? They trained it that way, after all.
I agree with this perspective, and an ideal outcome of the legal dispute in my view would be the dismantling of intellectual property law.
I don’t think that’s a likely outcome though, and until that comes to pass, I’m not happy with Google getting to ignore my copyright on code used as training data while I don’t get to ignore their patents.
My argument wasn’t about law, but ethics… I dislike the extent of copyright laws and how they’re enforced. That doesn’t mean I think it’s fine to rip people’s work off with no credit - and the author themselves admitted LLMs are prone to do that:
Meanwhile, software developers spot code fragments seemingly lifted from public repositories on Github and lose their shit. What about the licensing? If you’re a lawyer, I defer. But if you’re a software developer playing this card? Cut me a little slack as I ask you to shove this concern up your ass.
Obviously, the cases where the plagiarism is obvious will be very rare - for most problems people use LLMs for, there will be much more than one relevant codebase in the training data… but that just makes it worse! If I take thirty FAT32 implementations and copy-paste some code from each, I’m not doing any less plagiarism. I just made it even harder to properly credit all the authors or to detect the plagiarism in the first place.
I don’t think using a statistical model to do the dirty work for you makes it different.
Before the current iteration of AI, the anti-intellectual-property conversation didn’t erase the artist. Open access papers still get citations, which not only give you recognition for your work, but for better or worse can also shape a career. I pirate a lot of music, but I also buy from smaller artists on Bandcamp, probably contributing way more than people who use legal streaming services.
Now? We just take people’s work, and don’t even give them the bare minimum of a margin note about the original authors. This feels as if someone asked the Monkey’s Paw for copyright abolition, and it happily obliged.
I agree with you on this, but I will note that I don’t think LLMs changed much on the attribution front. We’ve always been terrible at attribution except when systems are in place to encourage it (like academic citations), I can’t recall seeing anyone ever give attribution for code they lifted from Stack Overflow, despite Stack Overflow posts being CC BY-SA licensed, and so definitely requiring attribution.
That’s not to say we should give up on attribution, though we maybe need to rethink how we approach it, as attribution enforcement through copyright hasn’t been working for a while. It’s telling that the one place I can see with a working approach to attribution (academic citations) does not enforce it with intellectual property law, but rather exclusion by institutions and by one’s peers. In the cases where plagiarism is legally punished, it typically comes under fraud, rather than copyright violation.
I will note that I don’t think LLMs changed much on the attribution front. […] I can’t recall seeing anyone ever give attribution for code they lifted from Stack Overflow, despite Stack Overflow posts being CC BY-SA licensed, and so definitely requiring attribution.
I’m pretty sure I gave attribution to SO answers a few times. I also wouldn’t really expect (competent) senior devs to steal code from there. People get shunned for this, and they learn to give attribution.
With LLMs, you just can’t do this. There is no information about where the code came from, so even if you mean well, you can’t give proper credit. This is fundamentally different - we went from people not knowing (or not bothering) to give credit, to it being literally impossible to do so.
There isn’t room to “rethink how we approach [attribution]” if we have nothing to go off.
Even copying quite minor bits of code from Stack Overflow, I at least add a comment with the URL of the answer. Usually this is less because I feel I need to give attribution, but because if I’ve gone to Stack Overflow, it’s because the documentation is absent or wrong, and/or a library bug needs worked around, and I need to explain why I’m doing something undocumented or apparently unnecessary. But it does also provide attribution.
“rethink” was not meant in the context of LLMs, I am with you in that I don’t think this crop of AI tools is defensible from any angle.
I’m pretty sure I gave attribution to SO answers a few times.
It’s great that you do this, but it doesn’t change the fact that the system isn’t working. Even in your case you do it out of morals, integrity, and fear of being shamed by your peers, not because copyright law requires it. I think there’s something to be learned from that.
Same reason I still eat meat even though I understand the very strong ethical arguments against doing so: the benefits I get from using LLMs have convinced me to set aside my ethical concerns about how they were trained.
Other people will make different decisions here, and I respect that.
From the discussions I have observed yes. Either that, or “with the amount of money involved, all will be cleared of wrongdoing by the judges post-hoc”.
Bona fides: I’ve been shipping software since the mid-1990s.
So you refactor unit tests, soothing yourself with the lie that you’re doing real work.
what
an LLM can be told to go refactor all your unit tests.
the llm can’t judge whether people can read the tests and understand what the test was trying to prove and whether or not that claim was effectively and clearly proven, which is usually the reason you refactor a test.
Are you a vibe coding Youtuber? Can you not read code? If so: astute point. Otherwise: what the fuck is wrong with you?
people who know how to code were once people who didn’t know how to code
even the most Claude-poisoned serious developers in the world still own curation, judgement, guidance, and direction.
who amongst us has not received a terrible PR, asked for clarification, only to have the submitter respond back “i dunno that’s what gpt wrote”
Professional software developers are in the business of solving practical problems for people with code. We are not, in our day jobs, artisans.
I mean, sure, if you think that maintainability doesn’t affect the bottom line and overall cost structures and the ongoing velocity/acceleration of the pace of development.
but they take-rr jerbs. So does open source. We used to pay good money for databases.
Talked to a travel agent lately?
I’m pretty sure these are gonna come back, because LLMs have made the internet so unusable and expedia et al take a deeper cut than the commission that travel agents earn on hotel stays; it’s why fora.travel is a thing.
Or a record store clerk?
full disclosure: I work at Whatnot, but I buy vinyl from people on Whatnot; they’re basically record store clerks on my phone.
I dunno. This post doesn’t convince me of anything, it’s not brusque because it’s punching up, it’s not making any particular point other than “I like using them”, it seems like hostility is the goal and not the style of the argument.
Are you a vibe coding Youtuber? Can you not read code? If so: astute point. Otherwise: what the fuck is wrong with you?
people who know how to code were once people who didn’t know how to code
I’m not a vibe-coding YouTuber, but this is the reason that I turned off Copilot after over a year of giving it a chance. I spend a lot of my time reviewing code. I know the kinds of mistakes I make and the kinds of mistakes more junior people make. They are broadly in two categories:
LLMs make different mistakes, and their mistakes look like correct code. At the core, an LLM is a device that produces plausible output. The two easiest things to review are code that is obviously wrong and code that is obviously right. LLMs produce code that looks like correct code. It may be correct code. It may even be correct code most of the time, but when it isn’t it is really hard to spot the bugs.
The hardest debugging I’ve had to do in the last couple of years was in LLM-generated code. It cost me far more time than the total saving in typing time (hint: if typing is your bottleneck, you have built poor abstractions) that it saved in a year of use. It had transposed a 1 and a 0 in two bit fields communicating with a device. Due to how this was used, it almost worked, and looked as if it worked in unit tests but then failed in integration testing.
I suspect this is why all of the large-scale studies on LLM use in programming are showing either a negligible increase or a decrease in productivity. They’re great at making you feel productive, but there are a load of well-known results in HCI research that show big deltas between perceived and actual time performing tasks. Feeling more productive and being more productive are very different things. Ironically, the article explicitly calls that out, yet presents no data to show that LLM use is actually making them more productive.
Bona fides: I’ve been shipping software since the mid-1990s.
I recently found out that a pet peeve of mine is people using “argument from authority” incorrectly. Quoting that Wikipedia page:
An argument from authority is a form of argument in which the opinion of an authority figure (or figures) who lacks relevant expertise is used as evidence to support an argument.
That’s not what Thomas is doing here. He’s not saying “this expert over here who has no relevant expertise says X hence X is true”. He’s saying “I am an expert with relevant expertise, which is why you should listen to me”.
That’s not argument from authority.
see the section on the inductive form:
When used in the inductive method, which implies the conclusions cannot be proven with certainty,[20] this argument can be considered an inductive argument. The general form of this type of argument is: Person(s) A claims that X is true. Person(s) A is an expert in the field concerning X. Therefore, X should be believed.
It’s not exclusively saying authority in a legal sense, it’s also using authority in an authorial sense, as in “the author of a book is the authority on its meaning”. The core of the fallacy is that it suggests that the argument’s validity is based on the identity of the person making the argument rather than the content of the argument itself.
Thanks, I apologize: that fits how you used it here. Clearly I should have read past the first paragraph on Wikipedia!
it’s sorta confusing the way the article is written because the first section appears to contradict the second, but I think what’s going on is that the writers are using “expert” and “authority” in largely the same sense, because they’re using the terms from a sociological rather than technical or scientific viewpoint. I’m not a huge fan of the way the article is written. I’ve always taken it to basically be “it’s like ad hominem, but instead of dismissing an argument because of who’s making it, you’re admitting an argument because of who’s making it”. At least … that’s how I understand it and how I mean it in my post. Language is hard sometimes!
I don’t disagree the widening here is unfortunate. And while I wouldn’t have used it, I think there’s room to disagree here. The author says he’s been shipping code for years. But then he doesn’t back up that claim with any reason to believe that code is code that people would actually want to ship.
e.g.
I own a large sea based tourism company, so you should trust that I know what I’m talking about when it comes to sea based travel.
Becomes an argument from authority when that sea based tourism company is OceanGate.
I’m not normally one to default hide things I find objectionable. I have a deep aversion to vibecoding, but I’d never willingly hide stories about it from my visual space, because human nature is to want to live in a comfortable bubble where your ideas never get challenged. I feel resisting that is a good thing for people who want to expand their usable context frame, as much as I try to, or claim I try to.
This article has made me reconsider if it’s not better to hide the tag.
[imagine the “Everyone is now dumber” video from Billy Madison here]
I wonder exactly how much of the negative discourse is really just articles like this, where people who care about things are figuratively given the finger by people who don’t care about things (other than the line going up at all costs, I guess…). This article didn’t convince me that LLMs provide positive value, but it clearly did lower my opinion of those who praise LLMs. It’s frustrating to see people unzip and take a shit on things that I would say actually add positive value to the world.
I feel so much 2nd hand shame for Simon Willison; I’d feel so awful if I was the single person referenced in an article as toxic and intentionally provocative as this one…
where people who care about things are figuratively given the finger by people who don’t care about things
I think it’s not that they don’t care about things, but that they care more about the results than the process. What I personally dislike about the article is the whole “if you don’t find LLMs useful, then you’re doing it wrong. Or if you haven’t tried it in the past 20 minutes, you’re out of date!”
I think it’s not that they don’t care about things, but that they care more about the results than the process.
I disagree with the premise that’s required to make this make sense. I can’t parse your comment without the assumption that you can separate what you’re trying to call process, from the thing with value. Allow me to make a comparison I admit poorly applies, and is probably unfair.
Your friends have a child, there’s this new AI agent that takes the newborn, and returns them as a small child ready to start school. This is obviously better right? It skips all the sleepless nights trying to raise a newborn!
As a mid-late career coder, I’ve come to appreciate mediocrity. You should be so lucky as to have it flowing almost effortlessly from a tap. […] Developers all love to preen about code. They worry LLMs lower the “ceiling” for quality. Maybe. But they also raise the “floor”.
I disagree; I’ve never seen an LLM raise the floor. But to pre-refute the comment I’ve seen elsewhere in the thread, from simonw: I don’t define the quality floor by the number of tests, or edge cases handled. I grade the floor quality by the attention the engineer put into the code. I would grade a repo with 100% LLM-generated coverage from someone willing to unironically praise mediocrity significantly lower in quality than one with 60% coverage by a single developer who gives a shit.
That’s the thing I care about; it’s not just “process”. Mediocre code is so far below my standard for “good enough” that I don’t understand how those who feel “lucky” can make it out of bed under the crushing shame they surely must feel. Again, this isn’t process. This is the actual value of the output.
Can you make money selling mediocrity? I mean, LLMs exist, so obviously you can. But in every other, non-LLM thread, suggesting it would get you roasted for advocating for the continued enshittification of everything. A desk from IKEA is clearly superior to the hand-crafted hardwood one from a local woodworker who’s spent years caring about, and taking pride in, what he makes. No no, not that one, that guy went out of business. And besides, the hardwood one is heavier and more expensive… and while maybe it would last a few decades, the one from IKEA only costs $20/mo. You’d be insane to pay $1k for a desk! Just get the one everyone else gets, and then you can upgrade to the next one in a few years!
There’s a quote I really enjoy from an episode of House that captures my feelings for the pro-LLM crowd.
You wanna kill yourself? Fine, but stop recruiting!
I get it, you don’t care about quality; the line goes up, so you’re happy. Everyone else are the people who are wrong. How dare they care about something different than line go up! I mean… sigh
But saying the code generated by LLMs is what people should aspire to, that it’s code they should feel proud of shipping, is the most demoralizing meme I see repeated by people who I would have hoped knew better… It’s just so… sad.
But saying the code generated by LLMs is what people should aspire to, that it’s code they should feel proud of shipping, is the most demoralizing meme I see repeated by people who I would have hoped knew better… It’s just so… sad.
AI seems to have thrown gas on some low-key, long-running psyop on software engineers. I’ve never seen such a disdain for craft, esp on the orange site. Mind you, people have often acted this way, but perhaps didn’t feel like they could voice it. What really puzzles me is this sentiment comes in part from technologists themselves (supposedly).
Though I don’t know how much I buy the “you must use AI” argument. I’ve built a career out of doing good work in a fairly quick fashion on the whole. The speed and quality never made it easier for me to be promoted; I’m not sure most places can accurately judge that, hence the giant ceremony around promotions at bigger companies. Thus, there is a wide band of variance WRT quality and speed. It’s possible the floor is raised, but I can’t see anything other than it reaching some sort of new equilibrium eventually as the lack of domain understanding catches up with the excessively-AI-assisted developers.
AI seems to have thrown gas on some low-key, long-running psyop on software engineers. I’ve never seen such a disdain for craft, esp on the orange site. Mind you, people have often acted this way, but perhaps didn’t feel like they could voice it. What really puzzles me is this sentiment comes in part from technologists themselves (supposedly).
I don’t think it’s right to call it “craft”. I think I get the point you’re trying to get at, but let’s apply that same argument to a surgeon. Wouldn’t ‘craft’, in that case be the number of revision surgeries needed to correct for the mistakes in the first attempt? I understand the common retort, “software engineering is different” and “iteration is how you’re supposed to write code”. But I don’t find either compelling. What’s the cliche, If I was given 5 hours to cut down a tree, I’d spend the first 4 sharpening my ax.
Today it’s: I reviewed this AI generated code so it’s just as good. Which is the same as saying, I don’t need to take notes, I already read the chapter. Both sound great until you get to the test, or until prod goes down.
This isn’t craft, this is basic competency.
Though I don’t know how much I buy the “you must use AI” argument.
Well, it’s one of those things that’s true if you believe it’s true. I failed an interview this past week and one of the reasons given is that I “wasn’t fast enough”, followed very quickly by the question “didn’t they tell you, that you could use AI?” Ahh, sorry, I mistook that as an option, rather than a hint about the unstated requirements. Oops!
It’s possible the floor is raised, but I can’t see anything other than it reaching some sort of new equilibrium eventually as the lack of domain understanding catches up with the excessively-AI-assisted developers.
The floor isn’t raised, it’s just the bar that’s lowered. It lowers it for the set of people who can ship code that appears functional. Adding a random distribution that matches the input sample back into the same input doesn’t raise the floor. It sharpens the bell curve. The tails are still there, they’re just slightly harder to see. (Which I assume also means that people will start to lose sensitivity to them.)
I can’t parse your comment without the assumption that you can separate what you’re trying to call process, from the thing with value.
Perhaps if I phrased it as: some people like the journey, while others prefer the destination. While a finished program is important, I do enjoy the process of programming. I suspect that the author of the article views programming as a necessary evil to get the results he wants.
Your friends have a child, there’s this new AI agent that takes the newborn, and returns them as a small child ready to start school. This is obviously better right?
We have that now, only they’re called “nannies”. So yes, I do think some people would love this. And then once they get the child back ready for school, the children are shipped off to boarding school.
Can you make money selling mediocrity?
Just look at McDonalds. A quote I have: “Consistent mediocrity, delivered on a large scale, is much more profitable than anything on a small scale, no matter how efficient it might be.”
And if you can’t tell, I agree with you! I think over reliance on LLMs is sad, but it’s probably true that most people don’t care enough.
I don’t filter by tags but I do hit the “hide” button when a discussion gets really involved and swamps the /comments page. For some reason, Rust-related stuff tends to do this a lot…
I just love being told by people who think AI adoption will make them rich how valuable AI adoption is. It’s like hearing from a casino manager how good gambling is for your health.
Let me reiterate: someone in a position like this doesn’t have to do anything new or even useful with LLMs, because their goal isn’t to make anything useful, it’s to sell the idea of LLMs: their usage would grow Fly’s business.
My point is: being much lower down in the Ponzi scheme, people like him can get rich even if the product is junk. He writes like he doesn’t need to care if what he says is really true because he really doesn’t.
I work mostly in Go. I’m confident the designers of the Go programming language didn’t set out to produce the most LLM-legible language in the industry. They succeeded nonetheless. Go has just enough type safety, an extensive standard library, and a culture that prizes (often repetitive) idiom. LLMs kick ass generating it.
The funniest argument in favor of coding with LLM’s in this article is one that just sounds like an argument against Go, paired with advice that Rust should just be more repetitive so LLM’s handle it better. Hard not to laugh.
Author doesn’t even understand how to astroturf properly. It’s really obvious when you do it from the company blog…
Those are definitely the best arguments for, uh, stochastic parrot vibe coding.
Almost none of them are wrong (though for one, there’s an inherent contradiction between the importance of “guardrails” and the whole “bad at Rust” / “Rust must adapt to this” thing).
I just don’t share the values behind them.
Putting aside the tone of the post, there’s this bothersome trend I’ve noticed where all of the pro-AI posts seem to talk about things like agents and prompting techniques as if they should be obvious by now. I don’t know about others here, but everything I saw went from “agents are a futuristic idea” to “why aren’t you using agents yet?”.
Maybe I’m not looking hard enough (I’m not looking very hard), but I am missing the explainers on how to use these tools properly. A lot of what I see is either surface level explanations or acting as if I should know it already.
I’m still a skeptic, but I have been trying to ensure I at least give things a fair shake, and I can say the value is not so face-smackingly obvious that it explains this divide.
Agents are proposed as a solution to the “hallucination” (being confidently, consistently, totally wrong) problem: LLMs can iterate their solutions against oracles like compilers or test suites before presenting the final answer. They do this by directly modifying files and executing commands on your system. It isn’t quite a panacea, since LLMs tend to cheat whenever possible (disabling failing tests, for example) and have trouble letting go of their own confabulations once they’re made. This is colloquially called “going off the rails”. AFAIK the latest is people trying to use git to work around this by resetting the repo to a point before the LLM gets stuck, or something.
With regard to how agents work, there was a post about this that made the rounds which promises to show how you can implement your own: https://ampcode.com/how-to-build-an-agent
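FWIW the core loop is less exotic than it sounds. Here’s a minimal sketch of the iterate-against-an-oracle idea from the comment above (this is not the linked post’s code; callLLM is a made-up stand-in for whatever model API and file-editing layer you use, and `go test` stands in for the oracle):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // callLLM is an assumed stand-in, not a real API: it would send the task
    // plus the latest oracle output to a model and apply the returned edits
    // to the working tree (that tool layer is elided here).
    func callLLM(task, feedback string) error {
        return fmt.Errorf("not implemented: wire up your model of choice here")
    }

    // runOracle runs the "oracle" -- here the test suite -- and returns its
    // combined output and whether it passed.
    func runOracle() (string, bool) {
        out, err := exec.Command("go", "test", "./...").CombinedOutput()
        return string(out), err == nil
    }

    func main() {
        task := "fix the failing tests"
        feedback := ""
        for i := 0; i < 5; i++ { // bounded, so a stuck model can't loop forever
            if err := callLLM(task, feedback); err != nil {
                fmt.Println("llm error:", err)
                return
            }
            out, ok := runOracle()
            if ok {
                fmt.Println("tests pass; presenting the result to the human")
                return
            }
            feedback = out // feed the failure back in and try again
        }
        fmt.Println("giving up; this is where you git reset and intervene by hand")
    }

The bounded loop and the final git-reset escape hatch are the “going off the rails” mitigations mentioned above; everything interesting lives in the elided edit-applying layer.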
I think a lot of the big code editors (Zed, VS Code, Cursor (?)) now have agent integration so you don’t need to build your own.
You’re not wrong. Many LLM companies are running a con job. LLMs are infinitely more useful than NFTs, but the groups selling them are equally shifty: over-promise and under-deliver, then claim the technology is just moving so fast that no one could blame you for having been wrong. It has been hard for me to watch a lot of smart devs I know fall for these moves. The con builds on dev anxiety about falling behind the industry. It works for the same reason there seemed to be a new JS web framework every year.

I’m not an LLM hater. I think they have real use cases. Despite people’s incredibly lax terminology around “agents”, I do think that some automated code changes with LLM assistance could be useful. There is good tech being built here. But when lots and lots of investment money floods in, it attracts a lot of the worst actors. And a lot of them are using “agentic AI” as their current excuse for why they didn’t deliver what they said they would 6-12 months ago. Their voices are heard the most because they are doing the least with the actual technology.
seem to talk about things like agents and prompting techniques as if they should be obvious by now.
It’s more that the experience of agentic coding and getting better at prompts is the cutting edge of using these tools, and that fancy autocomplete and expecting “build me a blog” are not representative of the reality of using these tools in a serious manner. Pointing this out is a way of saying that if you’re not talking about that stuff, you’re not talking about the same thing. It’s like trying to compare text editors by talking about notepad.exe vs emacs. You may not think emacs has utility over notepad, and that’s fine, but even though it feels like you’re having the same discussion about text editors, you’re talking about extremely different things.
Oh yeah, I understand where the author was going with this; mine was a broader point. With most software topics, I see a lot of posts talking about success stories and failures that I can learn from. With LLM tooling, it seems like there’s a very firm “you’re already too late” attitude in a lot of the writing, rather than a desire to share learnings.
I myself am frustrated by this. I’d like to think it’s partially because stuff is moving so fast, but I dunno. I’m trying to do my part by writing about this stuff, at least.
LLMs really might displace many software developers. That’s not a high horse we get to ride. Our jobs are just as much in tech’s line of fire as everybody else’s have been for the last 3 decades.
Seems to be an argument from “we’ve always done it this way” which I think is unfortunately really salient with tech workers. “Automation” is such a strictly positive word in tech circles that we forget who actually benefits from “automation”: the rich and powerful, not the workers.
Unless you own the means of production, automation just transfers wealth to the rich, such as the shareholders or investors of a company. Tech workers have taken a cut of that wealth transfer for decades, but we are just useful pawns. Good thing there are a lot more pawns than kings!
Do you like fine Japanese woodworking? All hand tools and sashimono joinery? Me too. Do it on your own time.
Which misses the point that finely handcrafted code is as easy to mass-reproduce as terrible slop code once it’s written. If you’re building a successful product, the code will bring in multiple magnitudes more money than the resources spent to write it, so why shouldn’t we make it as good as it can possibly be? I think Vimes’s boot theory fits here: investment in code quality will save a lot of money over time.
This article is so bad that I’m really quite stunned at how many people I (previously) respected have praised it across social media. I’m not going to name anyone here, but I responded to one prominent person by saying that the article seems rather vitriolic to me, to which he just said that it doesn’t really matter, as anti-AI folks have been plenty vitriolic as well. I pointed out that adding more vitriol won’t make the debate less vitriolic, and he just shrugged.
And I’m just like, what??? How can you claim to want less vitriol, and think that AI coding is good, and also like this article? If I was a proponent of using AI in coding, I would want this article to have never been published.
If hallucination matters to you, your programming language has let you down.
Uh let’s rephrase
If correctness matters to you, your programming language has let you down.
I wish there were a cross site reputation/identity system for authors so I could remember to discount this guy in the future.
Lots of interesting points in this post; shame it is written in such an aggressive manner. Rust is a great language and Minecraft is a great game, but lots of people are turned off them just because of a somewhat toxic fan base. Is this what happens with LLMs/coding agents too?
Couple of thoughts that I had while reading:
Intellectual property is somewhat of a blurry area. LLMs can lift someone’s code, but humans can read it all the same, and then remember the algorithm a few years later and write something resembling it. Code copyrights can be strange and stupid imho, because you can rework some code to be different from the original but it will still be “stolen”.
About that “I’m not a Kool-Aid drinker” - that is exactly what a Kool-Aid drinker would say :D I distinctly remember myself when I had just started my career in programming: I churned out megabytes of code nonstop, justifying lots of it with “good enough is better than good”, etc. In retrospect, I bet most of us in that phase would be just as productive as the newcomers armed with LLM agents, if not more. But as I gained experience I noticed that I slowed down drastically. Partially it is because for every problem I see a whole tree of solutions and their outcomes and have to think about which is the best one here (which is not always a good thing, I agree).
But my main thought is that I wouldn’t have been able to become the person I am now if I hadn’t gone through those “intense” stages, bashing my head against thousands of problems, issues and bugs. If I had skipped even one of those books or articles or troubles, my brain would process my current work differently, and not in a good way. I see lots of “seasoned” coders advocating for LLM usage, and newbies will listen to them and mimic them; in the end I think we won’t have as many good programmers as we do now. To understand some complicated things you have to do them by hand, even if it seems “ineffective” - it is an investment into your own brain’s neural network.
At university we had a professor who tried his best to teach us complicated stuff, but he never failed us on exams and always let those who didn’t study pass with the lowest possible grade. He helped those who wanted to learn and ignored all the others; when we asked him why he doesn’t fail people like other teachers do, he said: “It is a complicated subject, and if you don’t want to understand it - in time I will always be in demand, because you won’t be able to do what I do.” So even if at some point we have a shortage of coders who can read and debug complicated code and understand how things work - maybe that’s okay, the market will favor people who know stuff.
Or maybe we’ll end up in the world the movie “Idiocracy” depicted.
Ouf, fly.io might want to give tptacek a personal blog domain to not tarnish their brand too much. (This goes whether one agrees or disagrees with the premise, a provocative rant is a provocative rant.)
Yesterday I was sent on a wild goose chase by gpt 4.1 which cited lots of 404-ing documentation links and non-existent functions in explaining a bug report – much time wasted. But other times I have found llms slightly useful. I’d rather not run an agent with work-related stuff until I can do it with an offline model or one I set up myself so I know exactly what it has access to. I still think the truth is somewhere in the middle between the hypers and the doomers, it’s just a complicated, shifting landscape and the border between truth and hype isn’t clear-cut.
Ouf, fly.io might want to give tptacek a personal blog domain to not tarnish their brand too much. (This goes whether one agrees or disagrees with the premise, a provocative rant is a provocative rant.)
I already said this elsewhere in this thread, but to ensure you see it: the CEO of Fly.io specifically wanted this post to be on the company blog rather than a personal blog. At least that’s what tptacek said on Bluesky.
LLMs digest code further than you do. If you don’t believe a typeface designer can stake a moral claim on the terminals and counters of a letterform, you sure as hell can’t be possessive about a red-black tree.
I certainly believe “a typeface designer can stake a moral claim on the terminals and counters of a letterform”
Of course they can! They built it. That’s what intellectual property is all about.
People coding with LLMs today use agents.
Do you guys do that? I find this idea rather uncomfortable.
I think the key premise here is that one can effectively and efficiently audit code that the LLM is producing.
Can you? First, human attention and speed are very limited. Second, when I see something, I am already predisposed to assume that it is right (confirmation bias). Or at the very least, my subsequent inquiries are extremely narrow and anchored around the solution I have seen presented to me.
or, God forbid, 2 years ago with Copilot
I have been using GitHub Copilot for 3+ years now, and it has been pretty good. I am not sure what the author finds so displeasing here. I prefer generating short amounts of code, which I can easily eyeball.
“Do you like fine Japanese woodworking? All hand tools and sashimono joinery? Me too. Do it on your own time.”
Mm no.
I think the subtext is: “Don’t expect to be paid like a craftsman unless your problem space demands it, because all this problem needs is people who can assemble IKEA furniture.”
(I personally do not want that.)
It’s extremely weird to me, as I really see software development as more akin to design than manufacturing. IKEA goods are designed by extremely competent people and then mass-produced. To me, the manufacturing part is the build tools and CI pipelines. We don’t need to build more shelves; we can just design one shelf and then reuse it forever while designing new things to solve new problems.
I absolutely agree that some people are just so angry at the unethical startups creating the technology that it clouds their judgment. But it’s also true that some other people are drinking the VC Kool-Aid like it’s lemonade and spouting the most obnoxious nonsense.
One thing is certain: we are collectively figuring this out right now. Two years ago it was all about code completion, last year it was chat mode, now it’s agent mode, and next year who knows? The technology is not stable and consensus about best practices hasn’t emerged yet. Second-order effects are not yet visible. Before declaring the technology defeated or victorious, can we just wait for the damn battle to be over, please?
Right now, I find that asking Claude to review code I have written myself works well so far. Code completion didn’t pan out. Asking it for answers is hit and miss and I prefer to just google things. Agent mode I haven’t tried yet.
In 2-3 years we’ll know more. I’ll reserve judgment until then.
The sheer sophistry in this article is astonishing. Author is probably not wrong that his friends are smarter than him. From an argumentative point of view it is worthless. Zero stars.
The sociology of posturing and rationalization in software startup culture is what’s interesting to me here. Insightful comments from cblake and gpm.
What strikes me about a lot of the rhetoric from the pro-LLM camp nowadays is the need to proselytize, as opposed to mere marketing. It’s something that reminds me a lot of the pro-cryptocurrency crowd too, back in the misty past of like 7 years ago.
If the tech is so great, then sure, market it, use it to make better products and make more money than those that don’t use it. If it’s not that great, but your continued funding relies on the perception that it is, write pieces like this.
Weaponized network effects. In both cases, the technology only really “works” (achieves the goals of its creators) if it can get mass adoption, including the corollary ideological / conceptual buy-in from the public mind.
Every day that goes by I become more convinced it’s time to apply to law school instead of doing any of this anymore
It may be! You’re skilled at argument and you care about doing it well. You won’t starve.
But I’ll warn you, all the most miserable people I’ve known have been lawyers. Even worse than the doctors, and that’s saying something.
A lot of good points here. And I love how you’ve tagged it with both tags. Go ahead, cross those streams!