The Majority AI View
70 points by drmorr
One of the reasons we don't hear about this most popular, moderate view on AI within the tech industry is because people are afraid to say it.
If lobste.rs is anything to go by, it's rather the opposite in my view. Saying anything positive about AI/LLMs here is met with heavy backlash, and the most popular posts with the vibecoding tag appear to be skeptical/satirical.
In my personal bubble, people can hardly get access to GitHub Copilot, let alone be forced to use AI, and the reasons are both a lack of managerial approval and engineers just not liking LLMs.
lobste.rs is not a statistically valid sample of anything, and trying to treat it as such will lead to tears.
Why wouldn’t it be a valid sample of anything? I feel like there are a lot of skilled engineers here, with a wide range of backgrounds…
For starters, this website is invite-only. People will, generally speaking, invite people who they think will "fit" the website. What exactly this entails differs per person, of course, but it means the people here are the result of that selection bias.
Then there is the fact that this website has "only" 19,097 users as I write this comment. 19 thousand people might not sound like a small number. But to put that in perspective, last year somewhere between 150,000 and 200,000 people got laid off from tech companies in the US. Even if every single user on this website did belong to that group, we'd comprise at most about 13% of it.
Not to mention that the number of software engineers worldwide is estimated to be somewhere between 20 and 30 million, making the people on this website 0.06% to 0.1% of all engineers out there.
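For concreteness, the proportions work out as below (a rough sketch; the layoff and worldwide-engineer figures are the estimates above, not hard data):

```python
# Rough proportions implied by the figures above (all inputs are estimates).
lobsters_users = 19_097
us_tech_layoffs_2024 = (150_000, 200_000)        # commenter's range
engineers_worldwide = (20_000_000, 30_000_000)   # commenter's range

for layoffs in us_tech_layoffs_2024:
    print(f"share of {layoffs:,} laid-off tech workers: {lobsters_users / layoffs:.1%}")
for total in engineers_worldwide:
    print(f"share of {total:,} engineers worldwide: {lobsters_users / total:.2%}")
# share of 150,000 laid-off tech workers: 12.7%
# share of 200,000 laid-off tech workers: 9.5%
# share of 20,000,000 engineers worldwide: 0.10%
# share of 30,000,000 engineers worldwide: 0.06%
```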
Finally, there are also factors like self selection. A lot of people don't feel the need to participate in communities like this one.
Skilled engineers and computer science researchers are definitely not a representative segment of the general population
I can assure you that large companies are requiring engineers to adopt AI. Personally I’ve been asked to add AI features into a product with literally no other specification; it simply needed to be something AI. Daily AI use is also one of the cornerstones of my performance reviews, and every engineer has the same. Companies with billions of dollars invested in offering AI are going to do everything they can to ensure it is adopted and embedded in our systems. You have to get locked in now before the cost becomes apparent.
I can assure you that large companies are requiring engineers to adopt AI... Daily AI use is also one of the cornerstones of my performance reviews, and every engineer has the same.
We have internal instances for regulatory reasons but I'm seeing that myself. I'm accepting it for chat, search, and email, while rejecting it for my actual work. I'm in a niche and use non-standard tools so I'll see how that goes. It does do a decent job writing tests but I'm not sure if it's saving any time. We recently had an outage that was the result of an engineer applying AI output instead of working through the details of the problem. Our management declined to include in the post-mortem that AI was involved.
At work we are definitely being pressured by our owners to have all staff adopt AI in a drive to “improve productivity”.
They are distinctly unhappy with people who report back that it is not improving productivity or reporting any issues/drawbacks with adopting AI - for example we have been reporting that Copilot is very hit and miss for automated code review (we’ve actually been tracking the hit rate of the review comments it raises and at the moment it is approximately 46% good/useful, and the bad comments can take a surprising amount of time to disprove). There is definitely an impression on my part that they are expecting AI to let them get staff to do a lot more individually, and/or reduce headcount.
I was a bit baffled by this disconnect as I would consider our management to be usually pretty rational with cost/benefit analysis until I visited our London office recently (where our senior management work), and the amount of AI-centric advertising in & around London is absolutely overwhelming. I don’t think it helps that they are basically constantly hammered by that advertising when they’re out and about (or even during commuting).
I appreciate the anecdote and I wish people would post more of them.
On the managers’ apparent irrationality: even if the tools end up eventually improving productivity, you would still expect there to be an adjustment period up front where the gains are unclear as people learn how and when to use them, figure out the right tools, etc. So the reports of problems don’t necessarily prove anything yet from their point of view, especially if they’re hearing different things from different people. A 46% hit rate is not bad for automated code review! It’s cool that you’re trying to track it rigorously. Is it going up?
We haven't been tracking for long enough yet to form a trend over time. 46% is on 1500+ comments, though - so it's getting to the point of being a decent sample size. At the moment, it is a simple "is this comment useful? Y/N" question.
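For what it's worth, 1,500 ratings does pin that estimate down fairly tightly; a quick sketch of the 95% margin of error, assuming each comment rating is independent:

```python
import math

# 95% confidence interval for the observed "useful comment" rate,
# using a normal approximation to the binomial (assumes independent ratings).
n, p = 1500, 0.46
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"{p:.0%} +/- {margin:.1%}")  # 46% +/- 2.5%
```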
One thing I'm trying to work out how to quantify is the degree of usefulness.
For example, for good comments, is it just a code styling tweak or a suggested name change, is it a small corner case issue, is it a fundamental issue?
For bad comments, is it something we don't want to do but might be a sensible suggestion in other circumstances, is it plausible but wrong and requiring some effort to debunk, is it the LLM badly parsing the context and suggesting something nonsensical, is it an outright harmful suggestion?
Anecdotally, I've seen a mix of unhelpful comments - mostly misunderstanding what the PR is doing, or the effects of a particular function, some bigger confusion around locking in particular (this is hard for humans to get right too, so this is actually understandable!), and a couple of cases where I've had to go off and read language or framework documentation to understand that it's actually a flawed suggestion. Those last two in particular can take quite a bit of time to disprove, and that's something I think it's important to account for. I've also noticed inconsistencies in how well the LLMs cope with code in different languages - for example doing better at JS/TS or Python than things like C, PHP, Perl or Bash.
I’m not sure this is the case. The backlash I see is generally about the framing of articles about AI, not AI itself. People seem to misinterpret backlash against articles that frame AI as inevitable and make vague or generalized statements about AI’s applicability and usability across all engineering and development as backlash against AI itself. The negativity is oftentimes aimed at the message, the naivety of the author’s framing of the technology, and in some cases a clearly dogged religiousness in the way a person talks about AI. We all get uncomfortable when people start saying a single technology is the answer to all problems.
Lobsters is full of people who have built their careers on results-based evaluation of technology. We read papers, do benchmarks, test and formally verify, measure outcomes, and evaluate industry adoption. When someone writes an article that eschews objective measures, evidence, and references, and makes statements of fact with no realistic or rigorous research to back them up, people will tear the article and its arguments apart.
It’s a discernible pattern in many articles. LLMs and AI have a place and a usage. We’ve been exposed to the tech for long enough to see and understand its limitations. Papers are starting to be published that refute the productivity claims of AI. Papers that show decline in problem-solving ability for people overly relying on AI. *Every technology has trade-offs.* And at an innate level, any engineer with a significant enough amount of experience who is told that some technology is inevitable and perfect is going to immediately become defensive and point out the downsides so that people can see the whole picture.
If an author takes that as an attack on AI itself, or worse on themselves, they should think long and hard about why they are taking such a personal stake in a single component of a much larger and broader technology landscape.
Honestly, I was highly skeptical of the utility of LLM coding assistants until I posed a needle-in-a-haystack question to the Windsurf IDE using one of the high-reasoning models - point out any bugs that could have caused this kind of flash corruption - and it found and proposed viable fixes for three legitimate issues. It probably would have taken me at least a couple of hours, if not longer, to read and find the same issues.
I also gave up waiting for it at least a few times, because in the time it took to think through what the ask was, I could have made the change faster by hand.
I don't recommend vibe coding but I do think there is an order of magnitude performance gain to be had, potentially, in the hands of people who are qualified to review and sign off on the output (for certain types of problems.)
Personally I think lots of people are mentally stuck in the era of ChatGPT 3.5. Fun toy, sometimes useful but definitely not for real work. Then the company gives them access to GitHub Copilot, probably the worst “agent” on the market and guess what: it’s a terrible experience.
Unfortunately inference is costly. Working with Sonnet on a (greenfield) web app can easily burn 75 USD in an hour of work. However, I think working with it for a while makes the utility of LLMs obvious to anyone willing to fork over the cash 🤷♂️
Recently I attended a conference for security professionals (reverse engineering, exploitation, etc). The CTF was absolutely demolished by a fully autonomous POC agent for the first half (at which point they withdrew). I don’t think most people are able to solve even the easiest challenges and I would have laughed someone out the room if they told me this two years ago 😅
A typical software engineer does, let's say, no more than 48 weeks * 40 hours = 1920 hours of work in a year. At $75 an hour, that's $144,000 a year. The equivalent salary in the US, after accounting for the 7.65% employer-side FICA tax, is about $133,750, not including benefits.
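The arithmetic behind that comparison, as a quick sketch (the $75/hour figure is the Sonnet example above; the 7.65% is the employer-side FICA rate, and benefits and other overhead are ignored):

```python
# Annualized agent spend vs. the salary it roughly corresponds to.
# Ignores benefits and employer overhead beyond FICA.
hours_per_year = 48 * 40        # ~1,920 working hours
agent_cost_per_hour = 75.0      # USD, from the Sonnet example above
employer_fica = 0.0765          # employer-side Social Security + Medicare

annual_agent_cost = hours_per_year * agent_cost_per_hour
equivalent_salary = annual_agent_cost / (1 + employer_fica)
print(f"annual agent spend: ${annual_agent_cost:,.0f}")  # $144,000
print(f"equivalent salary:  ${equivalent_salary:,.0f}")  # ~$133,768
```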
Maybe the net effect of really good coding agents is not to take jobs but to drive down wages (because if you won't work for cheap enough, they will in fact just replace you.)
... and the pricing to end-users is clearly not sustainable, even if literally every technology-using human on the planet becomes a customer. So the price will go up.
Will the value people get out of it be commensurate? It doesn't look like it, so far.
I want to believe this, but the staying power of companies that have never turned a profit is pretty baffling these days.
Obviously the price for free users is not sustainable without some revenue to offset costs, but would you say the price for paying users is not sustainable, and if so, why?
https://www.wheresyoured.at/openai400bn/
Ask Ed for his spreadsheet, but the numbers he throws around for OpenAI are all taken from their public statements, and their revenue numbers from their disclosures.
OpenAI likes to talk about datacenters in terms of gigawatts of power use, but you need to remember that a really big datacenter is on the order of 50-100MW, with the total for all of Northern Virginia's data centers being not quite 4 gigawatts. And NoVA is the largest concentration of datacenters in the US.
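As a rough sense of scale, a small sketch using only the figures above (50-100 MW per large facility, just under 4 GW for all of Northern Virginia); any gigawatt-scale buildout implies dozens of such facilities:

```python
# How many "really big" (50-100 MW) datacenters a gigawatt figure implies.
def facility_count(total_gw, facility_mw=(50, 100)):
    total_mw = total_gw * 1000
    return total_mw // facility_mw[1], total_mw // facility_mw[0]  # (min, max)

# ~4 GW is the quoted total for all of Northern Virginia's datacenters.
for gw in (1, 4):
    low, high = facility_count(gw)
    print(f"{gw} GW is roughly {low}-{high} large datacenters")
# 1 GW is roughly 10-20 large datacenters
# 4 GW is roughly 40-80 large datacenters
```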
The physical buildings are the easiest part, and there's not enough construction capacity in the US to get them done in time, even if they were all under contract now... and they aren't, because you need a site before you can build, and no sites have been announced that don't already have buildings in progress.
OpenAI can't make its announced deadlines and needs more money to meet its announced commitments than the entirety of worldwide venture capital raised in 2024. And they are running at big losses now.
In consideration of the above, I propose to you that the current prices for paying users are not sustainable.
It's just wrong. Zitron is misunderstanding publicly available info as usual. The whole point of the agreements is that other people are building the data centers and they do not have to pay that money up front. This was a good summary: https://news.ycombinator.com/item?id=45620292
Zitron asks: "Does OpenAI have $400B in cash?"
The actual question: "Can OpenAI grow revenue from $13B to $60B+ to cover lease payments by 2028-2029?"
The first question is nonsensical given deal structure. The second is the actual bet everyone's making.
I encourage people to read the linked HN comment. It's maybe true in isolation that OpenAI is gonna make almost twice the current revenue of Netflix in 3 to 4 years, but the fact that they're essentially getting credit from NVIDIA and Oracle is echoes of the infamous Global Crossing debacle during the first dot-com boom.
That is a very interesting comparison, thanks for mentioning it. It sounds like they had $1.5B revenue in 1999 (source). In 2001 they had losses of at least $10.9B on revenue of $3.2B (source). OpenAI is at $13B ARR (source), but I don't think it's publicly known what the trend on their losses looks like — the number for H1 is $13.5B (source) but half of that is due to some weird on-paper stock options revaluation.
A more substantive evaluation of your point would have to compare the actual deal structure in both cases. Would love to read that somewhere.
Here's a self-described investor running the numbers on datacenter buildout, so basically the entire industry
That's where I got Global Crossing from, to be entirely honest I had forgotten all about them.
you're saying "Zitron was right but in a way I'm nitpicking as wrong"
He says they need a bunch of cash in order to pay that cash to somebody else (it’s the headline), and that’s not true. This a frequent thing with Zitron — he makes several versions of a claim in the same piece, sometimes in the same sentence:
to be clear, a lot of that money needs to already be in OpenAI’s hands to get the data centers built. Or, some other dupe has to a.) have the money, and b.) be willing to front it.
in order to be able to evade any particular refutation. It doesn't mean he's right, it means he does not have a clear argument. Even "a lot of" here is a way of weaseling out of his own headline claim. And then it's either OpenAI or someone else.
So, let's say that they have a plan to grow revenue from $13B to $60B in three years.
They would either need to gain 3-4 new paying customers for each one they currently have, or increase the fees they charge to current customers by 4-5x. Oh, and that's in a competitive market where they might have first mover advantage but other companies are about as mature already.
Either way, it's about the size of Microsoft's non-Azure revenue. That's a bet, but it's not one that a prudent person would make.
FT just reported OpenAI has 40M paying users. In April that number was 20M, according to The Information (Techmeme link because the real link is paywalled and doesn’t show the number). So they doubled in the past 6 months. Doubling again twice in the next three years will be difficult and a lot could go wrong, but it could happen. Presumably they also intend to increase their revenue per paying user. Nobody is saying it's guaranteed or even likely. Like the HN comment I linked, I just want to get clear on what the conditions for success and failure are.
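Taking the figures in this thread at face value ($13B ARR, a $60B+ target, 40M paying users), here is a back-of-envelope sketch of what the bet implies; the "users double twice more" scenario is purely illustrative, and in reality much of the revenue would come from API and enterprise deals rather than subscriptions:

```python
# What the $13B -> $60B bet implies about users and per-user revenue.
current_arr = 13e9          # USD, reported ARR
target_arr = 60e9           # USD, figure cited for covering lease payments
paying_users_now = 40e6     # FT figure cited above

print(f"required revenue growth: {target_arr / current_arr:.1f}x")   # 4.6x

# Illustrative only: paying users double twice more over ~3 years.
future_users = paying_users_now * 4
implied_arpu = target_arr / future_users
print(f"paying users after doubling twice: {future_users / 1e6:.0f}M")   # 160M
print(f"implied revenue per paying user: ${implied_arpu:.0f}/yr "
      f"(~${implied_arpu / 12:.0f}/mo)")   # $375/yr (~$31/mo)
```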
Okay, was surprised to see something I'd also felt here.
There's people literally selling AI and saying we'll build space datacenters for it (in defiance of all engineering and physics principles). There's people very enthused about it, maybe drawn in by how it'll act like it's answered anything, or they've heard what the first group says it can do and never really had to stress-test its abilities. There's a lot of, I think, working backwards from naively treating existing stuff like it was people (a person who could quickly solve these coding exercises would be a good engineer, so...) or projecting the future based essentially on imagination about abstract intelligence-in-a-bottle.
And then there are most people who've actually tried to do practical stuff with what actually exists. One of the core things about it is that it isn't like people, it's weird. To get useful results from it, it needs to be on a pretty short leash. You get a feel (for example) for what kind of question or task it fits with vs. is just going to be useless for. And to these folks it seems beyond bizarre when people talk about "AI employees", or even that lots of effort has been spent on the idea that you could just @ an agent in a bug tracker and expect something to happen.
I think there's clearly a speculative bubble: running it for free everywhere can't last, and pushing people who don't want to use it to use it thankfully won't. I don't think the whole thing will disappear when the frothy investment goes away: it seems to be cheaper to operate than some of the more pessimistic estimates say. To some it might actually be more interesting after it's less bubbly.
One of the reasons we don't hear about this most popular, moderate view on AI within the tech industry is because people are afraid to say it.
What's implied here is that people who are publicly bullish on AI, are secretly having a different opinion on it. I find that rather hard to believe. If anything, there are entire communities (this one included) that lean very publicly toward visible criticism of AI. Within companies, from everything I have heard so far, that is even more true. Many established companies are struggling to get engineers to use AI, and even the companies with a huge push towards AI find themselves facing internal pushback from all ranks of the organization.
I don't think people are hiding their true opinion, it's just a topic where people genuinely hold opposing views and opinions.
What's implied here is that people who are publicly bullish on AI, are secretly having a different opinion on it.
That is not implied in the least. The implication is that critical voices are muted.
That’s not how I interpreted this sentence. But outright muting is even harder to imagine for me. In which way would people be muted? There are so many programmers constantly writing about AI safety and how it doesn’t work for them. There is no lack of dismissive content about the capabilities of AI. This website alone has a great collection of them.
I hate AI. I will never express that opinion at work. It would mean losing employment. This is in a company that doesn't really use AI and is really not in this domain.
I only express this opinion in private. Saying in public that you think this is useless brings the wrath of the sealions down on you.
Same as with crypto.
My point is that this is not the majority opinion. Most people (particularly outside of programming) are adopting AI secretly against the wishes of their employer because it helps them with their work. You might very well feel that bringing up concerns about AI is going to harm your job security (and that might very well be the case!) but compared to the average person out there, your situation is different. You work in an industry where AI has majority support by leadership; that is not the situation non-programmers are in.
Most people (particularly outside of programming) are adopting AI secretly against the wishes of their employer because it helps them with their work.
Massive citation needed.
We get that you like AI from your past posts. It is great that it works for you and it has been cool to see a success story with actual code behind it.
That being said, the projection that you're making here is exactly what this article is about, not that you "secretly don't like it" - and, for that matter, I cannot possibly see how you could read the post that way given its content. It's a real stretch.
Why is it so hard for you to believe that this is really how most people feel? It certainly is for every senior+ engineer that I know personally. Literally, every single one that I've spoken to about it.
Are you worried about the time that you've spent on the tools? Does it upset you that other people don't like something that you do? Is there a part of you that thinks you might be missing some of the downsides, but that you don't want to admit it?
The work that people like you and simonw are doing really is cool and useful. But the way that you seemingly just cannot believe that most others do not feel the same as you is infuriating. You don't have to come to the rescue of LLM companies every time anyone talks about the downsides. Please, please reflect more on what this post is actually saying, because you really are playing straight into it.
We get that you like AI from your past posts. It is great that it works for you and it has been cool to see a success story with actual code behind it.
My personal opinion is not particularly relevant here to what others are doing. That said, I recognize that the people I get to interact with are probably quite biased (more tech adjacent, higher educated, US and western European).
Why is it so hard for you to believe that this is really how most people feel?
Because I have been very interested in AI for the last year and I have talked to probably more than a hundred people at this point about their experiences with AI, in and outside of the field of software engineering. You can also find public data that supports viral adoption (https://archive.is/yKkZi):
By the numbers: 42% of office workers use genAI tools like ChatGPT at work and 1 in 3 of those workers say they keep the use secret, according to research out this month from security software company Ivanti.
A McKinsey report from January showed that employees are using genAI for significantly more of their work than their leaders think they are. 20% of employees report secretly using AI during job interviews, according to a Blind survey of 3,617 U.S. professionals.
Or this headline: "Nearly a third of UK workers secretly use AI, research finds" (https://www.thetimes.com/business-money/technology/article/nearly-a-third-of-uk-workers-secretly-use-ai-research-finds-m8dsm0pt0)
Also AI is pretty massive. In Austria, 78% of people surveyed use AI: https://orf.at/einfach/stories/3399082/
My particular surveys and interviews might very well be biased, in part because of who engaged with me over the last couple of months, but this included me specifically seeking out highly AI-skeptical folks.
Are you worried about the time that you've spent on the tools? Does it upset you that other people don't like something that you do?
No, I don't quite care about what people do on their own, my personal feelings are unaffected by it. It however guides how I think about the present and future.
A lot of misinterpretation going on here.
The argument is not whether the tools are at all useful or if anyone uses them. The first sentence of the "majority AI view" that you're responding to is that they have utility. There's very little argument that they don't. Widespread usage is not surprising nor is it up for debate.
The question is whether that utility aligns with the hype, which most of us believe it does not (if you believe the article - I don't think there are any "real" numbers on this). The rate of adoption for simple tasks has right around 0 correlation with this question.
Further, the sources you've linked give no indication at all that people are using AI "against the wishes of their employer"; rather, the "secret" usage seems to be more related to issues of self-interest, such as a desire for an advantage or to be perceived as more capable than they are. These are very different and also have little bearing on the reality vs. hype question for the same reasons.
Your personal opinion and how you present it are exactly what we're talking about, so of course they're relevant. It's actually the main thing we're talking about, because you are the type of individual that the article is talking about. That isn't a value judgement. You can't remove yourself from the discourse and you should be willing to stand behind what you're saying. You don't speak for others, you speak for yourself, and anecdotes don't matter.
Your personal opinion and how you present it are exactly what we're talking about, so of course they're relevant.
Are we? My understanding of what the article was discussing is that the majority of people actually building technology hold a far more grounded/neutral view of AI than the public conversation suggests and that some mechanism/incentive structure etc. is making people talk more positively about it / suppresses negative sentiment.
What I'm suggesting is that from the interactions I have with people that does not at all reflect my experiences. You're now projecting things on my personal opinion on AI which is quite a bit more complex and nuanced than you suggest.
Further, the sources you've linked give no indication at all that people are using AI "against the wishes of their employer"; rather, the "secret" usage seems to be more related to issues of self-interest, such as a desire for an advantage or to be perceived as more capable than they are.
30% of respondents in the survey I linked gave "my employer has a no-AI-usage policy" as the reason they use AI in secret. That's the second most common response after "I like a secret advantage".
Most people (particularly outside of programming) are adopting AI secretly against the wishes of their employer because it helps them with their work. [..] You work in an industry where AI has majority support by leadership; that is not the situation non-programmers are in.
i’m going to lightly but firmly push back on this assertion. it was perhaps true at the beginning of 2024 but it is no longer the case today (and hasn’t been for some months at the very least).
LLMs are being integrated into Microsoft Office, they are being adopted by Westlaw and LexisNexis, medical providers are using LLMs in their EHR platforms, consultants (e.g. McKinsey) are advising clients on areas where they can augment (or replace) labor with LLMs, at least one member of US military leadership publicly admits to using LLMs in his personal and organizational decision-making process, and the list presumably extends to things beyond what i’m aware of.
you would be very hard pressed to find a segment of the information economy that is not going out of their way to advocate for more LLM use within their workforce, let alone actively discouraging it.
you would be very hard pressed to find a segment of the information economy that is not going out of their way to advocate for more LLM use within their workforce, let alone actively discouraging it.
That is both true and misleading. Yes, stuff like Gemini shows up all over the place, but people hide the actual AI usage that they pay for individually, and companies still turn off this stuff.
This goes to ludicrous degrees. An acquaintance of mine is at a European insurance company that has a multi-million-dollar contract with Mistral and, I believe, AWS to use a blessed set of AI tools that is supposed to be fine-tuned on the company code base. They are only allowed to use that tool. Yet multiple people in the company secretly use Codex and Claude Code on their own dime.
Same with a lot of non tech people. There is Gemini and Copilot all over people’s company accounts, yet individuals pay for ChatGPT and throw company data into their personal accounts.
Is my data representative? I don’t know. But over the last few months I talked to more than a hundred people within and outside the tech industry, and AI adoption is basically 100%, while non-blessed (or at least not completely blessed) adoption is happening for more than half the people I talked to.
There is no lack of dismissive content about the capabilities of AI. This website alone has a great collection of them.
The thesis of this post is that the critical content is not proportional to the claimed "majority view". I.e., the author claims that many more people quietly, or privately, hold the more bearish (but, note, not dismissive) view than one would think based on the public discussions. The claim is that many people do not feel safe to voice more critical views and/or don't have a platform to make a lot of noise about their views.
This certainly fits my personal experience: only a handful of the experienced programmers I know or work with have the bullish view of the all-in-on-LLMs crowd, but only a fraction of them will say anything about this explicitly at work or in public. This could just be an artifact of my bubble, but it fits what I am seeing on the ground nonetheless.
You ask:
In which way would people be muted?
There is a whole paragraph on this in the linked post:
And their extremism has had a profound chilling effect within the technology industry. One of the reasons we don't hear about this most popular, moderate view on AI within the tech industry is because people are afraid to say it. Mid-level managers and individual workers who know this is the common-sense view on AI are concerned that simply saying that they think AI is a normal technology like any other, and should be subject to the same critiques and controls, and be viewed with the same skepticism and care, fear for their careers. People worry that not being seen as mindless, uncritical AI cheerleaders will be a career-limiting move in the current environment of enforced conformity within tech, especially as tech leaders are collaborating with the current regime to punish free speech, fire anyone who dissents, and embolden the wealthy tycoons at the top to make ever-more-extreme statements, often at the direct expense of some of their own workers.
The thesis of this post is that the critical content is not proportional to the claimed "majority view". I.e., the author claims that many more people quietly, or privately, hold the more bearish (but, note, not dismissive) view than one would think based on the public discussions. The claim is that many people do not feel safe to voice more critical views and/or don't have a platform to make a lot of noise about their views.
Maybe I'm dumb, but that's what I took away from it, and that to me sounds quite like "publicly bullish on AI, are secretly having a different opinion on it". If the nuance is "publicly not saying anything about AI, secretly having a negative opinion", that is fair as well, but I find that most people are voicing opinions in one way or another on AI. Complete silence on AI is … pretty rare to be quite honest.
This certainly fits my personal experience: only a handful of the experienced programmers I know or work with have the bullish view of the all-in-on-LLMs crowd, but only a fraction of them will say anything about this explicitly at work or in public.
I'm not sure what circles you are going around in, pretty much everyone I talk to holds exactly the same opinion publicly as well as within a company when it comes to how useful AI is. I'm not dismissing that there are people who are very afraid of their job and will say something else to support their job security, but I really doubt that this is happening in numbers that would change the actual public opinion on AI. As I mentioned a few times now, I am mostly finding people have the opposite experience where they secretly use AI against company wishes (particularly outside of engineering) because they find use in it, and they don't want to get caught.
but I find that most people are voicing opinions in one way or another on AI. Complete silence on AI is … pretty rare to be quite honest.
I post a fair bit on this website and have only posted a few anecdotes about AI. I don’t think I’ve ever posted or professed a grand “opinion” on it (I’m not sure I have one). That doesn’t seem too unusual. Also, most people don’t post at all, so I really have no idea what those people would say.
For most businesses (excluding the ones selling AI chatbots or GPUs) this is easy to explain. The CEO's job is to make the company appear valuable. Selling things people want to buy at profitable margins is hard. For many businesses (software especially) labor is the biggest cost center.
Along comes a technology that appears to devalue labor.
This incentivizes execs to "go all in on AI". It's a good bet. If the hype is real, AI replaces costly labor. If the hype is fake, do a layoff (blame it on AI) then hire most of the same people back cheaper.
At least from my experience and environment there are more people using AI against their employer’s approvals than CEOs demanding AI to be used by their workforce. Most of that adoption is still underground and secret.
My point is CEOs/owners/shareholders have incentives to publicly hype AI, regardless of what they may think in private.
Could be, but I think it's much more likely that CEOs talk themselves into believing anything. Leaders quite often internalize even highly irrational beliefs because that's what gets them out of bed in the morning.
Technologies like LLMs have utility, but the absurd way they've been over-hyped, the fact they're being forced on everyone, and the insistence on ignoring the many valid critiques about them make it very difficult to focus on legitimate uses where they might add value.
I think hype is a weak framing. The above isn't really a view about AI at all — it's a meta-view about views about AI. This is a pretty good article about this: Deflating “Hype” Won’t Save Us.
Hype seems to me far too psychological. It suggests that if only the marketing was more honest or bosses weren’t pushing AI on workers, things would be better. But if you think LLMs are bad for the world, things would not be any better. People who are annoyed by social media posts would just feel slightly less annoyed. The data centers would keep being built. Bosses would still have the power to lay off anyone at any time and make up whatever justification sounds nicest. People would still be falling over each other to use LLMs as much as they can in their personal lives.
Dash avoids making it all about feelings by saying hype makes people ignore legitimate critiques and focus on the wrong things. But then surely the "majority view of AI" should be stated in terms of those critiques and narrow use cases (which are not described, only gestured at), not in terms of how hard it is to make people listen to those legitimate critiques or stick to the good use cases.
Hype gets the causality backwards. People who think the bubble is about to pop any time seem to think people are suddenly going to wake up and realize this thing they've been using all the time is actually not useful to them. But the actual trend is exponential growth in the opposite direction, both in number of users (700M to 800M in two months at OpenAI alone) and in intensity of usage. It seems ridiculous to think this is caused by hype (a claim Dash is smart enough to avoid, but the Zitrons of the world are not). If anything, it seems to me the opposite: the hype is caused by the ridiculous growth.
People who think the bubble is about to pop any time seem to think people are suddenly going to wake up and realize this thing
Skepticism of LLM hype comes rather from the fact that the biggest LLM vendors consistently spend more than they earn, and this trend reversing doesn't seem plausible in the near future. Even if net income somehow becomes positive, will it be enough to justify the valuations that AI-related companies have? Will there be enough demand to justify however many gigawatt-level datacentres are in the pipeline right now? What is the day on which investors start getting their money back gonna look like?
You are ascribing the hype to the end-users but I have always seen hype used to describe the temperament of investors. Investors are hyped about LLMs. Specifically about the speculative future where LLMs become much better than they are today. That is the source of the hype and from there it diffuses to the consumers sometimes because of the capabilities of the technology and sometimes because of the marketing apparatus.
I agree that hype is more plausible as a story about what actually motivates investors. Usually the role end-user or manager hype plays in the argument is that it explains why people are behaving irrationally, i.e., insisting on using something that is not actually beneficial to them — either they are using it out of FOMO or because their bosses are making them. But if you make investor hype the focus, then it does not work to explain away the growth in usage. The best you can do is say investor hype is what is causing them to spread money around to subsidize usage, and that if only inference were priced to have positive margins, no one would be willing to pay for it. But this argument also fails, because I believe inference does have positive margins, despite what most people seem to think. I have not put my argument together yet on that, but I believe it is the most parsimonious explanation of a bunch of publicly available facts. Here's a post from just a couple of hours ago from someone finding they can easily make money selling inference on rented GPUs:
we're seeing if we run our own GPUs how cheap hosting a good model can get
with the traffic we've received in the last 17h we're seeing that we could charge 7x less than the current cheapest option and break even
the fact they're being forced on everyone
This part is alien to me. In the past year I've worked at both a large financial services company and a startup. At the former, people were given access to Microsoft Copilot and GitHub Copilot and encouraged (not "forced") to use them. Yet the power users who wanted access to Cursor or Claude code or ChatGPT were turned down because of [big company reasons]. At the startup we weren't "forced" to do anything; however, we were allowed to use whatever AI tools we wanted to.
As a sidenote, very few engineers I know IRL would agree with what's framed here as "an extraordinary degree of consistency in their feelings about AI". Nor are they "creepy and weird" about AI, they're somewhere between indifferent to and enthusiastic about it.
I think "forced on everyone" is partly describing the aggressive push for it in general, not just company leaders--lots of apps you use have probably had some AI feature announcement, sometimes pushing folks to use it in almost dark-pattern-y ways (e.g. it's easy to accidentally activate) with limited ability to opt out.
There are some companies pushing it on employees harder than the two you described. Glad the ones you've been working with were flexible.
I would love to see more precise language in these conversations. It is easy to say, as I saw in another thread unrelated to AI, "All the complaints against [topic] are unwarranted..." Our own experiences rarely generalize.
(In the other topic, that comment got voted down. I could be wrong, but wouldn't be surprised if a similar comment in an AI thread was not downvoted.)
I'd also love to be able to differentiate, perhaps by different tags, perhaps using Simon's vibeengineering suggestion, between posts that are arguments about AI and posts that are about using AI.