Is Claude Code going to cost $100/month? Probably not—it’s all very confusing
43 points by simonw
There really is a significant difference between $20/month and $100/month for most people, especially outside of higher salary countries.
Strategically, should I be taking a bet on Claude Code if I know that they might 5x the minimum price of the product?
...someone pitched making Claude Code only available for Max and higher
There is still a real difference in "higher salary countries." I've shuffled and cut back on things, if it goes to $100/month, I'm probably out. I'll date myself by saying that current token pricing models remind me of the "Get X minutes for $Y", and "free nights and weekends!" cell phone and long distance plans of the 2000s. It doesn't sound like they've figured out a sustainable billing model yet.
I haven't seen data on how many inactive subscriptions balance out a super ultra agent power user or what the distribution of usage looks like. Tokens seem heavily subsidized based on the "used $5k in compute on $200 subscription" anecdotes.
It's hard to gauge spending and usage on something that seems very uncertain.
The villain in me wants to see the chaos that would unfold if they did something like change billing rates for off-peak hours, à la old phone plans.
They actually did that last month!
I kind of wish they'd do that again because big commercial users in North America can (on average, I would guess) afford to pay more.
Yeh. If you switch to per-token pricing and try to use it like the subscription you quickly realise just how subsidised it is.
I set up a status line in Claude Code so it displays the session dollar cost counter. I asked Opus 4.7 to do a thorough code review on my code base and it spent 75% of what I paid for my subscription in less than 20 minutes in a single session.
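For anyone who wants the same visibility: Claude Code's statusLine setting can run a command of your choosing, which receives session info as JSON on stdin. Here is a minimal sketch of such a script; the payload field names used below (model.display_name, cost.total_cost_usd) are my best recollection, so check the docs for your version:

```python
#!/usr/bin/env python3
"""Status-line script for Claude Code: show the running session cost."""
import json
import sys


def format_status(payload: dict) -> str:
    """Render a one-line status string from the session payload."""
    model = payload.get("model", {}).get("display_name", "?")
    cost = payload.get("cost", {}).get("total_cost_usd", 0.0)
    return f"{model} | session cost: ${cost:.2f}"


if __name__ == "__main__":
    try:
        # Claude Code pipes the session payload as JSON on stdin.
        print(format_status(json.load(sys.stdin)))
    except ValueError:
        pass  # no JSON on stdin, e.g. when run outside Claude Code
```

You'd wire it in via your settings file with something like {"statusLine": {"type": "command", "command": "python3 ~/.claude/statusline.py"}} (the path is hypothetical).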
I'm actually afraid of the coming rug pull.
For what it's worth, this is in line with my prior estimate on Awful that Anthropic is selling their offerings at an 80-90% discount. This adjustment had to happen eventually; previously, on Awful, I did napkin maths to show that the underlying market for the large GPUs, considered as an asset market and contrasted with the cost of electricity, sets a floor for the cost of inference at scale.
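That kind of napkin maths is a few lines of arithmetic. Every number below is an illustrative assumption (GPU price, lifetime, power draw, throughput), not a measurement; the point is only the shape of the floor, straight-line depreciation plus electricity divided by throughput:

```python
def inference_cost_floor_per_mtok(
    gpu_price_usd=30_000.0,        # assumed price of one datacenter GPU
    lifetime_hours=4 * 365 * 24,   # assumed 4-year straight-line depreciation
    power_kw=1.0,                  # assumed draw, incl. cooling overhead
    electricity_usd_per_kwh=0.08,  # assumed industrial electricity rate
    tokens_per_second=1_000.0,     # assumed sustained throughput per GPU
) -> float:
    """Dollars per million tokens implied by hardware depreciation + power alone."""
    usd_per_hour = gpu_price_usd / lifetime_hours + power_kw * electricity_usd_per_kwh
    tokens_per_hour = tokens_per_second * 3600
    return usd_per_hour / tokens_per_hour * 1_000_000
```

With these made-up inputs the floor lands around $0.26 per million tokens, which is the order of magnitude that makes sub-$1/M pricing look tight and heavily subsidized subscriptions look unsustainable.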
Kimi K2.6 just came out and is being sold by pure-inference providers for <$1/million input tokens, which is 1/5th the price Anthropic charge for Opus.
The Chinese AI labs have been coming up with some very impressive optimizations, presumably inspired by those NVIDIA export restrictions.
I don't know if Anthropic will be able to optimize in the same way. Maybe they're incentivized to do so now.
To me it looks like none of the big GenAI providers have even bothered to look at computing efficiencies, because they have all been in a mad race to get raw compute online as fast as possible, and the companies selling them the hardware have been more than happy to try to supply them.
It's one of the few areas where I feel GenAI is less bad than crypto - there will be incentives to make the software more efficient, thus lowering the power input per token access.
I don't know if Anthropic will be able to optimize in the same way. Maybe they're incentivized to do so now.
Why can't Anthropic (or any of the LLM companies) just have their LLM optimize itself? That's the point of an LLM right? To do the tedious bits?
Anthropic has more ads for software engineering positions than ever, which tells you all you need to know about that.
They don't dogfood their own software? Hmm ...
They DO actually dogfood their own software. We've seen the source code to Claude Code and it was clearly written by AI. Just because they have AI writing a lot of the code doesn't mean there is no need to also hire human software engineers -- after all, Anthropic is growing quite rapidly and has a great deal of money at its disposal.
So which LLM company is claiming to CEOs that they can replace software engineers?
Is this a trick question? Because it's all of them.
It is our job to create computing technology such that nobody has to program. And that the programming language is human, everybody in the world is now a programmer. This is the miracle of artificial intelligence. —Jensen Huang
I mean, my basic assumption is that each software engineer will just do much, much more for a while. And then at some point, yeah, maybe we do need less software engineers. —Sam Altman
That’s why my prediction for 50% of entry level white collar jobs being disrupted is 1–5 years, even though I suspect we’ll have powerful AI (which would be, technologically speaking, enough to do most or all jobs, not just entry level) in much less than 5 years. —Dario Amodei
Mark Zuckerberg, in a bizarre turn of events, is replacing himself.
Presumably not Anthropic, considering Claude Code is their biggest success and it's marketed entirely to software engineers.
CEOs who hate their employees seem to be big into Copilot, so maybe it's Microsoft telling them that.
That's actually started happening now, at least according to Anthropic and OpenAI:
Probably because LLMs haven't constructed a generalized representation of LLM optimization that transfers between different contexts. I imagine that asking any LLM to do this today just yields output with superficial similarity to existing optimization blog posts. Even to the extent it's ever likely to be possible, I suspect that the best they'll get is application of optimization strategies from other domains to LLM inference or similarly remixed ideas (which could still be useful, but I wouldn't cancel human research over it).
So the LLM companies are overselling the capabilities of the LLMs they sell? You don't say ...
and is being sold by pure-inference providers for <$1/million input tokens, which is 1/5th the price Anthropic charge for Opus
I mean, once you consider
Of course there are notable efficiency bonuses to be had in some of these (e.g. DSA), but I expect the pricing is also just inaccurate there too.
pure inference providers still tend to price the hosting based on the official provider, not how much it costs to actually serve
I don't understand, are you saying even the pure inference providers are selling below cost?
More generally, I'm just saying that it's entirely opaque; since the pricing is usually tied to the original provider, we don't have insight into what the ideal pricing for other providers would be. (Heck, we don't have much insight into the ideal pricing for the original providers either—the GLM 5.1 price hikes bode poorly, but I'm not sure how much of that is specific to Z.ai being overloaded and trying to keep up.) And even if the other providers could offer it at lower pricing while maintaining a profit... they don't, so we don't really benefit from them not having to also invest in the actual model's development.
I believe he's saying that their margins are high because they sell above cost. E.g. it costs $0.20/M tokens to serve, but the official provider charges $1/M tokens, so they charge a similar amount.
I did napkin maths to show that the underlying market for the large GPUs, considered as an asset market and contrasted with the cost of electricity, sets a floor for the cost of inference at scale.
doesn't Anthropic heavily rely on Google's TPUs nowadays, rather than just GPUs? iirc this was rumored to be part of the reason behind the opus 4.1 -> 4.5 cost cut.
To me the most glaring thing about this is the feckless dishonesty and prevarication by Anthropic around this event. If you want to raise prices, raise prices, but why all this sneaky underhanded shit. Even after it came to light they are showing a total lack of transparency and trustworthiness.
Their goal right now is to stay afloat long enough to make customers dependent on what they provide -- raising prices and being transparent about it is going to go against that. It's going to be interesting when they actually start charging what it costs to provide these things + a profit margin.
This uncertainty is real bad and a preview of the enshittification to come. Every software engineer I know now is using these AI tools and mostly likes them. We're becoming dependent on them already. It's going to suck when they pull the rug on the subsidized pricing.
I keep an eye on local agent alternatives. The new Qwen 3.6 model is quite encouraging, for instance. But these things aren't anywhere near as capable as the big coding agents like Claude Code. It's also going to suck if we have to start delegating to a half-lobotomized assistant because we can't afford the competent one.
You know what's even more competent than an AI agent, and costs nothing? Original thought.
This is one of my many problems with going on the AI bandwagon: the majority of people are just signing up for Claude or whatever and there's no roadmap to speak of that can be depended on in terms of pricing or capabilities. You can't assume that the products will be available in 12 months, or do exactly what they do today, or cost roughly the same amount for the same service. Maybe they'll improve, maybe they'll decline, maybe they'll jump in price or change ToS so that you can only consume the service in approved ways...
Nothing this industry has done gives me confidence that they'll act in the best interest of ... well, anybody except themselves and investors. And I'm honestly not even 100% sure about the investors part.
I should note that I pay $200/month for Claude Max and I consider it well worth the money
Would you mind sharing your revenue generating projects that justify this expense? Your blog does NOT count, it has to be software you build and sell and earn >$200/mo from that you would not have the time to build otherwise.
That is the ONLY way such a huge expense can be justified. Even as a well paid software developer that is a big expense for any person. That's more than my electric bill, and LLM access gives me way less utility than the power company does. It's well out of reach for most people.
$200/mo is not a lot for a business, but you're just a guy. How do you define "well worth the money?" What is your measured ROI?
I have a hard time understanding the thought process of paying for the 200/mo plan but I think arguing that he MUST be making his money back to justify spending the money is not very persuasive. I spend more on my gym membership than my electric bill and I could cancel and do bodyweight exercises without equipment, but I will not be cancelling my gym membership any time soon despite the fact that it produces no revenue for me.
It's strange to argue that a tool which is supposed to unleash massive economic productivity (that being what justifies the enormous capital expenditure) doesn't need to generate a financial ROI.
The non-financial ROI on working out is clear. Gym memberships, on the other hand, have hotly debated ROI and are known for scammy practices so that's a particularly apt analogy, although probably not in the way you intended.
It obviously needs to generate ROI to justify capital expenditures across all users, but it doesn't need to literally generate profit for Simon for him to justify purchasing it.
That's your opinion, and mine is that $200/mo is enough that if he says "it's well worth the value," he must qualify that claim. I didn't pose the question to you. Simon is here every day and is capable of answering my question if he wants to do so.
If the answer is that the ROI is some intangible value, that's still an answer and I'd like to hear him justify what that non financial ROI is and why it is worth so much money to him.
I didn't pose the question to you
Fair enough.
If the answer is that the ROI is some intangible value, that's still an answer and I'd like to hear him justify what that non financial ROI is and why it is worth so much money to him.
FWIW this reads as much less hostile and narrow than your original comment. I hope Simon answers.
That's your opinion, and mine is that $200/mo is enough that if he says he's getting significant value out of it, he must qualify that claim
You said "it has to be software you build and sell and earn >$200/mo from", which seems perfectly in line with dv's description.
Yes, because the financial part is what I'm interested in primarily, but it would still be interesting to hear him justify it another way. Doing that justification -- which could still be valid for him on a personal level -- would imply that there is no financial ROI, which is still interesting information to me, if someone like Simon were to say it.
I included the quoted portion to rule out his blog because he obviously makes money writing about benchmarks and stuff, but that's not the same as creating value with an LLM directly, and perhaps I shouldn't have tried to be specific.
The line you quoted is not about "unleashing massive economic productivity". It says "I consider it well worth the money". He says he's using the subscription in his work and teaching, nothing about economic viability.
I mean, if we go there, then why pay $200 for a mechanical keyboard if you get the same value out of a cheap $10 one from Amazon? We buy many, many tools, and they help us in work or life or both, and we value those tools differently.
I don't want to pay 200 for an LLM subscription. But it does not mean Simon shouldn't, as well.
A $200 investment in a mechanical keyboard is a) a one-time investment, not a recurring monthly payment, and b) lessens the possibility of carpal tunnel syndrome, thus avoiding a larger medical fee later down the road.
$200/month is about what I spend on streaming media plus getting food delivered when I don’t feel like leaving the house. Neither of which has a financial ROI, I just enjoy them.
Why doesn't my blog count? It's my primary income source at the moment, and features Claude Code built for me there (the sponsor banners, the infrastructure for my new guides section) have already paid for Claude Code Max subscriptions many times over.
Most of the code I write is open source, which makes assigning revenue to it a little hard!
What I value is my time. Claude Code saves me dozens of hours a month, which is time I can spend on activities that make money - mainly consulting calls and writing/researching for my site.
Blog (website) features count, I think! I meant using LLMs as a topic for the blog content, that seems indirect (and obvious). The blog "doesn't count" only because I wanted to know how you were getting a return using LLMs for things aside from researching LLMs and writing about LLMs -- using them to build blog features "counts." Sorry for being unclear.
Difficult to quantify time savings are also a completely valid answer, should have been expected even. Thanks for answering!
I'm as skeptical as the next person about the value of LLMs, but insisting that someone has to generate earnings to consider something "well worth the money" doesn't seem reasonable to me. If Simon had said "I spend $200 a month on carpentry courses and tools and consider it worth the money" I don't think it'd be reasonable to insist that he justify that money by showing he's selling more than $200 a month in wood furniture.
Yes, if you see LLMs as a hobby. If you see them as a productivity tool, then the money becomes investment, in exchange for which a return has to be measured to qualify this a good or bad investment.
Now I don't doubt Simon when he says that his subscription is worth the money in his case, but I also don't think his situation generalizes well given a lot of his current visibility precisely revolves around him using and evaluating LLMs.
Anthropic serving a free "we will uphold transparency and trust for our users" to OpenAI would be hilarious if these were humourous times.
There is similar confusion around OpenClaw usage: recent HackerNews thread on it. Even as a Max user, I have a lot of claude -p workflows myself so I've been watching with interest.
There is also Claude Design, which, from what I've gotten to use it, is an excellent product, but it was crazy easy to blow through the limit (notably, it's a new limit, separate from the others). I hit my weekly limit in a couple of hours.
My overall impression is that Anthropic is selling a lot of Enterprise licenses at the moment. My data sources are limited, but I believe companies are getting large bundles (e.g. MS-style "here is a bunch of Claude Code licenses too"). The big win, I suspect, is that actual utilization is low overall. So I can see the logic.
At $100 Claude Code is still a bargain for a professional, but the lack of broad access would be a big loss in my view. Especially for the teaching case. We have non-Engineers in the team that build PRs direct w/ Claude Code. At that cost level it would probably be better to route via the engineering team. But that would feel like a regression for sure.
I have a lot of claude -p workflows myself
What sort of workflows? I'm still pretty new to Claude. Would love learning some cool stuff.
A bunch! I do things like generate release notes, post to the relevant teams in Slack, run the browser and generate screenshots, responding to PR comments, rebasing... you name it :)
Some of this stuff is being absorbed by Routines now, but I'm currently just a bit wary of too deeply integrating into a specific tool.
Pretty much anything you do manually is up for grabs.
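As a concrete sketch of the release-notes case: claude -p runs a single non-interactive prompt and prints the result, so it slots into ordinary scripts. The git range and prompt wording below are just illustrative:

```python
import subprocess


def commits_since(tag: str) -> str:
    """Collect one-line commit messages since a tag (illustrative range)."""
    out = subprocess.run(
        ["git", "log", "--oneline", f"{tag}..HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout


def build_prompt(commits: str) -> str:
    """Prompt asking Claude to turn raw commit lines into release notes."""
    return (
        "Summarise these commits as user-facing release notes, "
        "grouped under Added/Fixed/Changed:\n\n" + commits
    )


def release_notes(tag: str) -> str:
    """Pipe the prompt through `claude -p` (print mode, no interactive UI)."""
    return subprocess.run(
        ["claude", "-p", build_prompt(commits_since(tag))],
        capture_output=True, text=True, check=True,
    ).stdout
```

The Slack posting and screenshot steps are the same pattern: gather context with ordinary tools, hand it to claude -p, and pipe the output onward.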