I don't see any evidence of that; the current, live page clearly states:
> Model access: When using a Pro plan with Claude Code, you will only be able to use Opus models after enabling and purchasing extra usage.
And your extremely slow-loading archive link is just the same page with the same text.
helsinkiandrew 1 hour ago [-]
> I don't see any evidence of that, the current, live page clearly states
The live page - last edited 15 minutes after your comment - no longer has that passage.
xer0x 2 hours ago [-]
Thank you for posting this update. EDIT: However, purchasing extra usage for Pro/Max accounts is a new feature.
strictnein 1 hour ago [-]
It's been a feature on the Max accounts for a while now.
oefrha 1 hour ago [-]
I thought documentation is a solved problem and you can just have an agent keep all your docs up-to-date. /s
ukuina 2 hours ago [-]
Opus was consuming so much usage that it was basically unusable on anything but the Max plan.
colechristensen 57 minutes ago [-]
It was burning through usage much too fast even on Max, but I found that moving from xhigh -> high thinking got me equivalent quality with fewer tokens.
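(For reference, the rough API-level equivalent of that knob is the extended-thinking token budget on the Messages API. A minimal sketch follows; the model id and budget values are illustrative assumptions, not Claude Code's actual "high"/"xhigh" presets.)

    # Sketch: trading thinking budget for token spend via the Anthropic API.
    # Model id and budget values are illustrative assumptions only.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    resp = client.messages.create(
        model="claude-opus-4-1",       # example model id
        max_tokens=16000,              # must exceed the thinking budget
        thinking={"type": "enabled", "budget_tokens": 8000},  # smaller budget = less "effort"
        messages=[{"role": "user", "content": "Refactor this function to avoid the nested loop."}],
    )

    # Thinking tokens count toward output usage, so a smaller budget
    # directly caps the per-request spend.
    print(resp.usage.input_tokens, resp.usage.output_tokens)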
batshit_beaver 1 hour ago [-]
This is going to be interesting.
At least for coding, there's little correlation between token spend and the quality (and impact) of the resulting AI suggestion.
This is fine when inference prices are capped (e.g., via a monthly subscription plan or self-hosting), but otherwise it rapidly discombobulates the relationship between provider and user.
It still seems like OpenAI has no moat and neither does anyone else, as the only reasonable way to use the coding slot machines is going to be via open source models on inference-optimized hardware.
Still better than the secret lobotomization they were doing on subscription plan models though.
Aboutplants 2 hours ago [-]
Turns out when the bills start rolling in, you need the revenue to pay them. Here comes the bumpy ride.
ceroxylon 28 minutes ago [-]
I've been using the API for a portion of my work to prepare for this and to test how much it will cost in the long term; turns out that was the short term.
It is rough, but it has taught me to treat every prompt and process with care, since I watch the pennies and dollars burn instead of tokens, which is a good habit to get into anyway.
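(A minimal sketch of that kind of per-request cost watching, assuming the Python anthropic SDK; the model id and per-million-token prices below are placeholder assumptions, not current list prices.)

    # Sketch: log an approximate dollar cost per request from reported token usage.
    # PRICE_PER_MTOK values and the model id are placeholder assumptions.
    import anthropic

    PRICE_PER_MTOK = {"input": 3.00, "output": 15.00}  # assumed USD per million tokens

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    resp = client.messages.create(
        model="claude-sonnet-4-5",  # example model id
        max_tokens=1024,
        messages=[{"role": "user", "content": "Summarize this changelog in three bullets."}],
    )

    u = resp.usage
    cost = (u.input_tokens * PRICE_PER_MTOK["input"]
            + u.output_tokens * PRICE_PER_MTOK["output"]) / 1_000_000
    print(f"in={u.input_tokens} out={u.output_tokens} ~${cost:.4f}")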
this_user 1 hour ago [-]
OpenAI and Anthropic are both planning IPOs this year. They are clearly trying to polish their finances before filing their S-1s. Because their advisors will have told them that it's going to be a very difficult sell at these valuations if they cannot at least present the idea of a path towards profitability.
pupppet 1 hour ago [-]
A few years from now we’ll probably look back and laugh at how AI used to be essentially free.
yrds96 4 hours ago [-]
Basically the title.
Anthropic is making changes to its help and support pages in what looks like the next pricing change, regarding how users will be able to use Opus models on the Pro plan.
6Az4Mj4D 4 hours ago [-]
It's just a matter of time before prices become so high that enterprises cannot afford these tools unless they self-host open models. The only problem is that the open models are coming from China, and not many countries trust them for use inside companies. How will this play out?
still_grokking 12 minutes ago [-]
Looks like the Chinese are, again, winning the long game.
There is simply nothing that can compete with their open models. At the same time, more and more corporations have become "AI addicted", so they will either have to pay ridiculous amounts of money or use the Chinese stuff.
tipiirai 1 hour ago [-]
Many people trust China more than the US.
kingleopold 3 hours ago [-]
US investors can't afford prices being too high, though. This thing would wipe out trillions if they made prices a lot higher. Interesting times ahead.
arvid-lind 2 hours ago [-]
Yep, it is their responsibility to make all of this worth caring about.
ihsw 2 hours ago [-]
[dead]
thoughtlede 2 hours ago [-]
Investor funds have been subsidizing the inference costs so far.
Investors might move from funding the model providers to funding the enterprises that use those models. That is, they might move from funding the cost of the experiment to funding the value of the result. No funding if there are no demonstrable AI gains.
This is a reasonable shift if this happens. If enough gains have been demonstrated, then investors might go back to funding the model providers. Investors always move towards the highest leverage point.
As long as AI delivers, this would be the rhythm.
j45 2 hours ago [-]
Investors can try to subsidize while delivery costs, model evolution, and efficiency improve, while at the same time completing market capture.
spudlyo 1 hour ago [-]
The messaging around what is and isn't allowed with the various Claude plans has been so very muddled as of late. Add to that declining model performance, changes to default reasoning effort, expanded token usage, caching bugs, and corporate denials and gaslighting -- I don't think it's overstating matters to say they've suffered some major self-inflicted reputational damage.
As it stands now, there is so much FUD surrounding their offerings, I'm not sure what they could do in the short term to turn things around.
colechristensen 60 minutes ago [-]
It's just an organizational maturity thing.
They need to start shifting from "move fast and break things" to "move faster by slowing down". Their public communication, feature set, and organization as a whole need to start matching the scale and level they're competing at. They won many hearts and minds by being better and are losing them by being chaotic. Different outcomes from the same internal behavior, because they needed to change gears and haven't.
Response from @thariq:
> apologies for the confusion