Wrote about both the per-model math and the scaling question:
(1) https://philippdubach.com/posts/ai-models-as-standalone-pls/
(2) https://philippdubach.com/posts/the-most-expensive-assumptio...
EDIT: Removed the dot after "et"; apparently it's an entire word on its own, not an abbreviation (the more you know...)
This is a decent argument, but it's not the death knell you think it is.
Models are getting roughly 99% more efficient every three years: combined with hardware and (mostly) software upgrades, you can produce the same amount of output using 99% less power.
The number of applications where AI is already "good enough" keeps growing every day. If the cost goes down 99% every three years, it doesn't take long until you can make a ton of money on those applications.
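For what it's worth, that claimed rate compounds very fast. A quick sanity-check sketch, taking the (disputed, see replies below) 99%-per-three-years figure at face value:

```python
# Back-of-the-envelope: how fast does a "99% cheaper every 3 years" curve fall?
# The 99% rate is the parent comment's claim, not an established figure.

def cost_after(years, drop_per_period=0.99, period_years=3, start_cost=1.0):
    """Remaining cost per unit of output after `years`, assuming the same
    fractional drop repeats every `period_years`."""
    periods = years / period_years
    return start_cost * (1 - drop_per_period) ** periods

print(f"after 3 years: {cost_after(3):.4f}")   # prints 0.0100 (1% of original)
print(f"after 6 years: {cost_after(6):.6f}")   # prints 0.000100 (0.01%)
# Implied annual multiplier: 0.01 ** (1/3) ~= 0.215, i.e. ~4.6x cheaper per year
print(f"per-year factor: {0.01 ** (1/3):.3f}")
```

At that pace, anything that is merely "too expensive today" stops being too expensive within one product cycle.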
If AI stopped progressing today, it would take probably a decade or longer for us to take full advantage of it. So there is tons of forward looking revenue that isn't counted yet.
For the foreseeable future, there are MANY MANY uses of models where a company would not want to host its own models and would be GLAD to pay a 4-5x cost for someone else to host the model and hardware for them.
I'm as bullish on OpenAI being "worth" $730B as I was on Snap being worth its IPO price - from which it's still down about 80% (after inflation, or roughly 95% if you adjust against gold).
But guess what - these are MINIMUM valuations based on 50-80% margins - i.e. they're really getting about ~$30B; the rest is the market value of hardware and hosting. OpenAI could be worth 80% less and still easily make a metric fuck-ton of money selling at IPO with a $1T+ market cap to speculative morons...
Realistically, very rich people with high risk tolerance are saying that they think OpenAI has a MINIMUM value of ~$100B. That seems very reasonable given the risk tolerance and wealth.
And as the number of things AI is “good enough” at increases, the list of things on the frontier that people will want to pay OpenAI for shrinks. Even if OpenAI can consistently churn out PhD level math, most companies don’t care about that.
So a necessary (but not sufficient) condition for the math to work out is that frontier tasks still exist and are profitable. This is why CEOs keep hyping up AGI. But what they really want is for developers to keep paying to get AI to center a div.
Irrelevant. The model is the moat.
> most companies don’t care about that.
Wrong. They will use the model that gives them an edge. If they are using a PhD but their competitors are using Einstein, they will lose.
> center a div
For sure a common use case, but it is not what the CEO is concerned about with AI.
For some tasks that matters. But for a lot of tasks, "good enough but cheaper" will win out.
I'm sure there will be a market for whichever company has the best model, but just as most companies don't hire many PhDs, most companies won't feel a need for the highest-end models either, above a certain level.
E.g. with the release of Sonnet 4.6, I switched a lot of my processes from Opus to Sonnet, because Sonnet 4.6 is good enough, and it means I can do more for less.
But I'm also experimenting with Kimi, Qwen, Deepseek, and others for a number of tasks, including fine-grained switching and interleaving. E.g. have a cheap but dumb model filter data or take over when a sub-task is simple enough, in order to have the smart model do less, for example.
For models that run on general-purpose AI hardware, I don't know why the vendors would waste that resource on old models.
In terms of price, I can get 1m output tokens from Deepseek for 40 cents vs. 25 dollars for Opus, and a number of models near the 1-2 dollar mark that are increasingly viable for a larger set of applications.
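As a rough sketch of how much that cheap/smart interleaving can save, using the prices quoted above ($0.40 vs. $25 per 1M output tokens); the routing heuristic and the 80/20 easy/hard split are made-up illustration numbers, and no real vendor API is called here:

```python
# Sketch of a cheap/smart model cascade. The per-1M-output-token prices are
# the ones quoted in this thread; the routing heuristic, the 80/20 easy/hard
# split, and the model roles are made up for illustration (no API is called).

CHEAP_PRICE = 0.40   # USD per 1M output tokens (Deepseek-class, per the thread)
SMART_PRICE = 25.00  # USD per 1M output tokens (Opus-class, per the thread)

def route(difficulty: float, threshold: float = 0.7) -> str:
    """Send easy tasks to the cheap model, hard ones to the smart one."""
    return "cheap" if difficulty < threshold else "smart"

def blended_cost_per_million(difficulties: list) -> float:
    """Average output-token cost, assuming every task emits similar volume."""
    prices = [CHEAP_PRICE if route(d) == "cheap" else SMART_PRICE
              for d in difficulties]
    return sum(prices) / len(prices)

# If 80% of tasks are simple enough for the cheap model:
tasks = [0.1] * 80 + [0.9] * 20
print(f"${blended_cost_per_million(tasks):.2f} per 1M tokens")  # prints $5.32
```

Even with a crude threshold, the blend lands closer to the cheap model's price than the expensive one's, which is the whole appeal of "good enough but cheaper."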
Providers will keep running those cheaper models as long as there's demand.
What model? GPT-4o certainly isn't a moat for OpenAI. They need to keep training better and better models because Qwen3, Kimi K2.5, etc. are constantly nipping at their heels.
> Wrong. They will use the model that gives them an edge. If they are using a PhD but their competitors are using Einstein, they will lose.
It depends on the business. As much as I’d love to engage a PhD or an Einstein in my Verizon customer support call, it isn’t going to net the call center any value to pay for that extra compute.
My PhD vs Einstein analogy was bad. What I mean is stupid vs smart. Nobody is going to pay for a stupid model when they can pay a bit more for smart.
Even if true, this still doesn't bend the curve when paying for the next model.
> If AI stopped progressing today, it would take probably a decade or longer for us to take full advantage of it. So there is tons of forward looking revenue that isn't counted yet.
If this is true, it's true for the technology overall, and not necessarily OpenAI since inference would get commoditized quickly at that point. OpenAI could continue to have a capital advantage as a public stock, but I don't think it would if the music stopped.
The market adoption has increased a lot. The cost to serve has come down a lot per token.
Model sizes have not increased exponentially recently (the high point being the aborted GPT-4.5); most recent refinement seems to come from extending training on relatively smaller models.
Taken together, the relative training-to-inference income/cost ratio has likely changed dramatically.
AI stopped progressing, or LLMs? I really dislike people throwing the term AI around.
It's more like 2x efficiency. Then you get 50% less power, not a ridiculous 99% less.
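To make the gap between the two claims concrete (both numbers are assertions from this thread, not measurements):

```python
# Making the two efficiency claims concrete. Both numbers are assertions
# from this thread, not measurements.
import math

# "2x efficiency" means half the power for the same output:
print(f"2x efficiency -> {1 - 1/2:.0%} less power")  # prints 50%

# "99% less power" is a 100x efficiency gain, i.e. ~6.6 doublings:
print(f"99% less power -> 100x, ~{math.log2(100):.1f} doublings")
```

The difference matters: 100x is almost seven doublings crammed into the same window that a single 2x doubling would fill.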
That's a pretty strong statement that would need some data or at least a mathematical argument to back it up. Otherwise it's like saying in the 1980s that PCs with 640kB RAM have reached their pinnacle in terms of what users can expect in real life benefits and there's no reason to keep pushing the tech.
This is such a poor argument for a number of reasons.
1. Three years ago is basically when the "AI race" really kicked off amongst the frontier companies. You're effectively comparing a car from the 1920s/30s to a modern car.
2. Past performance is not an indicator of future performance. You can't just say that LLMs will grow and improve at a fixed rate for all time; that isn't how they, or anything else in the real world, work.
3. Since it's an open secret that companies like Anthropic and OpenAI are running their models at a loss, a static 99%-cheaper-every-three-years trajectory still leaves these companies in a net-negative position unless compute, energy, and water all somehow start getting 99% cheaper every three years too.
The LLM industry has only been around for about 4 years. Extrapolating trends from that is pretty naive.
How many years total are you basing this on?
Ugh. Someone has to do this: https://xkcd.com/605/
From Latin "et alia", abbreviated as "et al." - it's not a single word but an expression.
Yes, but there's a chance that actually training is done more or less for free by companies like OpenAI. The reason being that they do a gigantic amount of inference for end users (for which they get paid), but their servers can't be constantly utilized at 100% by inference. So, if they know how to schedule things correctly (and they probably do), they can do the training of their new model on the unutilized compute capacity. If you or I were to pay for that training, it would be billions of dollars, but for them it is just using compute that otherwise would be idle.
Why are we so opposed, in principle, to the current pre-training scaling laws? Perhaps we'll require new innovations at some point, but the momentum allows us to reach heights we've never climbed before.
Those conditions are an IPO or reaching AGI [1].
Nvidia and SoftBank will pay in installments.
Also very interesting that Microsoft decided to not invest in this round. A PR statement was made though [2].
[1] https://americanbazaaronline.com/2026/02/26/amazon-to-invest...
[2] https://openai.com/index/continuing-microsoft-partnership/
(Which for anyone familiar with your long comment history as a regular HN poster, is comically absurd to imply. You've been reliably adamant that AI will demolish this or that entire industry overnight for years at this point).
We'll see who's right. I never said "overnight". Let's check in at the decade's end.
Y'all dunked on me in 2019 when I said AI was coming for Hollywood. Have you seen Seedance 2.0?
It's coming for us too. I've written five nines, active-active systems that handle billions of dollars of money movement daily. These systems can work in those contexts. I didn't think we'd be here this soon, and I actually thought LLMs were a dead end. I was wrong.
I'm not trying to sell Claude Code. I hate the concept of hyperscaler companies. I want there to be viable open source coding models - there just aren't. I'm merely reporting on my findings.
I sit at my machine for hours now in a prompt, review, test cycle. It's addictive. I'm getting more done at a faster rate than any time in my professional career. I'm excited, and I'm also worried. I don't know what happens after this.
If you've seen how much I praise AI, then you've also seen how much I rail against monopolies. I am worried these giant companies are going to take the means of production from us. I don't think enough people are freaking out about this. It's a very real possibility.
I'm just going to keep building. But you should pay close attention to what's happening.
> Y'all dunked on me in 2019 when I said AI was coming for Hollywood. Have you seen Seedance 2.0?
Being right at the wrong time is often worse than just being outright wrong, I've found.
Only one of these can be true. It's not a shame to say you don't bother reviewing it; in the future that may well be the norm.
I'm prompting every change set, reviewing the outputs, then reviewing the total changes.
I'm sitting at my PC all day doing this - I used to be productive in short bursts, now I'm productive all the time. It's addictive.
I used Claude a lot on a recent project where it probably wrote 15-20k lines in a month, and it was overall excellent.
I can't get Augment / Opus 4.5 to edit a few C++ files from within VSCode without going off on a wild goose chase or getting stuck in an infinite loop after I tell it what it should be doing: "oh, you're right, I need to do X", "To do X, I must understand how to do Y", "I see now that to do Y, I should look at Z", "Let me look at Z", followed by: "oh, you're right, I need to do X"...
Have you read Alexandrescu’s “modern c++”? It’s like a piece of modern art but completely not self aware. There’s just something about C++ that lures intellectuals in; like ice age mammoths to a tar pit.
Small wonder LLMs also fall victim to C++'s deranged ways.
Reviewing 1k lines of code an hour is a breakneck pace, are you spending 20 hours a day reviewing code?
Building things at a mature company with a market is a lot different than hacking together your own tools. There are a lot more people you can let down at scale.
If you're generating 20kLoC per day, you definitely aren't reviewing it!
I think you've crossed the line from being an AI maxi to just rage baiting. This comment is a pointless anecdote at best, please take your ridiculous FOMO takes elsewhere.
The actual quote is this though:
> hitting an AGI milestone or pursuing an IPO
So it seems softer than actually achieving AGI or finalising an IPO.
Incredible, how an entire religion has sprung up around AGI.
Fortunately, OpenAI already wrote theirs down. Well, Microsoft[0] says they did, anyway. Some people claimed it was a secret only a few years ago, and since then LLMs have made it much harder to tell leaks from hallucinated news, but there's at least a claim of a leak[1].
[0] https://blogs.microsoft.com/blog/2026/02/27/microsoft-and-op...
[1] It talks about it, but links to a paywalled site, so I still don't know what it is: https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
- the prevalence of "How many |r|'s are in the word 'strawberry'?"-esque questions that cause(d) LLMs to stumble
- context window issues
It would be naive to claim that there does not exist, or even that it would be difficult to construct/train, an interrogator that could reliably distinguish between an LLM and human chat instance.
[0]: https://archive.computerhistory.org/projects/chess/related_m...
The Turing test could also be considered equivalent to "can humans come up with questions that break the AI?" and the answer to that is still yes I'd say.
As for writing in general, slop score is still higher than a human baseline for all models[1], so all a human tester has to do is grade it and make the human write a bunch; the interrogator is allowed to submit an arbitrarily long list of questions.
Yes and it's actually hilarious: a system that can perform most economically valuable work better than humans, or specifically when the AI generates $100 billion in profits.
Are they going to get stock for it or is it a PIPE?
Personally, I don’t think I want to get in on this at retail prices.
It can both be true at the same time that AI is going to disrupt our world and that being an AI lab is a terrible business.
I think the even better analogy than browsers is search engines. There aren't any network effects or platform lock-in, but there is potential for a data flywheel, building a brand, and just getting users in the habit of using you. The results won't necessarily turn out the same - I think OpenAI's edge on results quality is a lot less than early Google over its competitors - but the shape of the competition is similar.
Now all search engines suck, and Google's sucks just as bad or worse than the rest.
If someone were to follow the original Google playbook and make a search engine that helped people find things (e.g. by respecting the query syntax rather than making 'helpful' suggestions and dropping words the user included in their query) and kept the ads separate and out of the way of results, they might well make a monster. But this is old tech so nobody cares, and everyone thinks Google is unassailable even while nobody likes them anymore. Is there /any/ money in search? I thought so, but I must be wrong for it to get this bad.
There is no moat
On iOS with the Apple agreement, and on Android (though the question of hardware remains when considering beyond Pixel phones).
https://www.businessinsider.com/openai-chatgpt-vs-gemini-web...
"What's your number one piece of hiring advice?"
"Hire for slope, not Y-intercept. This is actually my number one piece of life advice."
-@sama, who I’m generally a big fan of. But the job is now harder
About 5% according to a news article a few months ago.
Will the other 95% stick around once ads or payments are required?
If market share is a moat, IBM should still be the biggest tech company.
When they cost more to serve than they bring in, customer switching costs are vanishingly low, and your competitor has revenue from other things while you don't.
What? "Other things"? This is really vague. Who says competitors have lower CAC? It's rather likely competitors pay more for a new customer, due to, very simply, brand.
If it’s not the quality of their answers?
> This plan may include ads. Learn more
> When will ads be available in ChatGPT?
> We’re beginning in the US on February 9, 2026
> Starting in February, if ads personalization is turned on, ads will be personalized based on your chats and any context ChatGPT uses to respond to you. If memory is on, ChatGPT may save and use memories and reference recent chats when selecting an ad.
You pay 8 USD/month and get higher limits and ads.
Google worked as a free service because their backend was cheap. AI models lack that same benefit. The business model seems to be missing a step 2.
Claude has impressive mindshare in many engineering disciplines too, and given how many open source projects are a play on its name, I'm not sure I'd argue it isn't catchy either. It certainly rolls off the tongue more easily for me than "ChatGPT" does, which even Sam Altman, their CEO, agrees is an awful product name they are stuck with.
It's much more important to look at "paid." Only up to 50M (est.) are paid with a substantial chunk (10M) as enterprise/edu/promotional paid accounts.
ChatGPT currently has 800 million monthly active users, out of 8 billion humans.
$30B at $380B post-money for Anthropic announced two weeks ago
This does not increase my confidence in OpenAI's future
> Sam Altman says OpenAI shares Anthropic's red lines in Pentagon fight
90% chance it's all PR but who knows
> Today we’re announcing $110B in new investment at a $730B pre-money valuation. This includes $30B from SoftBank, $30B from NVIDIA, and $50B from Amazon.
We try to avoid having corporate press releases as the top-level link, though of course there are exceptions sometimes.
e.g. it talks about running NVIDIA's systems (?) on AWS
> NVIDIA has long been one of our most important partners, and their chips are the foundation of AI computing. We are grateful for their continued trust in us, and excited to run their systems in AWS. Their upcoming generations should be great.
I'm getting unreasonable amounts of compute for $0/m. I have ChatGPT, Claude, Gemini and Grok in different browser tabs. When one runs out, I switch to the next.
Any other recommendations are welcome.
200 USD at Claude, versus 3000 USD (literally) at Gemini. Well, then it will be Claude.
If tomorrow Claude is 5000 USD, well, then it will be Gemini.
Use these freebies/relatively cheap tools up 'whilst stocks last'.
I personally managed to create a very high quality marketing promo vid using grok. After spending weeks of enduring a lot of pain. But I saved myself tens of thousands.
I took advantage of 30 Grok premium subscriptions that were given to me via a free trial. There's no doubt the cost of services I took advantage of is in the tens of thousands.
But what do I care? I get what I want and then I get out before the freebies disappear.
LOL at the crybabies down-voting. Get mad, bruh, get mad.
Very interesting, I will follow it closely, mostly to see how you get a return on $110 billion in a couple of years.
But I'm not sure there is enough money in the world (literally) for them to get an IPO high enough. If they sell too little of the company, they'll still be holding the bag when it fails.
They just passed $20B in revenue; you can't really expect a company with this much hype and traction to have a 1x multiple... that's not to say a 35x multiple makes sense either.
So much human potential given to such a tiny group of incompetent morons.
When the history books are being written, this era will make the robber baron/gilded age pale in comparison.
I wish I could’ve lived and died one or two decades ago. Or better yet never even been.
Such a waste of burnt money that could be used for more useful projects.
I'm ready to embrace change, however in this case no one cares. The cheese hasn't just been moved, it has been taken to another planet where us mice are not allowed to go.
Note: I need work, not interviews. ;-)
It's clear that the stock market cannot be considered normal anymore; it's held up on hopes and prayers at best.
Is the same thing true for corporations? At some point the numbers are so wild the entire economy must help you succeed? I don't mean "too big to fail" exactly, more like "so big eventual success is guaranteed at all costs"
I'm sure that $50b has my money in there somewhere.
I've seen plenty of comments here doubting OpenAI's long-term viability. But that's not the same as people predicting one of the hottest companies right now will somehow suddenly run out of cash all on its own.
The fact it's become a household name internationally (giving it the appearance of success) can't save it from spending dramatically more money than it makes. It's been coasting on investments, but it's not even close to being actually profitable.
Huge or well-known companies have collapsed before, even though - because people become so used to them existing - it never quite feels like it will actually happen until it does.
Now it's looking like a competitive blood bath where ever-increasing levels of investment are needed just to maintain market position. Their frontier models are SOTA for 4 weeks before a competitor comes and takes the crown. They are standing on much shakier ground than they were 2 years ago.
By comparison, Anthropic is projected to break even in 2028. Google's Gemini is already profitable.
https://advergroup.com/gemini-hits-650-million-users/
I didn't really realize how big Gemini was until I saw that Qualia was using it; they apparently used 0.01% of Gemini's total tokens (100 billion) in about 3 months. They're in production in the title and escrow industry, so that's a great deal of data going through Gemini. Unlike some chat subscription, this is all API-driven, which I doubt Google is charging for at a loss.
https://www.qualia.com/qualia-clear/
Unlike OpenAI, Google has an actual business model, not just strange circular deals.
Edit: I miswrote "majority of" instead of 15% of Google's profits.
This does not at all tell us Gemini is profitable or driving 15% of Google's profits. The article does not mention profits even once. It then goes on to bizarrely compare Gemini's monthly active users to OpenAI's weekly active ones.
It kinda feels like an LLM-generated article that another LLM picked as a "citation", and then no human bothered to check if it actually said what the LLM said it did.
And, really, advergroup.com? Who cites an advertising agency as if it's a reliable source?
https://advergroup.com/digital-marketing/
"AdverGroup Web Design and Creative Media Solutions is a full service advertising agency that delivers digital marketing services. We manage Google Ad Word campaigns and/or Meta Ad Campaigns for local clients in Chicago, Las Vegas and their surrounding suburbs."
So credible a resource on Gemini's performance/profitability... /sarc
But yeah, it doesn't even actually say anything about profits, let alone attribute any specific percentage of profits to Gemini. It's just vague marketing copy.
There will definitely be room for AI. OpenAI is just not really showing that they care about a particular business model - probably a strong indicator that Sam Altman is the worst person to lead that company. Anthropic will be profitable before OpenAI ever will be.
Gemini is in the green in terms of spending / income ratio FYI. I'm not talking about stocks.
I can't believe people who think this actually exist.
By the way, if Kamala, Biden, or Newsom were in office, I'd also call them führer.
We live in a technocratic authoritarian state: the world's largest prison population, the most police executions, actively sponsoring multiple genocides, and over one million civilians killed in the Middle East in two decades.
our politicians on both sides will go out of their way to protect pedophilic members of the ruling class...
But you want to tell us we're exaggerating or interpreting a reality that doesn't exist; I think you're the one who's been convinced through the regime's doublespeak that everything's alright.
Please reevaluate. The US government is literally the 4th Reich and actively committing holocausts on multiple fronts.
It’s not a dishonor to their memories, or the atrocities committed, to call that out. It is not a dishonor to say there are stark and real similarities between the way the US is operating and treating civilians.
I personally find the opposite: IMHO it dishonors their memories to refuse to acknowledge the similarities.
I’ve posted a comment similar to this one here before, and I like how I ended it. I strongly encourage you to read about the history of Nazi Germany and how it came to happen. It wasn’t a jump from zero to death camps; it was 15 years in the making. That history is deeply shocking and depressing, because the parallels and timelines between it and the US are too similar for anything besides outright discomfort, sadness, and fear. But without knowing that history, we are ever more likely to repeat it.
One final thing to note: the US has a history of extreme violence, slave patrols and the treatment of non-whites of the 19th century were an inspiration for Hitler.
The signal the agent usage is sending, though, is that Anthropic is way ahead, since all we hear about is Claude these days despite OpenAI spending so much more money. Anthropic is also out trialing vending machines, etc.
ChatGPT apart from generating text was a bit of a query/research tool but now that Google has their AI search augmentation shit somewhat together I'm not feeling much need for ChatGPT as a research partner.
So now the big question is, with coding and search niches curtailed, where will OpenAI be able to generate profits from to justify their insane spending?
If investors keep throwing obscene money at OpenAI, sure, they can stay afloat forever. Can't argue with that. But if we're talking about a sustainable business, I still don't see it.
Recent high-profile examples include Segway, NFTs, crypto as a whole, pre-transformer voice assistants, and various "Design Thinking" projects like those Amazon Dash buttons.
It was about countering dismissal of all voices of caution as blind dismissal.
Saying X succeeded while people were saying it will fail is just survivorship bias.
Also your parents and grandpa did indeed hear about the Segway.
Free ChatGPT chat has made the company a household name, and helped it to persuade investors, but every single one of those free users costs the company money. Most of those free users have proved unwilling to convert to paid users, and adding ads to the free service promises to send it into the same enshittification death spiral so many other companies have fallen into.
Also, how on Earth would your grandma and parents not have heard of crypto? Crypto is frequently front page news, even in print newspapers. There have been crypto superbowl ads. Are they living under a rock?
Also Softbank invested, which is never a great signal.
- Anthropic owes its enterprise growth to AWS. Yes, their own talent as well.
- AWS investing for a purpose - solving problems with multi-agent systems - "exclusive third-party cloud distribution provider for OpenAI Frontier, which enables organizations to build, deploy, and manage teams of AI agents.". I think the multi-agent landscape will be production-ready in 2026 for solving really complex problems. AWS saw something in Codex and OpenAI's models.
- On circular investments: if you make $100B of your revenue from an ecosystem of players who spend $50B on your infra... where else would you go?
I work for another cloud provider, not AWS.
This sounds a bit like, going forward, (some) OpenAI APIs will also run on platforms other than Azure (i.e. AWS)?
Anyone knows more?
OpenAI desperately needs to be available outside Azure. We are exclusively using Anthropic atm because it is what is available in AWS Bedrock and it works. These things are solidifying fast.
Or is it just to keep Nvidia from crashing?
To me it feels like one of those throw some play money into it and see what happens sort of situations. Expect it will return negative due to the raw financials and outlook, but small chance the brand carries enough weight with the public that it spikes.
I'd love to hear other thoughts though
But at such numbers it's nonsense.
I don't see any moat. LLMs are commodities.
Enterprise is on Gemini/NotebookLM and Copilot as it's a natural extension of the Google and Office suite they use.
Devs are in Anthropic camp, but they will jump as soon as they can save 90% of the money for 99% of the output.
Incredible.
It can both be true at the same time that AI is going to disrupt our world and that OpenAI does not have a business model that supports its valuation.
I don't think that's my relationship with AI, I'm hardly an uncritical booster. But would I know if it was?
For instance, I'm very skeptical of AI and, from experience, do not think that the current models are worth the cost, but I'm always on HN trying to find arguments/people that use AI successfully to prove that I'm wrong.
I don't need to convince you it's worth it for you, but it's easy to see that other people have found a way to make it worth it for themselves. I would definitely not spend as much as I personally do if it wasn't worth it to me.
https://fortune.com/2026/02/26/tesla-robotaxis-4x-8x-worse-t...
World will still need software, lots of it. Their valuation is based on an entire developer-less future world (no labor costs).
What is somewhat justifying OpenAI's valuation is that they are still trying for AGI. They are not just working on models that work here and now, they are still approaching "simulating worlds" from all kinds of angles (vision, image generation, video generation, world generation), presumably in hopes that this will at some point coalesce in a model with much better understanding of our world and its agency in it. If this comes to pass OpenAI's value is near unlimited. If it doesn't, its value is at best half what it is today
And that's the dealbreaker for me since they've been so adamant on scaling taking them there, while we're all seeing how it's been diminishing returns for a while.
I was worried a few years back with the overwhelming buzz, but my 2017 blogpost is still holding strong. To be fair it did point to ASI where valuation is indeed unlimited, but nowadays the definition of AGI is quite weakened in comparison.. but does that then convey an unlimited valuation?
Yes, this is kind of like Tesla promising full self driving in 2016
I completely agree. I'm ashamed to admit, I've actually walked to the car wash without my car on more than one occasion. We all make mistakes!
Not that dumb, no. That's why it's laughable to claim that LLMs are intelligent.
"If your goal is to get your dirty car washed… you should probably drive it to the car wash "
The large hosted model providers always "fix" these issues as best as they can after they become popular. It's a consistent pattern repeated many times now, benefitting from this exact scenario seemingly "debunking" it well after the fact. Often the original behavior can be replicated after finding sufficient distance of modified wording/numbers/etc from the original prompt.
But this question posed to humans is plenty ambiguous because it doesn't specify whether you need to get to the boat or not, and whether or not the boat is at the wash already. ChatGPT Free Tier handles the ambiguity, note the finishing remark:
"If the boat wash is 50 meters down the street…
Drive? By the time you start the engine, you’re already there.
Sail? Unless there’s a canal running down your street, that’s going to be a very short and very awkward voyage.
Walk? You’ll be there in about 40 seconds.
The obvious winner is walk — unless this is a trick question and your yacht is currently parked in your living room.
If your yacht is already in the water and the wash is dock-accessible, then you’d idle it over. But if you’re just going there to arrange detailing, definitely walk."
The original car question is not ambiguous at all. And the specific responses to the car question weren't even concerned with ambiguity at all, the logic was borderline LLM psychosis in some examples like you'd see in GPT 3.5 but papered over by the well-spoken "intelligence" of a modern SOTA model.
"any human can instantly grok the right answer."
When asking a human about general world knowledge, they don't have the generality to give good answers for 90% of it. Even on very basic questions like this, humans will trip up far more often than the frontier LLMs.
"AGI" is the IPO.
How?
If we have AGI, we have a scenario where human knowledge-based value creation as we know it is suddenly worthless. It's not a stretch to imagine that human labor-based value creation wouldn't be far behind. Altman himself has said that it would break capitalism.
This isn't a value proposition for a business, it's an end of value proposition for society. The only people who find real value in that are people who spend far too much time online doing things like arguing about Roko's Basilisk - which is just Pascal's Wager with GPUs - and people who are so wealthy that they've been disconnected from real-world consequences.
The only reason anyone sees value in this is because the second group of people think it'll serve their self-concept as the best and brightest humanity has ever had to offer. They're confusing ego with ability to create economic value.
The majority of my coworkers now push AI-generated code each day, and it has completely absolved me of any fear whatsoever that AI will take my job.
a) AI is going to replace a Bazillion-Dollar Industry and that
b) being an AI model provider does not allow capturing margins above 5% long-term
I am not saying that this is what will happen, but it's a plausible scenario. Without farmers we would all be dead, but that does not mean they capture monopoly rents on their assets.
there is no way OpenAI - and the players who are vested in them - can afford for the market to tank before it.
While nothing fancy has happened yet in the area of cheap energy, there is still enough power around the world to build AI data centers. The problem is that this power exists in countries whose leaders the West has decided, many times for good reasons, it doesn't want to deal with.
I'm predicting that over 2027, either the US will become more aggressive about making war with these countries, or company CEOs will start developing "reality-distortion fields" around them and decide that having enough power for the next datacenter is for the good of humanity. Before that, Europe will decide that AI training on human faces (eg. of non-Europeans) is not really a problem and will allow US companies to train their models in EU countries.
Right now the US seems to not have a lot of inflation and the FED has every reason to print. So don't expect a burst, but this can change in a blink.
On a tangent, I remember companies like Slack triggering the unicorn craze. They said that it was just better to aim for a billion than some number like 900M or 1.2B, because psychologically, it meant more to employees, investors, and customers.
OpenAI is in that place where nobody really cares for these mind games. It's not very reliable. But it is useful enough to pay for. It's cheap enough to be an impulse purchase where some guy decides to just subscribe to ChatGPT because they're working on an important slide or sketching a logo.
Good times.
It is bad enough that AI sucked up so much investment money; an AI bubble collapse hitting the companies that do make profitable things would be even worse...
https://www.inc.com/leila-sheridan/nvidia-is-wavering-on-its...
What's the statute of limitations for securities fraud? The current administration won't last forever.
Nope. That $100B is in "promises" spread over several years in total.
They have $15B out of the $50B from Amazon right now.
> The current administration won't last forever.
This is why OpenAI must IPO, and when it does, I won't be surprised if a crash follows before 2030.
By then, they will "announce" "AGI" (Which actually means an IPO)
- Amazon's $50B is only $15B, with the rest being "after certain conditions are met", whatever that means (probably an IPO, which isn't happening)
- The $30B each from softbank and NVIDIA is paid in installments
So this is more a $35B fundraise, with a _promise_ of more, maybe, if conditions are met. Not _bad_, but yet more gaslighting from Mr Altman. Anyone reporting this as a closed fundraising deal is being disingenuous at best.
Startup funding is often given in increments depending on milestones being met. Most startups just don’t announce that it’s conditional.
For large funding rounds, nobody gets a check for the full amount at once.
The funding would not be conditional on an IPO because that wouldn’t make any sense. The IPO is the liquidity event for the investors and there’s no reason for a startup to take private investment money that only enters the company after IPO.
So if they hit 100 billion annual then it's AGI but if Kellogg's launches “FrostedFlakes-GPT" and steals 30% of the market it's no longer AGI at 70 billion?
You'll never get a billion dollar check from anyone.
I've even seen startups raise like 500k pre-seed with tranches in it, lmao!
Edit: yes, it is true that many people do integrate directly with OpenAI. That doesn't negate the fact that Openrouter users are largely not using OpenAI.
OpenRouter claims "5M+" users; OpenAI is claiming >900M weekly active users.
I don't really think it's possible to learn anything about the broader market by looking at the OpenRouter model rankings.
On the other hand, big users don't use openrouter. At $work we have our own routing logic.
2. people often use openrouter for the sole purpose of using a unified chat completions API
3. OpenAI invented chat completions; if you use openrouter for chat completions often you can just switch your endpoint URL to point to the OAI endpoint to avoid the openrouter surcharge!
4. Hence anyone with large enough volume will very likely not use openrouter for OpenAI; there is an active incentive to take the easy route of changing the endpoint URL to OAI’s
Is it?
At what point are the models going to all be "good enough", with the differentiating factor being everything else, other than model ranking?
That day will come. Not everyone needs a Ferrari.
Edit: I misread the parent, I think they're saying the same thing.
The differentiating factor will be access to proprietary training data. Everyone can scrape the public web and use that to train an LLM. The frontier companies are spending a fortune to buy exclusive licenses to private data sources, and even hiring expert humans specifically to create new training data on priority topics.
It's already come for vast swathes of industries.
Most organizations have already been able to operationalize what are essentially GPT-4 and GPT-5 wrappers for standard enterprise use cases such as network security (eg. Horizon3) and internal knowledge discovery and synthesis (eg. GleanAI back in 2024-25).
Foundation models have reached a relative plateau, and much of the recent hype wasn't due to enhanced model performance but smart packaging on top of existing capabilities to solve business outcomes (eg. OpenClaw, Anthropic's business suite, etc).
Most foundation model rounds are essentially growth equity rounds (not venture capital) to finance infra/DC buildouts to scale out delivery or custom ASICs to enhance operating margins.
This isn't a bad thing - it means AI in the colloquial definition has matured to the point that it has become reality.
s/breathing/investment/g s/balloon/bubble/g s/air/money/g
(Vibes ~ Vibrations ~ Heat)
Tbf it's a reasonable question... I think it's a little tricky to pin down the equivalent of "kinetic energy" in purely economic terms, though you might look at the rate of flow of money as some analogy for the speed/energy of particles (speed of individual dollars changing hands). In that sense, the more frequent and larger these deals get, the hotter the market is. This is not a novel analogy.
One of them wanted to have some fun, so said to the other - "I'll give you $100 if you take a big bite of that turd".
His colleague figured $100 was a good chunk of cash, so did the deed. Feeling thoroughly humiliated, he pocketed the $100 and they carried on.
Further down the street they came upon another turd.
The angry economist now wanted revenge so made the same proposal back to his colleague, who also agreed and took a bite of the turd, earning back his $100.
Later one of them said to the other "you know, I can't help but feel we both ate shit for no reason."
His collegue replied "what do you mean? We raised the national GDP by $200."
Money was just the means of the transaction.
surely that behavior leads to a good society and doesn't encourage nefarious behaviors
Seeing this phenomenon, a Silicon Valley entrepreneur gets an idea with the following sales pitch:
"Turd-bars that will make you the fittest version of yourself, answer all your deepest questions, and take you to the promised land (Mars)."
Surprisingly, the turd-bars sell well, and GDP rockets up. Meanwhile VCs with fomo are funding its competitor: the shit-sandwich.
In practice, people don't tend to pay people to eat shit without gain. You are paying people to help you. Money gaslights everyone into helping each other, the most selfish people become the most selfless.
Of course, real capitalism is much more complex and much uglier than this fantasy. When certain people end up with long-term control of large piles of money, the whole thing gets distorted. They get to make lots of money on interest without doing anything, and making other people eat more shit for scraps. That's the "capital" part of capitalism.
But the toy world-model that this joke is making fun of, is actually the one core positive aspect of capitalism and brings all the prosperity we have: tricking people into helping each other.
That's certainly a take, industry loves it. Sure, all that "everybody will print widgets at home instead of going to the store" stuff was never going to happen, but 3d printing is nonetheless here to stay.
But it's not magical, and not much different to injection moulding or something in concept.
Almost everything created with home level 3d printers is plastic junk you can buy for a few dollars on aliexpress (without weird rough edges).
If it weren’t subsidized I would pay more. Wouldn’t be happy about it but I would do it.
At this stage in the game I don’t really understand where this skepticism of the value these tools provides comes from.
Fear
An echo cannot go on forever!
This is an argument from 2024. Somehow, the models have continued to improve.
If they stopped improving today they are good enough as they already are to generate profound change.
The wave front is already visible, we’re just on the shore waiting for the impact.
It is a bubble with extreme levels of debt + funding from too many promises from companies that are in these sort of rounds.
People being consumed by the hype will also be completely consumed by the crash.
Comments like this are exactly how a 2000- or 2008-style crash will happen.
What did bitcoin essentially give us? Huge pump-and-dump schemes coordinated by big hands? Crypto investments which made 95% of investors poorer? What's left? Maybe 0.01% of it was beneficial.
I guess it isn't that noticeable from inside US, but the rest of the world is grateful.
Maybe speak for yourself? As part of the rest of the world, I am not grateful.
From my perspective, I hope that OpenAI survives and can pull off their IPO, but I just have that nagging feeling in my gut that their IPO will be rejected in much the same way that the WeWork IPO was rejected.
On the one hand you can look at these companies investing and take it as a signal that there is something there (in OpenAI) that's worth investing in. On the other hand all these companies that are investing are basically getting that investment back through spending commitments and such and are just using OpenAI as a proxy for what is essentially buying more revenue for themselves.
When their IPO hits later this year I hope that it's the former case and there's actually some good underlying fundamentals to invest in. But based on everything I've read, my gut is telling me they will eventually implode under the weight of their business model and spending commitments.
It’s like Toys R Us not having enough money to pay Mattel for Barbie dolls and telling Mattel they can have partial ownership of the company if they just supply them with some more toys.
But the problem is that Toys R Us is spending $15, 20, or maybe even $50 (who knows?) to sell a $10 toy.
Toys R Us continues selling toys faster and faster despite a lack of profit, making Mattel even more dependent on Toys R Us as a customer. It blows up the bubble where a more natural course of action would be for Toys R Us to go bankrupt or scale back ambitions earlier.
Because it’s circular like this, it lends toward bigger crashing and burning. If OpenAI fails, all these investors that are deeply integrated into their supply chains lose both their investment and customer.
It's like how Uber and Airbnb in the early days were burning loads of cash to build market share. People went to these services because they were cheaper. Then they would increase prices once they had a comfortable position.
OpenAI is also in a rapidly transforming field where there are a lot of cost reductions happening, efficiency gains etc. Compared to say Uber which didn't provide a lot of efficiency gains.
Unfortunately that doesn't change the fact even a small miscalculation could have an enormous impact. We are approaching levels of risk comparable in size to the subprime crisis of 2008.
I disagree. It's like Uber and Airbnb in how they try to gain market share. Big difference: for Uber (and when it got big, basically everybody I know has used it once in a while) and Airbnb, you paid for each transaction. With OpenAI, most people are on the free tier. And if there is something incredibly hard, it's converting free users to paid users. That will, IMHO, be the thing that blows up (many of) the AI companies. They won't ever reach break-even.
But also ever increasing quality requirements. So we can't possibly know at this point if this is a market with high margins or not.
Google has to pay Apple billions of dollars to make Google.com the default search engine. I just looked it up, over 15% of search revenue goes to pay to be the default search engine.
Every Android device defaults to Gemini.
Every Microsoft device defaults to Copilot.
I’d love to see where these cost reductions are. If costs are going to decrease rapidly why does OpenAI’s spending plan look so insane?
> Every Microsoft device defaults to Copilot.
I don't think it's right to say that these devices "default" to their vendors' AI software when it's impossible to replace it with something else. Yes I can install Claude as a standalone app but I don't have the OS-wide integration that Gemini does for Android for example.
OpenAI and others are already profitable on inference (inference is really really cheap)
They are just heavily investing into the latest frontier
The biggest risk is whether they can stay cutting edge, or if open source or others will catch up quickly.
If it's that cheap I'll soon be doing it self-hosted, or switching to a local provider.
It's a race to the bottom for token providers.
Then it's a race to the bottom.
And unlike competitors, OpenAI has no ecosystem. Just a website and a domain name. Even a VSCode fork like Cursor is an improvement over that state.
Google pays over 15% of search revenue to be the default search engine on various browsers.
cough Sora cough
Eventually there will be a race to the bottom on inference price to the customer by companies that aren't trying to subsidize their GPU investments.
OpenAI is spending money because they think they need to for their business to survive. They're hoping that the next big breakthrough just requires more compute and, somehow, that'll build them a moat.
I personally think we haven't cracked AGI yet but it doesn't change their calculus.
Obviously, there’s a scenario of super power AI and then it’s a matter of continuing course. Electricity and silicon.
What if you are right, and the scaling doesn’t work. It is too much power, time, hardware to improve… does openAI fold?
Do they just actually use the models they have?
Does everyone just decide that AI didn’t work and go back 5 years like it didn’t happen?
Does the price change so that they have to be profitable making AI services expensive and rare instead of today where they are everywhere pointlessly?
Or does this insane valuation only make sense with information you don’t have like insider scaling or efficiency news?
Does China’s strategy of undercutting US value of models pay off bigly?
It is not like we threw away the dotcom advances, they were just put on hold for a while..
I've always thought this. If you're running something like OpenAI, it really doesn't matter to you if the company fails because you're already comfortably wealthy. But, it sure would be nice to be worth another 10x billion - though I'm not totally sure why.
So these individuals perceive a large upside and no downside. It's more of a hobby than a job. Like learning to play piano. It would be amazing to be a badass pianist...but not a big deal if that never happens.
The other variation goes in reverse -- using the legacy asset and its captive labor force to output some kind of commodity that is sold below market price to a controlled company in a different jurisdiction, where it's resold at a small discount to market price. The company still has to function here too.
Bonus points for not even owning the asset in question, but having effective control over it through corrupt management; this way the government still pays the bills to keep it running at a loss.
What you are describing is actually a very Western thing, because it assumes you can exchange the asset for cash directly and then buy something with that liquidity, which assumes solid property rights. I'm not even talking about OpenAI being an actual tech company that just wasn't there before. That's not how oligarchy works in those places.
Since the US is slowly moving in a direction of oligarchy, I think the actual reference will be helpful.
You're conflating the assets the elites own before the state collapse with the ones they seek to acquire afterwards. They don't care if the ones from before function, because their only purpose is to be maximally extractive. Afterwards, there's no need to funnel tax money through the functional businesses they acquire; they are the company and the state, and the company is the service or product, so anyone interfacing with the product or service within the state is handing them their money. No laundering games necessary.
I don't exactly disagree with that assessment, and I think you should stay vigilant for that indeed. What I'm saying is that selling a hot potato to get cash is the opposite of what oligarchs are known to do. It could be that it's but a step toward buying something else with oligarchic intentions in mind, but alternatively it could be normal Western money-handling behavior.
>they are the company and state and the company is the service or product, so anyone interfacing with the product or service within the state is handing them their money.
That doesn't contradict what I wrote or at least meant. The asset in question is not the means of laundering, but a pretext for extracting money from everyone unfortunate enough to live in the forsaken place.
The laundering part usually comes when the oligarch wants to safeguard their own money from political risks, which they do by keeping the funds in a place that is outside of their (and their potential rivals') political influence. Otherwise, once the political balance shifts, the money is just gone, because no laws exist to guard it anymore. I'm not sure what this "outside" place could be for Americans, but I could guess (with no confidence in the answer at all) it's either Swiss or Gulf banks. Maybe the UK or whatnot. Some structure that offers a combination of impartiality to their disputes, strong enough property and privacy regimes, but zero to no ethical constraints to walk away from.
I think the HOA still only pays like $10/month/apartment for an entry level that's now defined as 250/250 Mbit/s. Someone must have been unusually savvy with the contracts.
https://newsroom.cisco.com/c/r/newsroom/en/us/a/y1999/m11/ci...
Cisco survived but it took them until late last year to recover their 1999 stock value (that's 26 years).
Nvidia is investing assets into OAI - it has to. Because OAI needs to become successful for Nvidia's story in the long-term to play out, to justify its current stock price.
People will start looking at valuations more carefully. Investors will get jittery. Spending on GPUs will drop, as will NVidia’s stock price.
I’m not sure that NVidia views OpenAI as replaceable.
Doubt Jensen sees himself as a “dealer” but considering the vendor lock-in and margins, he pretty much is the Tony Montana of AI chips.
It’s nuts that this type of financing is legal.
You need people to burn in house fires for regulation to require extinguishers.
We're going to be the next generation’s cautionary tale.
How someone can compare the above situation to a person getting a payday loan to put a roof over their head or food on their plate is beyond me.
The “it’s like <insert wild and inappropriate analogy to stoke emotion>” is a tired trope.
Might be a stupid gamble, but it's not akin to a loan shark shaking down a hungry, cold person for life's essentials.
I don't understand how this is some kind of cheat code. Let's say I give you $100 on the condition that you buy $100 worth of product from me. And let's say that product cost me $80 to produce. Isn't that basically the same as me giving you $80? I don't see at all how that's me "basically getting that investment back".
NVIDIA gross margins lately are like 75%, so it's more like you give me $100 to buy something from me that cost me $25 to produce, hence I end up with $100 worth of stock in your company and it only cost me $25.
You also lost out on $75 worth of cash revenue (opportunity cost from selling the same thing to a different customer), so really you just took stock in lieu of cash.
It'd be different if Nvidia (TSMC) had excess production capacity, but afaik they're capped out.
So it's really just whether they'd be selling them to OpenAI and getting equity in return or selling to customers and getting cash in return.
If OpenAI thinks their own stock is valued above fundamentals, it's a no brainer to try and buy Nvidia hardware with stock.
Even if Nvidia has capped production for now, increased demand still allows them to sell chips at a greater margin. Or, to put another way, presumably Nvidia is charging OpenAI a premium for the privilege of paying with stock.
Now if you check, these companies selling their stock like this tend to have large amounts of debt. If their stock becomes worthless, you just wasted $80 producing an item that their creditors have first dibs on. And liquidating your shares immediately to lock in your gain would weigh on their stock's value, potentially to the point where the stock would be worth only $80, and you wouldn't be gaining anything anymore. Your earnings would then tank alongside theirs.
Sure, but how's that a cheat code? If you normally sell something for $100 that costs $80 to make, and then use that $100 revenue to buy $100 of stock, this is an identical outcome for you.
Again, this is not a cheat code: if you sell $80 of cost for $100 of stock, the stock you now own can go up or down, and if you overvalued it then down is the more likely direction.
In your accounting, you can claim that you have an investment worth $100 and book $100 worth of revenue. You're juicing your sales numbers to impress shareholders - presumably, without your $100, the investee wouldn't have bought $100 worth of your product. The last thing your shareholders want to see are your sales numbers stop growing, or heaven forbid, start shrinking.
Nvidia is not the first company to "buy" sales of its own product via simple or convoluted incentive schemes. The scheme will work for a while until it doesn't.
> Let's say I give you $100 on the condition that you buy $100 worth of product from me. And let's say that product cost me $80 to produce. Isn't that basically the same as me giving you $80?
Why limit myself to $100 for a product that costs $80? I could just as well give you $1,000,000 to buy this same product from me. That way, I have a $1,000,000 share of your company, and I have $1,000,000 in revenue, and it only cost me $80.
This distorts the market for the product we're trading, and distorts the share price for both my company and yours.
It's a good question, what I think you're missing is that if the market is valuing me (NVIDIA) at 25x revenue then it's more like I traded you (OpenAI) a GPU it cost me $80 to make for $100 worth of OpenAI stock, and I got a bonus $2500 in market cap of my own stock (which existing shareholders like).
IOW for every incremental "$100" in revenue (circular or otherwise), existing shareholders get paid "$2500" in equity (NVIDIA appreciation + OpenAI shares).
This "works" for NVIDIA and its shareholders as long as they/the market keeps thinking $100 of OpenAI stock is a good price for a GPU. If OpenAI tangibly fails to deliver on this valuation then NVIDIA may wind up in the red on these deals.
Caveat: it's a bit more complicated than that as OpenAI doesn't typically buy/operate GPUs directly afaict, rather they team up with the big cloud providers like AMZN (also part of the deal). But it's a useful way to wrap your head around the economics, I think (open to correction, not a domain of professional expertise).
I don't see anything _inherently_ unethical about this as some comments seem to imply. It's definitely riskier than accepting cash, in which case you're free not to play, but it's a calculated risk based on future expectations of growth by OpenAI. Granted there are some sketchy incentives qua existing shareholders that could materialize in pump and dump dynamics.
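The back-of-the-envelope math in this subthread can be sketched concretely. All numbers are illustrative assumptions taken from the comments above (a ~75% gross margin and a 25x revenue multiple), not disclosed NVIDIA or OpenAI figures:

```python
# Sketch of the "circular deal" accounting discussed above.
# All inputs are illustrative assumptions from the thread, not actual financials.

def circular_deal(invested: float, gross_margin: float, revenue_multiple: float):
    """A vendor invests `invested` dollars on condition the investee buys
    `invested` dollars of product back. Returns (cash cost of goods,
    face value of stock received, implied market-cap bump for the vendor)."""
    cogs = invested * (1 - gross_margin)           # what the product costs to make
    stock_received = invested                      # equity in the investee, at face value
    market_cap_bump = invested * revenue_multiple  # if the market prices the vendor at N x revenue
    return cogs, stock_received, market_cap_bump

# Assumed ~75% gross margin and a 25x revenue multiple, per the thread.
cogs, stock, cap = circular_deal(100, 0.75, 25)
print(cogs, stock, cap)  # 25.0 100 2500
```

So every circular $100 of revenue costs the vendor $25 in cash, yields $100 of investee stock at face value, and (if the multiple holds) adds $2,500 of vendor market cap — which is exactly why the deal "works" only as long as both valuations hold up.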
And inflate your revenue by $80.
Competition laws make these kinds of arrangements illegal, so you would have to exert influence and have the invested-in company pretend you happened to be picked from among competitors.
In any case the SEC will be focused on whether the filings are made up to defraud investors, so they could reject the IPO of the invested-in company. Your own entity is also at risk.
We all know MS gets away with it; they have good legal goons who find ways to make all of it appear fair with regard to the law.
I thought it was more that the legal goons delay the final judgement until Microsoft can eventually find someone they can (technically legally) bribe to drop the case?
Also Nvidia margins are waaay higher than 20%
The issue is that there's no organic force behind those changes and it makes everything hollow. You could create a market inside a deserted area and make it appear like a metropolis.
What if the product only costs you $20 to produce?
WeWork was a short-term/long-term lease arbitrage business. The two are nothing alike.
It used to be revolutionary, but now there is a huge difference: plenty of competition, and a growing number of high-quality models that can run offline (for free!) or cheaper (Gemini-Flash for example).
They are in some way the Nokia of AI, "we have the distribution, product will sell", but this is not enough if innovation is weak.
They are even lagging behind (GPT-5 is a weaker coder than Claude, Sora is a toy compared to Seedance 2.0, etc).
Once Apple releases the AIPhone, running offline models, with 32 GB of unified memory, with optional cloud requests, then it's going to be super tough for OpenAI.
OpenAI have made this claim, and maybe it is true for API pay-per-use (there's also good evidence even that is not, if you dive into how much a rack of B200s costs to operate), but I'd be very sceptical that the free, $20, or $200 a month plans are profitable.
Then the questions are if the market will bear the real cost and if so how competitive OpenAI are with Google when Google can do what Microsoft did to Netscape and subsidize inference for far longer than OpenAI can.
I'd say most first movers fade away. Microsoft wasn't the first OS, Google wasn't the first search engine, Facebook wasn't the first social network... etc... etc... etc...
They are in the business of selling compute / datacenter rack spaces. A server where you pay per GBs transferred in/out.
If it’s Gemini or GPT behind, for most use cases users wouldn’t care.
This valuation puts their price-to-sales multiple around 40 (there are no earnings for a P/E).
Anthropic: $380B valuation on $13B ARR, a multiple around 30.
5 years ago Uber was in similar territory. Tesla... Well we won't mention Tesla.
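A quick check of the multiples cited (treating them as valuation-to-ARR ratios; the dollar figures are this thread's claims, not audited numbers):

```python
# Sanity-check the valuation multiples discussed above.
# Inputs are the thread's claimed figures (USD billions), not verified financials.

def valuation_multiple(valuation_bn: float, arr_bn: float) -> float:
    """Company valuation divided by annual recurring revenue."""
    return valuation_bn / arr_bn

# Anthropic, per the thread: $380B valuation on $13B ARR.
print(round(valuation_multiple(380, 13)))  # 29
```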
But it can also simply be the financial framing for direct bartering, which is even more direct than regular financial transactions.
"I will provide these resources you need, in exchange for part ownership", and/or "a limited license to your tech", "right to provide access to our customers on these terms", Etc."
Amazon doesn't need any frothy fake revenue. But they do want to offer their customers the most in demand models, with the best financial terms for Amazon.
Nvidia wants customers, but not at the expense of throwing money away. Their market cap may be volatile, but their books are beyond solid.
I would be a lot more concerned if OpenAI was getting "funding" from a quantum computer startup, and vice versa.