Nvidia and OpenAI abandon unfinished $100B deal in favour of $30B investment
282 points by zerosizedweasle 12 hours ago | 295 comments

this_user 11 hours ago
I am very curious if OpenAI's IPO attempt this year will turn into WeWork 2.0, where all the air suddenly comes out of the valuation once the market acknowledges that they have no moat and lack a clear path to profitability that would make these huge investments worthwhile.
reply
paxys 10 hours ago
There’s a reason OpenAI and Anthropic are both trying to accelerate their IPOs while still being wildly unprofitable. There is still unlimited AI hype in the market. If they go public this year, the entire world is going to blindly buy them without looking at their books.
reply
burnte 7 hours ago
> There is still unlimited AI hype in the market.

I've observed something very different. From what I've seen, the sky-high expectations of AI have come down quite a lot.

reply
chasd00 4 hours ago
> sky-high expectations of AI have come down quite a lot.

The hype for AGI has certainly deflated; I haven't heard anything in a while about it being right around the corner or what the implications would be. The hype and doom now seem to be coming from software devs only; the front-page news articles about AGI have pretty much stopped for me.

/"front page news" to me is the google news, US, Business, and Technology tabs

reply
cookingrobot 2 hours ago
Microsoft’s AI CEO Mustafa Suleyman predicts 'most, if not all' white-collar tasks will be automated by AI within 18 months.

https://www.businessinsider.com/microsoft-ai-ceo-mustafa-sul...

reply
Stromgren 5 hours ago
It depends where I look. Among colleagues and tech-native friends, I feel like there’s healthy skepticism as well as excitement about the new tech. On the other hand, all the investment podcasts that I’ve been following for years are nothing but ignorant AI hype, reciting articles about how all the jobs are about to disappear. I guess the people who don't have firsthand experience haven't left the hype behind yet.
reply
zeeveener 4 hours ago
Both groups will operate on a wide spectrum, but if we're already generalizing...

Perhaps there's a matter of competing priorities?

Programmers are usually quite cynical overall, but in this case I see it as a "My CEO is telling me _out loud_ that they want to replace me, so why would I help them speed up that process?"

Investors likely want what they're invested in to appreciate, so I imagine they're likely over-leveraged and are doing what they can to get their bag.

reply
nolok 6 hours ago
I agree, but I still think his overall point is correct: they need to do it now while it's still smoking hot.
reply
onlyrealcuzzo 11 hours ago
There are at least plausible scenarios where OpenAI is a VERY valuable company in the near future.

There were not with WeWork.

The SpaceX/xAI IPO will be more interesting.

reply
SecretDreams 10 hours ago
All of these things are vastly overvalued. The only one with tangible value is SpaceX, because that's actually a moat space. OAI holds no moat, hasn't done a good enough job of entrapping its users, and has a poor cost structure.

xAi isn't even a point of discussion.. it's just a scheme to rip off investors.

WeWork.. hard to take anyone seriously that ever invested in this bad boy.

reply
altairprime 7 hours ago
WeWork was a valid long bet that office properties would re-appreciate once the pandemic stopped — but now that AI is pulverizing the job market, any hope of that long bet paying off will require one of three things: a free-market boom in workers that require commercial property for success (e.g. physical invention companies like Saildrone, which can't homelab resins for safety reasons), and/or a market-wide rehiring event due to AI’s failure to deliver, and/or regulatory shifts in profit taxation and new-business investment that trigger the above-described boom.

I know some commercial property owners in my hometown let their lowest-desirability storefronts sit vacant for twenty or thirty years (!) in order to prevent commercial property rent from falling across their entire portfolio. Turns out you can pay a lot of property taxes with not much revenue, and there hasn’t historically been regulatory pressure to pay an escalating “empty tax” that would compel landlord pricing to behave according to supply-and-demand models. WeWork is still a terrible investment for an investor, but if you’re looking to bet long with no call and have the patience of decades, it’s not the worst plan. (There are certainly worse ways to gamble your money on the commercial property market!)

reply
danny_codes 6 hours ago
No land value tax == wasteful speculation. It’s been known for 120 years but obviously rich people have done a good job suppressing that understanding
reply
palmotea 4 hours ago
> any hope of that long bet paying off will require one of three things: a free-market boom in workers that require commercial property for success (e.g. physical invention companies like Saildrone, which can't homelab resins for safety reasons)

That doesn't make sense for WeWork, though. Aren't they a rent-a-generic desk company? If you have any kind of specialist requirements (e.g. "processing resins") they'd seem like a bad fit.

reply
tim333 26 minutes ago
xAI is ahead of OpenAI on the LLM Arena 'Arena Overview' for what it's worth. Not bad given they've only really been going for a couple of years.
reply
luke5441 9 hours ago
SpaceX valuation is also going to be interesting. Talking about CapEx, SpaceX has deorbiting assets on top of depreciating ones. And without Starlink the space launch market size is pretty small.
reply
bgirard 7 hours ago
> SpaceX has deorbiting assets on top of depreciating ones

The deorbiting part is redundant. Their satellites are just that, a depreciating asset. Their lifetime seems to be 5 to 7 years. The important question is whether the total cost, including the launch, can be recouped over that lifetime or not.

reply
testing22321 7 hours ago
> And without Starlink the space launch market size is pretty small.

The EV market was mighty small when Tesla started too.

Skate to where the puck is going.

reply
Ectiseethe 10 hours ago
> WeWork.. hard to take anyone seriously that ever invested in this bad boy.

Masayoshi Son may not be providing returns for his investors, but he is providing entertainment for the rest of the world.

reply
whizzter 9 hours ago
Yep, SpaceX actually has a track record as an actual leader and innovator in its niche (which is very CapEx intensive to enter). It's not really a moat, but it's a lead that no other entity seems to be closing in on (on the contrary, many would-be competitors seem to have almost given up).

As for OpenAI, I'm not sure if Altman is an idiot or a fraudster. The claims about reaching AGI/ASI with scaling, and investing in that fashion, were always delusional at best or fraudulent at worst. Maybe he just hoped to divert enough money to engineers to make actual breakthroughs, or that the hardware would become a moat, but competitors have kept pace, and I fully agree that they are now mostly just hanging on with an insanely bad cost structure.

reply
overfeed 6 hours ago
> on the contrary, many would-be competitors seem to have almost given up

Maybe the smaller ones; Blue Origin succeeded, and French and Chinese new-space companies will continue to get funding for decades - national governments are capable of footing the bill for large CapEx projects. SpaceX's competition is irreversibly tied to US foreign policy, and only scientific and commercial launches are price-sensitive.

reply
lern_too_spel 3 hours ago
> xAi isn't even a point of discussion

The reason GP said SpaceX/xAI is that these are now a single company.

reply
sourcegrift 9 hours ago
Twitter is already dead, as everyone on Hacker News knew. Nobody I know uses X or whatever it's called now, xAI? I'm looking at Musk going bankrupt, and as soon as that happens Trump will be impeached.
reply
jcgrillo 8 hours ago
I'm super curious to see if Nvidia turns out to be Enron 2.0..
reply
tim333 20 minutes ago
Nah, not Enron. Maybe Cisco.
reply
quantified 6 hours ago
They would need to have massive accounting fraud and lose public support. Unlike Enron, Nvidia actually sells tangible goods at a massive profit, and hasn't appeared to gloat over people getting screwed over.
reply
palmotea 4 hours ago
> They would need to have massive accounting fraud and lose public support. Unlike Enron, Nvidia actually sells tangible goods at a massive profit, and hasn't appeared to gloat over people getting screwed over.

But what happens if they can no longer sell those tangible goods at a massive profit, and they have to return to their roots selling to gamers? When the boom ends is when massive accounting fraud could happen.

reply
chasd00 4 hours ago
I think worst case they go back to selling GPUs to gamers. That would indeed be a massive front-page story and movies would be made about it, but I don't think the fraud is there. They have a pretty straightforward business, make GPU cards and sell them.
reply
userulluipeste 2 hours ago
"They have a pretty straightforward business, make GPU cards and sell them."

They do, but that's not the (full) story here. Companies tend to easily migrate upwards, to a higher-volume and/or higher-profit-margin market, and hardly (if ever) move in the opposite direction. The painful restructuring necessary to enable this kind of reverse change is also damaging to the company's brand, culture, and self-perception. If they ever get in such a position, they may of course recover, but I wouldn't bet money on that.

reply
alsetmusic 9 hours ago
Ed Zitron is gonna have a field day with this. He wrote / spoke about the dark horses of the AI apocalypse a year or more ago. Scaling back investment was one of the signs he predicted would signal its start.

I like his podcast, Better Offline[0]. Some here might also like it, some would definitely hate it. He's not right about everything he says, but I agree with a lot of it. He has a newsletter for those who don't like podcasts.

0. https://www.betteroffline.com

reply
g-mork 7 hours ago
The irony with Zitron is that summarising his astoundingly verbose anti-AI articles is one of the most consistently productive uses I've had for AI
reply
cootsnuck 7 hours ago
Ed is the anger translator in my head. Good stuff.
reply
chasd00 4 hours ago
> Scaling back investment was one of the signs he predicted would signal its start.

hah Nvidia just announced the deal with OpenAI is now $30B instead of $100B.

reply
altairprime 7 hours ago
He often gets some airtime on HN for the AI stuff; see also past discussions: https://hn.algolia.com/?q=zitron
reply
joshcsimmons 6 hours ago
Ed's background is PR not engineering FWIW.
reply
rubatuga 5 hours ago
Wow everything he said is coming true /s
reply
astrange 4 hours ago
There are certainly some good AI critics but Ed Zitron and Gary Marcus are not among them. They're just people who get paid to write anti-AI newsletters whether or not any of it is true.
reply
marcyb5st 11 hours ago
Oracle debt holders are sweating profusely right now, I imagine. How does OpenAI get $300B to pay Oracle [1] when nVidia has to be convinced to shell out "just" $30B for actually purchasing nVidia hardware?

[1] https://www.ft.com/content/90aa74a5-b39d-4131-a138-367726cb1...

reply
lm28469 11 hours ago
> How does OpenAI get $300B to pay Oracle

Easy, they just have to sell their overpriced vram chips (which haven't been manufactured yet), from their GPUs (which haven't been bought yet) which are in their data centers (the ones they're planning to build "soon"). It really isn't rocket science

reply
hshdhdhj4444 11 hours ago
Or as OpenAI has been trial ballooning for months, the government bails them out.
reply
burnte 7 hours ago
Why would that happen? Extremely little will happen to the US economy if OAI fails. The government has absolutely no reason to bail them out beyond pure corruption.

OAI's most valuable assets are hardware that will be worthless in 6-8 years. The second most valuable asset is their code, and other companies are doing just fine with their own. The third is the hype halo that keeps them getting these deals that are disconnected from reality. Nothing there is holding up the economy.

They only have 4,000 employees. If they all lost their jobs, it would barely be a blip in a monthly jobs report. It's not as though millions of people will lose their jobs disrupting the economy like COVID.

The only downside is some imaginary money vanishes and some investors take a haircut on the imaginary money.

reply
overfeed 6 hours ago
> Why would that happen

"National Security", and the usual fluff about "ensuring that the United States remains at the forefront of cutting-edge AI technology". "Adversaries" will almost certainly be mentioned, "China" specifically has a 50% chance to be name-dropped.

reply
chasd00 4 hours ago
> United States remains at the forefront of cutting-edge AI technology

Then the US can just call up Google and Anthropic and use their on-par, if not better, cutting-edge AI technology. OpenAI just isn't important enough, and alternatives exist to fill the crater it would leave. Also, the public isn't a huge fan of AI taking all the jobs, and it's an election year (albeit mid-term).

reply
quantified 6 hours ago
Grok is the US gov'ts platform, not OpenAI.
reply
micik 6 hours ago
Lehman was allowed to go under because hey, their liabilities numbered in the low billions plus they are such dicks anyway. Especially the Dick in charge.

Salient point being, it didn't seem like that big a deal. Compared to the high-powered AI deals it seems like nothing at all.

The next day, an important money market fund broke the buck because, well, they were into Lehman and that was no longer great business.

This was the OG money market fund, not some fly-by-night operation, and suddenly everyone was redeeming all there was to redeem with a religious fervor. This faith centered around Wall Street Broker the Redeemer, something like that. Not much of a religion, but then these are very down-to-earth people.

Then AIG called. They bragged about their stroke of genius idea to raid their vaults for commercial paper, raising well over 10B just like that — paper they didn’t even know they had! Sounds crazy? Read the full story, it’s way crazier than can be put in a paragraph.

Briefly pausing here to let it sink in just how pedigreed the pedigree of these masters of the universe is. They got it all — the degrees, the grit, the genius. Only the creme de la creme get hired for the elevated job at the higher echelons of Wall Street skyscrapers.

But back to the story. The vault loot was impressive but it wouldn’t cut it.

Realizing now the surprising fact of these institutions being interconnected and this contagion well beyond controllable with social distancing (from Lehman), Hank Paulson, the hero of the story (the film version at least, the real story leaves you with a different impression of this shrewd operator) makes the difficult decision to tell the Pres that “oh hey, we’re fked but, idk, a trillion dollars could help a lot”.

His aides don’t like the t-word much, so they kinda vibe out a more palatable number and the rest is bailout history.

If you think Oracle took a risk taking out that loan, think again. That loan is the hedge. A gun to the head of the vaunted Markets, free but admittedly somewhat feeble, makes for a powerful persuasion tactic. They won’t even have to ask for the bailout — Wall Street will do it for them. Systemic risk. Need they say more?

Different than 2008? No doubt. The numbers flying around the DC buildout Ouroboros dwarf the 2008 headscratchers and the companies involved made sure to link up like tentacle monsters getting it on. Interconnected as they were, 2008 investment banks still were somewhat in competition with one another -- the 2026 batch of trouble are in bed with each other.

When you're 30k in debt and insolvent, it's your problem. 30mil in debt, the bank's problem. 30B? Not a problem at all, certainly nothing the Fed couldn't solve, for you and the bank. And solve it they will for what's the alternative? Show must go on.

reply
burnte 5 hours ago
> Oh sweet summer child.

I'm sorry, but this line is so condescending I can't bring myself to read whatever else you wrote. I really can't stand it when people feel the need to be so demeaning when disagreeing.

I completely understand what happened with the finance bubble in 07/08. This is nothing like it, at all.

reply
chasd00 4 hours ago
your post doesn't make a lot of sense and it doesn't matter anyway. What happened in 2008/9 and what may eventually unfold in the AI bubble aren't comparable.

> Show must go on.

the show will go on without bailing out OpenAI, that much should be obvious to everyone.

reply
micik 2 hours ago
OpenAI is not the one who will be requiring the bailout. they aren't even a public company, for one.

Oracle, Coreweave or both will be the ones requiring it, ostensibly. in reality it's much larger than that.

tremendous money has been invested into AI already, commitments for far more spending still have been made, contracts signed and funds borrowed. the DC building sites are abuzz with expensive effort and soon racks will welcome fresh batches of not-so-fresh previous gen GPUs.

investment is roaring like a locomotive joy-ride atop tracks freshly laid. AI executives are also roaring on any talk show and podcast that will have them but the path to profitability still leads through a maze of smoke and mirrors and hasn't been exactly charted.

you think the LPs of VC firms will just write off the losses when those are realized? O&C will just default on loans and their many creditors will let them?

there are tens if not hundreds of billions already locked in the storm cloud beyond the point of no return. financiers from all corners of finance and sundry are already holding a compartment or two in that bag, each.

when the bag turns out to be largely filled with hot air, i suppose all those powerful bagholders will just go "welp, we dun goof'd" and hold a Sprint Retro about the learnings that will be their consolation prize for the financial haircuts suffered.

perhaps if they had no other option, they would. but they do have that other option, and the debts at risk of default (which, something tells me, creditors won't be able to write off without being pushed to the brink themselves) are the wedge already planted in the financial system through which the bailout breeze will gently blow.

P.S. it's not that Oracle or any of the Magnate 7 are devoid of means to patch up the fabric of a battered balance sheet.

that wasn't the case in 2008, either. it's a little known and even less appreciated fact that parties tied up in Lehman on day of Chapter 11 filing were made whole to the tune of 100 cents on the dollar after all was said and done in the post-bankruptcy proceedings. sure it took 10 years but it happened.

in the heat of the moment, though, there's a burning hole in the pocket, runs on the bank imminent if not in motion already and 401 millions of voices crying out in anguish.

that's no time for methodical disbursement, it's time for the hair-on-fire vaudeville act Wall Street had gotten rather good at throughout the numerous reruns of that particular number they performed to date.

it will be politically absolutely unacceptable and a burning public grievance for numerous news cycles. so what? as if that spectacle of manifest outrage, justified and futile, was anything but jolly good entertainment for those looking on at it from the gallery.

reply
kjkjadksj 6 hours ago
You are speaking like the president is a rational adult who believes in meritocracy
reply
throwaway85825 6 hours ago
Congress has the power of the purse and an OpenAI bailout would cost more than shifting money around could find.
reply
burnte 5 hours ago
You are completely correct.
reply
burnte 5 hours ago
I said literally nothing about any specific person or officeholder. That said, as today's SCOTUS ruling reinforces, the president doesn't control the bank book. MAGA will riot (again) if the admin actually bailed out OAI.
reply
mnky9800n 11 hours ago
This reminds me of when F-22s blasted Chinese balloons out of the sky.
reply
giancarlostoro 7 hours ago
Reminds me of when we sent balloons into Russian Airspace to see how many we could get back on the other end, in the 1950s.

https://en.wikipedia.org/wiki/Project_Genetrix

reply
hagbarth 10 hours ago
I have no idea why they would do that. Outside blatant corruption, which we shouldn't discount.
reply
SecretDreams 10 hours ago
> Outside blatant corruption

Are we acting like this is a low probability outcome?

reply
carlkarlcarol 8 hours ago
[dead]
reply
GolfPopper 10 hours ago
Do they have enough liquidity to bribe Trump to do that?
reply
exe34 7 hours ago
Tax payer bails them out, he gets his cut. The maths checks out.
reply
whizzter 9 hours ago
I think most bribes to Trump appear after the fact so no issues there (he does have plenty of enforcers to make sure he collects the bribes).
reply
__patchbit__ 11 hours ago
Reallocate the $70 billion split difference to orbital station AI datacenters and Moon mass driver launched life cycle renewal equipment resupplies.
reply
rvnx 11 hours ago
Good point, let's all invest in the SpaceX.ai IPO
reply
pawelduda 11 hours ago
Maybe they could sell the RAM reserves they've been hoarding
reply
Aerroon 11 hours ago
Can they? The articles said that they bought wafers, not finished RAM. Is there interest in buying something like that?
reply
jimnotgym 11 hours ago
They could sell them, but not at the price they bought them for!
reply
bandrami 11 hours ago
They would have to actually get fabricated first
reply
throwaway85825 6 hours ago
OpenAI is the HBM market.
reply
october8140 11 hours ago
This thing is about to pop.
reply
criddell 11 hours ago
Have you moved your retirement account money out of stocks and index funds into something safer? I've actually been thinking about it...
reply
marcyb5st 11 hours ago
I did. I moved to sovereign debt (not US), bonds and stocks of boring companies (staples, energy, medicine, ...) that have at least AA rating. Might miss out a few months of glamorous growth, but fuck that, it reached a point that just one company hiccuping will send the whole thing tumbling (IMHO).
reply
criddell 11 hours ago
Energy and healthcare have big AI exposure too. If it pops, you're going to be better off but not totally spared. I suppose that's probably a smart move though...
reply
marcyb5st 10 hours ago
Fair enough and thank you for the comment.

I went with a bit of Roche, Novartis, ... So something that would at least cushion the fall with dividends and isn't in the GenAI crossfire, even though they definitely use AI/ML (I got them through an ETF). Also, almost all my assets are now either CHF or EUR denominated/hedged. I'm also not comfortable with the dollar weakening and the next Fed head probably cutting rates again like Trump wishes.

reply
kjkjadksj 6 hours ago
You’d be surprised. A lot of biotech is using GenAI, not just traditional ML, and hiring for GenAI and LLM roles. They want to build their own chatbots, trained only on their own data and primary literature, not Reddit comments from armchair biologists.
reply
marcyb5st 5 hours ago
Yeah, I figure. The thing is that I have a mix of bonds and stocks. Bonds should remain stable or even rally a bit if a crash happens. I mean, usually governments cut interest rates in time of crisis, so existing bonds should go up. It is definitely a hard time to invest as everything feels expensive
reply
throwfaraway4 7 hours ago
The Market can stay irrational longer than you can stay solvent.
reply
baggachipz 10 hours ago
Yes. Went from 100% S&P500 fund into a gold ETF and a high-dividend fund (SCHD). Still a good portion in S&P but gotta hedge some.
reply
qwerpy 8 hours ago
How do you handle the capital gains taxes? I’d love to be able to rebalance the massive S&P 500 portion of my portfolio into other things but it would trigger huge federal and state taxes. Was hoping to hold on to these until retirement at which point I’d slowly be selling it for living expenses and the income taxes would be much smaller.
reply
nilkn 7 hours ago
I'm very curious about this as well. This is the main thing that has held me back from a meaningful rebalancing. Eating a huge tax bill to avoid a theoretical future loss of unknown size and duration while also losing out on potential gains if that loss ends up not materializing is a hard pill to swallow. I suppose this is probably why most long-term investment advice suggests not trying to time the market unless you have a very short time horizon. (Note: for me, I'm referring to funds in taxable brokerage accounts.)
reply
chasd00 4 hours ago
This is the spot I'm in too. I can move my 401k and IRAs around fine, but ye olde brokerage account is a different story. Unless my losses are greater than the capital gains tax I'm going to pay, I'm better off just staying put. That brokerage account is dedicated to funding college for my two boys. I have enough now for about 6 years of undergrad. The bills start coming in 2 years; idk if that's enough time to recover from a dot-com level crash...
reply
bmitch3020 27 minutes ago
If you need the money in 2 years, I wouldn't leave it in the stock market. Find a money market or CD to avoid the gamble. You're going to get hit with capital gains taxes either now or in 2 years, so that shouldn't impact your decision.

My personal time frame is 4-5 years of emergency funds. You can adjust that for your own risk tolerance, but have a look at various past crashes to make an educated decision.

I'd only leave it invested if you don't actually need it, because college can be delayed or financed with student loans.

reply
dehrmann 7 hours ago
You pay taxes with the proceeds from the sale? You sell all the losers in your portfolio to offset what gain you can?
reply
baggachipz 7 hours ago
I did this in my IRA and 401(k). No tax penalty or gains tax in doing that.
reply
criddell 10 hours ago
Is high-dividend a signal that the equities are more value oriented than growth oriented?
reply
baggachipz 9 hours ago
Yep, the fund picks the equities with the highest dividend payout, therefore the most value-oriented. Those tend to be old blue chips which have stood the test of time and pay out well to their shareholders. Lockheed Martin, Merck, Coca-Cola, etc. When the growth economy tanks, it's an oasis of relative stability. People love their cheap sugar water.
reply
sethops1 10 hours ago
Similar. Went from near 100% VOO down to 25%, with about 50% in SCHD and the remaining 25% sitting on cash, for now.
reply
hypeatei 11 hours ago
Valuations really aren't that crazy, but the incestuous deals between Nvidia, Oracle, and OpenAI might cause a decent correction. I'm not too worried about my portfolio personally. It'll be a small bump in the road and you're better off not trying to time the market.
reply
SecretDreams 10 hours ago
> Valuations really aren't that crazy

Okay

reply
hypeatei 10 hours ago
What public company is massively overvalued in your opinion? Nvidia right now is trading at 44 P/E which is higher than the S&P average, sure, but not anything like the dotcom bubble with a median of 120x earnings.
reply
B56b 9 hours ago
The problem with this hype cycle has always been that the hyperscalers are pouring unbelievable amounts of capital into a technology that hasn't proven it can generate the revenues needed to justify that.

Nvidia might have an OK P/E right now, but the question is whether the industry can sustain buying over $50B of GPUs every quarter (or whether it even needs to).

reply
SecretDreams 9 hours ago
This exactly. How sustainable are the current spends, given the need for ROI against them in the not-too-distant future? And who will be able to afford an upgrade cycle only 2-3 years from now, given that none of the capex spent will have hit positive ROI by then?

Will everyone just accept negative ROI in the name of hype? Will scalers be able to meaningfully increase service prices without eroding customer interest?

These are all unanswered questions that a simple P/E figure can't settle.

reply
kjkjadksj 6 hours ago
Look at TSLA's P/E.
reply
astrange 4 hours ago
That's a terrible idea. Never ever do that. Never even think about what's in your retirement account unless you are actually about to retire.
reply
moduspol 9 hours ago
Yep. And I've already moved it back. I think we can all see this is a bubble, but it has been for over a year now, and I cannot predict when it will pop.

Also, there aren't a ton of great options that are safer.

reply
2OEH8eoCRo0 10 hours ago
No for a few reasons. I'm not close to retiring, I already own a good deal of bonds, and moving money based on emotion defeats the purpose.

I rebalanced the same as I always have.

reply
unethical_ban 9 hours ago
I thought about it. The bulk of my 401k money is in a growth fund that I found has something like 30% in the big tech companies.

Then again, I'm 20+ years from accessing it, so I figure I'm about 5 years out from moving more to S&P tracking and bonds. I am not a financial advisor.

reply
toomuchtodo 10 hours ago
Enough VXUS to minimize the impact of Mag 7 exuberance on my portfolio.
reply
catsquirrel28 10 hours ago
I'm not so sure VXUS will protect from the AI bubble popping since ~10% of its holdings are TSMC, ASML, Samsung, Tencent, Alibaba, and SK Hynix which are all way overvalued due to the AI bubble and will most likely crater when it becomes clear the LLM companies have no business and the data center lenders start calling in their debts.
reply
toomuchtodo 9 hours ago
Those companies had baseline demand before the AI bubble. They will have demand after.
reply
catsquirrel28 8 hours ago
Sure, they won't go to 0, but they will certainly go down once it becomes clear the LLM companies don't have money to pay for chips or RAM. That will cause VXUS to go down as well, but how much it goes down is anyone's guess.
reply
toomuchtodo 7 hours ago
Drawdown will be limited compared to the S&P500 or other indexes that weight the Mag 7 heavily. I'm unsure what your point is? It's impossible to predict the future, but it is trivial and straightforward to derisk against this current market environment ("AI bubble") with a basic level of capital market understanding.
reply
kronks 10 hours ago
To anyone reading, don’t do this.

This is the ONE thing you aren’t supposed to do as a passive investor. A play like this will cause you to lose upside almost always, and some people never get back in and miss out on almost a lifetime of growth.

THE MARKETS ARE NOT RATIONAL.

reply
criddell 8 hours ago
My concern is shifting from maximizing upside to minimizing downside because I'm only about 10-12 years from retirement.
reply
unethical_ban 7 hours ago
The laziest safe thing to do is put money into a target fund that is managed. So you could look for "Target 2040" and in theory, they change the asset mix as it approaches that date.
reply
barbazoo 7 hours ago
But that’s people, what do they know about 2040 that I don’t?

If I was 10 years away, I’d maybe look at bonds or GICs or is that too conservative?

reply
raydev 6 hours ago
Hopefully I'll soon be able to build a new high end gaming PC and not be forced to pay 1980s era prices for it.
reply
paxys 9 hours ago
I've been hearing "the bubble is about to burst" every day since 2011. Yeah there will be an eventual correction, maybe in a major way, but trying to time the market is and always will be a fool's errand. The rock bottom of the next hype cycle can be higher than the peak of the current one.
reply
infecto 10 hours ago
When retail is predicting a pop it’s time to buy.
reply
malfist 10 hours ago
When my friend told me not to put my hand in the fire, I knew it was time to stick it in and not pull it out, no matter what it cost me.
reply
infecto 9 hours ago
Not sure what your story has to do with the discussion at hand. On average people do not beat the market; the market can be irrational longer than you are solvent; and most people probably have not done any modeling, so these are really feelings and not real economic bets.
reply
malfist 7 hours ago
Your statement was about deciding to do something based solely on the fact that someone else advised against it. So I presented the same scenario with the same logical process.

It isn't valid. It's being a contrarian, and not the outgrowth of some logical process.

reply
infecto 38 minutes ago
No my statement was the usual joke that it’s always best to do the opposite of the herd. Once everyone is buying beanie babies the market is probably getting saturated. With all these bubble comments on HN I think it’s very much the same.

Is there a bubble? Probably. Is it going to pop, probably not.

reply
lm28469 11 hours ago
You know it's bad when even Scam Altman calls for regulations. They spend their entire lives telling you regulations are bad and taxation is theft, but as soon as they need to drown the competition they lobby for more regulations.

https://www.yahoo.com/news/articles/openai-chief-sam-altman-...

reply
mnky9800n 11 hours ago
Rules for thee not for meeeeeee
reply
raincole 11 hours ago
This comment cannot be further from reality. Altman has always been a very loud advocate for AI regulation.

https://www.nytimes.com/2023/05/16/technology/openai-altman-...

reply
raw_anon_1111 10 hours ago
Of course he has. Major incumbents want regulation that makes it harder for newcomers.
reply
shmageggy 10 hours ago
Except when that regulation actually has teeth, which is why he opposed California’s SB 1047. I agree with GP that Altman (and all the rest) want regulation insofar as it protects their moat but no further.
reply
latexr 5 hours ago
> Altman has always been a very loud advocate for AI regulation.

And then the EU said “OK, let’s regulate”, and Altman said “no, not like that!”

https://www.reuters.com/technology/openai-may-leave-eu-if-re...

Note the dates. The above article was posted merely a week after yours.

Altman only cares about regulations if they can benefit him, he doesn’t do it out of some sense of morality. To these tech bros, “regulation” means “codify into law whatever I’m doing, and disallow everything else so I don’t have competition.”

reply
freetonik 10 hours ago
Using words like “Scam Altman” instantly reduces credibility of the author in my eyes. If you want to convince people of something, perhaps it’s better not to use childish methods.

No disrespect, just sharing my thoughts. I see “Elmo Musk” and “Orange Man” etc., and I immediately think this is not worth reading (regardless of my opinion of those persons).

reply
lm28469 8 hours ago
> Using words like “Scam Altman” instantly reduces credibility of the author in my eyes.

He 100% deserves his title for Loopt alone. I don't have any credibility to begin with; this is an anonymous wantrepreneur shitposting forum, not the US Congress. I know people lurking here love these sociopaths, but come on...

reply
DonHopkins 7 hours ago
How sad for you. Whenever I hear people proudly announcing how arbitrarily and illogically and performatively they decide what's worth reading, I don't think what you write is worth reading either.

I'm much better off reading diverse opinions and deciding what to believe based on what they actually say, instead of imposing petty rules and self-censoring what I read, and self-cultivating ignorance.

You'd be much better off, mentally healthier, and subjected to less propaganda if instead you ignored what "Scam Altman" and "Elmo Musk" and "Orange Man" write instead of what people who criticize them write, since they have such asymmetrically bigger platforms and pro-oligarchy biases than their critics, and they also regularly call people childish names themselves.

TL;DR: you're choosing to ignore the wrong people.

reply
Capricorn2481 11 hours ago
But he's been doing that for years too.
reply
villgax 11 hours ago
Altman has been calling for it since 2023, lobbying world leaders and meeting them to push on this, lol. Were you under a rock or something?
reply
lm28469 6 hours ago
> since 2023

Is this supposed to be ancient history already ?

reply
bentt 10 hours ago
OpenAI is going to have to leapfrog everyone else with some kind of alien tech to remain viable. Nvidia is probably just saving face here and possibly hedging.
reply
onlyrealcuzzo 9 hours ago
Nvidia has to hedge because if they go from a growth stock to a de-growth stock in a year - their P/E is going to go from 45 to 10 real fast.
reply
baggachipz 10 hours ago
Follow the ball... which shell is the ball under? Keep an eye on the ball!
reply
mikkupikku 7 hours ago
Why Nvidia would want to invest in prospecting firms instead of just selling pickaxes never made sense to me in the first place.
reply
dmix 6 hours ago
Maybe they have too much money from selling so many pickaxes and chose to reinvest in the industry that keeps funneling them money.
reply
mikkupikku 4 hours ago
I just don't get it. If they have too much money, don't the shareholders want dividends? If I were a shareholder I'd be pissed that Nvidia is trying to be part of the bubble instead of taking all the cream off the top.
reply
kristianp 3 hours ago
Shareholders want capital gains, not dividends, because dividends attract a higher tax rate. If they started paying huge dividends savvy investors might look elsewhere. The implication is that NVidia wants to be seen as the best place to put your money. If they're paying a dividend it means they can't use it to increase their own profits.
reply
cmiles8 11 hours ago
There are serious balance sheet concerns for these companies with exposure to OpenAI, Anthropic and such.

It’s all fun and games till it’s not. All this capital investment is going to start hitting earnings as massive depreciation and/or mark-to-market valuation adjustments, and if the bubble pops (or even just cools a bit) the math starts to look real ugly real quick.

reply
jimnotgym 11 hours ago
The market is not there at all though, is it? Nobody is paying what it actually costs to deliver AI services. It is not clear to me that it is cheaper than just paying people to do the work.
reply
zerosizedweasle 10 hours ago
Someone did a calculation using heat generated - energy usage (which is ultimately the base cost of the universe) - and the human brain and body are just incredibly more cost efficient than how we're doing AI. So for basic tasks it's just absurdly expensive to be using AI instead of a human.
reply
rhubarbtree 10 hours ago
we don't pay humans in the food they consume
reply
zerosizedweasle 10 hours ago
We don't pay for GPU's in the energy they consume either.
reply
danielovichdk 10 hours ago
Idiots, this is a splendid example of a circular economy.

OpenAI gets $30B, buys chips from Nvidia for $30B.

How is that an investment?

reply
chasd00 4 hours ago
> OpenAI gets $30B, buys chips from Nvidia for $30B.

well if OpenAi uses a credit card at least they'll get a ton of rewards points :)

reply
damnitbuilds 7 hours ago
2026 in AI: Superbowl ads, IPOs, bankruptcies.
reply
DonHopkins 6 hours ago
That creepy AI dog-tracking Superbowl ad is dystopian enough -- especially when even South Park is now mocking Homeland Security and Noem shooting puppies -- and it's likely a future episode will be about federal ICE agents using surveillance cams and AI to round up more dogs for her to shoot.

https://www.youtube.com/watch?v=GcAUmeH8Obk

reply
caconym_ 9 hours ago
Does this mean I will be able to buy RAM for the NAS build I left too late (stupid me assuming I could rely on some basic modicum of price stability)? I assume not.

I have the disks but only random old gaming PCs to put them in. I think I'm going to expand my Proxmox cluster and run Ceph so that I don't have to pay that 6x markup or whatever the fuck it is these days.

I'm tired, man. I'm tired of living in this world where AI is simultaneously an unstoppable eschatological juggernaut that's already making everything worse and at best is going to steal my livelihood and destroy my family's future, but also a hype driven shell game with full buy-in from world leaders and the moneyed elite who see a golden opportunity to extract unprecedented amounts of wealth for themselves before the West falls and they have to make other arrangements.

reply
paxys 7 hours ago
99% of hobbyist and small business use cases can be served by hardware from 5-10 years ago, which is still relatively cheap. You don't need to compete with Nvidia for brand new DDR5 RAM for your basement NAS.
reply
kasabali 6 hours ago
Nah. Even goddamn 8GB DDR3 modules have tripled in price. SSD prices, similarly.
reply
overfeed 6 hours ago
Even DDR3(!) prices have surged - and that's from 20 years ago. Many people realized what you outlined - or just bought what was available, which resulted in higher demand for older tech. Higher demand + constant supply = higher prices.
reply
throwaway85825 6 hours ago
DDR3 is no longer in production.
reply
overfeed 5 hours ago
...and?
reply
caconym_ 6 hours ago
Wow, thank you for this incredibly valuable perspective. I guess I should just be happy I can still own my computers!
reply
mmastrac 9 hours ago
You can find DDR4 at reasonable prices, assuming it's not DDR4 ECC, and assuming it's not a QNAP that will only boot with RAM from a very narrow window of compatibility (ask me how I know).
reply
kasabali 6 hours ago
I don't know where this myth comes from, DDR4 prices have quadrupled since July.
reply
vaxman 2 hours ago
OpenAI is not worth $860B to anyone other than to the companies hoping to inflate their own valuations by selling it goods and services, at least until OpenAI inevitably goes to zero and its assets are acquired for substantially less than $30B. It simply does not cost $30B to build an OpenAI competitor and the opportunity cost of building one also isn't approaching $30B (unless one accounts for the stock hit from investor FOMO over such a delay, as, for example, Apple knows all too well).

If OpenAI were one of many companies generally promoting the increased use of GPUs in industry, thereby developing the market that nVidia operates in, that would be one thing, but $30B, to one company, that then gets spent on nVidia purchases? Just, No. nVidia will get into a lot of trouble for doing this kind of deal that undermines confidence in the entire stock market.

reply
jimnotgym 11 hours ago
Now the only question is when. When does this bubble burst?

Great promise, replace all your call centre staff, then your developers with AI. It is cheaper, but only because the AI companies are not charging you what it really costs to do the work.

reply
raw_anon_1111 10 hours ago
One of my specialties is implementing hosted call centers using Amazon Connect - the AWS version of the call center that Amazon uses internally.

The fully allocated cost of one call to a human agent is $3-5. That pays for a lot of inference.

reply
rhubarbtree 10 hours ago
pricing as an objection to ai is just cope. it might hurt the current status quo when they can no longer burn capital, but the long-term course will remain unchanged
reply
raw_anon_1111 9 hours ago
Prices will literally have to go up by 1500x+ in the above scenario to make a human agent the cheaper option. I assure you that AWS isn't losing money running Nova Lite 2 (the model I use for speed and low latency), since it's their own model running on their own custom chips.

I calculated the full cost of a call to be $0.05 a minute when handled by AI, and that includes charges for an 1800 number and the various other AWS services it uses.

Nova does the processing of the text after a separate voice to text service (Lex)

reply
jimnotgym 9 hours ago
That's great, but it sounds like Amazon has a specific model trained for this service? And I guess that Amazon funded the development in a sustainable way. Is any of that true for Anthropic and OpenAI?
reply
raw_anon_1111 8 hours ago
No, it's just one of the many models hosted on Bedrock. Bedrock has all of the popular models except for OpenAI's, which are exclusive to Microsoft. Nova Lite isn't the best model, but for what I need it for (quickly turning random voice transcriptions into well-defined JSON I can use deterministically downstream), it's perfect and fast.
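
For illustration, here's roughly what that Bedrock call looks like via boto3's Converse API (a minimal sketch, not my production setup; the model ID, prompt, and JSON field names are placeholder assumptions, so check the Bedrock console for the exact identifier in your region):

    import json
    import boto3

    # Bedrock runtime client; the region is an assumption, use whichever one your Connect flow runs in.
    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    def transcript_to_json(transcript: str) -> dict:
        """Turn a raw caller transcript into a small, well-defined JSON payload."""
        system = [{"text": (
            "Extract the caller's intent, account number, and callback number from the transcript. "
            "Respond with JSON only, using exactly these keys: intent, account_number, callback_number. "
            "Use null for anything not present."
        )}]
        messages = [{"role": "user", "content": [{"text": transcript}]}]

        response = bedrock.converse(
            modelId="amazon.nova-lite-v1:0",  # placeholder; use the Nova Lite ID or inference profile for your region
            system=system,
            messages=messages,
            inferenceConfig={"maxTokens": 256, "temperature": 0.0},
        )
        text = response["output"]["message"]["content"][0]["text"]
        return json.loads(text)  # downstream contact-flow logic consumes this deterministically

In the real flow the transcript comes in from Lex, and the parsed fields drive deterministic branching downstream, as described above.
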
reply
aurizon 11 hours ago
I wonder how the huge slug of memory production that might now have to redirect mid-ramp will play out as this (and other AI pullbacks) ramify forward. Will Crucial re-enter the desktop market? Or will it create a slow/fast subsidence in memory prices? We will live in interesting times...
reply
altairprime 7 hours ago
Considering how gasoline pricing works, RAM prices will likely decay at 1/xth the rate that they grew, so that retailers can capture maximum profit per price point before lowering to the next, and manufacturers can hold the oversupply in reserve for more AI demand. I would expect possible sharp price-drop events (depending on the overage levels) just after the end of e.g. Micron's financial years, though, as they will have unloaded the overflow at discount bulk pricing, enabling some retailers to undercut the pre-unload pricing and resume somewhat rational market behavior.
reply
overfeed 6 hours ago
The next-financial-quarter-only mindset is just baiting CXMT into capturing >70% of the market, if they are able to scale their production. Once consumers trust the brand, it's all over for Micron, SK Hynix and Samsung's dominance.
reply
altairprime 5 hours ago
Looking forward to it for sure. Are they doing registered ECC in DDR5 yet? Last I checked I only found their DDR4 line.
reply
SirensOfTitan 11 hours ago
Regardless of the promise of the underlying technology, I do wonder about the long-term viability of companies like OpenAI and Anthropic. Not only are they quite beholden to companies like Nvidia or Google for hardware, but LLM tech as it stands right now will turn into a commodity.

It's why Amodei has spoken in favor of stricter export controls and Altman has pushed for regulation. They have no moat.

I'm thankful for the various open-weighted Chinese models out there. They've kept good pace with flagship models, and they're integral to avoiding a future where 1-2 companies own the future of knowledge labor. America's obsession with the shareholder in lieu of any other social consideration is ugly.

reply
chasd00 11 hours ago
I think Google ends up the winner. They can keep chugging along and just wait for everyone else to go bankrupt. I guess Apple sees it too, since they signed with Google and not OpenAI.
reply
stego-tech 10 hours ago
I’ll second this. Google’s investment in underlying accelerators is the big differentiator here, along with their existing datacenter footprint.

Everyone else has to build infrastructure. Google just had to build a single part, really, and already had the software footprint to shove it everywhere - and the advertising data to deliver features that folks actually wanted, but could also be monetized.

reply
martinald 9 hours ago
I was thinking about that (I definitely agree with you on the software and data angle).

But when you think about it it's actually a bit more complex. Right now (eg) OpenAI buys GPUs from (eg) NVidia, who buys HBM from Samsung and fabs the card on TSMC.

Google instead designs the chip, with, I assume, a significant amount of assistance from Broadcom (at least in terms of manufacturing), which then buys the HBM from the same supplier(s) and fabs the card with TSMC.

So I'm not entirely sure if the margin savings are that huge. I assume Broadcom charges a fair bit to manage the manufacturing process on behalf of Google. Almost certainly a lot less than NVidia would charge in terms of gross profit margins, but Google also has to pay for a lot of engineers to do the work that would be done in NVidia.

No doubt it is a saving overall - otherwise they wouldn't do it. But I wonder how dramatic it is.

Obviously Google has significant upside in the ability to customise their chips exactly how they want them, but NVidia (and to a lesser extent) AMD probably can source more customer workflows/issues from their broader set of clients.

I think "Google makes its own TPUs" makes a lot of people think that the entire operation in house, but in reality they're just doing more design work than the other players. There's still a lot of margin "leaking" through Broadcom, memory suppliers and TSMC so I wonder how dramatic it is really is

reply
coredog64 9 hours ago
My take is it's the inference efficiency. It's one thing to have a huge GPU cluster for training, but come inference time you don't need nearly so much. Having the TPU (and models purpose built for TPU) allows for best cost in serving at hyperscale.
reply
martinald 8 hours ago
Yes potentially - but the OG TPUs were actually very poorly suited for LLM usage - designed for far smaller models with more parallelism in execution.

They've obviously adapted the design, but it's a risk to optimise in hardware like that: if there is another model-architecture jump, having a narrow, specialised set of hardware means you can't generalise enough.

reply
zozbot234 8 hours ago
Prefill has a lot of parallelism, and so does decode with a larger context (very common with agentic tasks). People like to say "old inference chips are no good for LLM use" but that's not really true.
reply
flyinglizard 9 hours ago
NVidia is operating with what, 70% gross margin? That’s what Google saves. Plus, Broadcom may be in for the design but I’m not sure they’re involved in the manufacturing of TPUs.
reply
lizknope 9 hours ago
Broadcom does the physical design and sources a huge amount of the IP like serdes blocks. TSMC manufactures the chips.
reply
dyauspitr 9 hours ago
What a wild situation to have a significant part of Earth’s major economies be directly reliant, not on one country, but on one building in the world.
reply
collingreen 9 hours ago
Yeah this is a bummer. If it goes south everyone in power will also have perfect hindsight and say they saw it coming because obviously you shouldn't have this much built on such a small footprint. And yet...
reply
palmotea 8 hours ago
> Yeah this is a bummer. If it goes south everyone in power will also have perfect hindsight and say they saw it coming because obviously you shouldn't have this much built on such a small footprint. And yet...

It'll be true; everyone does see it coming (just like with rare earth minerals). But market-infected Western society doesn't have the maturity to do anything about it. Businesses won't because they're expected to optimize for short-term financial returns; government won't because it's hobbled by biases against it (e.g. any failure becomes a political embarrassment, and there's a lot of pressure to stay out of areas where businesses operate and not interfere with them).

America needs a lot more strategic government control of the economy, to kick businesses out of their short-term shareholder-focused thinking. If it can't manage that, it will decline into irrelevance.

reply
tummler 8 hours ago
Google has the time, money, TPUs, and ability to siphon talent. It'll be an unsexy slog to the top, but they'll get there eventually.
reply
catlover76 8 hours ago
[dead]
reply
Aboutplants 11 hours ago
The minute Apple chose Google, OpenAI became a dead duck. It will float for a while but it cannot compete with the likes of Google, their unlimited pockets and better yet their access to data
reply
awongh 11 hours ago
I think it points to OpenAI trying to pivot to leveraging their brand-awareness head start and optimizing for either ads or something like the Jony Ive device, focusing on the consumer side.

For now people identify LLMs and AI with the ChatGPT brand.

This seems like it might be the stickiest thing they can grab ahold of in the long term.

reply
cael450 10 hours ago
Consumer AI is not going to come close to bailing them out. They need B2B use cases. Anthropic is a little better positioned because they picked the most proven B2B use case — development — and focused hard on it. But they'll have to expand to additional use cases to keep up with their spend and valuation, which is why things like cowork exist.

But I tend to agree that the ultimate winner is going to be Google. Maybe Microsoft too.

reply
ghaff 10 hours ago
Consumers en masse aren't going to pay big $$s for AI. Maybe some specific embedded apps as part of other products.
reply
WarmWash 9 hours ago
They'll pay $60-$80/mo for it. Just watch.

Unless you're totally dumb or a super genius, LLMs can easily provide that kind of monthly value to you. This is already true for most SOTA models, and will only become more true as they get smarter and as society reconfigures for smoother AI integration.

Right now we are in the "get them hooked" phase of the business cycle. It's working really damn well, arguably better than any other technology ever. People will pay, they're not worried about that.

reply
ghaff 2 hours ago
I don't see that. I've used LLMs and I've seen very little direct value. I've seen some value through Photoshop etc., but nothing I'd pay for a direct subscription for.
reply
zozbot234 9 hours ago
It would have to be $60-$80/mo. in value over and above what you could get at the same time with cheap 3rd party inference on open models. That's not impossible depending on what kind of service they provide, but it's really hard.
reply
ghaff 8 hours ago
I use LLMs now and then but not really regularly. I'm nowhere close to paying for a significant subscription today.
reply
WarmWash 8 hours ago
The average cell phone bill in the US is $135/mo.

Plans with unlimited talk/text and 5GB+ of data have been available for <$30 for over a decade now.

The AI labs are not worried.

reply
gloryjulio 8 hours ago
The value is well worth $60-$80/mo. But conflating that with market conditions is a different matter.

In a world where cheap open-weight models and free-tier closed-source models are flooding the market, you need a very good reason to convince regular people to pay for particular models en masse in the B2C market.

reply
WarmWash 8 hours ago
After 30 years with a shit operating system known as Windows, Linux still cannot get over 5% adoption. Despite being free and compatible with every computer.

"Regular People" know ChatGPT. They know Gemini (largely because google shoves it in their face). They don't know anything else (maybe Siri, because they don't know the difference, just that siri now sucks). I'm not sure if I would count <0.1% of tokens generated being "flooding the market".

Just like you don't give much thought to the breed of grass growing in your yard, they don't give much thought to the AI provider they are using. They pay, it does what they want, that's the end of it. These are general consumers, not chronically online tech nerds.

reply
gloryjulio 8 hours ago
> After 30 years with a shit operating system known as Windows, Linux still cannot get over 5% adoption. Despite being free and compatible with every computer.

You need to install Linux and actively debug it. For AI, regular people can easily switch around just by opening a browser. There are many low- or zero-barrier choices. Do you know Windows 11 is mostly free too for B2C customers now? Nobody is paying for anything.

> "Regular People" know ChatGPT. They know Gemini (largely because google shoves it in their face). They don't know anything else (maybe Siri, because they don't know the difference, just that siri now sucks). I'm not sure if I would count <0.1% of tokens generated being "flooding the market".

You just proved my point. Yes they are good, but why would people pay for it? Google earns money through ads mostly.

> Just like you don't give much thought to the breed of grass growing in your yard, they don't give much thought to the AI provider they are using. They pay, it does what they want, that's the end of it. These are general consumers, not chronically online tech nerds.

That's exactly the point: most internet services are free. Nobody is paying for anything because they are ad supported.

reply
ghaff 2 hours ago
It's nothing to do with Windows but with the applications (including games) that just run on it and the fact that most companies just run it by default.
reply
surgical_fire 9 hours ago
It doesn't matter. I firmly believe both OpenAI and Anthropic are toast. And I say this as someone that uses both Codex and Claude primarily.

I really dislike Google, but it is painfully obvious they won this. OpenAI and Anthropic bleed money. Google can bankroll Gemini indefinitely because they have a very lucrative ad business.

We can't even argue that bankrolling Gemini is a bad idea for them. With Gemini they have yet another source of data to monetize users from. Technically Gemini could "cost" them money forever and it would still pay for itself, because with it they learn even more about users to feed their ad business. You tell LLMs things that they would never know otherwise.

Also, they mostly have the infrastructure already. While everyone spends tons of money to build datacenters, they have those already. Hell, they even make money by renting compute to AI competitors.

Barring some serious, unprecedented regulatory action against them (very unlikely), I don't see how they would lose here.

Unfortunately, I might add. I consider Google an insidiously evil corporation. The world would be much better without it.

reply
wolvoleo 7 hours ago
They also have tons of data on users' habits and desires, which they can use to inform the AI of each specific user's preferences without the user having to state them, because so many people use Google Maps, Gmail, etc. It's not just about training data but also operational context. The others lack this kind of long-term, broad user insight.

I'm not using Google services much at all and I don't use Gemini but I'm sure it will serve the users well. I just don't want to be datamined by a company like Google. I don't mind my data improving my services but I don't want it to be used against me for advertising etc.

reply
raw_anon_1111 10 hours ago
OpenAI is not going to fund itself to profitability with $20 subscriptions and advertising.
reply
gruturo 10 hours ago
> OpenAI is not going to fund itself to profitability with $20 subscriptions and advertising.

Then it's doomed. Which is also my opinion; I don't disagree with you at all.

reply
dakolli 10 hours ago
Ads in GPT might literally be the worst business decision ever made. Google can get away with ads, it's expected of them, but not OpenAI.
reply
_aavaa_ 10 hours ago
Brin and Page were pretty vocal about the problems with ads and why they don't belong in search engines when they started.

The only reason it's expected now is because of a slow boil.

reply
duskdozer 10 hours ago
They ideally will not want you to realize you're looking at ads.
reply
ChoGGi 8 hours ago
> For now people identify LLMs and AI with the ChatGPT brand.

> This seems like it might be the stickiest thing they can grab ahold of in the long term.

For now, but do you still Xerox paper?

reply
johsole 8 hours ago
I think Microsoft probably picks up all of OpenAI if OpenAI gets in financial trouble.
reply
chasd00 8 hours ago
Yes, I think that's their plan. Remember when Altman got fired from OpenAI? Microsoft was right there with open arms. Microsoft is probably letting OpenAI do the dirty work of fleecing investors and doing the R&D, and then, when all the money is gone, Microsoft scoops up the IP and continues on.
reply
xnx 8 hours ago
> OpenAI became a dead duck

Won't Microsoft own OpenAI after it flames out?

reply
Balinares 4 hours ago
A match made in heaven.
reply
butlike 9 hours ago
In addition to that, Google and Apple are demonstrated business partners. Google has consistently paid Apple billions to be the default search engine, so they have demonstrated they pay on time and are a known quantity. Imagine if OpenAI evaporated and Siri was left without a backend. It'd be too risky.
reply
co_king_5 10 hours ago
I hope I see Anthropic and OpenAI shutter within my lifetime.

Google has been guilty of all of the same crimes, but it bothers me to see new firms pop up with the same rapacious strategies. I hope Anthropic and OpenAI suffer.

reply
echelon 10 hours ago
You better hope Anthropic and OpenAI thrive, because a world in which Google is the sole winner is a nightmare.

Google's best trick was skirting the antitrust ruling against them by making the judge think they'd "lose" AI. What a joke.

Meanwhile they're camping everyone's trademarks, turning them into lucrative bidding wars because they own 92% of the browser URL bars.

Try googling for Claude or ChatGPT. Those companies are shelling out hundreds of millions to their biggest competitor to defend their trademarks. If they stop, suddenly they lose 60% of their traffic. Seems unfair, right?

reply
co_king_5 10 hours ago
I understand that Google is an extraordinarily bloated monopoly.

What I mean is that I am so bitter about OpenAI and Anthropic's social media manipulation and the effects of AI psychosis on the people around me that I would gladly accept a worse future and a less free society just to watch them suffer.

reply
pfraze 10 hours ago
[flagged]
reply
palmotea 8 hours ago
Also, Sam Altman (at least) gives the impression of being a bit of a manipulative psychopath. Even if there are others out there like him, who are just more competent at hiding their tendencies, I really don't want him to win the "world's richest man" jackpot; it'd be a bad lesson to others. Steve Jobs hero-worship is bad enough.
reply
PunchTornado 10 hours ago
I'm waiting to see a more egregious company than OpenAI and a bigger scammer CEO than Altman. No, thank you. I hope OpenAI goes bankrupt, especially since the ousting of Ilya.
reply
co_king_5 10 hours ago
> I'm waiting to see a more egregious company than OpenAI and a bigger scammer CEO than Altman.

Anthropic and Dario Amodei are undoubtedly bigger scammers IMO.

reply
Imustaskforhelp 9 hours ago
Honestly at this point, I don't care which company lives or dies.

Because recent open-source models have reached my idea of "enough". I just want the bubble to burst. To me, the bubble bursting means Anthropic and OpenAI don't survive while Google has a chance of surviving, but even then we still have open-source models, and a burst has a chance of bringing hardware costs down.

OpenAI and Anthropic walked so that Google and the open-source models could run. I still wish for competition and hope all these companies can survive, but tokens are going to cost more, and maybe that will tilt things further towards hardware.

I just want the bubble to burst because prolonging it would have a far more severe impact than whatever improvements we might see in open-source models. And to be quite frank, we might be living through an over-stimulus of "intelligence", and has the world improved?

Everything I imagined from AI has been reached and then some, and I am not satisfied with the result. Are you?

I mean, now I can make scripts to automate this and that, but I feel like we lost something much more valuable in the process. I have made almost all of my projects with LLMs and yet they are still empty. Hollow.

So to me, bursting the bubble is of the utmost importance now, because as long as the bubble continues, we are subsidizing it, and we are the ones who will face the most impact; we already are facing it.

In hindsight, I think evolution plays a part in this. We humans are hard-coded not to step outside the tribe or away from the newest thing, so maybe collectively, as a civilization, we can get disenchanted first via crypto and now AI. But we can also think for ourselves, and in my naive view civilization is built from us.

So the only thing we can do is think for ourselves and try to learn, but it seems as if that's the very thing AI wants to offload.

Have a nice day.

reply
m-schuetz 11 hours ago
Also, Gemini is absolutely fantastic right now. I find it provides better results for coding tasks than ChatGPT.
reply
frde 11 hours ago
Don't want to sound rude, but any time anyone says this I assume they haven't tried agentic coding tools and are still copy-pasting coding questions into a web input box.

I would be really curious to know what tools you've tried and are using where gemini feels better to use

reply
f311a 10 hours ago
It's good enough if you don't go wild and allow LLMs to produce 5k+ lines in one session.

In a lot of industries, you can't afford this anyway, since all code has to be carefully reviewed. A lot of models are great when you do isolated changes with 100-1000 lines.

Sometimes it's okay to ship a lot of code from LLMs, especially for the frontend. But, there are a lot of companies and tasks where backend bugs cost a lot, either in big customers or direct money. No model will allow you to go wild in this case.

reply
dudeinhawaii 8 hours ago
My experience is that on large codebases that get tricky problems, you eventually get an answer quicker if you can send _all_ the context to a relevant large model to crunch on it for a long period of time.

Last night I was happily coding away with Codex after writing off Gemini CLI yet again due to weirdness in the CLI tooling.

I ran into a very tedious problem that all of the agents failed to diagnose and were confidently patching random things as solutions back and forth (Claude Code - Opus 4.6, GPT-5.3 Codex, Gemini 3 Pro CLI).

I took a step back, used a Python script to extract all of the relevant parts of the codebase, popped open the browser, and had Gemini 3 Pro set to Pro (highest) reasoning and GPT-5.2 Pro crunch on it.

They took a good while thinking.

But, they narrowed the problem down to a complex interaction between texture origins, polygon rotations, and a mirroring implementation that was causing issues for one single "player model" running through a scene and not every other model in the scene. You'd think the "spot the difference" would make the problem easier. It did not.

I then took Gemini's proposal and passed it to GPT-5.3-Codex to implement. It actually pushed back and said "I want to do some research because I think there's a better code solution to this". Wait a bit. It solved the problem in the most elegant and compatible way possible.

So, that's a long winded way to say that there _is_ a use for a very smart model that only works in the browser or via API tooling, so long as it has a large context and can think for ages.
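
To be concrete about what "extract the relevant parts of the codebase" means here, the kind of script I mean is roughly the sketch below; the file extensions, skip list, size cap, and output filename are just illustrative assumptions, not the exact script:

  # A minimal sketch of a "dump the relevant code into one blob" script.
  # The extensions, skip list, size cap, and output name are illustrative.
  from pathlib import Path

  REPO = Path(".")                           # repo root to scan
  EXTS = {".py", ".cpp", ".h", ".glsl"}      # file types treated as "relevant" (assumption)
  SKIP = {".git", "build", "node_modules", "venv"}
  MAX_BYTES = 200_000                        # skip huge generated files

  def gather(root):
      chunks = []
      for path in sorted(root.rglob("*")):
          if any(part in SKIP for part in path.parts):
              continue
          if not path.is_file() or path.suffix not in EXTS:
              continue
          if path.stat().st_size > MAX_BYTES:
              continue
          # Label each file so the model can keep track of where code came from
          text = path.read_text(errors="replace")
          chunks.append(f"\n===== {path.relative_to(root)} =====\n{text}")
      return "".join(chunks)

  if __name__ == "__main__":
      blob = gather(REPO)
      Path("context_dump.txt").write_text(blob)
      print(f"wrote {len(blob):,} chars to context_dump.txt")

The resulting text file is what gets pasted (or uploaded) into the long-context model in the browser.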

reply
gman83 10 hours ago
You need to stick Gemini in a straightjacket; I've been using https://github.com/ClavixDev/Clavix. When using something like that, even something like Gemini 3 Flash becomes usable. If not, it more often than not just loses the plot.
reply
segfaultex 10 hours ago
Conversely, I have yet to see agentic coding tools produce anything I’d be willing to ship.
reply
parliament32 8 hours ago
Every time I've tried to use agentic coding tools it's failed so hard I'm convinced the entire concept is a bamboozle to get customers to spend more tokens.
reply
m00x 10 hours ago
Gemini is a generalist model and works better than all existing models at generalist problems.

Coding has been vastly improved in 3.0 and 3.1, but, as usual, Google won't give us the full juice.

reply
FartyMcFarter 10 hours ago
My guess is that Google has teams working on catching up with Claude Code, and I wouldn't be surprised if they manage to close the gap significantly or even surpass it.

Google has the datasets, the expertise, and the motivation.

reply
kdheiwns 10 hours ago
I've had the same experience with editing shaders. ChatGPT has absolutely no clue what's going on and it seems like it randomly edits shader code. It's never given me anything remotely usable. Gemini has been able to edit shaders and get me a result that's not perfect, but fairly close to what I want.
reply
logicallee 11 hours ago
Have you compared it with Claude Code at all? Is there a similar subscription model for Gemini as for Claude? Does it have an agent like Claude Code or ChatGPT Codex? What are you using it for? How does it do with large contexts? (Claude Code has a 1 million token context.)
reply
m-schuetz 7 hours ago
I tried Claude Opus but at least for my tasks, Gemini provided better results. Both were way better than ChatGPT. Haven't done any agents yet, waiting on that until they mature a bit more.
reply
landl0rd 10 hours ago
- yes, pretty close to Opus performance

- yes

- yes (not quite as good as CC/Codex, but you can swap the API instead of using gemini-cli)

- same stuff as them

- better than the others; Google got long (1M) context right before anyone else and doesn't charge two kidneys, an arm, and a leg like Anthropic

reply
logicallee 9 hours ago
thanks for these answers.
reply
airstrike 11 hours ago
it's nowhere near claude opus

but claude and claude code are different things

reply
dudeinhawaii 7 hours ago
My take has been...

Gemini 3.1 (and Gemini 3) are a lot smarter than Claude Opus 4.6

But...

both Gemini 3 models are mediocre at best at agentic coding.

Single shot question(s) about a code problem vs "build this feature autonomously".

Gemini's CLI harness is just not very good, and its approach to agentic coding leaves a lot to be desired. It doesn't do the double-checking that Codex does, it's slower than Claude, and it runs off and does things without asking and without clearly explaining why.

reply
logicallee 9 hours ago
(Claude Code now runs Claude Opus, so they're not so different.)

>it's [Gemini] nowhere near claude opus

Could you be a bit more specific, because your sibling reply says "pretty close to opus performance" so it would help if you gave additional information about how you use it and how you feel the two compare. Thanks.

reply
nobody_r_knows 10 hours ago
ChatGPT isn't even meant for coding anymore, nor is Gemini. It's OpenAI Codex vs Claude Code. Gemini doesn't even have an offering.
reply
input_sh 10 hours ago
https://antigravity.google/

On top of every version of Gemini, you also get both Claude models and GPT-OSS 120B. If you're doing webdev, it'll even launch a (self-contained) Chrome to "see" the result of its changes.

I haven't played around with Codex, but it blows Claude Code's finicky terminal interface out of the water in my experience.

reply
pastjean 10 hours ago
opencode + Gemini works pretty nicely
reply
m-schuetz 7 hours ago
And yet I got better results with Gemini than with Claude Opus.
reply
hansmayer 10 hours ago
It is a rather attractive view, and I used to hold it too. However, seeing as Alphabet recently issued 100-year bonds to finance the AI CapEx bloat, they are not that far off from the rest of the AI "YOLO"s currently jumping off the cliff ...
reply
jazzypants 9 hours ago
They have over $100B in cash on hand. I can't pretend to understand their financial dealings, but they have a lot more runway before that cliff than most of the other companies.
reply
gorgolo 10 hours ago
If someone is willing to fund you with a 100y bond, and it gives you extra cash to move even a bit faster, it sounds like a pretty good deal.

One thing I don't get, though: if superintelligence is really 5 years away, what's the point of a fixed-interest 100-year bond?

reply
hansmayer 8 hours ago
Motorola was, I think, the last company to issue a 100-year bond, back in the '90s. Whatever happened to Motorola?
reply
sethops1 11 hours ago
This is the conclusion I came to as well. Either make your own hardware, or drown paying premiums until you run out of money. For a while I was hopeful for some competition from AMD but that never panned out.
reply
xnx 8 hours ago
In a few years this will, amazingly, always have been obvious to everyone.
reply
piker 11 hours ago
And what about Microsoft?
reply
mrbungie 11 hours ago
They don't have the know-how (except by proxy via OpenAI) nor the custom hardware, and somehow they are even worse than Google at integrating AI into their products.
reply
raw_anon_1111 10 hours ago
They don't need to. Just like Amazon, they are seeing record revenues from Azure because of their third-party LLM hosting platforms, gated only by the fact that no one can get enough chips right now.
reply
napolux 11 hours ago
See Apple in my previous comment
reply
alex1138 8 hours ago
Now if only Google could a) drop its commitment to censorship and b) stop prioritizing YouTube links in its answers.
reply
duped 8 hours ago
Google has proven themselves to be incapable of monetizing anything besides ads. One should be deeply skeptical of their ability to bring consumer software to market, and keep it there.
reply
napolux 11 hours ago
Downvote all you want. Google has all the money to keep up and can just wait for the others to die. Apple is a different story, btw: it could probably buy OpenAI or Anthropic, but for now it's waiting, like Google. Since they need to offer users AI after the failure of Apple Intelligence, they prefer to pay Google and wait for the others to fight it out among themselves.

OpenAI and Anthropic already know what will happen if they go public :)

reply
r0b05 11 hours ago
What will happen if they go public?
reply
mrcwinn 10 hours ago
That's not a well-informed argument. Even if Apple could finance the $1T+ it would cost to buy Anthropic, they're not making that money back by making the iPhone a little better. The only way to monetize is by selling enterprise services to businesses, as Anthropic does. And that's not Apple's "DNA," to use their language.
reply
aurizon 11 hours ago
Google is vulnerable in search, and that already shows: we're seeing a decline as many parallel paths emerge. At the beginning it was a simple lookup for valid information, and that made it dominant. Then pay-ranked preference spots filled the pages and obscured what you actually wanted, and it became evil.
reply
raw_anon_1111 10 hours ago
We see no such thing. Google just announced record revenue and profit, and Apple hinted that it isn't seeing any decline in revenue from its search deal with Google, which is performance-based.
reply
wooger 10 hours ago
And Gemini is already integrated into the results page and gives useful answers instantly, alongside advertising... What problem for Google are you seeing?
reply
neya 10 hours ago
Google is the new OpenAI. OpenAI is the new Google. Guess who wants to shove advertisements into paying customers' face and take a % of their revenues for using their models to build products? Not Google.
reply
pell 10 hours ago
>Not Google.

Google's main revenue source (~75%) is advertising. They will absolutely try to shove ads into their AI offerings. They simply don't have to do it this quickly.

reply
ipaddr 10 hours ago
The majority of people who use Google for AI encounter it at the top of an ad filled search engine.
reply
SecretDreams 10 hours ago
> Guess who wants to shove advertisements into paying customers' face and take a % of their revenues for using their models to build products? Not Google.

But, also, probably google.

reply
Forgeties79 10 hours ago
Don’t worry, Google is profiting off of your data one way or another lol
reply
munk-a 8 hours ago
OpenAI is not viable. OpenAI is spending like Google without a warchest, and they have essentially nothing to offer outside of brand recognition. Nvidia propping them up to force AI training onto its chips rather than Google's in-house cores is their only viable path forward. Even if they develop a strong model, the commitments they've made are astronomically out of reach of all but the largest companies, and AI has proven to be a very low-moat market. They can't demand a markup sufficient to justify that spend; it's too trivial to undercut them.

Google/Apple/Nvidia - those with warchests that can treat this expenditure as R&D, write it off, and not be up to their eyeballs in debt - those are the most likely to win. It may still be a dark-horse, previously unknown company, but if it is, that company will need to be a lot more disciplined about expenditures.

reply
sigmar 10 hours ago
>various open-weighted Chinese models out there. They've kept good pace with flagship models,

I don't think this is accurate. Maybe it will change in the future, but it seems like the Chinese models aren't keeping up with actual training techniques; they're largely using distillation techniques. Which means they'll always be catching up and never at the cutting edge. https://x.com/Altimor/status/2024166557107311057

reply
A_D_E_P_T 9 hours ago
> they're largely using distillation techniques. Which means they'll always be catching up and never at the cutting edge.

You link to an assumption, and one that's seemingly highly motivated.

Have you used the Chinese models? IMO Kimi K2.5 beats everything but Opus 4.6 and Gemini 3.1... and it's not exactly inferior to the latter, it's just different. It's much better at most writing tasks, and its "Deep Research" mode is by a wide margin the best in the business. (OpenAI's has really gone downhill for some reason.)

reply
nwlieb 9 hours ago
Have you tried the OpenAI deep research in the past week or so? It's been updated to use 5.2 https://x.com/OpenAI/status/2021299935678026168

(I work at OpenAI, but on the infra side of things not on models)

reply
parliament32 8 hours ago
Does that actually matter? If "catching up" means "a few months behind" at worst for.. free?
reply
sigmar 8 hours ago
For certain use-cases, sure, it doesn't matter. But that doesn't make those models cutting edge. Some use-cases are adversarial, and 1% lower efficacy matters a lot.
reply
arthurcolle 9 hours ago
I have been using a quorum composed of step-3.5-flash, Kimi k2.5 and glm-5 and I have found it outperforms opus-4.5 at a fraction of the cost

That's pretty cutting edge to me.

EDIT: It's not a swarm — it's closer to a voting system. All three models get the same prompt simultaneously via parallel API calls (OpenAI-compatible endpoints), and the system uses weighted consensus to pick a winner. Each model has a weight (e.g. step-3.5-flash=4, kimi-k2.5=3, glm-5=2) based on empirically observed reliability.

The flow looks like:

  1. User query comes in
  2. All 3 models (+ optionally a local model like qwen3-abliterated:8b) get called in parallel
  3. Responses come back in ~2-5s typically
  4. The system filters out refusals and empty responses
  5. Weighted voting picks the winner — if models agree on tool use (e.g. "fetch this URL"), that action executes
  6. For text responses, it can also synthesize across multiple candidates
The key insight is that cheap models in consensus are more reliable than a single expensive model. Any one of these models alone hallucinates or refuses more than the quorum does collectively. The refusal filtering is especially useful — if one model over-refuses, the others compensate.

Tooling: it's a single Python agent (~5200 lines) with protocol-based tool dispatch — 110+ operations covering filesystem, git, web fetching, code analysis, media processing, a RAG knowledge base, etc. The quorum sits in front of the LLM decision layer, so the agent autonomously picks tools and chains actions. Purpose is general — coding, research, data analysis, whatever.

I won't include it for length but I just kicked off a prompt to get some info on the recent Trump tariff Supreme Court decision: it fetched stock data from Benzinga/Google Finance, then researched the SCOTUS tariff ruling across AP, CNN, Politico, The Hill, and CNBC, all orchestrated by the quorum picking which URLs to fetch and synthesizing the results, continuing until something like 45 URLs were fully processed. Output was longer than a typical single chatbot response, because you get all the non-determinism from what the models actually ended up doing in the long-running execution, and then it needs to get consensus, which means all of the responses get at least one or N additional passes across the other models to get to that consensus.

Cost-wise, these three models are all either free-tier or pennies per million tokens. The entire session above (dozens of quorum rounds, multiple web fetches) cost less than a single Opus prompt.
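
For anyone trying to picture the voting layer, here's a minimal sketch of the core idea (OpenAI-compatible chat endpoints assumed; the URLs, weights, refusal markers, and the exact-match "agreement" test are illustrative placeholders rather than the actual agent):

  # A rough sketch of a weighted-consensus quorum over OpenAI-compatible
  # /v1/chat/completions endpoints. URLs, weights, and refusal markers below
  # are placeholders; "agreement" here is a naive normalized exact match,
  # whereas a real system would compare tool calls structurally or cluster
  # text responses semantically.
  import os
  from collections import defaultdict
  from concurrent.futures import ThreadPoolExecutor

  import requests

  # (base_url, model, weight) -- weights reflect empirically observed reliability
  PROVIDERS = [
      ("https://step.example.com/v1", "step-3.5-flash", 4),
      ("https://moonshot.example.com/v1", "kimi-k2.5", 3),
      ("https://zhipu.example.com/v1", "glm-5", 2),
  ]

  REFUSAL_MARKERS = ("i can't help", "i cannot help", "i'm sorry, but")

  def ask(base_url, model, prompt):
      # Call one provider; a single shared API key env var keeps the sketch short.
      resp = requests.post(
          f"{base_url}/chat/completions",
          headers={"Authorization": f"Bearer {os.environ.get('API_KEY', '')}"},
          json={"model": model, "messages": [{"role": "user", "content": prompt}]},
          timeout=60,
      )
      resp.raise_for_status()
      return resp.json()["choices"][0]["message"]["content"]

  def quorum(prompt):
      # Fan out to all providers in parallel, drop refusals/empties/failures,
      # then return the answer backed by the most total weight.
      with ThreadPoolExecutor(max_workers=len(PROVIDERS)) as pool:
          futures = {
              pool.submit(ask, url, model, prompt): weight
              for url, model, weight in PROVIDERS
          }
          votes = defaultdict(int)
          originals = {}
          for fut, weight in futures.items():
              try:
                  answer = fut.result()
              except Exception:
                  continue  # a failed provider counts as an abstention
              key = answer.strip().lower()
              if not key or any(m in key for m in REFUSAL_MARKERS):
                  continue  # filter refusals and empty responses
              votes[key] += weight
              originals.setdefault(key, answer)
      if not votes:
          raise RuntimeError("all providers refused or failed")
      return originals[max(votes, key=votes.get)]

  if __name__ == "__main__":
      print(quorum("In one word: what is the capital of France?"))

Everything else described above (tool dispatch, synthesis passes) layers on top of this fan-out / filter / weighted-pick core.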
reply
earth2mars 9 hours ago
When you say quorum, what do you mean? Is it an agent swarm, or are you using all of them in your workflow and finding that independently they perform better than Opus? Curious how you use it (tooling and purpose - coding?).
reply
tmaly 8 hours ago
I have not heard of step-3.5-flash before. But as the other commenter asked, I would love to hear about your quorum technique. What type of projects are you building with the quorum?
reply
enceladus06 10 hours ago
OpenAI and Anthropic don't have a moat. We will have actual open models like DeepSeek and Kimi with the same functionality as Opus 4.6 in Claude Code within 6 months, IMO. Competition is a good thing for the end-user.
reply
zozbot234 10 hours ago
The open-weight models are great but they're roughly a full year behind frontier models. That's a lot. There's also a whole lot of uses where running a generic Chinese-made model may be less than advisable, and OpenAI/Anthropic have know-how for creating custom models where appropriate. That can be quite valuable.
reply
coder543 10 hours ago
I would not say a full year... not even close to a year: GLM-5 is very close to the frontier: https://artificialanalysis.ai/

Artificial Analysis isn't perfect, but it is an independent third party that actually runs the benchmarks themselves, and they use a wide range of benchmarks. It is a better automated litmus test than any other that I've been able to find in years of watching the development of LLMs.

And the gap has been rapidly shrinking: https://www.youtube.com/watch?v=0NBILspM4c4&t=642s

reply
zozbot234 10 hours ago
Benchmarks are always fishy; you need to look at the things you'd actually use the model for in the real world. From that point of view, the SOTA for open models is quite a bit behind.
reply
lancebeet 8 hours ago
If benchmarks are fishy, it seems their bias would be to produce better scores than expected for proprietary models, since they have more incentives to game the benchmarks.
reply
coder543 10 hours ago
No... benchmarks are not always "fishy." That is just a defense people use when they have nothing else to point to. I already said the benchmarks aren't perfect, but they are much better than claiming vibes are a more objective way to look at things. Yes, you should test for your individual use case, which is a benchmark.

As I said, I have been following this stuff closely for many years now. My opinion is not informed just by looking at a single chart, but by a lot of experience. The chart is less fishy than blanket statements about the closed models somehow being way better than the benchmarks show.

reply
mattmaroon 10 hours ago
That's a lot now, in the same way that a PC in 1999 vs a PC in 2000 was a fairly sizeable discrepancy. At some point, probably soon, progress will slow, and it won't be much.
reply
jnovek 10 hours ago
I just did a test project using K2.5 on opencode and, for me, it doesn’t even come close to Claude Code. I was constantly having to wrangle the model to prevent it from spewing out 1000 lines at once and it couldn’t hold the architecture in its head so it would start doing things in inconsistent ways in different parts of the project. What it created would be a real maintenance nightmare.

It’s much better than the previous open models but it’s not yet close.

reply
34679 9 hours ago
I can't shake the feeling that the RAM shortage was intentionally created to serve as a sort of artificial moat by slowing or outright preventing the adoption of open-weight models. Altman is playing with hundreds of billions of other people's dollars, trying to protect (in his mind) a multi-trillion dollar company. If he could spend a few billion to shut down access to the hardware people need to run competitors' products, why wouldn't he?
reply
chasd00 7 hours ago
From what I understand the RAM producers see the writing on the wall. They’re not going to invest in massively more capacity only to have it sit completely idle in 10 years.

The RAM shortage is probably a bubble indicator itself. The industry doesn't believe in the long-term demand enough to build out more capacity.

reply
zozbot234 8 hours ago
It's very difficult to "intentionally create" a real shortage. You can hoard as much as you want, but people will expect you to dump it all right back onto the market unless you really have a higher-value use for the stuff you hoarded (And then you didn't intentionally create anything, you just bought something you needed!).

Plus producers will now feel free to expand production and dump even more onto the market. This is great if you needed that amount of supply, but it's terrible if you were just trying to deprive others.

reply
tmaly 8 hours ago
Hard drives and GPUs seem to be facing the same fate.
reply
AznHisoka 11 hours ago
Anthropic at least seems to be doing well with enterprises. OpenAI doesn't have that level of trust for enterprise use cases, and commoditization is a bigger issue with consumers, who can just switch to another tool easily.
reply
SirensOfTitan 5 hours ago
Yeah, Anthropic is inarguably in a better position, but I don’t see how they justify their fundraising unless they find some entrenched position that is difficult for competitors to replicate.

Enterprise switching costs aren’t 0, but they’re much less than most other categories, especially as models mature and become more fungible.

The best moat I can think of is a patentable technique that facilitates a huge leap that Anthropic can defend, but even then, Chinese companies could easily ignore those patents. And I don’t even know if AI companies could stick to those guns as their training is essentially theft of huge portions of copyrighted material.

reply
wejwej 11 hours ago
To take the other side of this: as computers got commodified, there was still a massive benefit to using cloud computing. Could the same happen with LLMs as hardware becomes more and more specialized? I personally have no idea, but I love that there's a bunch of competition, and I totally agree with your point that regulation and export controls are just ways to make it harder for new orgs to compete.
reply
idopmstuff 10 hours ago
I do think the models themselves will get commoditized, but I've come around to the opinion that there's still plenty of moat to be had.

On the user side, memory and context, especially as continual learning is developed, is pretty valuable. I use Claude Code to help run a lot of parts of my business, and it has so much context about what I do and the different products I sell that it would be annoying to switch at this point. I just used it to help me close my books for the year, and the fact that it was looking at my QuickBooks transactions with an understanding of my business definitely saved me a lot of time explaining.

On the enterprise side, I think businesses are going to be hesitant to swap models in and out, especially when they're used for core product functionality. It's annoying to change deterministic software, and switching probabilistic models seems much more fraught.

reply
ahussain 9 hours ago
People were saying the same last year, and then Anthropic launched Claude Code, which is already at a $2.5B revenue run rate.

LLMs are useful and these companies will continue to find ways to capture some of the value they are creating.

reply
ulfbert_inc 11 hours ago
>LLM tech as it stands right now will turn into a commodity

I have yet to see an in-depth analysis that supports this claim.

reply
otabdeveloper4 9 hours ago
It's already a commodity. The strongest use case is self-hosted pornography generation.
reply
jpalomaki 11 hours ago
Both Anthropic and OpenAI are working hard to move away from being "just" the LLM provider in the background.
reply
KoolKat23 10 hours ago
Anthropic, I feel, will be alright. They have their niche, it's good, and people actually do pay for their services. Why do people still use Salesforce when there are other, free CRMs? They also haven't, from what I can tell, scaled for some imaginary future growth.

OpenAI, I'm sorry to say, are all over the place. They're good at what they do, but they try to do too much and need near-Ponzi-style growth to sustain their business model.

reply
nvarsj 10 hours ago
I don't think you can put OpenAI and Anthropic together like that.

Anthropic has actually cracked Agentic AI that is generally useful. No other company has done that.

reply
tinyhouse 8 hours ago
OpenAI is just playing catch-up at this point; they completely lost their way, in my view.

Anthropic, on the other hand, is very capable, and given the success of Claude Code and Cowork, I think they will maintain their lead across knowledge work for a long time just by having the best data to keep improving their models and everything around them. It's also the hottest tech company right now, like Google was back in the day.

If I had to bet on two companies that will win the AI race in the West, it's Anthropic and Google: Google mostly on the consumer side and Anthropic in the enterprise. OpenAI will probably IPO soon to shift the risk to the public.

reply
chasd00 7 hours ago
If Anthropic keeps getting its foot in the enterprise door, then maybe they can tap into enterprise cloud spending. If Anthropic can come up with services (db, dns, networking, webservers, etc.) that Claude Code will then prefer, then maybe they become a cloud provider. To me, and I am no business expert btw, that could be a path to sustainable financials.

Edit: one thing I didn't think about is that Anthropic more or less runs at the pleasure of AWS. If Amazon sees Anthropic as a threat to AWS, then it could be lights out.

reply
tinyhouse 4 hours ago
Yes, they depend on AWS for compute and Amazon also owns a big chunk of Anthropic (it used to be close to 30%, probably less now with the recent raises). I think it's a good partnership since for the most part they focus on different things and I don't see Anthropic going after AWS - they are an AI company first and foremost. Amazon has their own AI stuff for enterprise but no one uses it so I don't think they take it seriously. They know they cannot compete here.

I think that OpenAI and Microsoft is a more challenging partnership with much more overlap.

reply
deepriverfish 9 hours ago
they might end up like Dropbox
reply
llm_nerd 10 hours ago
Anthropic, at least, has gone to lengths to avoid hardware lock-in or being open to extortion of the nvidia variety. Anthropic is running their models on nvidia GPUs, but also Amazon Trainium and Google's TPUs. Massive scale-outs on all three, so clearly they've abstracted their operations enough that they aren't wed to CUDA or anything nvidia-specific.

Similarly, OpenAI has made some massive investments in AMD hardware, and have also ensured that they aren't tied to nvidia.

I think it's nvidia that has less of a moat than many imagine they do, given that they're a $4.5T company. While small software shops might define their entire solution via CUDA, to the large firms this is just one possible abstraction engine. So if an upstart just copy pastes a massive number of relatively simple tensor cores and earns their business, they can embrace it.

reply
delaminator 11 hours ago
Anthropic is also using lots of Amazon hardware for inference.
reply
lvl155 10 hours ago
I think the LLM by itself is basically a commodity at this point. Not quite interchangeable, but the differences are more artistic than technological. I used to think it was data that would give companies like Google a leg up.
reply
techpression 11 hours ago
How is censorship / ”alternative information” affecting them? Genuinely curious as I’ve only read briefly about it and it was ages ago.
reply
whynotmaybe 10 hours ago
I tried DeepSeek a few months ago and asked about the Tiananmen Square protests and massacre.

At first the answer was "I can't say anything that might hurt people", but with a little persuasion it went further.

The answer wasn't the current official one but far more nuanced than Wikipedia's article. More in the vein of "we don't know for sure", "different versions", "external propaganda", "some officials have lied and been arrested since".

In the end, when I asked whether I should trust the government or seek out multiple sources, it strongly suggested using multiple sources to form an opinion.

= not as censored as I expected.

reply
999900000999 10 hours ago
They'll ban Chinese models, or do something like calling them security risks without proof.

Enterprise customers will gladly pay 10x to 20x for American models. Of course this means American tech companies will start to fall behind, combined with our recent xenophobia.

Almost all the top AI researchers are either Chinese nationals or recent immigrants. With the way we've been treating immigrants lately ( plenty of people with status have been detained, often for weeks), I can't imagine the world's best talent continuing to come here.

It's going to be an interesting decade y'all.

reply
deanmoriarty 8 hours ago
It's really hard for me to understand why the average HN commenter has an almost cultish attitude towards Anthropic: they are somehow excused for all their sins, whereas everything OpenAI does is taken in the most uncharitable way. It's a very consistent pattern.
reply
QuadmasterXLII 5 hours ago
There is some need for moral dynamic range. Anthropic is quite bad and there's lots of room at the bottom for OpenAI to be remarkably worse.
reply
recitedropper 6 hours ago
Starts with "astro" and ends with "turfing".

Think about how valuable HN is for a company whose primary market is professional devs.

reply
hz231 10 hours ago
Good, AI is bad.
reply