Nepo baby is coming in with a political angle and wants control of the news-media part of WB. The American media landscape is already short on competition and diversity of political views; now there would practically be none.
Unfortunately there is no chance of that happening.
At his level of personal wealth there is no realistic scenario that leads to personal bankruptcy. In our current capitalist society once you're into the billions you're "too big to fail" and you have unlocked the infinite money glitch.
The only consolation is the lawnmower is 81 and thus is going to be dead soon (even the mega-wealthy can't plastic surgery themselves out of this outcome, at least not yet) and he can't take any of it with him. But all indications point to his progeny having aspirations to be even more damaging to society than he has been.
Reminder to lay up your treasures in heaven.
That's not how any of this works. "Too big to fail" can be applied to companies, but I don't know of any examples of it being applied to people.
Piketty’s central argument is that when the rate of return on capital (r) exceeds the rate of economic growth (g), wealth concentrates over time into fewer and fewer hands. This is his now-famous r > g inequality.
The implication is that capitalism, left to its own devices, doesn’t naturally spread wealth around. It does the opposite. The relatively egalitarian period of the mid-20th century (roughly 1930s-1970s) was the historical exception, driven by two world wars, the Great Depression, and deliberate policy choices like progressive taxation. The longer historical pattern, which Piketty traces with extensive data going back to the 18th century, is one of increasing concentration.
His practical prescription is a global progressive tax on wealth (not just income) to counteract this tendency. He acknowledges this is politically difficult but argues it’s the most straightforward mechanism to prevent a return to the kind of patrimonial capitalism that defined the Gilded Age and the Belle Époque, where inherited wealth dominated and social mobility was minimal.
The book’s real contribution was less the theoretical claim (which economists had gestured at before) and more the empirical work. Piketty and his collaborators assembled an unprecedented dataset on wealth and income distribution across multiple countries and centuries, which gave the argument a weight that prior discussions lacked.
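The r > g dynamic is easy to make concrete with a toy model (my own sketch, not Piketty's actual framework): if owners reinvest their capital returns while national income grows at g, the capital-to-income ratio climbs without bound.

```python
# Toy illustration of r > g: capital compounds at r while national
# income grows at g, so the capital/income ratio (beta) drifts upward.
# beta0 = 3 and full reinvestment are simplifying assumptions.

def capital_income_ratio(r, g, years, beta0=3.0, reinvest=1.0):
    beta = beta0
    for _ in range(years):
        # capital grows by reinvested returns; income grows at g
        beta *= (1 + reinvest * r) / (1 + g)
    return beta

# r = 5%, g = 1.5% are in the ballpark of Piketty's long-run figures
print(round(capital_income_ratio(0.05, 0.015, 100), 1))
```

A century at those rates multiplies the ratio roughly thirtyfold. In practice consumption, taxes, and shocks slow this down, which is exactly Piketty's point about what made the mid-20th century exceptional.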
https://finance.yahoo.com/news/10-billionaires-went-broke-15...
These are the kind of criminals where the judges will let them stay under house arrest in their twenty-bedroom mansion, have their chauffeur drive them around in a car worth more than my entire life savings, etc., because it would be "unconscionable" for them to lose the life they're accustomed to. I.e.: affluenza.
Just look at Prince Andrew or whatever he's called now. He raped children and his rightful punishment would be to sit in a jail cell with no access to anything even resembling his lavish digs, instead he's luxuriating in a lifestyle you and I would envy.
I can list far, far more examples of billionaires or mere hundred-millionaires living luxuriously after committing capital crimes or "going bankrupt" than not.
Find me an ex-billionaire living out of a motor home, then I'll cede your point.
1. Against rich/powerful people
Empirical work... like conveniently ignoring the fact that there are far fewer old-money billionaires than we'd expect?
>For these lucky people, the experience of the Vanderbilts and their contemporaries offers a cautionary tale. At the turn of the 20th century, America’s census recorded about 4,000 millionaires, note Victor Haghani and James White, two wealth managers, in their book, “The Missing Billionaires”. Suppose a quarter of them had at least $5m (the richest had hundreds) and had invested it in America’s stockmarket. Had they then procreated at the average rate, paid their taxes and spent 2% of their capital each year, their descendants today would include nearly 16,000 old-money billionaires. In reality, it is a struggle to find a single one who traces their fortune back to the first Gilded Age.
https://www.economist.com/finance-and-economics/2025/06/12/h...
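The Haghani/White thought experiment is easy to reproduce as a back-of-envelope script. The 7% real return, 31-year generations, and two inheriting children per split below are my illustrative assumptions, not figures from the book:

```python
# Grow a Gilded Age $5m fortune, spend 2% of capital per year, and
# split the estate among heirs once per generation.

def heir_share(start=5e6, years=125, real_return=0.07,
               spend_rate=0.02, heirs=2, years_per_gen=31):
    wealth = start
    for year in range(1, years + 1):
        wealth *= (1 + real_return) * (1 - spend_rate)
        if year % years_per_gen == 0:
            wealth /= heirs  # estate divided at each generation
    return wealth

print(f"${heir_share():,.0f}")
```

Even after four generational splits, each heir ends up with a nine-figure share under these assumptions, which is what makes the near-total absence of Gilded Age descendants on today's billionaire lists so striking.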
This is a good point, because there are no oil billionaires, and things like trusts, family offices, offshoring, etc. actually pose no challenge to accurately enumerating and identifying people who ‘have’ (or effectively control) over a billion dollars at their discretion, because they all just sign up for the list.
Of course there’s the Panama Papers and the Paradise Papers but that doesn
So far the only individual that has been meaningfully punished has been Ghislaine Maxwell.
This seems like a prime example of being too big to fail. The FBI puts on kid gloves whenever a rich person is accused of wrongdoing.
>So far the only individual that has been meaningfully punished has been Ghislaine Maxwell.
That factoid is meaningless without the rate of prosecutions/convictions for people the FBI "had tabs on".
With J6, in a matter of two or so years the FBI secured over 1,000 convictions.
When it wants to, the FBI can move very quickly.
Speaking of which, the earlier conviction (the super sweet deal Acosta gave Epstein) is also an example of elite unaccountability.
FBI and friends protected Epstein until it became impossible.
Again, large numbers, but no context. How many people did you think were at the riots? 10k? 50k?
Moreover, Jan 6th was an event that definitely happened. The same can't be said for whatever happened at Epstein's island. The island exists, Epstein's a convicted sex offender, and people flew there, but associating with sex offenders isn't a crime, no matter how despicable it might seem.
Epstein died in his cell. If Maxwell preferred death to punishment she could've also killed herself. Also it's well documented that women receive less harsh punishment in court vs men for the same crimes, so yeah, it's sexism but not in the way you insinuate.
> Epstein himself is probably still alive in Tel Aviv anyway.
Yes, and it's Maxwell's lookalike that's serving the sentence, while she's enjoying herself in Argentina. See how quickly you can derail a discussion with absurd claims that have no substance?
It isn't that they get bailed out by the government (like the banks in 2008), it is that at the scale of their wealth there is no realistic way to lose it fast enough to make any significant negative difference when the neutral state of wealth at that scale is to snowball ever larger (mostly because we refuse to tax it appropriately).
https://finance.yahoo.com/news/10-billionaires-went-broke-15...
This is plainly false. There are plenty of examples, even recent ones, of billionaires losing their fortunes or going bankrupt. Often these come with criminal prosecution, because they get desperate and try illegal ways to hang on to their wealth. Sam Bankman-Fried, Elizabeth Holmes, and several others come to mind.
There are a lot of stories of billionaires getting too risky with their investments or too concentrated in businesses and losing the majority of their wealth. The Barclay story, Jim Justice, the old Peloton CEO.
It’s not a common outcome because you have to try hard to screw up that badly when you have over a billion dollars in wealth. Parking it in almost any common investment would leave you and your descendants set forever.
Billionaires aren't on the same level of wealth as hectobillionaires, just like decamillionaires aren't on the same level of wealth as billionaires.
Billionaires that were dumb enough to attempt to screw even bigger billionaires. Sure you can find exceptions to the rules, but Ellison isn't going to be one of those.
Haven't been following Bryan Johnson, eh?
he's gonna die just like the rest of us, just with a slightly odder uncanny valley look for himself when he goes.
Like, what in the good god damn are we using all this energy for?
Bad AI porn, terrible AI music, AI scams and completely devastating the labor market.
And based on the recent Anthropic/Pentagon rift... I guess also creating autonomous kill-bots and doing mass surveillance.
Just a bunch of super cool stuff.
Nestle jumps into my mind whenever I want to think of an evil corporation and water together.
Seems like the environmentally responsible thing to do would be to build the datacenter near the coast and use the waste heat to desalinate water, or at least dissipate the heat into the ocean rather than boiling off an inland freshwater supply.
Keep in mind that the sun is constantly dumping energy on us. Absorption averaged across the entire earth is ~200 W/m^2. Assuming I didn't misplace some zeros somewhere, a gigawatt corresponds to ~5 km^2 of ocean surface at that average flux. Penetration falls off exponentially, so ~75% of it is absorbed within the top ~10 m.
I think the takeaway here is the utterly incomprehensible scale of the ocean.
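A quick sanity check on the arithmetic above:

```python
# Ocean area whose average solar absorption matches a 1 GW heat load.
SOLAR_ABSORPTION_W_PER_M2 = 200  # global average, per the figure above
DATACENTER_POWER_W = 1e9         # 1 GW

area_km2 = DATACENTER_POWER_W / SOLAR_ABSORPTION_W_PER_M2 / 1e6
print(f"{area_km2:.0f} km^2")  # -> 5 km^2
```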
Some of the reason for the high density is that you need devices physically close to each other to share such bandwidth. It’s not because we’re limited by the physical building space, because we can construct buildings all day long. Sending bits around at ultra high speed is hard and you need to keep all of the devices physically close to avoid having your interconnect costs explode.
Still, the world I’m used to operating in is typically 5-10 kVA/rack.
There has to be some theory to explain the story to be consistent with this comment.
To borrow an analogy from the hit HBO show Silicon Valley: it's far more likely that "the bear is sticky with honey" will happen at Oracle than at OpenAI. Some kind of game of telephone went wrong at some point, and now the people responsible at Oracle must double down to kick the can to the next quarter and not appear clueless.
Statutory disclaimer: I am not affiliated with either Open AI or Oracle and have no insider information. All of this is mere conjecture and has no basis in reality.
Don't forget the possibility that it's AI slop.
That sounds about right.
> People at openai are lying to cnbc?
Remove "to cnbc" and that's a yes.
> cnbc are fabricating stories while drunk?
Maybe not drunk but likely high.
I could see Nvidia adding terms of sale requiring disposal rather than resale.
I also don't think companies are going to have mandatory replacement cycles for GPU hardware the same way they do for everything else, because:
1. It is an order of magnitude (or more) more expensive.
2. It isn't clear whether Moore's law will apply to the AI GPU space the same way it has for everything else.
Unless Nvidia can launch a new chip every 2-3 years with massively improved performance-per-watt at a lower price no one is going to rush to recycle the old one.
That's exactly the point.
Performance/watt is increasing so much gen-to-gen that it no longer makes sense to run older hardware.
Not my words, Jensen's.
New stuff is all liquid cooled by default and that's a paradigm shift for your average home lab.
I'm less aware of exactly what's happening on the power side of things, but I think some of the architectures are now moving to relatively high-voltage DC throughout, down-converting to low voltage right before it's used. So not exactly plug-and-play with your average NEMA 5-15 outlet.
There are PCIe versions of these right? And another comment is saying there are PCI adapters too. It "only" requires 600 to 700W. It's not out of reach for everybody.
If the used regular server market is any indication, you can find a lot of enterprise gear at steep discounts after a few years: a CPU that cost $4K brand new going for $100, stuff like that.
A friend has got a 42U rack and so do some homelab'ers. People have been running GPU farms mining cryptocurrencies or doing "transcoding" (for money).
It's not just CPUs at 1/40th of their brand new price: network gear too. And ECC RAM (before the recent RAM craze).
I'm pretty sure that if H200s begin to flood the used market, people will quickly adapt.
> Unless Nvidia can launch a new chip every 2-3 years with massively improved performance-per-watt at a lower price no one is going to rush to recycle the old one.
I agree with that. But if they resell old H200s, people are resourceful and will find a way to run them.
You'd be better off with the SXM-PCIe adapters.
It's a monolithic 8U rackmount appliance so perhaps a dishwasher would make for a decent size comparison?
Definitely no good if you rent but homeowners should have little to no difficulty. The sort of people interested in such gear usually have multi kW racks already.
My last employer is still running a bunch of otherwise discontinued g3 instances with 2015 era GPUs.
I bought a used NEC SX Aurora TSUBASA (PCIe x16 board that looks like a GPU board) and realized it has no fans. The server case it is designed to fit into is pressurized by fans forcing air through eight cards on a special 4 + 4 slot motherboard. I have to stack and mount three 40mm fans on the back.
Jensen said they added a lot of RAS in Blackwell which kind of admits Hopper wasn't reliable enough.
In order to take advantage of that, someone needs to be positioned to process all that material economically, and to make the logistics achievable for the big players. If it costs Facebook $10 million to store and transport phased-out GPUs vs. just sending them to a landfill, they're not going to do it. If they get $100K for recycling: probably not going to do it. If they pocket $5 million, they will definitely contract that out, especially if it costs $50 million to build out the infrastructure to handle it.
Probably a good company idea: transport, disposal, and refurbishment of out-of-cycle GPUs and datacenter assets. Creating a massive recycling pipeline for recapturing all the valuable elements is a pretty good niche.
https://www.youtube.com/watch?v=1H3xQaf7BFI&t=1577s
in the States.
Would be interested to know if others have takes on this.
A couple real world points:
1. They generally don't just fail. More likely a repairable component on a board fails and you can send it out to be repaired.
2. For my current stuff, I have a 3 year pro support contract that can be extended. Anything happens, Dell goes and fixes it. We also haven't had someone in our cage at the DC in over 6 months now.
This site apparently sources ex-enterprise(-only) systems and puts them into desktop style enclosures.
Why would they sell it cheaper on the second-hand market? It would hurt sales of new ones. This is the way even with food, let alone technology. Don't expect to buy a cheaper second-hand GPU any century soon.
Their databases are heavily used in government, banking, and other large industries, which have been slower to adapt to change and struggle to migrate away. At what point does purchasing Oracle, to gain customer share, existing data centres, and the opportunity to migrate customers to your cloud platform, make more sense than competing?
They still have a high market value. However, the debt they will need to service will result in ongoing price increases which will encourage people to migrate away. Over time they will struggle to service the debt and a buyout will be the best of the bad options.
An interesting perspective on IBM is its relative position. Its revenue has leveled off at about $60B/year after a lengthy decline; it is far overmatched by many big tech companies today.
It's a niche business, serving niche demands. I think IBM's moat is that most of its business is highly uninteresting: industrialized box ticking work, deeply entangled by contracts and a strong need for continuity by its customers.
I actually had a recent encounter with one of IBM's products. A commercial B2B REST API I created was analyzed by an IBM vulnerability scanning platform on behalf of a major US municipality. It didn't find anything actually critical, but there were some worthwhile points in the report, and working around a false positive was a frustration. The product, in this case, is diffusion of responsibility.
On the Big Iron end, IBM isn't really selling hardware. They're selling an ecosystem: services, software, support, continuity (over decades), etc. It pleases me that they chose to stick with Power: it's nice to know Itanium didn't kill off every enterprise RISC platform.
Maybe, one day, some major quantum computing breakthrough happens at IBM. As far as I can see, that's the only play they have that could change their trajectory. In the meantime, they have a large software portfolio and plenty of institutions that will keep signing contracts long after I'm gone.
They do a lot of stuff. Also own Hashicorp now, so they have things like: Ansible / RedHat Linux (already owned), Terraform, Consul, Nomad, Packer, etc. A lot of "let's build modern infra" tooling.
That stopped being true many years ago, though, and the divergence has only accelerated with the advent of AI datacenter usage. The form factor is now fundamentally different (SXM instead of PCIe); you can adapt an SXM card to PCIe with some effort [1], but that may not even be worthwhile because 1. the power and cooling requirements for SXM cards are radically different from a desktop part's, and more importantly 2. the dies are no longer even close to being the same. IIRC, Blackwell AI chips straight up don't have rasterization hardware onboard at all; internally they look like a moderate number of general SMs attached to a huge number of tensor cores. Modern AI GPUs are fundamentally optimized for, well, matmuls, which is not at all what you want for gaming or really any non-AI application.
[1] https://l4rz.net/running-nvidia-sxm-gpus-in-consumer-pcs/
If you're Oracle, it's not necessarily a bad thing if you build an antiquated data center. Isn't much of their customer base legacy customers they are rent-seeking from in perpetuity? Those people are never going to be doing cutting-edge AI. They will do what they have always done: adopt new technologies right at the nadir of the Trough of Disillusionment.
So they have to hope they’re a part of the future in the AI capacity because their SaaS business is going to take a big hit.
YTD performance didn’t fully bake this reality in. It was seen as them having 2 huge revenue streams, the market is realizing that AI is a threat to SaaS and baking that into stonks
Stargate is backed by the US government, which is why they're comfortable putting it under debt financing.
Are they? Unless you are Nvidia that is very far from the case.
OpenAI's current revenue is $25 billion a year. They are expected to spend $600 billion on infrastructure in the next 4 years to sustain and grow that revenue.
Amazon, Google, Microsoft and Meta are spending a combined $650 billion on infrastructure in 2026 alone.
The story is the same across the rest of the industry.
None of these investments are immediately profitable. And it remains to be seen whether they eventually will be or not.
If you're OpenAI spending $100M on a training run they're not.
But if you're Oracle renting out GPUs to little guys doing inference, they are.
If it's built in stages, each stage will have newer variants of hardware, I imagine.
David Ellison is fueling his buying spree with debt guaranteed by his dad's oracle shares. The various assets David has bought are already suffering losses of viewership because viewers are turned off by their new ideological slant.
Usually debt investors are not worried if the stock price is high. Debt has precedence over equity, so if the stock price is riding high, the CEO can always be convinced to print more shares to service the debt. The Oracle stock price has not been doing so hot lately, however. As the article said, it is down 50%. Still, ORCL has a $430 billion market cap against $130 billion of debt, which seems manageable. But stock prices can move very fast. Ironically, the war in Iran, which David's new news outlets keep supporting, is causing ORCL stock to go down, which could bring down David's new media empire.
David just purchased Warner Bros for about $110 billion. A lot of that ($40 billion) is also guaranteed by daddy's ORCL shares. Warner Bros owns Comedy Central, which, sadly, has been one of America's most dependable news sources.
The house of cards is still standing, but it's getting awfully wobbly.
https://www.msn.com/en-us/money/general/as-oracle-plans-thou...
https://en.wikipedia.org/wiki/Power_Macintosh_7100
Sagan sued. Engineers at Apple changed the name to BHA: "Butt-Head Astronomer".
He sued again. The final codename was "LAW: Lawyers are Wimps".
The way Nvidia does it is actually super respectful and it's honestly better to use names like these instead of ULTRA PRO MAX 5x etc.
The problem appears to be that Oracle is building today's DCs... Tomorrow. And by the time they come online, Vera Rubins will be out, with 5x efficiency gains. And Oracle is unlikely to want to drop the price of Blackwells 5x, despite them being 5x less efficient.
It's a little unclear to me how bad this is. Nvidia's "rack scale" machines like GB200-NVL72s and GB300-NVL72s are basically a fully built rack you roll into a DC and plug into power and network. In that case, Oracle should probably just buy the rack-scale Vera Rubins when they come out instead of Blackwells and roll them into their new DCs. Tada! Tomorrow's DCs, tomorrow.
OTOH it's possible someone at Oracle screwed up and committed to buying Blackwells at today's prices, delivered tomorrow. Or maybe construction of the physical DCs is behind schedule, so today's Blackwells are sitting around unused, waiting for power and networking tomorrow. Then they're in a bit of trouble.
Regardless, CNBC's reporting seems pretty unclear on what actually happened and whether this is actually bad or not.
Just to compare and contrast:
https://www.videocardbenchmark.net/power_performance.html
Here's a synthetic benchmark page listing every GPU in recent memory. True, it's not AI, but if we look at the 1080 Ti, a 9-year-old card at this point, and compare it with the 5090, the gains were 190/74 = 2.56x over a timespan that involved multiple die shrinks and uArch changes.
I think these numbers might not hold up on IRL workloads, and afaict older datacenter cards still hold up well and are being used in production.
E.g. the next gen might have hardware inference for lower bits, more memory bandwidth, etc.
"Those things are still flying! Introduced in 1955!"
"But that was the B version, all those that are still flying are the H version, so many iterations between them!"
"Welcome to 1962"
The efficiency is in other areas too e.g. memory, network, etc. It's TOTAL.
> Here's a synthethic benchmark page listing every GPU in recent memory
The stagnation in GPU (graphics) gains isn't because of process nodes. Nvidia, and later AMD, stopped investing in that direction: they started optimizing for AI, not graphics.
Meanwhile, commercial operators have already deployed their hardware for public workloads. Existing Blackwell capacity won’t just be shifted into classified environments—governments don’t repurpose hardware from unclassified infrastructure for secret/TS systems. That deployed stock will stay in the private sector for hosted AI workloads.
For many high-security use cases, new Blackwell systems may effectively be the only viable option, especially given the slow review cycles around new firmware and GPU software stacks. Newer chipsets will also be prioritized for training due to performance gains.
Oracle likely recognizes this dynamic and is betting competitors may eventually need to deploy in their data centers. Governments haven't historically deployed GPU capacity at this scale (beyond ASIC/FPGA crypto workloads) and likely don't have large pools of pristine Blackwell hardware available.
They’re also purchasing late in the cycle, which may work in their favour.
You are right about the building of today's DCs. There is a small part of me that feels Oracle might be a bit toxic long-term with all this debt he and his kid have taken on. And this could be the first reaction to it.
Why use an LLM to write an HN comment? What does anyone gain from this?
I’m also dumbfounded by this rise of the AI foister. I can understand a scammer, but a normal person using it to produce a paragraph?
One way to test and refine the bots is to have them post in more discerning forums like HN, tweaking the system prompt until people stop calling them out as fake.
Once nobody can tell any more, then the comments will be subtly altered to deliver the intended message.
Personally, I suspect that the classic pseudonymous forums are cooked. Within five years, they'll all be totally overrun by chatbots and their "value" will tank.
The only recourse will be mobile-only "chat apps" that guarantee 100% human participation through specific hardware device and configuration attestation (TPMs, etc...), and also validating via the gyros that the device is moving appropriately for keypresses, etc...
Everything else will be > 50% bots soon, overrun by propaganda, etc...
Yes, I know, we're most of the way there already. Reddit and Twitter are already sinking into the swamp of sadness.
But trust me, it can get worse than that! Much worse.
What they want and what they can get are different things. There will always be a tomorrow model they want.
This is what I don't understand. Why is the article making the assumption that the DC itself is tied to a particular GPU generation? AWS doesn't knock down a building and start over every time Intel releases a new Xeon.
It's like setting up a warehouse of GPUs to mine bitcoin while others are switching to ASICs.
Other reporting says this is very much the case. Stargate barely has some of the land cleared, but the buildings were supposed to be finished and have GPUs installed over the course of 2026.
There's also the indicator of Nvidia giving out billion-dollar deals to other companies such that they could commit to buying even more Blackwells to keep production going. The chips from those new deals don't have anywhere to go, everyone already spent their cash on getting shipped chips that they're still installing today (apparently some are even in warehouses)
> For data collected from the UI or other usage: We retain the personal information described in this privacy notice for as long as you use our Services
I have two quick questions:
1. Why are UI prompts and responses kept for the entire life of the account?
2. When an account is closed, is the data actually deleted or just de-identified?
A100 has 312 TFLOPS of FP16 for 250W, i.e., 1.25 TFLOPS/W.
B200 has 2250 TFLOPS of FP16 compute for 1000W, i.e., 2.25 TFLOPS/W.
This is ~34% growth per generation and ~14% per year. It's hard to believe it will be 400% per generation this time.
So FP4/INT4 OPS/W will likely improve by the same ~30% per generation. You could get a separate improvement by reducing precision, but going 1-bit for a 4x improvement feels unlikely for now.
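Redoing that arithmetic with the figures quoted above (the ~4.5-year spacing between the A100 and B200 launches is my assumption):

```python
a100 = 312 / 250    # A100: TFLOPS of FP16 per watt, figures as quoted
b200 = 2250 / 1000  # B200: TFLOPS of FP16 per watt, figures as quoted

total = b200 / a100                # ~1.8x across two generations
per_gen = total ** (1 / 2) - 1     # A100 -> H100 -> B200
per_year = total ** (1 / 4.5) - 1  # assuming ~4.5 years A100 -> B200

print(f"{total:.2f}x total, {per_gen:.0%}/gen, {per_year:.0%}/yr")
```

For Vera Rubin to deliver a 5x perf/W jump in one generation would require a roughly tenfold acceleration of this historical trend.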
By the time Vera Rubins are available at scale, will they immediately be put into DCs, or will tomorrow's chips be running... the day after tomorrow?
I moved over from OpenRouter and it's been breezy. I hope you are sustainable at $30/month and are successful!
Or we‘ll get a supply problem and they get nothing or not enough. Tomorrow’s DC, never. Tada!
If the hardware refresh rate makes a substantial share of data center costs function more like opex than capex, then the companies funding it out of operations (especially operations of what are essentially monopoly businesses, in the sense of pricing power, even if those aren't the operations it powers specifically) are fine in the near-to-intermediate term, barring exogenous shocks to those other businesses. Oracle, funding it through a debt bonanza, is in a different position.
And Starlink / xAI is going to shoot them into space. We are simultaneously living in the future and the past.
I highly doubt that. They claim they want to shoot them into space, but I don’t believe a word of it until I see it happen (and see it work). It’s no more real than hyperloop.