AWS says data center overheating in North Virginia disrupts services
64 points by christhecaribou 20 hours ago | 30 comments
https://health.aws.amazon.com/health/status?t=2026-05-07

https://www.cnbc.com/2026/05/08/aws-outage-data-center-fandu...

https://www.theregister.com/off-prem/2026/05/08/aws-warns-of...


fabian2k 2 hours ago
I thought cooling was pretty much pre-planned in any data center, and you simply don't install more stuff than you can cool?

So did some cooling equipment fail here or was there an external reason for the overheating? Or does Amazon overbook the cooling in their data centers?

reply
DevelopingElk 2 hours ago
One of the data center's cooling loops broke.
reply
bdangubic 51 minutes ago
No backups?
reply
bradgessler 28 minutes ago
What happens when the backup breaks?
reply
noir_lord 10 minutes ago
You have a backup for the backup's backup.

Turtles all the way down.

At AWS scale even unlikely hardware events become more common I guess.

reply
minimaltom 13 minutes ago
They absolutely have backups, I presume they were ineffective or also down for _reasons_.
reply
AdamJacobMuller 48 minutes ago
This is almost definitely an issue of equipment failure.

Cooling in datacenters is, like everything else, both over- and under-provisioned.

It's overprovisioned in the sense that the big heat-exchange units are N+1 (or, in very critical and smaller-load facilities, 2N/3N). This is done because you need to regularly take these down for maintenance, they have a relatively high failure rate compared to traditional DC components, and they require mechanical repairs that need specialized labor and long lead times. In a bigger facility it's not uncommon for cooling to be N+3 or more as N gets bigger, because you're effectively always servicing something, or have something down waiting for a blower assembly which literally needs to be made by a machinist with a lathe because the part doesn't exist anymore. That's still cheaper than replacing the whole unit.

The systems are also under-provisioned in the sense that if all the compute capacity in the facility suddenly went from average power draw to 100% power draw, you would overload the cooling capacity; you would also commonly overload things in the electrical and other paths. Oversubscription is just the nature of the industry.

In general neither of these things poses a real problem, because compute loads don't spike to 100% of capacity, and when they do spike, they don't spike for terribly long, and nobody builds facilities on a knife-edge of cooling or power capacity.
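To put rough numbers on the over/under-provisioning point, here's a back-of-envelope sketch. All figures are invented for illustration; none are real AWS numbers:

```python
# Back-of-envelope cooling provisioning. All numbers are hypothetical.
avg_draw_kw = 8_000       # typical aggregate IT heat load
peak_draw_kw = 28_000     # if every node ran flat out at max power
unit_capacity_kw = 4_000  # one big heat-exchange unit

# Size for 200% of average load, then add N+2 redundancy.
n_required = -(-(2 * avg_draw_kw) // unit_capacity_kw)  # ceiling division
installed_units = n_required + 2
installed_kw = installed_units * unit_capacity_kw

print(f"{installed_units} units, {installed_kw} kW installed")
print("covers 200% of average:", installed_kw >= 2 * avg_draw_kw)  # True
print("covers theoretical peak:", installed_kw >= peak_draw_kw)    # False
```

The facility is simultaneously overprovisioned (far more cooling than the average load needs) and underprovisioned (not enough if everything runs flat out at once).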

The problem comes when you have the intersection of multiple events.

You designed your cooling system to handle 200% of average load which is great because you have lots of headroom for maintenance/outages.

Repair guy comes on Tuesday to do work on a unit and finds a bad bearing, has to get it from the next state over so he leaves the unit off overnight to not risk damaging the whole fan assembly (which would take weeks to fabricate).

The two adjacent cooling units are now working JUST A BIT harder to compensate, and one of them also has a motor which is just slightly imbalanced, or a fuse which is loose and warming up a bit; now, with an increased duty cycle, that thing which worked fine for years goes pop.

Now you're minus two units in an N+2 facility. Not really terrible, remember you designed for 200% of average load.

That 3rd unit on the other side of the first failed unit, now under way more load, also faults. You're now minus 3 in an N+2 facility.

Still, not catastrophic because really you designed for 200% of average load.

The thing is, it's now 4AM, the onsite ops guy can't fix these faults and needs to call the vendor who doesn't wake up till 7AM and won't be onsite till 9.

Your load starts ramping up.

Everything up above happens daily in some datacenter in the USA. It happens in every datacenter probably once a year.

What happens next is the confluence of events which puts you in the news.

One of your bigger customers decides now is a great time to start a huge batch processing job. Some fintech wants to run a huge model before market open or some oil firm wants to do some quick analysis of a new field.

They spin up 10000 new VMs.

Normally, this is fine, you have the spare capacity.

But, remember, you planned for 200% of AVERAGE load, and these are not nodes which are busy but not terribly busy; these are nodes doing intense, optimized number-crunching work, which means they draw max power and thus expel max waste heat.

Not only has your load in terms of aggregate number of machines spiked but their waste heat impact is also greater on average.

Boom, cascading failure, your cooling is now N-4.

Server fans start ramping up faster which consumes more power.

Your cooling is now N-5.

Alarms are blaring all over the place.

Safeties on the cooling units start to trip as they exceed their load and refrigerant pressures rise.

Your cooling is now N-6.

Your cooling is now N-7.

Your cooling is now 0.
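The step-by-step failure above is essentially a load-redistribution cascade, and can be sketched as a toy model (unit ratings and trip thresholds are made up for illustration):

```python
# Toy cascade: the facility heat load is shared across surviving cooling
# units; any unit whose share exceeds its trip threshold goes offline,
# shifting yet more load onto the rest. All numbers are made up.

def cascade(unit_ratings_kw, load_kw, trip_factor=1.1):
    """Return the units still online once no further safeties trip."""
    alive = list(unit_ratings_kw)
    while alive:
        share = load_kw / len(alive)  # load spread evenly, for simplicity
        survivors = [r for r in alive if share <= r * trip_factor]
        if len(survivors) == len(alive):
            break  # stable: nobody else trips
        alive = survivors
    return alive

# Six 4 MW units handle a 20 MW load with room to spare:
print(len(cascade([4000] * 6, 20000)))  # 6
# A mixed fleet under a 22 MW spike: the weaker units trip first, then
# the redistributed load takes out the rest.
print(len(cascade([4000] * 4 + [3000] * 2, 22000)))  # 0
```

The second call shows the key dynamic: losing the first units doesn't just reduce capacity, it raises the load on every survivor, which is why the failure accelerates rather than stabilizing.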

reply
minimaltom 10 minutes ago
This is a great writeup! thank you!!

Reminds me of when I did Noogler training back in the day; one of the talks described a cascading failure at a datacenter, starting with a cat which got too curious near a power conditioner and briefly conducted.

reply
PunchyHamster 22 minutes ago
The cooling units don't fail just because they get to 100% duty cycle. That's pretty much "normal operation"; you just get... higher efficiency because the cooling side is warmer.
reply
AdamJacobMuller 17 minutes ago
Of course not. They fail above 100%.

Some fail below 100% too.

reply
fabian2k 44 minutes ago
I'd expect someone like AWS to just throttle machines before overloading their cooling, because they probably can do that, while e.g. a data center that just rents out space can't really throttle its customers nicely.
reply
cperciva 20 minutes ago
Reducing clock speeds, even if they could do that -- and I'm not sure they can, given how Nitro is designed -- would be problematic since a lot of customer workloads assume homogeneous nodes.

But they did load-shed. Perhaps not soon enough, but the reason this is publicly known is because they reduced the amount of heat being produced.

reply
AdamJacobMuller 18 minutes ago
It's harder and harder to throttle machines when hardware segmentation capabilities effectively pass hardware components through "intact".

A decade ago it was trivial to just tell the hypervisor to reduce the cpu fraction of all VMs by half and leave half unallocated. Now, it's much more complicated and definitely would be user visible.
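The decade-ago approach described here can be sketched in cgroups-v2 terms, where a CPU quota is expressed as a "cpu.max" string of `<quota_us> <period_us>`. The function name and scenario are illustrative; real hypervisors expose this differently:

```python
# Sketch: halve every VM's CPU quota, cgroups-v2 "cpu.max" style.
# Purely illustrative of the old "reduce the cpu fraction" approach.

PERIOD_US = 100_000  # the standard 100 ms scheduling period

def throttled_cpu_max(vcpus, fraction=0.5):
    """cpu.max string granting each VM `fraction` of its vCPU allocation."""
    quota_us = int(vcpus * PERIOD_US * fraction)
    return f"{quota_us} {PERIOD_US}"

print(throttled_cpu_max(8))       # "400000 100000": 4 cores' worth
print(throttled_cpu_max(8, 1.0))  # "800000 100000": full allocation
```

With SR-IOV-style passthrough and dedicated-core instance types there's no such knob to turn without the customer noticing, which is the point being made above.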

reply
foota 34 minutes ago
Shouldn't there be a feedback system here preventing the scheduling of loads when cooling is degraded?
reply
AdamJacobMuller 20 minutes ago
With hyperscalers for sure.

But this is the physical world, shit happens.

The algorithm didn't know that fuse was loose: fine at 50% duty cycle, but high-resistance and going to blow at 100%.
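A feedback gate along the lines foota suggests might look like the following sketch. Everything here (function name, thresholds, numbers) is hypothetical; AWS's actual placement logic is not public:

```python
# Hypothetical admission-control check: refuse new placements when the
# projected heat load would not fit within degraded cooling capacity.

def admit(requested_kw, committed_kw, cooling_online_kw,
          redundancy_reserve_kw, derate=0.8):
    """True if the placement fits the budget: online cooling minus the
    redundancy reserve, derated by a safety margin."""
    budget = (cooling_online_kw - redundancy_reserve_kw) * derate
    return committed_kw + requested_kw <= budget

# Healthy facility (24 MW cooling online, 4 MW held in reserve):
print(admit(500, 14000, 24000, 4000))  # True
# Two units tripped (only 16 MW online): the same request is refused.
print(admit(500, 14000, 16000, 4000))  # False
```

The catch is exactly the one noted above: such a gate only knows about capacity it can measure, not the loose fuse that's fine at 50% duty cycle and waiting to blow at 100%.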

reply
OhMeadhbh 4 minutes ago
Just imagine how bad this would be if global warming wasn't a Chinese hoax.
reply
Havoc 2 hours ago
Could someone explain to me why they don't build these things near oceans? Like nuclear plants, which need plenty of cooling capacity too.

A two-loop cycle with a heat exchanger to get rid of the heat.

reply
mandevil 2 hours ago
So Ashburn VA is a datacenter hub because the very first non-government Internet Exchange Point (IXP) anywhere in the world was there (https://en.wikipedia.org/wiki/MAE-East). Back in the 1990s something like half of all internet traffic worldwide hit MAE-East. That in turn made AWS put their first region there (us-east-1 preceded eu-west-1 by 2 years and us-west-1 by 3 years). Then, because there were lots of people who knew how to build DCs, and lots of vendors who knew how to supply them, the Dulles Corridor became a major hub for lots of companies' datacenters. For AWS, because us-east-1 was the first, it's by far the most gnarly and weird, and a lot of control planes for other AWS services end up relying on it. Which is why it goes down more often than other regions, and when it does go down it makes national news, unlike, say, eu-south-2 in Spain.

But NoVA is basically the same sort of economic cluster that Paul Krugman won his Nobel Prize in Economics for studying, just for datacenters, not factories.

reply
jjmarr 56 minutes ago
Oceans have salt, and saltwater is worse for electronics than fresh water. You also need sufficient water depth, otherwise the water will warm to surface temperature. It also needs to be price-competitive with traditional evaporative cooling.

Toronto is the textbook example of this working. It's on a freshwater lake that is deep relatively close to the shore, and the downtown has expensive real estate blocking traditional methods.

https://en.wikipedia.org/wiki/Deep_Lake_Water_Cooling_System

reply
dpe82 45 minutes ago
In a proper 2-loop cooling system, the primary loop (with direct electronics contact) and secondary loop (with seawater/external cooling source) are hydraulically isolated by a heat exchanger. The salt water or whatever never gets anywhere near the electronics.
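To first order, sizing the secondary loop is just an energy balance, Q = m_dot * c_p * dT. A sketch with invented numbers:

```python
# Secondary-loop flow needed to reject a given heat load.
# Q = m_dot * c_p * dT  =>  m_dot = Q / (c_p * dT). Numbers are made up.

CP_WATER = 4.186  # kJ/(kg*K); seawater's specific heat is close to this

def secondary_flow_kg_s(heat_load_kw, temp_rise_k):
    """Mass flow (kg/s) to carry heat_load_kw with temp_rise_k of warming."""
    return heat_load_kw / (CP_WATER * temp_rise_k)

# Rejecting 10 MW while letting the seawater warm by 6 K:
print(round(secondary_flow_kg_s(10_000, 6.0)))  # 398 kg/s
```

The heat exchanger keeps the loops hydraulically separate while the secondary side hauls the heat away at roughly this rate.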
reply
mschuster91 35 minutes ago
The problem is, it's still in contact with something, even if it's just the secondary loop. Saltwater is not just incredibly aggressive toward metal; the major problem with using it for cooling is fouling. Fish, mussels, algae, debris: there are a lot of things that can clog up your entire setup.
reply
ikr678 2 hours ago
Off the top of my head: a water system with ocean levels of salt is much more expensive to maintain (even the secondary loop).

Coastal land is much more expensive. If you go to a remote coastal site, you probably won't have as good access to power.

Coastal sites are usually exposed to more severe weather events.

Other fun unpredictable things, e.g. the Diablo Canyon nuclear facility has had issues with debris and jellyfish migration blocking their saltwater cooling intake.

https://www.nbcnews.com/news/world/diablo-canyon-nuclear-pla...

reply
kinow 2 hours ago
I had a class in my master's about data centers (HPC Infrastructures). The professor used some data centers somewhere in the middle of the USA, in an area with hot weather, as an example. He compared that with an ideal scenario (weather, power source, etc.).

In one of the slides, there were factors that influence the decision of where to build a data center, and several of the items involved finding a place with enough space and skilled people to work at the data center. He also commented that sometimes there is politics involved in choosing the place for the next data center.

reply
tempaccount5050 14 minutes ago
Lots of proposals to build them near Lake Michigan recently, but the residents of Wisconsin only want auto parts stores and paper mills. Data centers have been completely demonized. Cities and counties are passing no-data-center laws even though it's the perfect place for them.
reply
arjie 31 minutes ago
Amusingly, I've been part of two critical heat-related downtime incidents at two different datacenters: one when Hosting.com's SOMA datacenter got so hot that they were using hoses on the roof to cool it down, and the second when Alibaba's Chai Wan datacenter got so hot that everything running there went down, including the control plane. So I imagine proximity to the ocean does not yield any additional advantage in terms of emergency heat sinking. You have x capacity to pump heat out, and it doesn't matter if you're next to the sea or in the middle of Nebraska, because your entire system needs to be built to be rated for some performance.
reply
PunchyHamster 20 minutes ago
yeah but capacity is easier/cheaper to build/overbuild if you can access cold-ish water at all times
reply
sheept 2 hours ago
This is just a guess, but land near oceans is more expensive/populated, and water is comparatively cheap
reply
merek 20 hours ago
Related:

AWS EC2 outage in use1-az4 (us-east-1)

https://news.ycombinator.com/item?id=48057294

reply
tailscaler2026 42 minutes ago
us-east-1 is down? shocking! stop putting SPOF services there. this location has had frequent issues for the past 15 years.
reply
tcp_handshaker 2 hours ago
I bet the post-mortem will say vibe coding confused Fahrenheit and Celsius, and we run too hot...
reply