The hedging technique is a cool demo too, but I’m not sure it’s practical.
At a high level it’s a bit contradictory; trying to reduce the tail latency of cold reads by doubling the cache footprint makes every other read even colder.
I understand the premise is “data larger than cache” given the clflush, but even then you’re spending 2x the memory bandwidth and cache pressure to shave ~250ns off spikes that only happen once every 15us. There’s just not a realistic scenario where that helps.
HFT especially is significantly more complex than a huge lookup table in DRAM. In the time you spend doing a handful of 70ns DRAM reads, your competitor has done hundreds of reads from cache and a bunch of math. It's just far better to work with what you can fit in cache, and to shrink what doesn't fit as much as possible.
That’s my main hang up as well. On one hand this is undeniably cool work, but on the other, efficient cache usage is how you maximize throughput.
This optimizes for (narrow) tail latency, but I do wonder at what performance cost. I would be super interested in hearing about real world use cases.
What’s better is to “race” against cache, which is 100x faster than DRAM. CPUs already do this for independent loads via out-of-order execution: while one load is stalled waiting for DRAM, another can hit the cache and compute can proceed in parallel. It's all handled at the microarchitectural level already.
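To make that distinction concrete, here's a toy sketch (mine, not from the video) contrasting independent loads, which the out-of-order core can keep in flight simultaneously, with a dependent pointer chase, where every stall serializes:

```cpp
#include <cstddef>
#include <vector>

// Independent loads: all addresses are known up front, so the out-of-order
// core can have many loads in flight at once, hiding a DRAM stall on one
// behind cache hits and arithmetic on the others.
long sum_independent(const std::vector<long>& v) {
    long s = 0;
    for (size_t i = 0; i < v.size(); ++i)
        s += v[i];            // no load depends on an earlier load's result
    return s;
}

// Dependent loads (pointer chasing): each address comes from the previous
// load's result, so stalls serialize. This is the pattern where a refresh
// hiccup shows up directly in the latency tail.
size_t chase(const std::vector<size_t>& next, size_t start, size_t hops) {
    size_t i = start;
    for (size_t h = 0; h < hops; ++h)
        i = next[i];          // must complete this load before issuing the next
    return i;
}
```

Only the second pattern has nothing for the core to overlap, which is why it is the interesting case for hedging.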
Refresh avoidance is a tangential thing the memory controller happens to be able to do in a scheme like that, but you’d really have to be looking at it in a vacuum to bill it as a benefit.
Like I said, it’s all about cache. You’re not going to DRAM if you actually care about performance fluctuations at the scale of refresh stalls.
"IBM z17 implements an enhanced redundant array of independent memory (RAIM) design with the following features: ... Staggered memory refresh: Uses RAIM to mask memory refresh latency."
Isn't that rather trivial as a source of tail latency, though? There are much worse spikes coming from other sources, e.g. power-management states within the CPU and possibly other hardware. At the end of the day, this is why simple microcontrollers are still preferred for hard-RT workloads. This work doesn't change that in any way.
https://news.ycombinator.com/item?id=43850950 (April 2025)
https://news.ycombinator.com/item?id=43847946 (April 2025)
https://news.ycombinator.com/item?id=42096833 (Nov 2024)
https://news.ycombinator.com/item?id=37275963 (Aug 2023)
https://news.ycombinator.com/item?id=35746140 (April 2023)
https://news.ycombinator.com/item?id=34537078 (Jan 2023)
https://news.ycombinator.com/item?id=33914274 (Dec 2022)
https://news.ycombinator.com/item?id=33311881 (Oct 2022)
https://news.ycombinator.com/item?id=30890360 (April 2022)
https://news.ycombinator.com/item?id=26628758 (March 2021)
https://news.ycombinator.com/item?id=26307811 (March 2021)
https://news.ycombinator.com/item?id=25561372 (Dec 2020)
https://news.ycombinator.com/item?id=24724281 (Oct 2020)
https://news.ycombinator.com/item?id=24458954 (Sept 2020)
https://news.ycombinator.com/item?id=24380545 (Sept 2020)
https://news.ycombinator.com/item?id=23170477 (May 2020)
The reason we haven't banned you yet is because you obviously know a lot of things that are of interest to the community. That's good. But the damage you cause here by routinely poisoning the threads exceeds the goodness that you add by sharing information. This is not going to last, so if you want not to be banned on HN, please fix it.
RAM Has a Design Tradeoff from 1966. I made another one on top.
The first tradeoff, of 6x fewer transistors for some extra latency, is immensely beneficial. The second, of reducing some of that extra latency for extra copies of static data, is beneficial only to some extremely niche application. Still a very educational video about modern memory architecture.
[EDIT: accidental extra copy of this comment deleted]
Laurie does an amazing job of reimagining Google's strange job-optimisation technique (for jobs bottlenecked on hard disk storage) that uses 2 machines to do the same job. The technique simply takes the result of whichever machine finishes first, discarding the slower job's results... It seems expensive in resources, but it works and allows high-priority tasks to run optimally.
Laurie re-imagines this process but for RAM!! In doing this she needs to deal with Cores, RAM channels and other relatively undocumented CPU memory management features.
She was even able to work out various undocumented CPU/RAM settings by using her tool to find where timing differences exposed various CPU settings.
She's turned "Tailslayer" into a lib now, available on Github, https://github.com/LaurieWired/tailslayer
You can see her having so much fun, doing cool victory dances as she works out ways of getting around each of the issues that she finds.
The experimentation, explanation and graphing of results is fantastic. Amazing stuff. Perhaps someone will use this somewhere?
As mentioned in the YT comments, the work done here is probably a Master's degree's worth of work, experimentation and documentation.
Go Laurie!
Update: found the bypass via the youtube blurb: https://github.com/LaurieWired/tailslayer
"Tailslayer is a C++ library that reduces tail latency in RAM reads caused by DRAM refresh stalls.
"It replicates data across multiple, independent DRAM channels with uncorrelated refresh schedules, using (undocumented!) channel scrambling offsets that works on AMD, Intel, and Graviton. Once the request comes in, Tailslayer issues hedged reads across all replicas, allowing the work to be performed on whichever result responds first."
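For a feel of the "hedged read" shape that quote describes, here's a minimal C++ sketch of the first-replica-wins idea. The names are my own invention, not Tailslayer's actual API, and a real implementation would use pre-pinned worker threads rather than spawning two threads per read:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <thread>

// Two threads read the same element from two replicas; the first to finish
// claims the "done" flag and publishes its value, and the loser's result
// is simply dropped.
uint64_t hedged_read(const uint64_t* replica_a, const uint64_t* replica_b, size_t idx) {
    std::atomic<bool> done{false};
    std::atomic<uint64_t> result{0};

    auto reader = [&](const uint64_t* replica) {
        uint64_t v = replica[idx];                      // may hit a refresh stall on real DRAM
        bool expected = false;
        if (done.compare_exchange_strong(expected, true))
            result.store(v, std::memory_order_release); // first finisher wins
    };

    std::thread a(reader, replica_a), b(reader, replica_b);
    a.join();
    b.join();                                           // joining makes the winner's store visible
    return result.load(std::memory_order_acquire);
}
```

Spawning threads per read costs microseconds, vastly more than the ~250ns being saved, which is why the actual library's approach of long-lived readers on separate channels is the only way the accounting can work out.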
1. Throw the video into NotebookLM - it gives transcripts of all YouTube videos (AFAIK) - go to Sources on the left and press the arrow key. Ask NotebookLM to give you a summary, discuss anything, etc.
2. Noticed that YouTube now has a little diamond icon with "Ask" next to it, between the Share icon and Save icon. This brings up Gemini and you can ask questions about the video (it has no internet access). This may be premium-only. I still prefer Claude for general queries over Gemini.
https://news.ycombinator.com/item?id=47713090
I agree, not everyone has 54 minutes to watch a video full of fluff (I tried, but only got so far, even on 1.5x speed).
Seems odd to me that all three architectures implement this yet all three leave it undocumented. Is it intended as some sort of debug functionality or what?
The three answers it found were:
- Avoiding lock-in to them: http://www.youtube.com/watch?v=KKbgulTp3FE&t=1914
- Competitive advantage: http://www.youtube.com/watch?v=KKbgulTp3FE&t=1852
- Perceived Lack of Use Case: http://www.youtube.com/watch?v=KKbgulTp3FE&t=1971
Those points do actually exist in the video, I checked. If there are more, I don't know about them, as I haven't yet watched the rest of the video.
The actual explanation starts a couple minutes later, around https://youtu.be/KKbgulTp3FE?t=1553. The short explanation is performance (essentially load balancing against multiple RAM banks for large sequential RAM accesses), combined with a security-via-obscurity layer of defense against rowhammer.
For anyone confused because they don't see the "Ask" button between the Share and Bookmark buttons...
It looks like you have to be signed-in to Youtube to see it. I always browse Youtube in incognito mode so I never saw the Ask button.
Another source of confusion is that some channels may not have it or some other unexplained reason: https://old.reddit.com/r/youtube/comments/1qaudqd/youtube_as...
But the “ask the LLM” thing is a sign of how off-kilter information passing has become in the current world. A lot of stuff is packaged deliberately inefficiently, because that is the way to monetise it, or sometimes just to game the search & recommendation systems so it reaches potentially interested people at all. Then we are encouraged to use a computationally expensive process to summarise it all, to distil the information back out.
MS's documentation for large chunks of Azure is that way, but with even less excuse (they aren't a content creator needing to drive interest by being a quirky presenter as well as a potential information source). Instead of telling me to ask Copilot to guess what I need to know, why not write some good documentation that you can reference directly (or that I can search through)? Heck, use Copilot to draft that documentation if you want to (but please have humans review the result for hallucinations, missed parts, and other inaccuracies before publishing).
This is the damage AI does to society. It robs talented people of appreciation. A phenomenal singer? Nah she just uses auto tune obviously. Great speech? Nah obviously LLM helped. Besides I don't have time to read it anyway. All I want is the summary.
It is sad that reading comprehension is dropping such that you interpreted my comment that way.
I kept squinting and scrutinizing it, looking for signs that it was rendered by a video model. Loss of coherence in long shots with continuity flaws between them, unrealistic renderings of obscure objects and hardware, inconsistent textures for skin and clothing, that sort of thing... nope, it was all real, just the result of a lot of hard work and attention to detail.
Trouble is, this degree of perfection is itself unrealistic and distracting in a Goodhart's Law sense. Musicians complain when a drum track is too-perfectly quantized, or when vocals and instruments always stay in tune to within a fraction of a hertz, and I do have to wonder if that's a hazard here. I guess that's where you're coming from? If you wanted to train an AI model to create this type of content, this is exactly what you would want to use as source material. And at that point, success means all that effort is duplicated (or rather simulated) effortlessly.
So will that discourage the next generation of LaurieWireds from even trying? Or are we going to see content creators deliberately back away from perfect production values in order to appear more authentic?
I like the video because I cant read a blog post in the background while doing other stuff, and I like Gadget Hackwrench narrating semi-obscure CS topics lol
You can consume technical content in the background?
I guess I'm only allowed to have The Masked Singer on while I make dinner.
For years I've been thinking "I should watch the WWDC videos because there's a lot of Really Important Information" in there, but... they're videos. In general I find that I can't pay attention to spoken word (videos, presentations, meetings) that contain important information, probably because processing it costs a lot more energy than reading.
But then I tune out / fall asleep when trying to read long content too, lmao. Glad I never did university or do uni level work.
This is the sort of thing which was done before in a world where there was NUMA, but that is easy. Just task-set and mbind your way around it to keep your copies in both places.
The crazy part of what she's done is how to determine that the two copies don't get hit by refresh cycles at the same time.
Particularly by experimenting on something proprietary like Graviton.
Tis just probabilities and unlikelihood of hitting a refresh cycle across that many memory channels all at once.
You sound like NUMA is dead; is this a bit of hyperbole, or would you really say there is no NUMA anymore? Honest question, because I am out of touch.
Home PCs don’t do NUMA as much anymore because of the number of cores and threads you can get on one core complex. The technology certainly still exists and is still relevant.
The results are impressive, but for the vast, vast majority of applications the actual speedup achieved is basically meaningless since it only applies to a tiny fraction of memory accesses.
For the use case Laurie mentioned - i.e. high-frequency trading - then yes, absolutely, it's valuable (if you accept that a technology which doesn't actually achieve anything beyond transmuting energy into money is truly valuable).
For the rest of us, the last thing the world needs is a new way to waste memory, especially given its current availability!
Can you give more context on this? Opus couldn't figure out a reference for it
https://en.wikipedia.org/wiki/Happy_Eyeballs is the usual name. It's not quite identical, since you often want to give your preferred transport a nominal headstart so it usually succeeds. But yes, there are some similarities -- you race during connection setup so that you don't have to wait for a connection timeout (on the order of seconds) if the preferred mechanism doesn't work for some reason.
The main term I've seen for this particular approach is "request hedging" (https://grpc.io/docs/guides/request-hedging/, which links to the paper by Dean and Barroso).
there is a ton of info you can pull from: SMBIOS, ACPI, MSRs, CPUID, etc. etc. about CPU/RAM topology and connectivity, latencies, etc.
isn't the info on what controller/RAM relationships exist somewhere in there, provided by firmware or platform?
i can hardly imagine it is not just plainly in there with the plethora of info in there...
there's SRAT/SLIT/HMAT etc. in ACPI, then there are MSRs with info (AMD exposes more than Intel, ofc, as always), and then there are registers on the memory controller itself, as well as socket-to-socket interconnects from UPI links...
it's just a lot of reading and finding bits here 'n there. LLMs are actually really good at pulling all sorts of stuff from various 6-10k-page documents if you are too lazy to dig yourself -_-
WTFV
Really enjoyed this video, and I'm pretty picky. I learned a lot, even though I already know (or thought I knew) quite a bit about this subject as it was a particular interest of mine in Comp Sci school. I highly recommend. Skip forward through chunks of the train part though where she is messing around. It does get more informative later though so don't skip all of the train part
1) Can we take this library and turn it into a generic driver or something that applies the technique to all software (kernel and userspace) running on the system? i.e., if I want to halve my effective memory in order to completely eliminate the tail-latency problem, without having to rewrite legacy software to implement this invention.
2) What model miniature smoke machine is that? I instruct volunteer firefighters and occasionally do scale model demos to teach ventilation concepts. Some research years back led me to the "Tiny FX" fogger which works great, but it's expensive and this thing looks even more convenient.
What I wished I had during this project is a hypothetical hedged_load ISA instruction: issue two requests to two memory controllers and drop the loser. That would let the strategy work on a single thread! Or, even better, integrating the behavior into the memory controller itself, which would be transparent to all software without recompilation. But you'd have to convince Intel/AMD/someone else :)
2. It’s called a “smokeninja”. Fairly popular in product photography circles, it’s quite fun!
Yeah it would be neat to just flip a BIOS switch and put your memory into "hedge" mode. Maybe one day we'll have an open source hardware stack where tinkerers can directly fiddle with ideas like this. In the meantime, thanks for your extensive work proving out the concept and sharing it with the world!
Given that the controller can already defer refresh cycles, and the logic to determine when that happens sounds fairly complex, I suspect that might already be in CPU microcode.
...which raises the tantalizing possibility that this lockstep-mirrored behavior might also be doable in microcode.
Really enjoyed the video and feel that I (not being in the IT industry) better understand CPUs and RAM now.
However, I do see at least 2 downsides to this method.
Number one, it is at least 2x the memory. Memory has for a decently long time been a large part of the cost of a computer, but I could see some people saying 'whatever, buy 8x'.
The second is data coherency. In a read-only environment this would work very nicely. In a write environment it would mean 2x the writes, and you'd have to wait for them all to complete, or somehow mark the replicas as not ready for the next group of reads. It would be OK if the read of that page came some period of time after the write, but it's another place where things could stall.
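One common way to handle that coherency problem is a seqlock-style version counter; this is my own sketch, not anything from the library, and it illustrates the 2x write traffic rather than removing it:

```cpp
#include <atomic>
#include <cstdint>

// Sketch: a slot replicated in two places, guarded by an odd/even sequence
// counter. The writer pays for both stores; readers retry if they observe
// a write in flight, so reads stay wait-free in the common case.
struct ReplicatedSlot {
    std::atomic<uint32_t> seq{0};
    uint64_t replica_a = 0;
    uint64_t replica_b = 0;

    void write(uint64_t v) {
        seq.fetch_add(1, std::memory_order_acq_rel);   // odd: update in progress
        replica_a = v;
        replica_b = v;                                  // 2x the write traffic
        seq.fetch_add(1, std::memory_order_release);    // even: both replicas valid
    }

    uint64_t read() const {
        for (;;) {
            uint32_t s1 = seq.load(std::memory_order_acquire);
            if (s1 & 1) continue;                       // writer mid-update; retry
            uint64_t v = replica_a;                     // either replica would do
            if (seq.load(std::memory_order_acquire) == s1)
                return v;                               // no write raced us
        }
    }
};
```

Even in this simplified form, every write touches both channels, which is exactly the throughput tax the comment is pointing at.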
Really liked her vid. She explained it very nicely. She exudes that sense of joy I used to have about this field.
(seems that in the earlier submission, https://news.ycombinator.com/item?id=47680023, jeffbee hinted that IBM zEnterprise is doing something to that effect)
That said, I'm not convinced this is a big issue in practice. If you really care about performance, you've got to avoid cache misses.
The refresh that we do is run in parallel on the memory arrays inside the RAM chips completely bypassing any of the related IO machinery.
However, I wonder if the core idea itself is useful in practice. With modern memory there are two main aspects it makes worse. First is cost: it needs double the memory for the same compute, and with memory costs already soaring, that's not good. The other main issue is throughput; I haven't put enough thought into that yet, but it feels like it requires more orchestration and increases costs there too.
Many of our maps' routes would be laid out in a predominantly east- or west-facing track to maximize how long we stayed within cache lines as we marched our rays up the screen.
So, we needed as much main memory bandwidth as we could get. I remember experimenting with cache line warming to try to keep the memory controllers saturated with work with measurable success. But it would have been difficult in Voxel Space to predict which lines to warm (and when), so nothing came of it.
Tailslayer would have given us an edge by just splitting up the scene with multiprocessing and with a lot more RAM usage and without any other code. Alas, hardware like that was like 15 years in the future. Le sigh.
That's fascinating to find out! I grew up a fan of Nova Logic, so I'll have to pay attention to this the next time I revisit their games.
Was this done for Comanche or did you also do this for Delta Force?
I did the first version of the matchmaking for the network play in Delta Force but didn't make it into the credits because I quit before it shipped. My psycho coworker built a custom web browser(!) that integrated directly with my from-scratch matchmaking server. At least they let me work in C for that project; most everything else I had to do for them was assembler because that was not a "sissy" programming language. That server code was by-far the coolest thing I wrote for many years afterward.
Unfortunately, my server code couldn't handle more than like 32 concurrents because the Windows NT 3.0 kernel would BSOD with more. My (extremely grumpy) manager and the Sega Saturn coder called me a few days after I had quit to ask how the code worked. I suspect I left data in the socket buffers too long (was trying to batch up my message broadcasting work at regular intervals) and the kernel panicked over that.
I recall learning later the TCP/IP stack was homegrown in NT by Microsoft at that time and they licensed a good one for later versions, so I can't be blamed, it wasn't me! :D
The reason facing east-west (or was it north-south, now I'm unsure) made such a difference in framerate was the color and height maps were ray marched in straight lines up from the bottom of the screen to the horizon. This meant you were zipping through the color map in straight lines, wrapping around to the other side if the ray went far enough.
When those straight lines lined up with the color and height map (north-south), life was good (and when a ray marched up a sheer canyon wall, life was VERY good.) But, when those straight lines went perpendicular (east-west) to the color and height map, you were blowing through the L2 cache constantly and going to main memory very often. I imagine on modern hardware these cache misses wouldn't amount to much measurable time, but on a 386dx with 8megs of RAM, the impact was very clear.
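The effect described above is easy to reproduce today: summing the same grid in the two traversal orders touches memory very differently even though the arithmetic is identical. A toy sketch (assuming a row-major n×n grid, not the actual Voxel Space code):

```cpp
#include <cstddef>
#include <vector>

// Row-order traversal: consecutive addresses, roughly one cache miss per
// cache line -- the "lined up with the map" fast case.
long sum_row_major(const std::vector<long>& grid, size_t n) {
    long s = 0;
    for (size_t y = 0; y < n; ++y)
        for (size_t x = 0; x < n; ++x)
            s += grid[y * n + x];
    return s;
}

// Column-order traversal: a stride of n*sizeof(long) per access, so for
// large n nearly every access misses -- the perpendicular slow case.
long sum_col_major(const std::vector<long>& grid, size_t n) {
    long s = 0;
    for (size_t x = 0; x < n; ++x)
        for (size_t y = 0; y < n; ++y)
            s += grid[y * n + x];
    return s;
}
```

Both functions compute the same sum; only the access order, and therefore the cache behavior, differs.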
Novalogic was the only programming job I ever had where I got my own office with a door. ;) When I was with them, they had a policy of one game developer per game which I never saw again. Maximum cowboy coder energy, good times.
I bet Citadel already has reached out to Laurie :)
Citadel executes trades in about 10 microseconds, so a 500 nanosecond reduced execution time is a 5% improvement. For a company which executes trades for hundreds of billions a day, this translates to real money.
Your sarcasm indicates that you have no clue as to how important such an improvement can be for some actors. Some do though; the repo has almost 100 forks and 2K stars after just two days.
this is unnecessary.
But all the accounts are old/legit so I think that you and me have just become paranoid...
It's like when you interact with any other piece of language oriented media.
TBH, I didn't watch the video because the title is too click-baity for me and it's too long. Instead, I looked at the benchmark results on the Github page and sure, it's fascinating how you can significantly(!) thin the latency distribution, just by using 10× more CPU cores/RAM/etc. Classic case of a bad trade-off.
And nobody talked about what we use RAM for, usually: Not to only store static data, but also to update it when the need arises. This scheme is completely impractical for those cases. Additionally, if you really need low latency, as others pointed out, you can go for other means of computation, such as FPGAs.
So I love this idea, I'm sure it's a fun topic to talk about at a hacker conference! But I'm really put off by the click-baity title of the video and the hype around it.
In all seriousness, agreed. The top comment at time of this writing seems like a poor summarizing LLM treating everything as the best thing since sliced bread. The end result is interesting, but neither this nor Google invented the technique of trying multiple things at once as the comment implies.
I think rather than AI, it reminds me of when (long before AI) a few colleagues would converge on an article to post supportive comments, in what felt like an attempt to manipulate the narrative. Even at concentrations I find surprisingly low, it would often skew my impression of the tone of the entire comment section in a strange way. I guess you could more generally describe the phenomenon as fan-club comments.
https://www.reddit.com/r/programming/comments/1sgtkdf/tailsl...
There are a few glazing comments there too though.
> Well he veered off of the technical and into the personal so I'm not surprised it's dead.
I don't know what he posted, but it is easy to see how a small fan group around Laurie can form?
She is an attractive girl not afraid to be cute (which is done so seldom by women in tech that I found a reddit thread trying to triangulate whether she is trans; I am not posting that to raise the question, but she piques people's interest), plus the impressively high effort put into niche topics, PLUS the impressively high production value to present all that.
i would note that it also appears to be wrong, reading laurie's reply, though i am not an expert. rude + wrong is a bad combo.
the next comment by jeffbee is also quite rude, and ignores most of laurie's reply in favor of insulting her instead. i dont think it is a mystery why jeffbee's comments were flagged...
Not slop, seems mildly interesting
> Hold on a second. That's a really bad excuse. And technology never got anywhere by saying I accept this and it is what it is.
I guess this would have been a nice way to reduce HDD latency as a new RAID mode... oh well.