LLM Architecture Gallery
459 points by tzury 2 days ago | 34 comments

libraryofbabel 16 hours ago
This is great - always worth reading anything from Sebastian. I would also highly recommend his Build an LLM From Scratch book. I feel like I didn’t really understand the transformer mechanism until I worked through that book.

On the LLM Architecture Gallery, it’s interesting to see the variations between models, but I think the 30,000ft view of this is that in the last seven years since GPT-2 there have been a lot of improvements to LLM architecture but no fundamental innovations in that area. The best open weight models today still look a lot like GPT-2 if you zoom out: it’s a bunch of attention layers and feed forward layers stacked up.
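
To make that concrete: the repeating unit really is just self-attention plus a feed-forward net, each wrapped in a residual. A minimal PyTorch sketch (my own illustration, with embeddings and the causal mask omitted for brevity; this is not GPT-2's actual code):

    import torch, torch.nn as nn

    class Block(nn.Module):
        # the repeating unit: self-attention, then a feed-forward net, each with a residual
        def __init__(self, d, heads):
            super().__init__()
            self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)
            self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
            self.ff = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

        def forward(self, x):
            h = self.ln1(x)
            a, _ = self.attn(h, h, h, need_weights=False)
            x = x + a
            return x + self.ff(self.ln2(x))

    # "stack it up" is most of the story; GPT-2 small is roughly 12 such blocks at d=768
    model = nn.Sequential(*[Block(768, 12) for _ in range(12)])
    print(model(torch.randn(1, 16, 768)).shape)   # torch.Size([1, 16, 768])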

Another way of putting this is that the astonishing improvements in the capabilities of LLMs that we’ve seen over the last 7 years have come mostly from scaling up and, critically, from new training methods like RLVR, which is responsible for coding agents going from barely working to amazing in the last year.

That’s not to say that architectures aren’t interesting or important, or that the improvements aren’t useful. But it is a little bit of a surprise, even though it shouldn’t be at this point: it’s probably just a version of the Bitter Lesson.

reply
imjonse 11 hours ago
> On the LLM Architecture Gallery, it’s interesting to see the variations between models, but I think the 30,000ft view of this is that in the last seven years since GPT-2 there have been a lot of improvements to LLM architecture but no fundamental innovations in that area.

After years of showing up in papers and toy models, hybrid architectures like Qwen3.5 contain one such fundamental innovation: linear attention variants that replace the core of the transformer, the self-attention mechanism. In Qwen3.5 in particular, only one in every four layers is a full self-attention layer.
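
If it helps, the trick that makes "linear" attention linear in sequence length is just associativity of matrix products: you never materialize the n x n attention matrix. A toy non-causal numpy sketch with an elu(x)+1 feature map (my own illustration, not Qwen's actual kernel):

    import numpy as np

    n, d = 1024, 64                          # sequence length, per-head dim
    rng = np.random.default_rng(0)
    Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))

    # softmax attention: materializes an n x n matrix, cost ~ O(n^2 * d)
    A = np.exp(Q @ K.T / np.sqrt(d))
    softmax_out = (A / A.sum(axis=1, keepdims=True)) @ V

    # linear attention with a kernel feature map phi (here elu(x)+1):
    # associativity lets you compute phi(K).T @ V first, cost ~ O(n * d^2), no n x n matrix
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    num = phi(Q) @ (phi(K).T @ V)                  # (n, d)
    den = phi(Q) @ phi(K).sum(axis=0)[:, None]     # (n, 1) normalizer
    linear_out = num / den

    print(softmax_out.shape, linear_out.shape)     # (1024, 64) (1024, 64)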

MoEs are another fundamental innovation - also from a Google paper.

reply
libraryofbabel 11 hours ago
Thanks for the note about Qwen3.5. I should keep up with this more. If only it were more relevant to my day to day work with LLMs!

I did consider MoEs but decided (pretty arbitrarily) that I wasn’t going to count them as a truly fundamental change. But I agree, they’re pretty important. There’s also RoPE, perhaps slightly less of a big deal but still a big difference from the earlier models. And of course lots of brilliant inference tricks like speculative decoding that have helped make big models more usable.
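
For anyone who hasn’t looked at RoPE: the core operation is tiny. Each pair of query/key dimensions gets rotated by a position-dependent angle, so dot products end up depending only on relative position. A minimal numpy sketch (using one common dim-pairing convention, not any specific model’s exact code):

    import numpy as np

    def rope(x, pos, base=10000.0):
        # rotate pairs of dimensions by a position-dependent angle
        half = x.shape[-1] // 2
        freqs = base ** (-np.arange(half) / half)    # one frequency per dim pair
        angles = pos * freqs
        x1, x2 = x[..., :half], x[..., half:]
        return np.concatenate([x1 * np.cos(angles) - x2 * np.sin(angles),
                               x1 * np.sin(angles) + x2 * np.cos(angles)], axis=-1)

    q = np.ones(64)
    print(np.allclose(rope(q, 0), q))   # position 0 is the identity rotation -> True
    # dot products of rotated q and k depend only on the relative offset between positions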

reply
phanarch 7 hours ago
I'd push back slightly on the "no fundamental innovations" read, though. The innovations that stuck (MoE, GQA, RoPE) are almost entirely ones that improve GPU utilization: better KV-cache efficiency, more parallelism in attention, cheaper serving per parameter. Mamba and SSM-based hybrids are interesting but kept running into hardware friction.
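
The KV-cache point is easy to see with back-of-the-envelope numbers (illustrative config, not any particular model):

    # KV cache = 2 (K and V) * n_kv_heads * head_dim * n_layers * context_length * bytes
    head_dim, n_layers, ctx_len, bytes_fp16 = 128, 48, 128_000, 2

    def kv_cache_gb(n_kv_heads):
        return 2 * n_kv_heads * head_dim * n_layers * ctx_len * bytes_fp16 / 1e9

    print(kv_cache_gb(32))  # MHA, 32 KV heads: ~100 GB for one 128k-token context
    print(kv_cache_gb(8))   # GQA, 8 KV heads shared by 32 query heads: ~25 GB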
reply
iroddis 19 hours ago
This is amazing, such a nice presentation. It reminds me of the Neural Network Zoo [1], which was also a nice visualization of different architectures.

[1] https://www.asimovinstitute.org/neural-network-zoo/

reply
khafra 5 hours ago
Nice! Last time I had a custom temporary tattoo made, I had to copy and paste from Attention is All You Need; this provides a much cleaner and more varied source.
reply
bicepjai 11 hours ago
Currently working on a similar project for myself. This looks like a great resource. Thanks for sharing. https://llm-lab.bicepjai.com/
reply
wood_spirit 20 hours ago
Lovely!

Is there a sort order? It would be so nice to understand the threads of evolution and revolution in the progression, perhaps as a family tree and influence layout. It would also be nice to have a scaled view so you can get a sense of the difference in sizes over time.

reply
krackers 19 hours ago
There is https://magazine.sebastianraschka.com/p/technical-deepseek which shows the evolution of the DeepSeek family.
reply
andai 11 hours ago
> The goal of the proof verifier (LLM 2) is to check the generated proofs (LLM 1), but who checks the proof verifier? To make the proof verifier more robust and prevent it from hallucinating issues, they developed a third LLM, a meta-verifier.
reply
krackers 10 hours ago
The one thing I didn't quite understand (and it wasn't mentioned in their paper unless I missed it) is why you can't keep stacking turtles. You probably get diminishing returns at some point, but why not have a meta-meta-verifier?
reply
gasi 19 hours ago
So cool — thanks for sharing! Here’s a zoomable version of the diagram: https://zoomhub.net/LKrpB
reply
imfing 5 hours ago
Thanks for putting all these model architectures together!
reply
cagz 8 hours ago
It is perhaps my eyes, but when I zoom in enough to make it readable, it gets blurry. A higher-res image would be much appreciated. Great idea otherwise.
reply
Slugcat 17 hours ago
What tool was used to draw the diagrams?
reply
travisgriggs 17 hours ago
Darn. I clicked here hoping we were having LLMs design skyscrapers, dams, and bridges.

I even brought my popcorn :(

reply
nxobject 17 hours ago
Thank you so much! As a (bio)statistician, I've always wanted a "modular" way to go from "neural networks approximate functions" to a high-level understanding about how machine learning practitioners have engineered real-life models.
reply
LuxBennu 17 hours ago
Interesting collection. The architecture differences show up in surprising ways when you actually look at prompt patterns across models. Longer context windows don't just let you write more, they change what kind of input structure works best.
reply
jasonjmcghee 16 hours ago
What's the structurally simplest architecture that has worked to a reasonably competitive degree?
reply
loveparade 16 hours ago
Competitiveness doesn't really come from architecture, but from scale, data, and fine-tuning. There has been little innovation in architecture over the last few years, and most innovations are aimed at making training or inference more efficient to run (fitting in more data), not at making models "fundamentally smarter".
reply
bigyabai 16 hours ago
If your definition of "competitive" is loose enough, you can write your own Markov chain in an evening. Transformer models rely on a lot of prior art that has to be learned incrementally.
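
(For the flavor of it, a word-level bigram chain really is only a few lines. Toy sketch:)

    import collections, random

    def train(text):
        # map each word to the list of words observed to follow it
        chain = collections.defaultdict(list)
        words = text.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
        return chain

    def generate(chain, start, length=20):
        word, out = start, [start]
        for _ in range(length):
            followers = chain.get(word)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

    chain = train("the cat sat on the mat and the dog sat on the rug")
    print(generate(chain, "the"))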
reply
jasonjmcghee 16 hours ago
Not that loose lol.

I’m thinking it’s still llama / dense decoder only transformer.

reply
jrvarela56 17 hours ago
Would be awesome to see something like this for agents/harnesses
reply
vinhnx 7 hours ago
I think Cognition DeepWiki's or Google CodeWiki's code map does generate an architecture map (Mermaid style). E.g.: https://deepwiki.com/openai/codex#project-purpose-and-archit...
reply
charcircuit 18 hours ago
I'm surprised at how similar all of them are with the main differences being the size of layers.
reply
hrmtst93837 6 hours ago
Most of the arch work is just scaling knobs.

If you swap in weird layer types or move the objective much, people run into ugly failure modes fast, so the field keeps circling the same transformer blocks and then markets the change as novel when it's mostly a training and compute tradeoff.

reply
arikrahman 15 hours ago
Thank you for the high quality diagrams!
reply
mvrckhckr 20 hours ago
What a great idea and nice execution.
reply
neuroelectron 16 hours ago
An older post from this blog (the linked article was updated recently): https://news.ycombinator.com/item?id=44622608
reply
elophanto_agent 4 hours ago
[dead]
reply
stainlu 14 hours ago
[flagged]
reply
lambda 13 hours ago
Where are you seeing dense? Most of the larger competitive models are sparse. Sure, the smaller models are dense, but over 30B it's pretty much all sparse MoE.
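
(The routing idea behind sparse MoE is itself simple: a gate scores all experts per token and only the top-k actually run. A toy numpy sketch, not any particular model's gating:)

    import numpy as np

    def moe_forward(x, gate_w, experts, k=2):
        logits = x @ gate_w                        # one score per expert
        top = np.argsort(logits)[-k:]              # indices of the k highest-scoring experts
        w = np.exp(logits[top]); w /= w.sum()      # softmax over just the chosen experts
        return sum(wi * experts[i](x) for wi, i in zip(w, top))

    rng = np.random.default_rng(0)
    d, n_experts = 16, 8
    experts = [lambda v, W=rng.standard_normal((d, d)): v @ W for _ in range(n_experts)]
    gate_w = rng.standard_normal((d, n_experts))
    out = moe_forward(rng.standard_normal(d), gate_w, experts)   # only 2 of the 8 experts ran
    print(out.shape)                                             # (16,)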

And there are still plenty of hybrid architectures. Nemotron 3 Super 120B A12B just came out; it's mostly Mamba with a few attention layers, and it's pretty competitive for its size class.

But yeah, these different architectures seem to be relatively small micro-optimizations for how a model performs on different hardware, or differences in tradeoffs for how it scales with the context window; most of the actual differentiation seems to be in the training pipeline.

We are seeing substantial increases in performance without continuing to scale up further: we've hit 1T parameters in open models, but smaller models keep outperforming them thanks to better and better training pipelines.

reply
jawarner 13 hours ago
Looks like this may have received the HN Hug of Death. I'm getting a "Too Many Requests" error trying to load the images.
reply
brianjking 13 hours ago
I'm getting that trying to load the content at all, text included.
reply
useftmly 16 hours ago
[dead]
reply
isotropic 20 hours ago
[dead]
reply
docybo 20 hours ago
[dead]
reply
SideLineLabs 21 hours ago
[flagged]
reply
FailMore 21 hours ago
Thanks! This is cool. Can you tell me if you learnt anything interesting/surprising when pulling this together? As in did it teach you something about LLM Architecture that you didn't know before you began?
reply
celltalk 9 hours ago
We’re literally seeing digital evolution in real time. These are basically primitive life forms, like bacteria, evolving with just the tiniest differences between them.

Right now we’re engineering every bit of it to make it better, but in the long run this is unsustainable. It’s going to get so complex that even these digital life forms won’t be able to understand their own digital DNA, just like us.

We know we have DNA, and we can measure every letter, but that doesn’t mean we understand what’s going on in our 14 trillion cells and how each and every one of them is regulated.

I think this analogy explains not only us, or the digital beings we see today; it explains everything, quite literally. Still, it would be amazing to think about these systems from the perspective of biology and try to understand the parts analogous to the existing framework we already have. Then we might figure out what to optimize better. For instance, if we figure out that a certain part of a layer corresponds to “genes”, then we might find out there is alternative splicing within it. Wild, but worth a shot.

reply