Show HN: Microgpt is a GPT you can visualize in the browser
268 points by b44 2 days ago | 25 comments
very much inspired by karpathy's microgpt of the same name. it's (by default) a 4000 param GPT/LLM/NN that learns to generate names. this is sorta an educational tool in that you can visualize the activations as they pass through the network, and click on things to get an explanation of them.

kengoa 14 hours ago
Amazing work! Reminded me of LLM Visualization (https://bbycroft.net/llm), except this is a lot easier to wrap my head around and I can actually run the training loops, which makes sense given the simplicity of the original microgpt.

To give a sense of what the loss value means, maybe you could add a small explainer section as a question and include this explanation from Karpathy’s blog:

> Over 1,000 steps the loss decreases from around 3.3 (random guessing among 27 tokens: −log(1/27)≈3.3) down to around 2.37.

to reiterate that the model is being trained to predict the next token out of 27 possible tokens and is now doing better than the baseline of random guessing.
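
For anyone who wants to check the 3.3 figure, it drops straight out of the cross-entropy of a uniform guess. A quick sketch, assuming the 27-token vocabulary from the post:

    import math

    vocab_size = 27                            # 27 possible next tokens, per the quoted line
    baseline = -math.log(1.0 / vocab_size)     # cross-entropy of a uniform random guess
    print(round(baseline, 2))                  # 3.3 -- roughly where an untrained model starts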

reply
interloxia 14 hours ago
The linked inspiration comes with a good blog post on microgpt implemented in Python.

https://karpathy.github.io/2026/02/12/microgpt/

It was submitted to hn a few days ago but only received a few comments. https://news.ycombinator.com/item?id=47000263

reply
krackers 24 hours ago
There used to be this page that showed the activations/residual stream from gpt-2 visualized as a black-and-white image. I remember it being neat how you could slowly see order forming from seemingly random activations as it progressed through the layers.

Can't find it now though (maybe the link rotted?), anyone happen to know what that was?

reply
RugnirViking 14 hours ago
I was a little confused by "see, it's much better" when the output is stuff like isovrak and kucey. What is it supposed to be generating?
reply
b44 13 hours ago
the untrained model is literally just generating random characters, whereas your examples are at least pronounceable. you can add more layers to get progressively better results.
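
to make that concrete, an untrained model is basically drawing letters uniformly at random - something like this (just an illustration, not the site's actual sampling code):

    import random
    import string

    # before training: every letter is (roughly) equally likely,
    # so sampled "names" are just random character soup
    print("".join(random.choice(string.ascii_lowercase) for _ in range(7)))

    # after training, the next-character probabilities favor common letter
    # patterns, which is why samples like "isovrak" read as pronounceable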
reply
lucrbvi 14 hours ago
It's just hallucinating training data; the model is very small, so it cannot be useful at all
reply
armcat 2 hours ago
Really nicely presented, well done!
reply
prakashdep 5 hours ago
It reminds me of the anything+GPT era of 2022-2024
reply
stevage 19 hours ago
I'd love to understand how LLMs work, but this site assumed a bit too much knowledge for me to get much from it. Looks cool though.
reply
alansaber 13 hours ago
I think this blog post in particular might be helpful here https://sebastianraschka.com/blog/2023/self-attention-from-s...
reply
BloondAndDoom 19 hours ago
This is the best content I’ve found to learn how LLMs really work: https://youtu.be/7xTGNNLPyMI?si=Gk0u4suz8pv39tP4
reply
ramon156 13 hours ago
My Android phone was not a fan of this site, but on my desktop it works great! Cool stuff
reply
umairnadeem123 18 hours ago
nice. visualizing the prompt->tool->output graph is underrated, it makes failure modes (and cost) obvious. do you track token/call cost per node + cache hits, or is it purely structural right now? also curious if you let users diff two runs (same prompt, different model/tool) and see which node diverged first.
reply
keepamovin 12 hours ago
I can't help but think there has to be a cheaper way to LLM.
reply
kfsone 2 days ago
Minor nit: In familiarity, you gloss over the fact that it's character- rather than token-based, which might be worth a shout out:

"Microgpt's larger cousins using building blocks called tokens representing one or more letters. That's hard to reason about, but essential for building sentences and conversations.

"So we'll just deal with spelling names using the English alphabet. That gives us 26 tokens, one for each letter."

reply
mips_avatar 2 days ago
Using ASCII characters is a simple form of tokenization with less compression
reply
b44 2 days ago
hm. the way i see things, characters are the natural/obvious building blocks and tokenization is just an improvement on that. i do mention chatgpt et al. use tokens in the last q&a dropdown, though
reply
msla 2 days ago
About how many training steps are required to get good output?
reply
alansaber 13 hours ago
Depends on the model size, batch size, input sequence length, etc. With a small model like this you'll never get a 'good' output but you can maximise its potential.
reply
WatchDog 23 hours ago
I trained 12,000 steps at 4 layers, and the output is kind of name-like, but it didn't reproduce any actual name from its training data after 20 or so generations.
reply
b44 2 days ago
not many. diminishing returns start before 1000 and past that you should just add a second/third layer
reply
youio 13 hours ago
really well done
reply
GaggiX 18 hours ago
Wtok and Wpos should be 26-dim along one of the axes, but it shows a 16x16 matrix by default, and fc1 should be 16x64 with the default settings (not 16x16).
reply
b44 18 hours ago
good catch - i intentionally cap node visualizations at 16 so it doesn't get super long, but the sidebar shouldn't have that
reply
darepublic 24 hours ago
thank you for this
reply