Ternary Bonsai: Top Intelligence at 1.58 Bits
17 points by nnx 3 days ago | 4 comments
yodon 34 minutes ago
So excited to see this - the big advantage of 1.58 bits is there are no multiplications at inference time, so you can run them on radically simpler and cheaper hardware.
wmf 58 minutes ago
Yet again they're comparing against unquantized versions of other models. They would probably still win but by a much smaller size margin.
Dumbledumb 2 minutes ago
Wouldn't the margin be higher? Moving all the other models from unquantized to quantized would lower their performance, while Bonsai stays the same. I see what you mean if it were in regards to score per model size, but not for absolute performance.
I also have yet to see any of these at a larger scale. For example, can anyone try one of these at 100 billion parameters?