Is it possible to rig this up so it really is realtime, displaying the transcription within a second or two of the user saying something out loud?
The Hugging Face server-side demo at https://huggingface.co/spaces/mistralai/Voxtral-Mini-Realtim... manages that, but it's using a much larger (~8.5GB) server-side model running on GPUs.
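Roughly the kind of loop I have in mind, as a sketch only (transcribeChunk() below is a made-up stand-in for whatever the WASM build actually exposes):

    // Sketch of browser-side chunked streaming. transcribeChunk() is hypothetical;
    // a real implementation would more likely use an AudioWorklet to hand raw PCM
    // to the model rather than MediaRecorder's containerized chunks.
    declare function transcribeChunk(audio: Blob): Promise<string>;

    async function streamTranscribe(onText: (text: string) => void) {
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
      const recorder = new MediaRecorder(stream);
      recorder.ondataavailable = async (event) => {
        if (event.data.size > 0) onText(await transcribeChunk(event.data));
      };
      recorder.start(1000); // emit ~1 s chunks, so text trails speech by a second or so
    }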
This isn't even close to realtime on an M4 Max. Whisper is ~realtime on any post-2022 device with an ONNX implementation. The extra inference cost isn't worth the WER reduction on consumer hardware, or at least it wouldn't be worth the time to implement.
The first cut will probably not be a streaming implementation
That said, I now agree with your original statement and really want Voxtral support...
We went from impossible to centralised to local in a couple of years and the "cost" is 2.5gb of hard drive.
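That 2.5 GB lines up with a back-of-envelope check for ~4-bit weights on a ~4B-parameter model (the effective bits-per-weight below is a rough assumption for Q4-style GGUF quants):

    // Back-of-envelope check on the 2.5 GB figure. GGUF "Q4" variants typically
    // land around 4.5-5 bits per weight once block scales are included (assumption).
    const params = 4e9;                      // "Mini 4B"
    const bitsPerWeight = 5;                 // rough effective rate for a Q4_K-style quant
    const bytes = params * bitsPerWeight / 8;
    console.log((bytes / 1e9).toFixed(1), "GB"); // ~2.5 GB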
I use Parakeet V3 in the excellent Handy [1] open source app. I tried incorporating the C-language implementation mentioned by others into Handy, but it was significantly slower. Speed is absolutely critical for good UX in STT.
There's a distinction between tokens per second and time to first token.
Delays come for me when I have to load a new model, or if I'm swapping in a particularly large context.
Most of the time, since the model is already loaded and I'm starting with a small context that builds over time, tokens per second is the biggest factor.
It's worth noting I don't do much fancy stuff, a tiny bit of agent stuff, I mainly use qwen-coder 30a3b or qwen2.5 code instruct/base 7b.
I'm finding that more complex agent setups, where multiple agents are used, can really slow things down if they're swapping large contexts. ik_llama has prompt caching, which helps speed this up when swapping between agent contexts, up to a point.
tldr: loading weights each time isn't much of a problem, unless you're having to switch between models and contexts a lot, which modern agent workflows are starting to do.
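If you want to see the split for yourself, here's a rough sketch that times time-to-first-token against the streaming rate from a local OpenAI-compatible server (the URL, port, and model name are placeholders for whatever you actually run, and SSE events are only a proxy for tokens):

    // Rough sketch: time-to-first-token vs. streaming rate against a local
    // OpenAI-compatible endpoint (e.g. a llama.cpp / ik_llama style server).
    async function timeGeneration(prompt: string) {
      const start = performance.now();
      let ttft = 0;
      let events = 0;
      const res = await fetch("http://localhost:8080/v1/chat/completions", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model: "qwen2.5-coder-7b-instruct", // placeholder
          messages: [{ role: "user", content: prompt }],
          stream: true,
        }),
      });
      const reader = res.body!.getReader();
      const decoder = new TextDecoder();
      for (;;) {
        const { done, value } = await reader.read();
        if (done) break;
        if (!ttft) ttft = performance.now() - start; // dominated by load/prompt processing
        events += decoder.decode(value, { stream: true }).split("data:").length - 1;
      }
      const totalSec = (performance.now() - start) / 1000;
      console.log(`TTFT ${ttft.toFixed(0)} ms, ~${(events / totalSec).toFixed(1)} events/s`);
    }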
Could I reasonably use this to get LLM-capability privately on a machine (and get decent output), or is it still in the "yeah it does the thing, but not as well as the commercial stuff" category?
Nano's stored in localstorage with shared access across sites (because Google), so users only need to download it once. Which I don't think is the case with Mistral, etc.
There's some other production stats around adoption, availability and performance that were interesting as well:
https://sendcheckit.com/blog/ai-powered-subject-line-alterna...
Right now everything is either archaic or requires too much RAM. CPU isn't as big an issue as you'd think, because the Pi Zero 2 is comparable to a Pi 3.
I'm interested in your cubecl-wgpu patches. I've been struggling to get lower than FP32 safetensor models working on burn, did you write the patches to cubecl-wgpu to get around this restriction, to add support for GGUF files, or both?
I've been working on something similar, but for whisper and as a library for other projects: https://github.com/Scronkfinkle/quiet-crab
Reading the first three sentences of this README: out of 43 words, I would consider 15 terms to be jargon incomprehensible to the layman.
> The Q4 GGUF quantized path (2.5 GB) runs entirely client-side in a browser tab via WASM + WebGPU. Try it live.
Excluding names (Mistral's Voxtral Mini 4B Realtime), you have one pretty normal sentence introducing what this is (streaming speech recognition running natively and in the browser), and the rest is technical detail.
It's like complaining that a car description would list engine size and output in the third sentence.
https://huggingface.co/Teaspoon-AI/Voxtral-Mini-4B-INT4-Jets...
or... when I'm not saying anything, it generates random German sentences.
Anything I can do to fix/try it on Brave?
chrome://flags/#enable-unsafe-webgpu
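To check whether the flag actually took effect, you can paste something like this into the devtools console (navigator.gpu is the standard WebGPU entry point):

    // Reports whether WebGPU is exposed and an adapter can be acquired,
    // which is what the WASM + WebGPU path needs.
    (async () => {
      if (!("gpu" in navigator)) {
        console.log("WebGPU not exposed; the flag may not have taken effect");
        return;
      }
      const adapter = await navigator.gpu.requestAdapter();
      console.log(adapter ? "WebGPU adapter available" : "navigator.gpu present, but no adapter");
    })();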
I have my own fork here: https://github.com/HorizonXP/voxtral.c where I’m working on a CUDA implementation, plus some other niceties. It’s working quite well so far, but I haven’t got it to match Mistral AI’s API endpoint speed just yet.
How does someone get started with doing things like this (writing inference code, CUDA, etc.)? Any guidance is appreciated. I understand one doesn't just directly write these things and that it requires some reading; it would be great to receive some pointers.