> Algorithmic complexity improvements dominate language-level optimisations. Going from O(N²) to O(N) in the streaming case had a larger practical impact than switching from WASM to TypeScript.
Yet they still chose to put the "Rust rewrite" part in the title. I almost think it's clickbait.
It looks like neither is the "real win". Both the language and the algorithm made a big difference, as you can see in the first column of the last table: going to WASM was a big speedup, and improving the algorithm on top of that was another big speedup.
Edit: it wasn't Astral, but here's the blog post I was thinking of. https://nesbitt.io/2025/12/26/how-uv-got-so-fast.html
That said, your point is very much correct. If you watch or read the Jane Street tech talk Astral gave, you can see how they really leveraged Rust for performance, like turning Python version identifiers into u64s.
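A minimal sketch of that kind of trick (my own illustration, not Astral's actual layout): pack a version's release segments into a single u64 so that comparing versions becomes one integer comparison.

    // Hypothetical packing, not uv's real representation: major/minor/patch
    // each get 16 bits, so version ordering is a single u64 compare.
    fn pack_version(major: u16, minor: u16, patch: u16) -> u64 {
        ((major as u64) << 32) | ((minor as u64) << 16) | (patch as u64)
    }

    fn main() {
        // 3.11.4 < 3.12.0 falls out of plain integer ordering.
        assert!(pack_version(3, 11, 4) < pack_version(3, 12, 0));
    }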
It's also worth noting that unsafe Rust != C, and you are still battling Rust's aliasing rules. With enough experience you gain an understanding of these patterns and it goes away, and you also have really solid tools like Miri for finding undefined behavior, but it can be a bit of a hassle.
Anyway, dubious claim, since a Python interpreter takes tens of milliseconds just to print out its version.
Do you have any evidence? I can point at TechEmpower benchmarks showing IO-bound tasks are still 10-100x faster in native languages vs Python/JS.
That is assuming Rust is 100x faster than Python btw, 49ms of I/O, 1ms of Rust, 100ms of Python.
Okay, so the Rust code would be 3x as fast. Feels arbitrary, but sure.
> You are the one not making sense, we are talking about application performance, why are you not measuring that in milliseconds.
I explained why your post made no sense already...
> That is assuming Rust is 100x faster than Python btw, 49ms of I/O, 1ms of Rust, 100ms of Python.
That's not how anything works. Different languages will perform differently on IO work, different runtimes will degrade under IO differently, etc. That's why even basic echo HTTP servers perform radically differently in Python vs Rust.
This isn't how computers work and it's not even how math works.
This conversation has become nonsensical. The one thing we can agree on is this: no, uv would not be as fast if it were written in Python.
> This isn't how computers work and it's not even how math works.
What are you disagreeing with? There's some baseline amount of I/O that the kernel does for you, that's what I'm assuming is 50ms, and everything else like runtime degrading is overhead due to the language/platform choice. I'm saying Rust is upwards of 100x faster in that regard thanks to its zero cost abstraction philosophy. You can't just include the I/O baseline in a claim about Rust's performance advantage. You'll be really disappointed when Rust doesn't download your files 100x as fast as the Python file downloader.
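Concretely, under that (admittedly crude) model:

    Rust:   49ms I/O +   1ms compute =  50ms total
    Python: 49ms I/O + 100ms compute = 149ms total
    End-to-end speedup: 149 / 50 ≈ 3x

The compute is 100x faster, but the observed speedup is capped by the I/O floor, which is the whole point.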
Anyway, I'm sorry I provoked your antagonism with my terse messages; I wasn't trying to be blasé. I believe uv is the sort of tool that wouldn't suffer much from the downsides of Python, and that in most situations the reduced runtime overhead of Rust would have a negligible impact on the user experience. I'm not arguing that they shouldn't build uv in Rust. Most situations is not all situations, and when a tool is used this widely you'll hit all the edge cases, from the point where tens of milliseconds of startup time matter to the point where Python's I/O overhead matters at scale.
IO is executed by the kernel, file system, or network drivers. IO performance does not depend at all on which language makes the syscalls.
> The one thing we can agree on is this: no, uv would not be as fast if it were written in Python.
In this thread, we are talking about the speed of uv in terms of user experience: how long a person waits for command-line operations to complete. Things that pip takes multiple seconds to do, uv will do in dozens of milliseconds. If uv were written in Python, it would take dozens of ms plus a few dozen more, which means absolutely fuck-all in the context of the thousands of milliseconds saved over pip.
It's possible a user might perceive a slight difference in larger projects, but if pip had been uv-but-in-Python, the uv-in-Rust project would never have been started in the first place, because no one would have bothered switching.
> This conversation has become nonsensical.
Agreed. No one in this thread is disputing that Rust code is faster than Python, only that in this case it is completely insignificant in the face of all the useless file and network I/O that pip is doing, and uv is not.
> uv is fast because of what it doesn’t do, not because of what language it’s written in. The standards work of PEP 518, 517, 621, and 658 made fast package management possible. Dropping eggs, pip.conf, and permissive parsing made it achievable. Rust makes it a bit faster still.
So the claim, as you stated it, is not well supported by the article at all; in fact, the claim is literally disproven by the article.
> uv is fast because of what it doesn’t do, not because of what language it’s written in.
The fact that the language had a small effect ("a bit") does not invalidate the statement that algorithmic improvements are the reason for the relative speed. In fact, there's no reason to believe that Rust without the algorithmic improvements would be notably faster at all. Sure, "all" is an exaggeration, but the point still stands in the form most readers would understand it: algorithmic improvements are the important difference between the systems.
The specific claim I was responding to was that all of uv's performance improvements come from algorithms rather than the language. My point was just that this is a stronger claim than the article supports; the article itself says Rust contributes "a bit" to the speed, so it's not purely algorithmic.
I do agree with the broader point that algorithmic and architectural choices are the main reason uv is fast, and I tried to acknowledge that, apparently unsuccessfully, in my very first comment ("I don't doubt that a lot of uv's benefits are algo. But everything?").
One thing I noticed was that they time each call and then use a median. Sigh. In a browser. :/ With timing-attack defenses built into the JS engine.
Thanks for cutting through the clickbait. The post is interesting, but I'm so tired of being unnecessarily clickbaited into reading articles.
Kinda is. We came up with abstractions to help reason about what really matters. The more you need to deal with auxiliary stuff (allocations, lifetimes), the more likely you are to miss the big issue.
Yes, sprinkling your code logic with malloc, .clone() or lifetime annotations on the other hand brings algorithmic enlightenment.
Is your argument that the average Python or TypeScript dev gets to think and care more about algorithms than the average C/C++/Rust dev?
You still do get some latency from the event loop, because postMessage gets queued as a macrotask, which is probably on the order of 10μs. But this is the price you have to pay if you want to run some code in a non-blocking way.
So this holds even for L = M. The speedup is not in the language, but in the rewriting and rethinking.
They say they measured that cost, and it was most of the runtime in the old version (though they don't give exact numbers). That cost does not exist at all in the new version, simply because of the language.
If they used raw byte structures and implemented the caching improvements on the WASM side, the copies might not be as bad.
But they still have an issue with a multi-language stack: complexity also has a cost.
The Python/C combo does not have this issue, because you can work with Python types natively in C; otherwise, this is a cross-language conversion issue, and not a Rust issue at all.
Edit: fixed phone typos
This new company chose a very confusing name that has been used by the Open UI W3C Community Group for over 5 years.
Open UI is the standards group responsible for HTML having popovers, customizable select, invoker commands, and accordions. They're doing great work.
AFAIK, you can create a shared memory block between WASM <-> JS:
https://developer.mozilla.org/en-US/docs/WebAssembly/Referen...
Then you'd only need to parse the SharedArrayBuffer at the end on the JS side.
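In the same spirit, though with plain linear memory rather than a SharedArrayBuffer, here's a hypothetical Rust-side sketch using wasm-bindgen (names are illustrative, not from the post): expose a (pointer, length) pair so the JS side can build a single typed-array view instead of serializing node by node.

    use wasm_bindgen::prelude::*;

    // Hypothetical output buffer living in WASM linear memory.
    #[wasm_bindgen]
    pub struct OutputBuffer {
        bytes: Vec<u8>,
    }

    #[wasm_bindgen]
    impl OutputBuffer {
        #[wasm_bindgen(constructor)]
        pub fn new() -> OutputBuffer {
            OutputBuffer { bytes: Vec::new() }
        }

        // JS reads the results with one
        // new Uint8Array(memory.buffer, ptr, len) view: no per-node copies.
        pub fn ptr(&self) -> *const u8 {
            self.bytes.as_ptr()
        }

        pub fn len(&self) -> usize {
            self.bytes.len()
        }
    }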
Looks inside
“The old implementation had some really inappropriate choices.”
Every time.
For a parser specifically, you're probably spending a lot of time creating and discarding small AST nodes. That's exactly the kind of workload where V8's generational GC shines and where WASM's manual memory management becomes a liability rather than an asset.
The interesting question is whether this scales. A parser that runs on small inputs in a browser is a very different beast from one processing multi-megabyte files in a tight loop. At some point the WASM version probably wins - the question is whether that workload actually exists in your product.
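For what it's worth, the usual way to claw that back on the WASM side is arena (bump) allocation, so AST nodes are never freed individually. A minimal sketch, assuming the bumpalo crate and a made-up node type:

    use bumpalo::Bump;

    // Toy AST node; the point is the allocation pattern, not the shape.
    #[derive(Debug)]
    enum Node<'a> {
        Leaf(&'a str),
        Pair(&'a Node<'a>, &'a Node<'a>),
    }

    fn main() {
        let arena = Bump::new();
        // Each alloc is just a pointer bump, which approximates what a
        // generational GC's nursery gives you for free.
        let a = arena.alloc(Node::Leaf("x"));
        let b = arena.alloc(Node::Leaf("y"));
        let root = arena.alloc(Node::Pair(a, b));
        println!("{root:?}");
        // The whole tree is freed at once when the arena drops: no per-node frees.
    }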
Additionally, even after those options are exhausted, only a few key parts might need a rewrite, not the whole thing.
However, I wonder how many care about actually learning about algorithms, data structures and mechanical sympathy in the age of Electron apps.
It quite often feels like a rewrite is chosen because knowing how to actually apply those skills is the CS stuff many think isn't worthwhile learning.
That final summary benchmark means nothing. It lists a 'baseline' value for the 'Full-stream total' of the Rust implementation, then says `serde-wasm-bindgen` is '+9-29% slower', but it never gives us the baseline value itself, because clearly the only benchmark it ran against the Rust codebase was the per-call one.
Then it mentions: "End result: 2.2-4.6x faster per call and 2.6-3.3x lower total streaming cost."
But the "2.6-3.3x" is by their own definition a comparison against the naive TS implementation.
I really think the guy just prompted Claude to "get this shit fast and then publish a blog post".
I understand your frustration with AI writing, though. We are a small team, and given our roadmap it was either use LLMs to help collate all the internal benchmark result files into a blog post or never write it, so we chose the former. This was a genuinely surprising and counterintuitive result for us, which is why we wanted to share it. Happy to clarify any of the numbers if helpful.
It was able to beat xz at its own game by a good margin:
This is apparent. xz's own game is not "a specialized compression pre-processor for x86_64 ELF binaries"; its own game is being a general-purpose compression utility suited to a range of tasks, not optimized for one ridiculously specific domain. Also, any compression benchmark really ought to include de/compression speed, not only compression ratio, since compression algorithms sit along a scale, each maximizing one trade-off or another.
BTW, the goal of the project was not building a production-ready solution. It was a curious case of black-box software development. Compression is great because input and output are precise bits. As for speed, I think it's comparable, since it uses most of xz's infra anyway.
The most obvious approach would be to let LLMs generate code and render it, but that introduces problems with safety, UI consistency, and speed. OpenUI solves those problems and provides a safe, consistent, and token-optimized runtime for LLMs to render live UI.
> converts internal AST into the public OutputNode format consumed by the React renderer
Why not just have the LLM emit the JSON for OutputNode? Why is a custom "language" and parser needed at all? And yes, there is a cost to marshaling data, so you should avoid doing it where possible, and do it in large chunks when it's not possible to avoid. This is not an unknown phenomenon.
Anyway, JavaScript is no stranger to breaking changes. Compare Chromium 47 to today. Just add actual integers as another breaking change, and WASM becomes almost unnecessary.
I didn't mind reading articles that are not about how Rust is great in theory (and maybe practice).
So it's more so a story about architectural mistakes.
That said, Rust does have real problems. Manual memory management sucks. People think GC is expensive? Well, keep in mind malloc() and free() take global locks! People just have totally bogus mental models of what drives performance. These models lead them to technical nonsense.
In their worst case it was just 5x. We clearly have some progress here.
Claude tells me this is https://www.fumadocs.dev/
Not sold on the fundamental idea of OpenUI, though. XML is a great fit for DSLs and UI snippets.
The primary motivation was speed and schema cohesion. We were running a JSON-based format, Thesys C1, in production for a year before we realized we couldn't add features fast enough, because we were fighting the LLMs at multiple levels. It's probably too much to write in a comment, but we'd like to write about the motivation and all the things we tried in a separate blog post soon.
The other day, someone linked back to this 2018 post on finding a cache coherency bug in the Xbox 360 CPU:
https://randomascii.wordpress.com/2018/01/07/finding-a-cpu-d...
So much more genuinely engaging than any of the AI-“enhanced” sloppy, confused, trite writing that gets to the front page here daily because it’s been hyper-optimized for upvotes.
So you're reinventing JSON but binary? V8 JSON nowadays is highly optimized [1] and can process gigabytes per second [2], I doubt it is a bottleneck here.
[1] https://v8.dev/blog/json-stringify [2] https://github.com/simdjson/simdjson
Rust.
WASM.
TypeScript.
I am slowly beginning to understand why WASM did not really succeed.
You could also try pretty fast fft: https://github.com/JorenSix/pffft.wasm
I don't think that's actually out yet, and more importantly, it doesn't change anything at runtime -- your code still runs in a JS engine (V8, JSC etc).
The port had been done in a weekend just to see if we could use Python in production. The C++ code had taken a few months to write. The port was pretty direct, function for function. It was even line for line where language and library differences didn't offer an easier way.
A couple of us worked together for a day to find the reason for the speedup. Just looking at the code didn't give us any clues, so we started profiling both versions. We found out that the port had accidentally fixed a previously unknown bug in some code that built and compared cache keys. After identifying the small misbehaving function, we had to study the C++ code pretty hard to even understand what the problem was. I don't remember the exact nature of the bug, but I do remember thinking that particular type of bug would be hard to express in Python, and that's exactly why it was accidentally fixed.
We immediately started moving the rest of our back end to Python. Most things were slower, but not by much, because most of our back end was I/O bound. We soon found out that we could make algorithmic improvements so much more quickly, so a lot of the slowest things got a lot faster than they had ever been. And, most importantly, we (the software developers) got quite a bit faster.
This was particularly true for one of the projects I've worked with in the past, where Python was chosen as the main language for a monitoring service.
In short, it proved itself to be a disaster: just the Python process collecting and parsing the metrics of all programs consumed 30-40% of the processing power of the lower-end boxes.
In the end, the project went ahead for a while more, and we had to do all sorts of mitigations to get the performance impact to be less of an issue.
We did consider replacing it all with a few open-source tools written in C and some glue code; the initial prototype used a few MBs instead of dozens (or even hundreds) of MBs of memory, while barely registering any CPU load. But in the end it was deemed a waste of time when the whole project was terminated.
Turns out the metrics just rounded to the nearest 5MB
The main lesson of the story. Just pick Python and move fast, kids. It doesn’t matter how fast your software is if nobody uses it.
I'd rather not use python. The ick gets me every time.
The reason nobody uses your software could be that it is too slow. As an example, if you write a video encoder or decoder, using pure Python might work for postage-stamp-sized video because today's hardware is insanely fast, but even then, it will likely be easier to get the same speed in a language that's better suited to the task.
One of the reasons the project was killed was that we couldn't port it to our line of low powered devices without a full rewrite in C.
Please note this was more than a decade ago, way before Rust was the language it is today. I wouldn't choose anything else besides Rust today, since it gives the best of both worlds: a truly high-level language with low-level resource controls.
The mentality was "the language is fast, so as long as it compiles we're good"... Yeah that worked out about as well as you'd expect.
If you're writing FastAPI (and you should be if you're doing a greenfield REST API project in Python in 2026), just s/copy/steal/ what those guys are doing and you'll be fine.
You are not the same.
Just write the parsing loop in something faster like C or Rust, instead of the whole thing.
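A minimal sketch of that pattern, assuming PyO3 (the function and module names are made up): keep the program in Python and move only the hot loop into a native extension.

    use pyo3::prelude::*;

    // Hypothetical hot loop: the per-sample crunching stays native,
    // everything else stays Python.
    #[pyfunction]
    fn sum_samples(samples: Vec<u64>) -> u64 {
        samples.iter().sum()
    }

    // Usable from Python as: import hotloop; hotloop.sum_samples([1, 2, 3])
    #[pymodule]
    fn hotloop(m: &Bound<'_, PyModule>) -> PyResult<()> {
        m.add_function(wrap_pyfunction!(sum_samples, m)?)?;
        Ok(())
    }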
Pure speculation, but I would guess this has something to do with a copy constructor getting invoked in a place you wouldn't expect, which ends up on a critical path.
Not because they are brilliant, but because they are pretty good at throwing pretty much all known techniques at a problem. And they also don't tire of profiling and running experiments.
If you have a comprehensive test suite or a realistic benchmark, saying "make tests pass" or "make benchmark go up" works wonders.
LLMs are really good at knowing patterns, we still need programmers to know which pattern to apply when. We'll soon reach a point where you'll be able to say "X is slow, do autoresearch on X" and X will just magically get faster.
The reason we can't yet isn't that LLMs are stupid; it's that autoresearch is a relatively new concept (last month or so) and hasn't yet entered LLM pretraining corpora. LLMs can already do this; you just need to be a little more explicit in explaining exactly what you need them to do.
Recently I tried Codex/GPT-5 on updating a Bluetooth library for batteries, and it was able to start capturing Bluetooth packets and comparing them with the library's other models. It was indefatigable. I didn't even know it was so easy to capture BLE packets.
Flaky internet connection: most of the current 'soy devs' would be useless. Even more so with boosted-up chatbots.
That has not been my experience. JS/TS requires the most hand-holding, by far. LLMs are no doubt assumed to be good at JS due to the sheer amount of training data, but a lot of those inputs are of really poor quality, and even among the high quality inputs there isn't a whole lot of consistency in how they are written. That seems to trip up the LLMs. If anything, LLMs might finally be what breaks the JS camel's back. Although browser dominance still makes that unlikely.
> Very few people will then take the pain of optimizing it
Today's LLMs rarely take the initiative to write benchmarks, but if you ask, they will, and then they will iterate on optimizing, using the benchmark results as feedback. It works fairly well. There is a conceivable near future where LLMs or LLM tools start doing this automatically.
But yes, I see what you mean, and I think people are trying to solve it with skills and harnesses at the application layer, but it's not there yet.
It's true that writing code in C doesn't automatically make it faster.
For example, string manipulation. 0-terminated strings (the default in C) are, frankly, an abomination. String processing code is a tangle of strlen, strcpy, strncpy, and strcat, all of which require repeated passes over the string looking for the 0. (Worse, reloading the string into the cache just to find its length slows things down further.)
Worse is the problem that, in order to slice a string, you have to malloc some memory and copy the string. And then carefully manage the lifetime of that slice.
The fix is simple - use length-delimited strings. D relies on them to great effect. You can do them in C, but you get no succor from the language. I've proposed a simple enhancement for C to make them work https://www.digitalmars.com/articles/C-biggest-mistake.html but nobody in the C world has any interest in it (which baffles me, it is so simple!).
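For comparison, this is the same fat-pointer design Rust's &str (and D's arrays) use; a quick sketch of what it buys:

    fn main() {
        let s = String::from("hello, world");
        // A slice is (pointer, length): no strlen pass, no malloc, no copy,
        // and the borrow's lifetime is checked instead of hand-managed.
        let word: &str = &s[7..12];
        assert_eq!(word, "world");
        assert_eq!(word.len(), 5); // length is stored, not discovered
    }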
Another source of slowdown in C, I've discovered over the years, is that C is not a plastic language; it is a brittle one. The first algorithm you select for a C project gets so welded into it that it cannot be changed without great difficulty. (And we all know that algorithms are the key to speed, not coding details.) Why isn't C plastic?
It's because one cannot switch back and forth between a reference type and a value type without extensively rewriting every use of it. For example: if s is a struct value, every field access is written s.len; turn s into a pointer, and every one of those accesses must become s->len.
If you want to switch between reference and value, you've got to go through all your code swapping . and ->. It's just too tedious and never happens. In D, I discovered that there is no reason for the C and C++ -> operator to even exist; the . operator covers both bases!
Crazy how many stories like this I've heard of how doing performance work helped people uncover bugs and/or hidden assumptions about their systems.
They found that they had fewer bugs in Python so they continued with it.
Meanwhile my experience has been that whenever there has been a performance issue severe enough to actually matter, it's often been the result of some kind of performance bug, not so much language, runtime, or even algorithm choices for that matter.
Hence whenever the topic of how to improve performance comes up, I always, always insist that we profile first.
But, of course, profiling is always step one.
I hit the flag button on the comment and suggest others do too.
I was not actually sure this one was a bot, despite LLM-isms and, sadly, being new. But you can look at the comment history and see.
Would be kind of cool if, e.g., Python or Ruby could be as fast as C or C++.
I wonder if this could be possible, assuming we could modify both to achieve that outcome, but without ending up with a language that's just like C or C++. Right now there is a strange divide between "scripting" languages and compiled ones.
I suspect it's more likely to be something like passing std::string by value without realising that it copies the string every time, especially given the statement that the mistake would be hard to express in Python.