Claude Code / Codex CLI / etc are all great because they know how to drive Bash and other Linux tools.
The browser is probably the best sandbox we have. Being able to run an agent loop against a WebAssembly Linux would be a very cool trick.
I had a play with v86 a few months ago but didn't quite get to the point where I hooked up the agent to it - here's my WIP: https://tools.simonwillison.net/v86 - it has a text input you can use to send commands to the Linux machine, which is pretty much the hook you'd need to wire in an agent.
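The glue for that agent loop might look something like this. This is only a sketch, not the WIP's actual code: the `serial0_send` method and `serial0-output-byte` event are my assumption about v86's serial console API (check the version you're running), and the prompt marker depends on your guest image. The buffering logic is kept separate so it can be tested on its own:

```javascript
// Collect serial output from a VM until the shell prompt reappears,
// which signals that the last command has finished.
// Assumes the guest shell prompt ends in "# " - adjust for your image.
function makeCollector(promptMarker = "# ") {
  let buffer = "";
  let resolve = null;
  return {
    // Feed each character the VM emits on the serial line.
    push(char) {
      buffer += char;
      if (resolve && buffer.endsWith(promptMarker)) {
        const out = buffer;
        buffer = "";
        const r = resolve;
        resolve = null;
        r(out);
      }
    },
    // Promise that resolves with everything printed up to
    // and including the next shell prompt.
    waitForPrompt() {
      return new Promise((res) => { resolve = res; });
    },
  };
}

// v86 wiring (browser-only; API names are an assumption - verify):
//
//   const collector = makeCollector();
//   emulator.add_listener("serial0-output-byte", (byte) => {
//     collector.push(String.fromCharCode(byte));
//   });
//
//   async function run(cmd) {
//     const done = collector.waitForPrompt();
//     emulator.serial0_send(cmd + "\n");
//     return await done;  // hand this back to the agent loop
//   }
```

An agent loop then just alternates between asking the model for the next command and feeding it the captured output.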
In that demo try running "cat test.lua" and then "lua test.lua".
That exists: https://github.com/container2wasm/container2wasm
Unfortunately I found the performance to be enough of an issue that I did not look much further into it.
This thing is really inescapable these days.
I should have replied there instead, my mistake.
I'm excited about them, and I think discussion of how to combine two exciting technologies is exactly what I'd like to see here.
I mean I don’t have to remember the horrible git command line anymore, which already improves my experience as a dev by 50%.
It’s not all hype bs this time.
Every time I see a comment like this, I have to wonder what the heck other devs were doing. Don’t you know there were shell aliases, and snippet managers, and a ton of other tools already? I never had to commit special commands to memory, and I could always reference them faster than it takes to query any LLM.
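For instance, a few lines in `~/.gitconfig` cover most of the awkward incantations (illustrative aliases, not anyone's canonical set):

```
[alias]
    # short two-column status
    st = status -sb
    # undo the last commit but keep the changes staged
    undo = reset --soft HEAD~1
    # one-line graph of all branches
    lg = log --oneline --graph --decorate --all
```

After that, `git lg` or `git undo` is faster to type than any prompt.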
You can't just roll into a random post to tell people about your revolutionary new AI agent for the 50th time this week and expect them not to be at least mildly annoyed.
The entire thing is just quotes and a retelling of events. The closest thing to a "take" I could find is this:
> I have no idea how this one is going to play out. I’m personally leaning towards the idea that the rewrite is legitimate, but the arguments on both sides of this are entirely credible.
Which effectively says nothing. It doesn't add anything to the discussion around the topic, informed or not, and the post doesn't seem to serve any purpose beyond existing as an excuse to be linked to and siphon attention away from the original discussion (I wonder if the sponsor banner at the top of the blog could have something to do with that...?)
This seems to be a pattern, at least in recent times. Here's another egregious example: https://simonwillison.net/2026/Feb/21/claws/
Literally just a quote from his fellow member of the "never stops talking about AI" club, Karpathy. No substance, no elaboration, just something someone else said or did pasted on his blog followed by a short agreement. Again, doesn't add anything or serve any real purpose, but was for some reason submitted to HN[1], and I may be misremembering but I believe it had more upvotes/comments than the original[2] at one point.
That second Karpathy example is from my link blog. Here's my post describing how I try to add something new when I write about things on my link blog: https://simonwillison.net/2024/Dec/22/link-blog/
In the case of that Karpathy post I was amplifying the idea that "Claw" is now the generic name for that class of software, which is notable.
Apptron uses v86 because it's fast. Would love it for somebody to add 64-bit support to v86. However, Apptron is not tied to v86. We could add Bochs like c2w, or even JSLinux, for 64-bit; I just don't think it would be fast enough to be useful for most.
Apptron is built on Wanix, which is sort of like a Plan9-inspired ... micro hypervisor? Looking forward to a future where it ties different environments/OS's together. https://www.youtube.com/watch?v=kGBeT8lwbo0
For a full-stack demo see: https://vitedemo.browserpod.io/
To get an idea of our previous work: https://webvm.io
~20x slower for a naive recursive Fibonacci implementation in Python (1300 ms for fib(30) in this VM vs 65ms on bare metal. For comparison, CPython directly compiled to WASM without VM overhead does it in 140ms.)
~2500x slower for 1024x1024 matrix multiplication with NumPy (0.25 GFLOPS in VM vs 575 GFLOPS on bare metal).
WebVM is based on x86 emulation and JIT compilation, which at this time lowers vector instructions as scalar. This explains the slowdowns you observe. WebVM is still much faster than v86 in most cases.
BrowserPod is based on a pure WebAssembly kernel and WebAssembly payload. Performance is close to native speed.
The performance is pretty amazing. fib(35) runs in 60ms, compared to 65ms in NodeJS on Desktop.
But I can't find a shell. Is there only support for NodeJS at the moment?
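The fib(35) timing above is presumably the classic naive recursion; a reconstruction of that kind of benchmark in Node (my sketch, not the commenter's exact code) would be:

```javascript
// Naive recursive Fibonacci - deliberately unoptimized, so the
// benchmark measures raw function-call and arithmetic throughput
// rather than any clever algorithm.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

const start = performance.now();
const result = fib(35);
const elapsed = performance.now() - start;
console.log(`fib(35) = ${result} in ${elapsed.toFixed(1)} ms`);
```

Running the same script inside the browser VM and in desktop Node gives the apples-to-apples comparison quoted above.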
See the launch blog post for our full timeline: https://labs.leaningtech.com/blog/browserpod-10
Also, could I ask you to quickly edit your previous comment to clarify you were benchmarking against the older project?
(I assume this works on Macs too, both being Unixes, roughly speaking :)
Well, there it is, the dumbest thing I'll read on the internet all week.
Most of the engineering in Linux revolves around efficiently managing hardware interfaces to build up higher-level primitives, upon which your browser builds even higher-level primitives, that you want to use to simulate an x86 and attached devices, so you can start the process again? Somewhere (everywhere), hardware engineers are weeping. I'll bet you can't name a single advantage such a system would have over cloud hosting or a local Docker instance.
Even worse, you want this so your cloud-hosted imaginary friend can boil a medium-sized pond while taking the joyful bits of software development away from you, all for the enrichment of some of the most ethically-challenged members of the human race, and the fawning investors who keep tossing other people's capital at them? Our species has perhaps jumped the shark.
Besides, prompt injection and simpler exploits should be addressed before building a virtual computer in a browser - and if you're simulating a whole computer, you take a huge performance hit as another trade-off.
On the other hand, using the browser sandbox - which also offers the UI/UX the foundation models already have in their apps - would cut their development time and be an easy win for them.
tldr; devcontainers let you completely containerize your development environment. You can run them on Linux natively, on rented computers (there are providers such as GitHub Codespaces), or in a VM (which is what you'll be stuck with on a Mac anyway - but reportedly performance is still great).
All CLI dev tools (including things like Neovim) work out of the box, and many/most GUI IDEs support working with devcontainers. In that case the GUI is usually not containerized, or at least doesn't live in the same container (though on Linux you can do that too, with Flatpak). GitHub Codespaces goes further and runs VS Code fully in the browser, which sandboxes things on both ends.
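For reference, the whole setup is driven by a single `devcontainer.json` in the repo. A minimal sketch (the image and feature shown are common defaults from the spec's registry, not a recommendation):

```json
{
  "name": "sandboxed-dev",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "features": {
    "ghcr.io/devcontainers/features/node:1": {}
  },
  "postCreateCommand": "npm install"
}
```

Any devcontainer-aware tool reads this file and builds the same environment, which is what makes it portable across local Docker, a VM, and Codespaces.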
Do you know if there's a cli or something that would make this easier? The GitHub org seems to be more focused on the spec.
Even in this thread alone https://news.ycombinator.com/item?id=47314929 some commenters are clearly annoyed with the way AI is being shoved into every place they don't want it.
I don't care, but I can see why many here are getting tired of it.
For a more open-source version, check out container2wasm (which supports x86_64, riscv64, and AArch64 architectures): https://github.com/container2wasm/container2wasm
It looks like container2wasm uses a forked version of Bochs to get the x86-64 kernel emulation to work. If one pulled that out separately and patched it a bit more to have the remaining feature support it'd probably be the closest overall. Of course one could say the same about patching anything with enough enthusiasm :).
(For APX I have patches at https://lore.kernel.org/qemu-devel/20260301144218.458140-1-p... but I have never tested them on system emulation).
Even though it has no JIT. Truly magic :)
> Access to Internet is possible inside the emulator. It uses the websocket VPN offered by Benjamin Burns (see his blog). The bandwidth is capped to 40 kB/s and at most two connections are allowed per public IP address. Please don't abuse the service.
[1] For example:
https://www.ioccc.org/2020/yang/index.html#:~:text=tcc%200.9...
https://www.ioccc.org/2018/yang/index.html#:~:text=tcc%200.9...
My hobby OS itself is not very useful, but it's fun if you're in the right mood.
But then again, I've never understood why Buddhist monks create sand mandalas[1] and then let them be blown away (the mandalas not the monks!).
I think one should see it from the author's PoV instead of thinking "what's in it for me". If I were to use this, it would be to create digital sand mandalas in the browser! ;)
These companies don't have any imagination. Their management has no vision. They could not create anything new and wonderful if they tried. People like Fabrice do, and we are all richer for it. If you're asking about the practical use, you are likely in the exploitative mindset, which is understandable on HN. The hacker/geek mindset enjoys this for what it is.
[1] https://blog.persistent.info/2025/03/infinite-mac-os-x.html
apk add nmap
nmap your.domain.com
However, the speed is heavily throttled. You can even use ssh and log in to your own server. It can also be used as a very cheap way to provide a complete build environment on a single website - for example, to teach C/C++, or to learn the shell. You don't have to install anything.
Any advice on how to create a JSLinux clone with a specific file pre-installed and auto-launching would be much appreciated!
For a classroom with Windows PCs this is close to ideal - zero install, no admin rights, works in any browser. Students get a real gcc toolchain and shell without touching the host OS.
From "Show HN: Amla Sandbox – WASM bash shell sandbox for AI agents" (2026) https://news.ycombinator.com/item?id=46825119 :
>>> How to run vscode-container-wasm-gcc-example with c2w, with joelseverin/linux-wasm?
>> linux-wasm is apparently faster than c2w
From "Ghostty compiled to WASM with xterm.js API compatibility" https://news.ycombinator.com/item?id=46118267 :
> From joelseverin/linux-wasm: https://github.com/joelseverin/linux-wasm :
>> Hint: Wasm lacks an MMU, meaning that Linux needs to be built in a NOMMU configuration
From https://news.ycombinator.com/item?id=46229385 :
>> There's a pypi:SystemdUnitParser.
x86_64:

x86 (i.e. 32 bit):

riscv64:

Conclusion: as seen also in QEMU (also started by Bellard!), RISC-V is a *lot* easier to emulate than x86. If you're building code specifically to run in emulation, use RISC-V: it builds faster, produces smaller code, and runs faster.

Note: quite different gcc versions, with x86_64 being 15.2.0, x86 9.3.0, and riscv64 7.3.0.
[1] http://hoult.org/primes.txt
I don't really think this bears out in practice. RISC-V is easy to emulate, but that doesn't make it fast to emulate. Emulation performance is largely determined by other factors, and RISC-V has no unique advantage there.
> newer gcc versions have significantly better optimization passes
So what you're saying is that with a modern compiler RISC-V would win by even more?
TBH I doubt much has changed with register allocation on register-rich RISC ISAs since 2018. On i386, yeah, quite possible.
Also MIPS code is much larger.
http://blog.schmorp.de/2015-06-08-emulating-linux-mips-in-pe...