The difference in perf without glue is crazy. But not surprising at all. This is one of the things I almost always warn people about, because it's such a glaring foot gun when trying to do cool stuff with WASM.
One thing about components that doesn't seem to be addressed (maybe I missed it) is how we'd avoid introducing new complexity with them. Looking through the various examples of implementing them in different languages, I get a little spooked by how messy I can see this becoming. Given that these are early days and there's no clearly defined standard, I guess it's fair that things aren't tightened up yet.
The go example (https://component-model.bytecodealliance.org/language-suppor...) is kind of insane once you generate the files. For the consumer the experience should be better, but as a component developer, I'd hope the tooling and outputs were eventually far easier to reason about. And this is a happy path, without any kind of DOM glue or interaction with Web APIs. How complex will that get?
I suppose I could sum up the concern as shifting complexity rather than eliminating it.
In my experience people are often disappointed by the shared-nothing architecture of the component model. I guess that shared-nothing architecture makes it impossible to properly share GC objects across component boundaries. But they can still be shared across core module boundaries.
Then there is the single array of memory that makes modern memory allocators not really work, resulting in every WASM compiler scrounging up something that mashes the assumptions of the source language into a single array.
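To make the "scrounging" concrete, here is a toy sketch (TypeScript) of the kind of bump allocator a compiler has to invent on top of the single linear memory. Real toolchains ship ports of dlmalloc or similar rather than anything this naive, so treat it purely as an illustration:

  // one flat array of memory; "pointers" are just integer offsets into it
  const memory = new WebAssembly.Memory({ initial: 1 }); // 1 page = 64 KiB
  let heapTop = 0;

  function alloc(size: number): number {
    // grow by whole 64 KiB pages when the bump pointer runs past the end
    while (heapTop + size > memory.buffer.byteLength) {
      memory.grow(1);
    }
    const ptr = heapTop;
    heapTop += size;
    return ptr;
  }
  // note there is no free(): a real allocator has to keep its free lists,
  // size classes, etc. inside that same single array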
Example subsets:
- (mainly textual) information sharing
- media sharing
- application sharing with a small standard interface like WASI 2, or better yet one including some graphics
- complex application sharing with networking
Smaller subsets of the giant web API would make for a better security situation and most importantly make it feasible for small groups to build out "browser" alternatives for information sharing, media or application sharing.
This is likely to not be pursued though because the extreme size of the web API (and CSS etc.) is one of the main things that protects browser monopolies.
Even further, create a standard webassembly registry and maybe allow people to easily combine components without necessarily implementing full subsets.
Do webassembly components track all of their dependencies? Will they assume some giant monolithic API like the DOM will be available?
What you're doing is essentially creating a distributed operating system definition (which is what the web essentially is). It can be designed in such a way that people can create clients for it without implementing massive APIs themselves.
The chance of that ever materializing is most likely zero though.
What would really change perception is not just better benchmarks, but making the boring path easy: compile with the normal toolchain, import a Web API naturally, and not have to become a part-time binding engineer to build an ordinary web app.
"Java the language is almost irrelevant. It's the design of the Java Virtual Machine. And I've seen compilers for ML, compilers for Scheme, compilers for Ada, and they all work. Not many people use them, but it doesn't matter: they all work." --James Gosling
Then Microsoft happened. MS realized that "Write Once, Run Anywhere" kills their OS monopoly, so they polluted Java with their brilliant Embrace, Extend, Extinguish strategy (Sun v. Microsoft revealed emails in which the stated goal was to "kill cross-platform Java" by growing the "polluted" Java market):
Embrace: Microsoft licensed Java from Sun Microsystems and built the MSJVM. It was the fastest JVM for some time.
Extend: They created a Java development tool (Visual J++) with proprietary Windows-specific "extensions" and also removed standard features like RMI and JNI.
Extinguish: Developers using MS tools (90% of developers at the time) produced "Write Once, Run Only on Windows" software. That killed cross-platform Java, and Microsoft pivoted to C# and .NET.
Before Java 6 in 2006, the JVM wasn't a good target for dynamically typed languages. Java 6 added some support, but it wasn't very efficient. In 2008 they started serious work on fixing this, and that work shipped as invokedynamic in Java 7 in 2011.
The .NET CLR, on the other hand, was designed from the start to be a good target for all kinds of languages, and it was superior to the JVM at this from the start through at least Java 7.
The solution was to sandbox the whole VM, but this breaks all the existing code designed for partial sandboxing (e.g. most of the standard library). WASM has used this approach from the start.
Don't be surprised if Google adds something similar to Chrome for WASM.
https://component-model.bytecodealliance.org/
It includes high level concepts, practical code samples and more that introduce the really powerful parts of WebAssembly.
With regards to the JS ecosystem specifically there are 3 projects to know:
https://github.com/bytecodealliance/StarlingMonkey
https://github.com/bytecodealliance/ComponentizeJS
https://github.com/bytecodealliance/jco
The most mature toolchain right now is Rust, but there is good support for most things with LLVM underneath (C/C++ via clang). Support for Go (both TinyGo and big Go), Python, and other languages is getting better and better, and there's even more to come.
One of the goals of WebAssembly is to melt right into your local $TOOLCHAIN as a compilation target, and we are getting closer every week.
Won't bother trying to go through differences/how-this-is-not-that, but I'll say this: this time, it's slightly better, just like every time before.
I'd even go so far as to say this iteration is much better than what came before, and the speed of adoption by multiple language toolchains, platforms, operating systems, browsers proves that.
Zero IDE integration, no right mouse click to generate or consume interfaces/stubs, no debugging tools, no integration with existing toolchains like those alternatives, no wire debugging,....
It feels designed for those that never left the command line, vim and emacs kind of world.
That said, people have worked on IDE integration (it's not zero, e.g. WIT syntax highlighting), and there is existing integration with upstream language toolchains, but trying to debate that seems silly. Whether tech is good or worth exploring is not dictated by IDE support, I think!
There has been substantial work on improving debugging, DX and documentation! Hopefully in the LLM age the existing work can move even faster.
(to elaborate: WASM works just fine without the component model, it's not "the future of WebAssembly", just an option built on top of it, and of questionable value tbh)
No, I don't think so at all. 80% of the article is about other problems (JS bindings are complicated to generate and a leaky abstraction, additional tools beyond the compiler are required, compiler authors don't want to deal with them, they increase the friction for getting started, they're hard to debug, ...)
Emscripten exists, DWARF debugging support exists and works (I can step from C/C++ code into JS code and back with the WASM DWARF debugging extension for VSCode), other language ecosystems just need to catch up.
It's perfectly good content, sir!
> (to elaborate: WASM works just fine without the component model, it's not "the future of WebAssembly", just an option built on top of it, and of questionable value tbh)
WebAssembly absolutely works fine without the component model, and I'd argue it's much better with the component model.
Here's my simple pitch.
world before component model:
> be me
> build a webassembly core module
> give it to someone
> they ask what imports it needs
> they ask how to run it
> they ask how to provide high level types to it
world after component model:
> be me
> write an IDL (WIT[0]) interface which specifies what the component should do
> write the webassembly component
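To make the pitch concrete, a minimal sketch (the adder world, file names and paths are hypothetical; the consumer side assumes a jco-style transpile step):

  // adder.wit, the IDL that states up front what the component needs and does:
  //   package example:adder;
  //   world adder {
  //     export add: func(a: u32, b: u32) -> u32;
  //   }
  // After building the component and transpiling it for JS consumers
  // (e.g. `jco transpile adder.wasm -o dist`), the consumer imports plain
  // typed functions: no hand-written glue, no guessing at imports:
  import { add } from "./dist/adder.js"; // path is illustrative
  console.log(add(2, 3)); // 5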
WebAssembly gives us an incredible tool -- a new compilation target that is secure, performant and extensible. We could crudely liken this to RISC-V. In $CURRENT_YEAR it doesn't make sense to stop at the RISC-V layer and then let everyone create their own standards and chaos in 50 directions on what the 1/2/3-step-higher abstractions should be. Emscripten carried the torch (and still does great work of course) in building this layer that people could build on top of, but it didn't go far enough. Tools like wasm_bindgen in Rust work great but lack cross-platform usage.
The Component Model is absolutely the future of WebAssembly. Maybe not the future of WebAssembly core, but if you want to be productive and do increasingly interesting things with WebAssembly, the Component Model is the standards-backed, community-driven, cross-platform, ambitious way to do things with WebAssembly.
To be incredibly blunt, it's striking how every other attempt to centralize community and effort, to bring people along to build a shared thing just low-cost enough that everyone can build on top of it, has failed. Nothing against other efforts, but I just can't find any similar effort that others have standardized on in any meaningful way.
We're talking about a (partially already here) world where every popular programming language just outputs to WebAssembly natively and computers on every popular architecture/platform have an easy time running those binaries and libraries? If that's not the future, paint me a different one/show me movement in that direction -- genuinely would love to see what I'm missing!
And in all of this, the Component Model is optional -- if you don't like it, don't use it. If WebAssembly core works for you, you are absolutely free to build! wasm32-unknown-unknown is right there, waiting for you to target it (in Rust at least).
(there was also some more recent discussion in here: https://news.ycombinator.com/item?id=47295837)
E.g. it feels like a lot of over-engineering just to get 2x faster string marshalling, and this is only important for exactly one use case: for creating a 1:1 mapping of the DOM API to WASM. Most other web APIs are by far not as 'granular' and string heavy as the DOM.
E.g. if I mainly work with web APIs like WebGL2, WebGPU or WebAudio, I seriously doubt that the component model approach will cause a 2x speedup; the time spent in the JS shim is already negligible compared to the time spent inside the API implementations, and I don't see how the component model can help with the actually serious problems (like WebGPU mapping GPU buffers into separate ArrayBuffer objects which need to be copied in and out of the WASM heap).
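For reference, the WebGPU readback dance in question looks roughly like this (a sketch using the real WebGPU API; readbackBuffer, wasmMemory and dstPtr are assumed to be set up elsewhere):

  // map the GPU buffer; the result is a separate ArrayBuffer, not WASM memory
  await readbackBuffer.mapAsync(GPUMapMode.READ);
  const mapped = new Uint8Array(readbackBuffer.getMappedRange());
  // the extra copy described above: mapped bytes -> WASM heap
  new Uint8Array(wasmMemory.buffer, dstPtr, mapped.length).set(mapped);
  readbackBuffer.unmap(); // the mapped ArrayBuffer is detached again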
It would be nice to see some benchmarks for WebGL2 and WebGPU with tens-of-thousands of draw calls, I seriously doubt there will be any significant speedup.
And besides performance, I think there are developer experience improvements we could get with native wasm component support (problems 1-3). TBH, I think developer experience is one of the most important things to improve for wasm right now. It's just so hard to get started or integrate with existing code. Once you've learned the tricks, you're fine. But we really shouldn't be requiring everyone to become an expert to benefit from wasm.
What are examples of such applications? Honest question - I'm curious to learn more about issues such applications have in production.
> But we really shouldn't be requiring everyone to become an expert to benefit from wasm.
If the toolchain does it for them, they don't need to be experts, no more than people need to be DWARF experts to debug native applications.
I agree tools could be a lot better here! But as I think you know, my position is that we can move faster and get better results on the tools side.
any application you use today that is written in JavaScript rendering to the DOM is much harder to write in not-JavaScript
Slack, Teams, Outlook, Word, OpenAI, Anthropic, Github, Twitter, Instagram (web), Notion, Google Docs, ...
Now, maybe there aren't many because of performance - maybe they haven't used wasm because it was too slow. But I would appreciate seeing data on that - an application that tried wasm and gave up after seeing the overhead, at the least. But I would also expect to see apps that use wasm even despite some DOM overhead, because of the speedup on non-DOM code - and I'd like to see data on how much DOM overhead they are currently suffering.
I am asking because I'm familiar with a lot of apps ported to wasm, and they don't do this. That may just be because I am seeing one particular slice of the ecosystem! So I am very curious to learn about other parts.
Hopefully native component support makes that "aha, I get it now" moment happen earlier.
What you don't get much is people doing standard SPA DOM manipulation apps in WASM (e.g. the TodoMVC that they benchmarked) because the slowdown is large. By fixing that performance issue you enable new usecases.
Integrating the component model into browsers just for faster string marshalling is 'using cannons to shoot sparrows' as the German saying goes.
If there were a general fast-path for creating short-lived string objects from ArrayBuffer slices, the entire web ecosystem would benefit, not just WASM code.
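For context, the status quo that such a fast-path would replace looks something like this (a sketch; the (ptr, len) convention is the usual wasm-bindgen/Emscripten pattern):

  const decoder = new TextDecoder(); // UTF-8 by default

  function readWasmString(memory: WebAssembly.Memory, ptr: number, len: number): string {
    // each call copies bytes out of linear memory and allocates a fresh,
    // short-lived JS string; that per-call allocation is exactly the cost
    // a general fast-path would avoid
    return decoder.decode(new Uint8Array(memory.buffer, ptr, len));
  }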
That is a useful benefit, not the only benefit. I think the biggest benefit is not needing glue, which means languages don't need to agree on any common set of JS glue, they can just directly talk DOM.
Being able to compete on efficiency with native apps is an incredible example of purposeful vision driving a significant standard, exactly the kind of thing I want for the future of the web and an example of why we need more stewards like Mozilla.
Performance is already as good as it gets for "raw" WASM, the proposed component model integration will only help when trying to use the DOM API from WASM. But I think there must be less complex solutions to accelerate this specific use case.
The 45% overhead reduction in the Dodrio experiment by skipping the JS glue is massive. But I'm curious about the memory management implications of the WebAssembly Component Model when interacting directly with Web APIs like the DOM.
If a Wasm Component bypasses JS entirely to manipulate the DOM, how does the garbage collection boundary work? Does the Component Model rely on the recently added Wasm GC proposal to keep DOM references alive, or does it still implicitly trigger the JS engine's garbage collector under the hood?
Really excited to see this standardize so we can finally treat Wasm as a true first-class citizen.
I'm not exactly sure how this works when binding it to GC languages.
[1] https://component-model.bytecodealliance.org/design/wit.html...
Maybe they should have spent some time wondering how previous component models work, e.g. COM, CORBA, RMI, .NET Remoting,....
SendMessage itself is frustratingly dumb. Your options are excessively bit-fiddly or obnoxiously slow. I think for data you absolutely know you're sending over a port there should be an arena allocator so you can do single-copy sends, versus whatever we have now (three copies? four?). It's enough to frustrate use of worker threads for offloading things from the event loop. It's an IPC wall, not a WASM wall.
Instead of sending bytes you should transfer a page of memory, or several.
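Transferable objects already give you a zero-copy handoff for plain ArrayBuffers; the catch is that the buffer backing a WebAssembly.Memory can't be detached and transferred out from under a running instance. A sketch of the transfer mechanics ("worker.js" is a placeholder):

  const worker = new Worker("worker.js");
  const buf = new ArrayBuffer(64 * 1024);  // one 64 KiB "page"
  worker.postMessage({ buf }, [buf]);      // transfer, don't copy
  console.log(buf.byteLength);             // 0: sender's view is detached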
And now that we're getting close to having the right design principles and mitigations in place, and 0-days in JS engines are getting expensive and rare... we're set on ripping it all out and replacing it with a new and even riskier execution paradigm.
I'm not mad, it's kind of beautiful.
Browsers are millions of lines of code; the number of UAFs, overflows, etc. so far is not the bottleneck.
By the same token, was Java or Flash more dangerous than JS? On paper, no - all the same, just three virtual machines. But having all three in a browser made things fun back in the early 2000s.
WASM today has no access to anything that isn't given to it from JS. That means that the only possible places to exploit are bugs in the JIT, something that exists as well for JavaScript.
Even if WASM gets bindings to the DOM, its surface area is still smaller, as JavaScript has access to a bunch more APIs that aren't the DOM. For example, WebUSB.
And even if WASM gets feature parity with JavaScript, it will only be as dangerous as JavaScript itself. The main actual risk for WASM would be memory-safety bugs in the language being compiled to it (such as C++).
So why were Java and Flash (and ActiveX, NaCl) dangerous in the browser?
The answer is quite simple. Those VMs had dangerous components in them. Both Java and Flash had the ability to reach out and scribble on a random dll in the operating system or to upload a random file from the user folder. Java relied HEAVILY on the security manager stopping you from doing that, IDK what flash used. Javascript has no such capability (well, at least it didn't when flash and Java were in the browser, IDK about now). For Java, you were running in a full JVM which means a single exploit gave you the power to do whatever the JVM was capable of doing. For Javascript, an exploit on Javascript still bound you to the javascript sandbox. That mostly meant that you might expose information for the current webpage.
Taking this argument to its extreme, does this mean that introducing new technology always decreases security? Because even if the technology would be more secure, just the fact that it's new makes it less secure in your mind, so then the only favorable move is to never adopt anything new?
Presumably you have to be aware of some inherent weakness in WASM to feel like it isn't worth introducing; otherwise, shouldn't we try to adopt more safe and secure technologies?
In that case, purely from a security standpoint, generally speaking the answer is yes. This is why security can often be a PITA when you're trying to adopt new things and innovate; meanwhile, by default, security wants things that have been demonstrated to work well. It's a known catch-22.
JS required the time and effort because it's a clown-car nightmare of a design from top to bottom. How many person-hours and CPU cycles were spent on papering over and fixing things that never should have existed in the first place?
This doesn't even count as a sunk cost fallacy, because the cost is still being paid by everyone who can't even get upgraded to the current "better" version of everything.
The sooner JavaScript falls out of favor the better.
JS didn't magically become more "secure". Multiple things happened:
- ActiveX and Flash got booted from browsers
- Browsers got much better sandboxes
Essentially, browsers limited what untrusted code could run and sandboxed that untrusted code. Before NaCl and PNaCl it was the wild west in browsers.
The same sandbox runs WASM. It even goes through the same runtime in every browser. It's no different from compiling a language of your choice to a subset of JavaScript (see asm.js).
I think you may be confusing Javascript the language, with browser APIs. Javascript itself is not insecure and hasn't been for a very long time, it's typically the things it interfaces with that cause the security holes. Quite a lot of people still seem to confuse Javascript with the rest of the stuff around it, like DOM, browser APIs, etc.
V8 does some crazy stuff that makes JS one of the fastest interpreted languages there is. But had JS been designed differently (Dart was an attempt at fixing some of those mistakes), it's likely there would be fewer security vulnerabilities in its interpreter.
Many, if not most, zero-days on secure mobile operating systems are caused by image parsing. Parsing complex file formats is a threat at least as big, if not bigger.
Sure, Gopher or Gemini would be more secure, but even without JS the web ecosystem would be vulnerable.
And it probably is. The sandboxing and security have been around a very long time.
If Python were the de-facto browser language, people would also blame it for "security problems", and would be just as paranoid about python running when they visit a website. I know whatever language it would be, people would still be paranoid.
I personally don't see any problem with Javascript. If someone knows how to use it, it can be very simple and powerful.
Before Javascript ever existed, I was wishing that websites had a scripting language. I didn't really care what it was, but Javascript answered my prayers rather nicely. But it wouldn't really matter what the language is, I'd still be coding for the web browser, and other people would be hating it for whatever reasons.
Isn't this what an OS is supposed to do? Mobile operating systems have done a pretty good job of this compared to desktop OSes.
https://github.com/WebAssembly/component-model/blob/main/des...
For end users, they should just see their language's native concurrency primitives (if any). So if you're running Go, it'll be goroutines. JS would use promises. Rust would have Futures.
Suppose the Go people make a special version of Go for Wasm. What do you think are the chances of that being supported in 5 years time?
I think most languages could pretty easily use WASM GC. The main issue comes around FFI. That's where things get nasty.
In Java land, the fact that you effectively don't have pointers, but rather everything is an object reference, means this ends up not being an issue.
I wonder if the WASM limitation is related to the fact that JavaScript has pretty similar semantics with no real concept of a "pointer". It means to get that interior pointer, you'd need to also introduce that concept into the GC of browsers which might be a bit harder since it'd only be for WASM.
Limiting WASM to what is capable in JavaScript is quite a silly thing to do. But at the same time there are vastly different GC requirements between runtimes so it's a challenging issue. Interior pointers is only one issue!
I know this is pedantic, but they aren't. At least not in the sense of what it means for something to be a pointer.
Object references are an identifier of an object and not a memory pointer. The runtime takes those object references and converts them into actual memory addresses. It has to do that because the position of the object in memory (potentially) changes every time a GC runs.
This does present its own problems. Different runtimes make different choices around this. Go and Python do not move objects in memory. As a result, it's a lot easier for them to support interior and regular pointers being actual pointers. But that also means they are slower to allocate and free, and they have memory fragmentation issues.
I'm not sure about C# (the only other language I saw with interior pointers). I think C# semi-recently switched over to a moving collector. In which case, I'm curious to know how they solved the interior pointer problem.
Object references are just pointers in .NET. See the JIT disassembly below. It's been using a moving GC for a long time, too.
https://sharplab.io/#v2:C4LghgzgtgNAJiA1AHwAICYCMBYAUKgZgAIM...
> How does C# guard against someone doing something silly like turning a pointer into a long and then back into a pointer again later?
I don't think it does. You can't do most of these things without using unsafe code, which needs a compiler flag enabled and code regions marked as `unsafe`.
Quite often it comes with a mandatory service worker which has to be communicated with in a specific fashion, then some specific headers need to be available server-side, etc. I'm not saying it's not required but ... I imagine most web developers are used to requiring a library, calling its function, getting the result. Until it reaches that stage, JavaScript fallbacks will be preferred until there is absolutely no alternative but the WASM binary.
PS: this might sound like such a low bar... but the alternative is giving up entirely on either, starting a container with a REST API then calling it with a client. That's very easy and convenient when you've done it once and you decouple. So maybe I'm finicky but when very popular alternatives exist I believe the tipping point won't happen unless it becomes radically easier than what exists.
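For what it's worth, the server-side part is small once you know it. A sketch (Node.js), assuming the headers in question are the cross-origin isolation headers needed for SharedArrayBuffer/threaded WASM:

  import { createServer } from "node:http";
  import { readFileSync } from "node:fs";

  createServer((req, res) => {
    // without these two headers, crossOriginIsolated is false and
    // SharedArrayBuffer (and thus WASM threads) is unavailable
    res.setHeader("Cross-Origin-Opener-Policy", "same-origin");
    res.setHeader("Cross-Origin-Embedder-Policy", "require-corp");
    res.end(readFileSync("index.html")); // demo only: serves a single file
  }).listen(8080);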
As I see it, WASM is used to augment the JS/WebAPI ecosystem. For example, when you need to do heavy bit manipulation, complex numerical processing. The round-trip JS->WASM->JS is an overhead. So the WASM modules should perform a substantial amount of processing to offset that inefficiency.
I frequently find that V8 optimisations yield sufficient performance without needing to delve into WASM.
IMHO if you want to write WebApps in Rust, you're holding it wrong.
What can you even modify there, when all the structure is flattened into a single layer?
Possibly disabled now as they announced VBScript would be disabled in 2019.
I believe that the need for JavaScript glue code has significantly hindered the appeal and interest in WASM, as well as the resulting ecosystem growth.
The DOM is not a static interface; it changes both across browsers based on implementation status and also based on features enabled on a per-page-load basis.
The multi browser ecosystem also mainly works because of polyfills.
It's not clear how to polyfill random methods on a WIT interface or how to detect at runtime which methods exist.
OTOH the JS bridge layer we use today means you can load JS side polyfills and get wasm that's portable across browsers with no modifications. There's more to the ecosystem than just performance.
WRT WebAssembly Components though, I do wish they'd gone with a different name, as its definition becomes cloudy when Web Components exist, which have a very different purpose. Group naming for open source is, unfortunately, very hard. Everyone has different usages of words and understandings of the wider terms being used, so this kind of overlap happens often.
I'd be curious if this will get better with LLM overseers of specs, who have a wider view of the overall ecosystem.
Not that I necessarily think it's unwarranted. While I appreciate the simplicity of the current approach to interop because it gives you free rein and is easy to grasp, I think anyone who has spent some time rawdogging JS-WebAssembly integration has considered inventing their own WASM IDL analog. If that can be specified as part of the standard, it can also be made quicker.
I would love something like this for native applications; I'm so tired of having to wear C's skin every time I want to bind together code written in different languages.
From the code sample, it looks like this proposal also lets you load WASM code synchronously. If so, that would address one issue I've run into when trying to replace JS code with WASM: the ability to load and run code synchronously, during page load. Currently WASM code can only be loaded async.
[1] https://exercism.org/profiles/mikestaas/solutions
[2] https://github.com/mikestaas/wasmfizzbuzz/blob/main/fizzbuzz...
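Strictly speaking, the JS API does have a synchronous path; it's fetching the bytes that forces async, and some browsers restrict synchronous compilation of large modules on the main thread. A sketch that actually runs, since these 8 bytes are the smallest valid module (just the \0asm magic plus version):

  const bytes = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);
  const mod = new WebAssembly.Module(bytes);      // compiles synchronously
  const inst = new WebAssembly.Instance(mod, {}); // instantiates synchronously
  // the bytes must already be in hand (inlined, or from a sync source);
  // anything coming over the network still pushes you to the async API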
Which is to say, dancing around the core issue, which is direct DOM access as well as anything else JS has privileged access to.
If Wasm modules can be loaded like this:
<wasm src="/module.wasm" start="main" data="..." />
...and if Wasm could access the DOM like this:
import dom, json, urllib.parse
params = urllib.parse.urlencode({"category": "news", "limit": 10})
data = json.loads(await (await fetch(f"https://api.example.com/data?{params}")).string())
container = dom.document.getElementById("app")
container.innerHTML = "".join(f"..." for i in data["items"])
...then developers would have jumped in right away.
> "We've developed Foo 2.0. It does not have feature parity with Foo 1.0, but it's more architecturally elegant in ways that are meaningless to the public. It took ten thousand man hours and was done instead of much needed upgrades to Foo 1.0."
*one year later*
> "Why is no one adopting Foo 2.0? Yes, it doesn't have feature parity with Foo 1.0, but it's frankly irresponsible of the public not to understand that 2.0 is a higher number than 1.0."
*ten years later*
> "Remember Foo? What a mess. Thank god Bar came along."
*eleven years later*
> "Announcing Bar 2.0... "
> There are multiple reasons for this, but the core issue is that WebAssembly is a second-class language on the web
It would be nice if WebAssembly would really succeed, but I have to be honest: I gave up thinking that it ever will. Too many things are unsolved here. HTML, CSS and JavaScript were a success story. WebAssembly is not; it is a niche thing and getting out of that niche is now super-hard.
Since WebAssembly instructions are much easier to reason about, you could probably auto-optimize away a lot of the obfuscation, like "this is a silly way to do X, so we can just do X directly".
Hear me out: Web APIs need to devolve into... APIs. DOM needs to devolve into a UI API. We have PWA's, File System APIs, USB APIs, peripheral device APIs, all the things a native webui client would be doing.
Yes, it is "back to square one", we already have websites masquerading as native apps via Electron and Tauri.
On the other end you have a handful of tech companies dictating how our computing experience should be because they control browsers.
WASM should be the bytecode format for executing untrusted code from the network and running a UI-capable application with controlled access to the system, that runs in a secure sandbox. This is java applets but better.
You have the same issue on mobile where regular websites create apps, so they can be persistent and have access to things they shouldn't, and be all naughty. This happens because somehow we treat "native" differently than "web". It should all be restricted like web apps are: sandboxed tightly, but given access to resources like any Electron app would be (but not your entire file system, or entire anything, ever!).
There shouldn't be any "installing" anything; perhaps bookmarking an app instead. I'm not saying let's do away with proper native apps: non-GUI apps still have a place, as do system services and extensions, which are a whole other class of system applications. But your banking app isn't one of those, and neither is a social media app, or a photo editor, a game, Uber, etc. All of these can run in WASM, and WASM in turn gets native access to APIs similar to, but not exactly, web APIs. I learned in that other comment thread that replicating the web DOM APIs for WASM is a foolish effort, since they are all built with JS in mind. However, imagine an HTML5-compatible DOM layer that is distinct from DOM-manipulation layers and has a low-level styling layer (not a beast like CSS, but something CSS can be compiled into, or that WASM styling code would natively compile into).
You will still need something to run the WASM like browsers do today, but here is the biggest value of my proposal: unlike browsers, this would be heavily standardized. How your WASM renders and manipulates the DOM would be extremely consistent across WASM browsers, mainly because it would be so low-level there won't be any opinionated, subjective interpretations between host apps. Unlike JS, there won't be any script runtime; unlike CSS, there is no styling engine.

The responsibility of a beast like V8 is divided: DOM, styling, security and API interactions are strongly defined in bytecode/ABI by the standard for the WASM host/browser; the actual UI of the host (tabs, themes, extensions, bookmarks, history, etc.) would not differ between WASM hosts/browsers; and of course the app's logic would be defined in whatever language, compiled into WASM bytecode compatible with the aforementioned standard. This should result in consistent UI and fully networked apps with controlled resource access that run ephemerally (caching as desired) and store persistent data as needed (no per-app software updates).
It is a lot of effort, but consider the state of computing: between mobile apps, things like Flatpak, Electron, Tauri, "vendoring", PWA apps, bloated chat apps like Slack, Teams, Element, Discord, the web framework mess, etc., is this the chaos we want to leave the next generation?
It might take a long time, but isn't it good to "build trees under whose shade you'll never sit"?
(though i do like the open code nature of the internet even if a lot of the javascript source code is unreadable and/or obfuscated)
There is a massive, pathetic drop in quality from the system designs of the past compared to the garbage we are seeing nowadays
You can tell they are fully embracing slop and stupid design
They are trading efficiency and elegance for bloat, and it is absolute trash
They have stopped engineering; they are abstraction junkies
OGs don't want to be associated with any of that trash, and it shows
And to be clear, style isn't the only problem. This comment can be summarized as "WebAssembly can now interact with the DOM directly instead of through JavaScript, making it the better choice for more types of problems". One sentence instead of a paragraph of cliches ("...change how people think about this...chicken-and-egg loop..."), uncanny phrases ("...the hot-path optimization niche"), and inaccurate claims ("...the only viable use cases were compute-heavy workloads like codecs and crypto").
(For anyone who doesn't believe me, check the user's comment history)
Could be for fun. I remember fun.
no, it didn't mean that, because the overhead is not a deal breaker:
1) you don't have to do the glue code (libs can do it for you)
2) there's overhead due to glue, but the overhead is so small that WASM web frameworks easily can compete with fast JS frameworks in DOM heavy scenarios.
Source: analysis by the creator of Leptos (a web framework based on WASM): https://www.youtube.com/watch?v=4KtotxNAwME
Tbf, Emscripten solved this problem long ago - I don't quite understand what the problem is for other language ecosystems.
The JS shim is still there, but you don't need to deal with it, you just include a C header and "link with a library".
Some of the Emscripten-specific C APIs are also much saner than their web counterparts, which is an important aspect that would be lost with an automatic binding approach. And EM_JS (e.g. directly embedding JS code into C/C++ files) is just pure bliss, because it allows to easily write 'non-standard' binding layers that go beyond a simple 1:1 mapping.
Those features won't go away of course, I just feel like the work could be spent on solutions that provide more 'bang for the buck' (yeah, I've never been a fan of the component model to begin with).
I tried using it for crypto, but WASM does not have instructions for crypto, so it basically falls back to being non-hw-accelerated. I tried to find out why, and the explanation seems to be that it's not needed because JS has a `crypto` API which uses hw intrinsics.
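For context, the JS-side API in question (real Web Crypto; note that it is async, which is itself awkward to drive from synchronous WASM code, and whether the engine uses hardware SHA instructions underneath is an implementation detail):

  const data = new TextEncoder().encode("hello");
  const digest = await crypto.subtle.digest("SHA-256", data); // Promise<ArrayBuffer>
  console.log(new Uint8Array(digest)); // 32-byte SHA-256 hash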
And games, which the web is now a viable platform for a huge range of them, albeit not the top of the range, AAA and all that (yet?). Also some new graphical editors taking advantage of it, probably Figma being the most famous example so far.
https://news.ycombinator.com/newsguidelines.html#generated
>Don't post generated comments or AI-edited comments. HN is for conversation between humans.
People have the impression that WebAssembly has failed. After so many years, I sort of agree with that notion. WebAssembly is soon 10 years old, by the way.
I've created a proposal to add a fine-grained JIT interface: https://github.com/webassembly/jit-interface
It allows generating new code one function at a time, and it provides a robust way to control what the new code can access within the generating module.
If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.
I think web apps are dead anyway and the browser is heading towards being a legacy software. The future is small ephemeral UIs generated on the fly by LLMs that have access to datasources. WASM is too late.
JavaScript is the right abstraction for running untrusted apps in a browser.
WebAssembly is the wrong abstraction for running untrusted apps in a browser.
Browser engines evolve independently of one another, and the same web app must be able to run in many versions of the same browser and also in different browsers. Dynamic typing is ideal for this. JavaScript has dynamic typing.
Browser engines deal in objects. Each part of the web page is an object. JavaScript is object oriented.
WebAssembly is statically typed and its most fundamental abstraction is linear memory. It's a poor fit for the web.
Sure, modern WebAssembly has GC'd objects, but that breaks WebAssembly's main feature: the ability to have native compilers target it.
I think WebAssembly is doomed to be a second-class citizen on the web indefinitely.
> WebAssembly is the wrong abstraction for running untrusted apps in a browser
WebAssembly is a better fit for a platform running untrusted apps than JS. WebAssembly has a sandbox and was designed for untrusted code. It's almost impossible to statically reason about JS code, and so browsers need a ton of error prone dynamic security infrastructure to protect themselves from guest JS code.
> Browser engines evolve independently of one another, and the same web app must be able to run in many versions of the same browser and also in different browsers. Dynamic typing is ideal for this. JavaScript has dynamic typing.
There are dynamic languages, like JS/Python, that can compile to wasm. Also, I don't see how dynamic typing is required for API evolution and compat. Plenty of platforms have statically typed languages and evolve their APIs in backwards-compatible ways.
> Browser engines deal in objects. Each part of the web page is an object. JavaScript is object oriented
The first major language for WebAssembly was C++, which is object oriented.
To be fair, there are a lot of challenges to making WebAssembly first class on the Web. I just don't think these issues get to the heart of the problem.
Where I think the argument goes wrong is in treating "most websites don't use WASM" as evidence that WASM is a bad fit for the web. Most websites also don't use WebGL, WebAudio, or SharedArrayBuffer. The web isn't one thing. There's a huge population of sites that are essentially documents with some interactivity, and JS is obviously correct for those. Then there's a smaller but economically significant set of applications (Figma, Google Earth, Photoshop, game engines) where WASM is already the only viable path because JS can't get close on compute performance.
The component model proposal isn't trying to replace JS for the document-web. It's trying to lower the cost of the glue layer for that second category of application, where today you end up maintaining a parallel JS shim that does nothing but shuttle data across the boundary. Whether the component model is the right design for that is a fair question. But "JS is the right abstraction" and "WASM is the wrong abstraction" aren't really in tension, because they're serving different parts of the same platform.
The analogy I'd reach for is GPU compute. Nobody argues that shaders should replace CPU code for most application logic, but that doesn't make the GPU a "dud" or a second-class citizen. It means the platform has two execution models optimized for different workloads, and the interesting engineering problem is making the boundary between them less painful.
Even more to the point, for the past couple of decades the browser's programming model has just been "write JavaScript". Of course it's going to fit JavaScript better than something else right now! That's an emergent property though, not something inherent about the web in the abstract.
There's an argument to be made that we shouldn't bother trying to change this, but it's not the same as arguing that the web can't possibly evolve to support other things as well. In other words, the current model for web programming we have is a local optimum, but statements like the one at the root of this comment chain talk like it's a global one, and I don't think that's self-evident. Without addressing whether they're opposed to the concept or the amount of work it would take, it's hard to have a meaningful discussion.
So does JavaScript.
> It's almost impossible to statically reason about JS code, and so browsers need a ton of error prone dynamic security infrastructure to protect themselves from guest JS code.
They have that infrastructure because JS has access to the browser's API.
If you tried to redesign all of the web APIs in a way that exposes them to WebAssembly, you'd have an even harder time than exposing those APIs to JS, because:
- You'd still have all of the security troubles. The security troubles come from having to expose API that can be called adversarially and can pass you adversarial data.
- You'd also have the impedance mismatch that the browser is reasoning in terms of objects in a DOM, and WebAssembly is a bunch of integers.
> There are dynamic languages, like JS/Python that can compile to wasm.
If you compile them to linear memory wasm instead of just running directly in JS then you lose the ability to do coordinated garbage collection with the DOM.
If you compile them to GC wasm instead of running directly in JS then you're just adding unnecessary overheads for no upside.
> Also I don't see how dynamic typing is required to have API evolution and compt.
Because for example if a browser changes the type of something that happens to be unused, or removes something that happens to be unused, it only breaks actual users at time of use, not potential users at time of load.
> Plenty of platforms have static typed languages and evolve their API's in backwards compatible ways.
We're talking about the browser, which is a particular platform. Not all platforms are the same.
The largest comparable platform is OSes based on the C ABI, which rely on a "kind" of dynamic typing (stringly typed, basically: function names in a global namespace plus argument-passing ABIs that allow you to mismatch function signatures and get away with it).
> The first major language for WebAssembly was C++, which is object oriented.
But the object orientation is lost once you compile to wasm. Wasm's object model when you compile C++ to it is an array of bytes.
> To be fair, there are a lot of challenges to making WebAssembly first class on the Web. I just don't think these issues get to the heart of the problem.
Then what's your excuse for why wasm, despite years of investment, is a dud on the web?
Language portability is a big feature. There's a lot of code that's not JS out there. And JS isn't a great compilation target for a lot of languages. Google switched to compiling Java to Wasm-GC instead of JS and got a lot of memory/speed improvements.
> Because for example if a browser changes the type of something that happens to be unused, or removes something that happens to be unused, it only breaks actual users at time of use, not potential users at time of load.
> The largest comparable platform is OSes based on the C ABI, which rely on a "kind" of dynamic typing (stringly typed, basically: function names in a global namespace plus argument-passing ABIs that allow you to mismatch function signatures and get away with it).
I don't think any Web API exposed directly to Wasm would have a single fixed ABI for that reason. We'd need to have the user request a type signature (through the import), and have the browser maximally try and satisfy the import using coercions that respect API evolution and compat. This is what Web IDL/JS does, and I don't see why we couldn't have that in Wasm too.
> Then what's your excuse for why wasm, despite years of investment, is a dud on the web?
Wasm is not a dud on the web. Almost 6% of page loads use wasm [1]. It's used in a bunch of major applications and libraries.
[1] https://chromestatus.com/metrics/feature/timeline/popularity...
I still think we can do better though. Wasm is way too complicated to use today. So users of wasm today are experts who either (a) really need the performance or (b) really need cross platform code. So much that they're willing to put up with the rough edges.
And so far, most investment has been to improve the performance or bootstrap new languages. Which is great, but if the devex isn't improved, there won't be mass adoption.
It's also worth noting that Wasm wasn't born into a vacuum like JS was (and Java for that matter), so it is competing[1] in a crowded space. Wasm is making inroads into languages that already have well-developed toolchains and ecosystems like Java, Kotlin, Rust, Scala, and Go. I think the Wasm network effect is happening, it is just very slow because it's primarily been a deployment platform and not a development platform.
It's also worth noting that Wasm advancement is pretty decentralized and there are a lot of competing interests, particularly outside the web. Basically every other language had at least one massive investment in both language development and tooling from the get-go: Java=Sun, C#=Microsoft, Go=Google, JavaScript=browsers, Scala=foundation, etc.
[1] "competing" only in the sense of adding value over the mainstream or mainline implementations of these languages.
> Wasm is way too complicated to use today. So users of wasm today are experts who either (a) really need the performance or (b) really need cross platform code. So much that they're willing to put up with the rough edges.
I believe we can do better; we've been counting on languages that come to Wasm as a secondary deployment strategy and have their primary devex focused on another platform where they can debug better and offer better tooling.
It's a big feature of JS. JS's dynamism makes it super easy to target for basically any language.
> Google switched to compiling Java to Wasm-GC instead of JS and got a lot of memory/speed improvements.
That's cool. But that's one giant player getting success out of a project that likely required massive investment and codesign with their browser team.
Think about how sad it is that these are the kinds of successes you have to cite for a technology that has had as much investment as wasm!
> Almost 6% of page loads use wasm
You can disable wasm and successfully load more than 94% of websites.
A lot of that 6% is malicious ads running bitcoin mining.
> Wasm is way too complicated to use today.
I'm articulating why it's complicated. I think that, for those same reasons, it will continue to be complicated.
It's not really a dud on the web. It sees a ton of use in bringing heavier experiences to the browser (e.g. Figma, the Unity player, and so on).
Where it is currently fairly painful is in writing traditional websites, given all the glue code required to interact with the DOM - exactly what these folks are trying to solve.
> Where it is currently fairly painful is in writing traditional websites, given all the glue code required to interact with the DOM - exactly what these folks are trying to solve.
I don't think they will succeed at solving the pain, for the reasons I have enumerated in this thread.
Most of the web also doesn't use the Video element, but it isn't 'a dud' either.
Video and wasm are critical for a small subset of the web. That subset includes YouTube and Netflix for Video, and Figma and Photoshop and Unity games for wasm.
I'm trying to explain to you why attempts to make wasm mainstream have failed so far, and are likely to continue to fail.
I'm not expressing an "opinion"; I'm giving you the inside baseball as a browser engineer.
> Getting rid of the glue layer
I'm trying to elucidate why that glue layer is inherent, and why JS is the language that has ended up dominating web development, despite the fact that lots of "obviously better" languages have gone head to head with it (Java, Dart sort of, and now wasm).
Just like Java is a fantastic language anywhere but the web, wasm seems to be a fantastic sandboxing platform in lots of places other than the web. I'm not trying to troll you folks; I'm just sharing the insight of why wasm hasn't worked out so far in browsers and why that's likely to continue
JS was dominating web development long before WASM gained steam. This isn't the same situation as "JS beating Java/ActiveX for control of the web" (if I follow the thrust of your argument correctly).
WASM has had less than a decade of widespread browser support, terrible no-good DevEx for basically the whole time, and it's still steadily making its way into more and more of the web.
> terrible no-good DevEx for basically the whole time
I'm telling you why.
> still steadily making its way into more and more of the web.
It is, but you can still browse the web without it just fine, despite so much investment and (judging by how HN reacts to it) overwhelming enthusiasm from devs.
Depending on how you count, it took JS about 20 years and billions of dollars plowed into it to do the same, so why expect any less from wasm?
I don't understand this objection. If you compile code that doesn't call a function, and then put that artifact on a server and send it to a browser, how is it broken when that function is removed?
If it gets stuck as a second-class citizen like you're predicting, it sounds a lot more like it's due to inflexibility to consider alternatives than anything objectively better about JavaScript.
(I'm not a fan of the WASM component model either, but your generalized points are mostly just wrong)
My points are validated by the reality that most of the web is JavaScript, to the point that you'd have a hard time observing degradation of experience if you disabled the wasm engine.
- https://floooh.github.io/tiny8bit/
- https://floooh.github.io/sokol-webgpu/
- https://floooh.github.io/visualz80remix/
- https://floooh.github.io/doom-sokol/
All those projects also compile into native Windows/Linux/macOS/Android/iOS executables without any code changes, but compiling to WASM and running in web browsers is the most painless way to get this stuff to users.
Dealing with minor differences of web APIs in different browsers is a rare thing and can be dealt with in WASM just the same as in JS: a simple if-else will do the job, no dynamic type system needed (apart from that, WASM doesn't have a "type system" in the first place, just like CPU instruction sets don't have one, unless you count integer and float types as a "type system"). Alternatively it's trivial to call out into JavaScript. In Emscripten you can even mix C/C++ and JavaScript in the same source file.
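The "simple if-else" in practice, on the JS side of the binding (a sketch; initWebGPU and initWebGL2 are hypothetical):

  // pick a backend once at startup; the WASM side just calls an imported
  // draw function and never needs to know which API is underneath
  const useWebGPU = "gpu" in navigator;   // WebGPU present in this browser?
  const backend = useWebGPU ? initWebGPU() : initWebGL2();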
E.g. for me, WASM is already a "1st-class citizen of the web", no WASM component model needed.
This is such a bizarre take that I don't know whether it's just a trolling attempt or serious...
Why should web-devs switch to WASM unless they have a specific problem to solve where WASM is the better alternative to JS? The two technologies live side by side, each with specific advantages and disadvantages, they are not competing with each other.
I'm being serious.
> Why should web-devs switch to WASM unless they have a specific problem to solve where WASM is the better alternative to JS?
They mostly shouldn't. There are very few problems where wasm is better.
If you want to understand why wasm is not better, see my other posts in this thread.
Better late than never I guess.
[1] https://github.com/WebAssembly/interface-types/commit/f8ba0d...
[2] https://wingolog.org/archives/2023/10/19/requiem-for-a-strin...
[3] https://queue.acm.org/detail.cfm?id=3746174
I personally have always cared about DOM access, but the Wasm CG has been really busy with higher priority things. Writing this post was sort of a way to say that at least some people haven't forgotten about this, and still plan on working on this.
I mean, surely it does not come as a surprise to anyone that either of these is a huge deal, let alone both. It seems clear that non-Web runtimes have had a huge influence on the development priorities of WebAssembly - not inherently a bad thing, but in this case it came at the expense of the actual Web.
> WebIDL is the union of JS and Web API's, and while expressive, has many concepts that conflict with those goals.
Yes, another part of the problem, unrelated to the WIT story, seems to have been the abandonment of the idea that <script> could be something other than JavaScript and that the APIs should try to accommodate that, which had endured for a good while based on pure idealism. That sure would have come in useful here when other languages became relevant again.
(Now with the amputation of XSLT as the final straw, it is truly difficult to feel any sort of idealism from the browser side, even if in reality some of the developers likely retain it. Thank you for caring and persisting in this instance.)
Given that, do you really think goal #1 non-Web APIs really added much additional delay on top of the delay necessitated by goal #2 anyway?
This problem hasn't been solved outside the web either (at least not to the satisfaction of Rust fanboys who expect that they can tunnel their high level stdlib types directly to other languages - while conveniently ignoring that other languages have completely different semantics and very little overlap with the Rust stdlib).
At the core, the component model is basically an attempt to tunnel high level types to other languages, but with a strictly Rust-centric view of the world (the selection of 'primitive types' is essentially a random collection of Rust stdlib types).
.NET's Common Type System was supposed to be the neutral ground for dozens of languages. In practice, it had strong C# biases: try using unsigned integers from VB, or F#'s discriminated unions from C#. The CLR "primitive types" were just as much a random collection as the WIT primitives are being described as here.
The practical lesson from two decades of cross-runtime integration: stop trying to tunnel high-level types. The approaches that survive in production define a minimal shared surface (essentially: scalars, byte buffers, and handles) and let each side do its own marshaling. It's less elegant but it doesn't break every time one side's stdlib evolves.
WASM's linear memory model actually gets this right at the low level — the problem is everyone wants the convenience layer on top, and that's where the type-system politics start.
Cool parts of the webassembly technology aside - this should be no surprise to anybody. News at 11.
The non-sequitur in the title of this post should be enough to give everybody pause.
Waiting for someone to chime in and tell me that the "web" in "webassembly" wasn't meant to refer to the "world wide web". Go on. I dare you!
Like, the "assembly" part means low-level and meant as a compilation target, not actual CPU instructions.
So WebAssembly is an assembly language for the web, like WebGL is OpenGL for the web and WebGPU is GPU APIs for the web. And behold, none of those can access DOM APIs.
But it isn't; at most WAT is (the WASM text format). WASM itself is a bytecode format. Nobody calls CPU machine code 'assembly' (nitpicking, I know, but the 'web' part of the name makes a lot more sense than the 'assembly' part).
WASM was designed as a successor to asm.js, and asm.js was purely a web thing. While non-web platforms were considered as a potential use case (in the sense of "using WASM outside the web should be possible"), it wasn't clear at the time what the successful usages outside browsers would even look like.
At least that's been my experience whenever I find it in production.
Then some folks rediscovered UNCOL from 1958, all the systems influenced by it, and started to sell the dream of the bytecode that was going to save the world.
I'm building a new Wasm GC-based language and I'm trying to make as small binaries as possible to target use cases like a module-per-UI-component, and strings are the biggest hindrance to that. Both for the code size and the slow JS interop.
Here's a quote from the "requiem for stringref" article mentioned above:
> 1. WebAssembly is an instruction set, like AArch64 or x86. Strings are too high-level, and should be built on top, for example with (array i8).
> 2. The requirement to support fast WTF-16 code unit access will mean that we are effectively standardizing JavaScript strings.
Apple perceives web-based applications as chipping away at their app store (which makes them money), and so they cripple their Safari browser and then force all mobile browsers on iOS to use their browser engine, no exceptions, so that developers are forced to make a native app where Apple can then charge the developers (and thus the users) for a cut of any sales made through the app.
It's one reason the DOJ started suing Apple, but I fear that may have been sidelined due to politics.
https://www.justice.gov/archives/opa/media/1344546/dl?inline
These are things that literally everyone but Google thinks are terrible ideas.
Why don't you flip the conspiracy around and ask yourself why Google, the world's largest advertising agency and data hoover, wants browsers, a category dominated by Google, to have unmediated access to ever more user, system, and local network data?
I could not disagree more
https://www.justice.gov/archives/opa/media/1344546/dl?inline
I do, however, notice that I have never used a program built with either a web stack or a GC-language stack that wasn't getting slower over time, didn't cause strange issues, and didn't have a crippled UI to match whatever the stack's limitations were at the time. IMO the right direction is developing (or adopting) modern native languages. If the "price" for that is some web standard being stuck, I personally am totally okay with that.
I am sick of this idea that the web browser is almost an OS. It was supposed to serve web pages.