Personally, I write a lot of Vue, so using a "first party" environment has a lot of advantages for me. Perhaps if you are a React developer, the swap might be even more straightforward.
I also think it's important to take into consideration the other two packages mentioned in this post (oxlint & oxfmt) because they are first class citizens in Vite (and soon to be Vite+). Bun might be a _technically_ faster dev server, but if your other tools are still slow, that might be a moot point.
TypeScript also "just works" in Vite. I have a project at work that uses `.ts` files without even a `tsconfig` file in the project.
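For anyone curious what that looks like, here's a minimal sketch (file names and markup are illustrative, not from that project). Vite transpiles `.ts` entry files on the fly and doesn't type-check them, which is why the dev server is fine without a `tsconfig`:

```ts
// src/main.ts: referenced straight from index.html with
// <script type="module" src="/src/main.ts"></script>.
// Vite transpiles the TypeScript on request during dev; type checking is
// left to your editor or CI, so no tsconfig.json is required to run this.

interface Greeting {
  target: string;
}

function render(greeting: Greeting): void {
  document.querySelector<HTMLDivElement>("#app")!.innerHTML =
    `<h1>Hello, ${greeting.target}</h1>`;
}

render({ target: "Vite" });
```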
I, too, like to fiddle with optimizations and tool configuration puzzles, but I need to get things done and get them done now. It doesn't seem fast; it seems cumbersome and inconsistent.
I think the point of this project is to provide an opinionated set of templates aimed at shipping instead of tinkering, right? "Don't tinker with the backend frameworks, just use this and focus on building the business logic."
Even when measurements are taken into account, radical performance improvements are more typically the result of the code's organization and the techniques employed than of the language it's written in. But, of course, that cannot be validated without evidence from comparative measurements.
The tragic part of all this is that everybody already knows this, but most front end developers do not measure things and may become hostile when measurements do occur that contradict their favorite techniques.
Unless of course you are not showing them improvements and are instead just shitting on their work. Yes, people do get hostile to that approach.
It's almost like there are genuine UX improvements being done.
People want faster software... until they are confronted by challenging decisions. JavaScript can be very fast. In the browser, my large personal SPA reports a page load of about 0.06 seconds, and that includes state restoration. That is measured with `performance.getEntries()[0].duration`.
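For reference, that measurement looks roughly like this in the browser console (the type cast is just for clarity):

```ts
// The measurement described above, runnable in the browser console.
// The first performance entry is the PerformanceNavigationTiming record for
// the page itself; its `duration` spans navigation start to loadEventEnd,
// reported in milliseconds.
const nav = performance.getEntries()[0] as PerformanceNavigationTiming;
console.log(`page load: ${(nav.duration / 1000).toFixed(3)} s`);
```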
When conflicts arise, people more frequently become emotional and complain about the situation than make any decision towards resolution one way or the other. That is a psychological problem called cognitive conservatism[1]. About half the time that emotional output is some form of deflection, such as hostility. Cognitive conservatism is only allowed to exist when there is insufficient pressure on the thought leaders to impose a resolution.
It's okay to say you don't really want to be faster.
[1] https://en.wikipedia.org/wiki/Conservatism_(belief_revision)
See also cognitive complexity: https://en.wikipedia.org/wiki/Cognitive_complexity#In_psycho...
I've used this setup for my last few projects and it's so painless. With recent versions of Node.js, which can strip TypeScript types, I don't even need a build step for the server code.
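For anyone who hasn't tried it: Node 22.6 added `--experimental-strip-types`, and newer releases strip types by default, so a TypeScript server entry can run directly with no compile step. A minimal sketch (file name and port are just illustrative):

```ts
// server.ts: run with `node server.ts` on recent Node.js
// (older 22.x needs `node --experimental-strip-types server.ts`).
// Types are simply erased at load time, so only erasable TypeScript syntax
// works here (enums, parameter properties, etc. need an extra transform flag).
import { createServer } from "node:http";

interface Health {
  status: "ok";
  uptimeSeconds: number;
}

createServer((_req, res) => {
  const body: Health = { status: "ok", uptimeSeconds: process.uptime() };
  res.setHeader("content-type", "application/json");
  res.end(JSON.stringify(body));
}).listen(3000);
```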
Edit: oops, I didn't see nkzw-tech/fate-template, which has something like this, but running client and server separately instead
(I opened an issue against typescript-go to flag this https://github.com/microsoft/typescript-go/issues/2825 )
What was surprising is that it wasn't just about catching errors after generation. The model seemed to anticipate the constraints and generated cleaner code from the start. My working theory is that strict, typed configs give the model a cleaner context to reason from, almost like telling it what good code looks like before it starts.
The piece I still haven't solved: even with perfect guardrails per file, models frequently lose track of cross-file invariants. You can have every individual component lint-clean and still end up with a codebase that silently breaks when components interact. That seems like the next layer of the problem.
i get why the rust/go tools exist - the perf gains are measurable. but the cognitive overhead is real. new engineer joins, they now need 3 different mental models just to make a PR. not sure AI helps here either honestly, it just makes it easier to copy-paste configs you don't fully understand
In my experience, the bottleneck has always been backend dev and testing.
I was hoping "tooling" meant faster testing, not yet another layer of frontend dev. Frontend dev has been pretty fast, even when done completely by hand, for the last decade or so. I have livecoded, and seen others livecode, on 15-minute calls with stakeholders or QA to mock up some UI or debug, and I've seen people deliver the final results from that meeting just a couple of hours later. I say this as in: that's what's going to prod, minus some very specific edge-case bugs that might even get argued away and never fixed.
Not trying to be defensive of pure human coding skills, but sometimes I wonder if we've rolled back expectations in the past few years. All this recent stuff seems even more complicated and more error prone, and frontend is already those things.
The upshot of all these projects to make JS tools faster is a fractured ecosystem. Who, if given the choice, would honestly want to try to maintain JavaScript tools written in a mixture of Rust and Go? Already we've seemingly committed to having a big schism in the middle. And the new tools don't replace the old ones, so to own your tools you'll need to make Rust, Go, and JS all work together using a mix of clean modern technology and shims into horrible legacy technology. We have to maintain everything, old and new, because it's all still critical, and engineers have to learn everything, old and new, because it's all still critical.
All I really see is an explosion of complexity.
Each of these tools provides real value.
* Bundlers drastically improve runtime performance, but it's tricky to figure out what to bundle where and how.
* Linting tools and type-safety checkers detect bugs before they happen, but they can be arbitrarily complex, and benefit from type annotations. (TypeScript won the type-annotation war in the marketplace against other competing type annotations, including Meta's Flow and Google's Closure Compiler.)
* Code formatters automatically ensure consistent formatting.
* Package installers are really important and a hugely complex problem in a performance-sensitive and security-sensitive area. (Managing dependency conflicts/diamonds, caching, platform-specific builds…)
As long as developers benefit from using bundlers, linters, type checkers, code formatters, and package installers, and as long as it's possible to make these tools faster and/or better, someone's going to try.
And here you are, incredulous that anyone thinks this is OK…? Because we should just … not use these tools? Not make them faster? Not improve their DX? Standardize on one and then staunchly refuse to improve it…?
I want the JS toolchain to stay written in JS, but I want to unify the design and architecture of all those tools you mentioned so that they can all use a common syntax tree format and share data, e.g. between the linter and the formatter or the bundler and the type checker.
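Purely as an illustration of that wish (every name below is invented; no such shared format exists today), the idea is one parse producing one tree that every tool reads and annotates:

```ts
// Hypothetical data contract for a shared syntax tree; all names invented.
// The parser would run once per file, and the linter, formatter, bundler,
// and type checker would each be a pass over the same structure instead of
// re-parsing with their own incompatible ASTs.

interface Span {
  start: number; // byte offset into the original source
  end: number;
}

interface SharedNode {
  kind: string; // e.g. "FunctionDeclaration"
  span: Span;
  children: SharedNode[];
}

interface SharedTree {
  source: string;
  root: SharedNode;
  comments: Array<{ span: Span; text: string }>; // preserved for the formatter
}

// Every tool becomes a pass over the shared tree.
interface ToolPass<Result> {
  name: "lint" | "format" | "bundle" | "typecheck";
  run(tree: SharedTree): Result;
}
```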
* Rolldown is compatible with Rollup's API and can use most Rollup plugins
* Oxlint supports JS plugins and is ESLint compatible (can run ESLint rules easily)
* Oxfmt plans to support Prettier plugins, in turn using the power of the ecosystem
* and so on...
So you get better performance and can still work with your favorite plugins and extend tools "as before".
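To make the plugin-compatibility point concrete, a Rolldown config is meant to read like a Rollup config and accept existing Rollup plugins. A sketch, assuming the Rollup-style config shape and the stock `@rollup/plugin-json` package (check the Rolldown docs for the exact options your version supports):

```ts
// rolldown.config.ts: reusing an existing Rollup plugin unchanged.
import { defineConfig } from "rolldown";
import json from "@rollup/plugin-json"; // a regular Rollup plugin

export default defineConfig({
  input: "src/main.ts",
  // Rollup plugins slot in because the plugin hook API is compatible.
  plugins: [json()],
  output: {
    dir: "dist",
    format: "esm",
  },
});
```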
Regarding the "mix of technology" or tooling fatigue: I get that. We have to install a lot of tools, even for a simple application. This is where Vite+[0] will shine, bringing the modern and powerful tools together, making them even easier to adopt and reducing the divide in the ecosystem.
[0] https://viteplus.dev/
Supports… some ESLint rules. It is not "easy" to add support to Oxlint for the rules it does not cover.
The projects at my work that "switched" to it now use both ESLint and Oxlint. It sucks, but at least a subset of errors are caught much faster.
I completely agree, but maintenance is a maintainer problem, not a problem for the consumer or user of the package, at least according to the average user of open source nowadays. One of two things will come out of this: either the wheels start falling off once the community can no longer maintain this fractured tooling, as you point out, or companies are going to pick up the slack and start stewarding it (likely looking for opportunities to capture tooling and profit along the way).
Neither outcome looks particularly appealing.
Based on current trends, I don't think people care about knowing how all the parts work (even before these powerful LLMs came along) as long as the job gets done and things get shipped and it mostly works.
I thought this was the point of all development in the JavaScript/web ecosystem?