I feel like part of the blame for the situation is that JavaScript has always lacked a standard library which contains the "atomic architecture" style packages. (A standard library wouldn't solve everything, of course.)
Edit: Removed a reference to node and bun.
- range, clamp, inIvl, enumerate, topK
- groupBy (array to record), numeric / lexical array sorts
- seeded rng
- throttling
- attachDragListener (like d3's mousedown -> mousemove -> mouseup)
- Maps / Sets that accept non-primitive keys (ie custom hash)
So basically functions that every *dash variant includes.

Math.clamp is a big one (it’s a TC39 proposal). I’d also love to have the stats functions that Python has (geometric mean, median, etc.).
On the more ambitious end: CSV reading/writing and IPv4/IPv6 manipulation.
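Several of these are one-liners in plain JS. A rough sketch of what such a stdlib slice could look like (these are illustrations written for this comment, not any real library):

```javascript
// clamp: constrain a number to [lo, hi]
const clamp = (x, lo, hi) => Math.min(Math.max(x, lo), hi);

// groupBy: array -> record keyed by keyFn
// (note: Object.groupBy now exists natively in ES2024 runtimes)
function groupBy(arr, keyFn) {
  const out = {};
  for (const item of arr) (out[keyFn(item)] ??= []).push(item);
  return out;
}

// median: one of the Python-style stats functions mentioned above
function median(xs) {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

console.log(clamp(15, 0, 10)); // 10
console.log(median([3, 1, 2])); // 2
```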
https://developer.mozilla.org/en-US/docs/Web/API
The above seems fairly expansive, even if we remove all the experimental ones in the list.
The main cause of bloat is not polyfills or atomic packages. The cause of bloat is bloat!
I love this quote by Antoine de Saint-Exupéry (author of the Little Prince):
"Perfection is achieved, not when there is nothing left to add, but nothing to take away."
Most software is not written like that. It's not asking "how can we make this more elegant?" It's asking "what's the easiest way to add more stuff?"
The answer is `npm i more-stuff`.
> Every sentence must do one of two things—reveal character or advance the action.
Or Quintilian's praise of Demosthenes and Cicero: "To Demosthenes nothing can be added, but from Cicero nothing can be taken away."
Use the time of a total stranger in such a way that he or she will not feel the time was wasted.
Give the reader at least one character he or she can root for.
Every character should want something, even if it is only a glass of water.
Every sentence must do one of two things—reveal character or advance the action.
Start as close to the end as possible.
Be a sadist. No matter how sweet and innocent your leading characters, make awful things happen to them—in order that the reader may see what they are made of.
Write to please just one person. If you open a window and make love to the world, so to speak, your story will get pneumonia.
Give your readers as much information as possible as soon as possible. To heck with suspense. Readers should have such complete understanding of what is going on, where and why, that they could finish the story themselves, should cockroaches eat the last few pages.
The greatest American short story writer of my generation was Flannery O'Connor (1925-1964). She broke practically every one of my rules but the first. Great writers tend to do that.
This, too, is the problem with movies and TV shows today. They worry so much about offending anyone they lose the interest of everyone. When was the last time you laughed hard and out loud?
I don’t need to know the color of the walls if it does neither.
if you're developing some sort of dystopia where everyone is heavily medicated, better to show a character casually take the medication rather than describe it.
of course, that's not a rule set in stone. you can do whatever the fuck you want.
You mean the character of a place?
JavaScript seems to be unique in that you want your code to work in browsers of the past and future—so a lot of bloat could come from compatibility, as mentioned in the article—and it's a language for UIs, so a lot of bloat in apps and frameworks could come from support for accessibility, internationalization, mobile, etc.
Lots of developers don't even say they are JS devs but React devs or something. This is normal given that the bandwidth and power of targets are so large nowadays. Software is like a gas, it will fill all the space you can give it since there is no reason to optimize anything if it runs ok.
I've spent countless hours optimising javascript and css to work across devices that were slow and outdated but still relevant (IE7, 8 and 9 were rough years). Cleverness breeds in restrictive environments where you want to get the most out of them. Modern computers are so powerful that it's hard to hit the walls when doing normal work.
Every C++ app I install in linux requires 250 packages
Every python app I install and then pip install requirements uses 150 packages.
10GB of build artifacts for the debug target.
With that said, there are plenty of small game engines out there, but couple Rust's somewhat slow compile times with the ecosystem's preference for "many crates" over "one big crate", and yeah, even medium-scale game engines like Bevy take a bunch of time and space to compile. But it is a whole game engine after all, so maybe not representative of general development in the community.
There's no way the average C++ app uses 250 packages though. It's usually more like 5. C++ packaging is a huge pain so people tend to use them only when absolutely necessary, and you get huge libraries like Boost primarily because of the packaging difficulty.
I would say Python varies but 150 sounds high. Something more like 50-100 is typical in my experience.
For example we often see posts on HN about, "see, it's possible to write very fast software in language foo!" And most of the time yes, especially on modern hardware, most languages do allow you to write surprisingly fast software!
It's just that the people who actually want their software to run fast -- and who want it enough to prioritize it against other, competing values -- those people will generally reach for other languages.
With JavaScript, the primary "value" is convenience. The web as a platform is chosen because it is convenient, both for the developer and the user. So it stands to reason that the developer will also make other choices in the name of convenience.
Of course, there are weirdos like me who take pride in shipping JS games that are eight kilobytes :) But there are not very many people like that.
Not the language, but its users. Not to bash them, but most of them did not study IT at a university, did not learn about the KISS principle, etc.
They just followed some tutorials to hack together stuff, now automated via LLM's.
So in a way the cause is the language as it is so easy to use. And the ecosystem grew organically from users like this - and yes, the ecosystem is full of bloat.
(I think Claude nowadays is a bit smarter, but when building standalone html files without agents, I remember having to always tell ChatGPT to explicitly NOT pull in yet another library, but use plain vanilla JS for a standard task, which usually works better and cleaner with the same lines of code, or maybe 2 or 3 more, for most cases. The default was to use libraries for every new bit of functionality.)
It sure seems like it is because JS devs, by and large, suck at programming. C has a pretty sparse standard library, but you don't see C programmers creating shared libraries to determine if a number is odd, or to add whitespace to a string.
Believe me, if C had a way to seamlessly share libraries across architectures, OSes, and compiler versions, something similar would have happened.
Instead you get a situation where every reasonably big modern C project starts by implementing their own version of string libraries, dynamic arrays, maps (aka dictionaries), etc. Not much different really.
One to generate zip files, one for markdown parsing, one for connecting to postgres, etc... most of them have no sub-dependencies.
We always reach first for what the Node.js standard library has, and glue together our own small, specific pieces of code when needed.
The app is very stable and we have very few of the frustrations I used to have before. Note that we used to have way more, but bit by bit removed them.
Now I would whitelist anything from the deno std* lib; they did a great job with that. Even if you don't use Deno, with whatever your runtime provides plus deno std you never need more than a few packages to build anything.
JS is doing pretty good if you are mindful about it.
Ancient browser support is a thing, but ES5 has been supported everywhere for like 13 years now (as per https://caniuse.com/es5).
So unlikely to change unless everyone stops using their popular packages.
Every now and again people get worked up and try to bully them about it, which is unfortunate because they seem like generally good people, and their arguments in favor of their positions are pretty well documented.
Ironically, what often happens is that developers configure Babel to transpile their code to some ancient version, the output is bloated (and slower to execute, since passes like regenerator have a lot of overhead), and then the website doesn't even work on the putatively supported ancient browsers because of the use of recent CSS properties or JS features that can't be polyfilled.
I've even had a case at work where a polyfill caused the program to break. iirc it was a shitty polyfill of the exponentiation operator ** that didn't handle BigInt inputs.
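The failure mode is easy to reproduce: native `**` handles BigInt operands, but a polyfill that lowers `a ** b` to `Math.pow(a, b)` cannot, because `Math.pow` coerces its arguments to Number and BigInt-to-Number coercion throws.

```javascript
// Native exponentiation works fine with BigInt:
console.log(2n ** 10n); // 1024n

// The Math.pow-based lowering a naive polyfill would emit throws:
try {
  Math.pow(2n, 10n);
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```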
Also, there has been a huge amount of churn on the tooling side, and if you have a legacy app, you probably don't wanna touch whatever build program was cool that year. I've got a react app which is almost 10 years old, there has to be tons of stuff which is even older.
There is. Break compatibility for it, and whatever poor bastard that is still maintaining software that is targeting a PalmPilot is free to either pin to an older version of your library, or fork it. Yes, that's a lot of pain for him, but it makes life a little easier for everyone else.
In other words, if you're pulling in e.g. regenerator-runtime, you're already cutting out a substantial part of the users you're describing.
So that's my cutoff.
Android Studio has a nifty little tool that tells you what percentage of users are on what versions of Android. 99.2% of users are on Android 7 or later. I predict that next year, a similar percentage of users will be on Android 8 or later.
This isn’t the desire of people to build legacy support, it’s a broken, confusing and haphazard build system built on the corpses of other broken, confusing and haphazard build systems.
And weird browser support.
People use the oddest devices to do "on demand" jobs (receiving a tiny amount of money for a small amount of work). Although there aren't that many, I've seen user agents from game consoles, TVs, old Androids, iPod touch, and from Facebook and other "browser makers", with names such as Agency, Herring, Unique, ABB, HIbrowser, Vinebre, Config, etc. Some of the latter look to be Chrome or Safari skins, but there's no way to tell; I don't know what they are. And I must assume that quite a few devices cannot be upgraded. So I support old and weird browsers. The code contains one externally written module (stored in the repository), so it's only a matter of the correct transpiler settings.
All pre-signal Angular code must be compiled down to JS that replaces native async/await with Promise chains.
Why is that so? For a long time Angular's change detection worked by overriding native functions like setTimeout, addEventListener etc. to track these calls and react accordingly. `async` is a keyword, so it's not possible to override it like that.
Signals don't require such trickery and also allow you to significantly decrease the surface area of change detection, but to take advantage of all of that one has to essentially rewrite the entire application.
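For readers unfamiliar with the technique, the pre-signal approach (what zone.js does) looks roughly like this. It's a simplified sketch: the real zone.js patches many more APIs, and the `onStable` hook here is my own stand-in for the framework's change-detection trigger.

```javascript
// Monkey-patch setTimeout so the framework can tell when queued
// async work has drained and it is safe to run change detection.
const nativeSetTimeout = globalThis.setTimeout;
let pendingTasks = 0;

globalThis.setTimeout = function patchedSetTimeout(cb, ms, ...args) {
  pendingTasks++;
  return nativeSetTimeout(() => {
    try {
      cb(...args);
    } finally {
      pendingTasks--;
      if (pendingTasks === 0) onStable();
    }
  }, ms);
};

function onStable() {
  // The framework would run change detection here.
}
```

`async`/`await`, being syntax rather than a patchable global, is exactly what this trick cannot intercept, hence the need to compile it down to Promise-based code the framework can wrap.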
It would take a well-respected org pushing a standard library that has clear benefits over "package shopping."
Don't confuse "one idiot who wants to support Node 0.4 in 2026" with "JS developers". Everybody hates this guy and he puts his hands into the most popular packages, introducing his junk dependencies everywhere.
https://nodejs.org/en/about/previous-releases
Here's a list of known security vulnerabilities affecting old versions of nodejs:
https://nodejs.org/en/about/eol
In my opinion, npm packages should only support maintained versions of nodejs. If you want to run an ancient, unsupported version of nodejs with security vulnerabilities, you're on your own.
Maybe "professional" is the problem: they're incentivised to make work for themselves so they deliberately add this fragility and complexity, and ignore the fact that there's no need to change.
Nobody argues what we currently have is great and that we shouldn't look to improve it. Reducing it to "JS developers bad" is an embarrassing statement and just shows ignorance, not only of the topic at hand, but of an engineering mindset in general.
"Science advances one funeral at a time" applies to software too, just at a faster pace: a good software engineer needs to fake a few funerals, or really be senior at 4 years and dead by 7.
Like seriously... at 50 million downloads maybe you should vendor some shit in.
Packages like this which have _7 lines of code_ should not exist! The metadata of the lockfile is bigger than the minified version of this code!
At one point in the past like 5% of create-react-app's dep list was all from one author who had built out their own little depgraph in a library they controlled. That person also included download counts on their Github page. They have since "fixed" the main entrypoint to the rats nest though, thankfully.
https://www.npmjs.com/package/has-symbols
Is this an ego thing or are people actually reaping benefits from this?
Anthropic recently offered free Claude to open source maintainers of repositories with over X stars or over Y downloads on npm. I suppose it is entirely possible that these download statistics translate into financial gain...
The incentives are pretty clear: more packages, more money.
The guy who wrote is-even/is-odd was for ages using a specifically obscure method that made it slower than `% 2 === 0`, because JS engines were optimising that but not his arcane bullshit.
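For reference, the entire useful payload of those two packages is a couple of one-liners using the simple modulo check (the names mirror the packages; this is just a sketch):

```javascript
const isEven = (n) => n % 2 === 0;
const isOdd = (n) => !isEven(n);

console.log(isEven(4)); // true
console.log(isOdd(-3)); // true (negative odd numbers give -1 % 2 === -1)
```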
> There is a user in the JavaScript community who goes around adding "backwards compatibility" to projects. They do this by adding 50 extra package dependencies to your project, which are maintained by them.
> 6/28/2024
Really escapes me who it was.
If you're working with Javascript people, this is referred to as "reinventing the wheel" or "rolling your own", or any variation of "this is against best practice".
Like I legit think that we are all imagining this cultural problem that's widespread. My claim (and I tried to do some graph theory stuff on this in the past and gave up) is that in fact we are seeing something downstream of a few "bad actors" who are going way too deep on this.
I also dislike things like webpack making every plugin an external dep but at least I vaguely understand that.
Just as the cloud is simply someone else's computer, a package is just someone else's reinvented wheel.
The problem is half the wheels on npm are fucking square and apparently no one in the cult of JavaScript realises it.
As the article points out, there are competing philosophies. James does a great job of outlining his vision.
Education on this domain is positive. Encouraging naming of dissenters, or assigning intent, is not. Folks in e18e who want to advance a particular set of goals are already acting constructively to progress towards those goals.
What people are criticizing is the approach in pushing this philosophy into the ecosystem for allegedly personal gain.
The fact that this philosophy has been pushed by a small number of individuals shows this is not a widespread belief in the ecosystem. That they are getting money out of the situation demonstrates that there is probably more to the philosophy than the technical merits of it.
This is a discussion that needs to happen.
https://www.npmjs.com/package/is-number - and then look and see shit like is-odd and is-even (yes, two separate packages, because who can possibly remember how to negate a boolean??)
Honestly, for how much attention JavaScript has gotten in the last 15 years, it's ridiculous how shit its type system really is.
The only type related "improvement" was adding the class keyword because apparently the same people who don't understand "% 2" also don't understand prototypal inheritance.
Let's compromise and say that whoever is responsible for involving (javascript|electron fields) in the display of a website, should each understand their respective field.
I don't expect a physicist or even an electrical engineer or cpu designer to necessarily understand JavaScript. I don't expect a JavaScript developer to understand electron fields.
I do expect a developer who is writing JavaScript to understand JavaScript. Similarly I would expect the physicist/etc to understand how electrons work.
It sounds like you expect everyone to understand 100% of a language before they ever write any code in it, and that strikes me as silly; not everyone learns the same way, and some people learn better through practice than by reading about things without practice. People sometimes have the perception that anyone who prefers a different way of learning than them is just lazy or stupid for not being able to learn in the way that they happen to prefer, and I think that's both reductive and harmful.
One thing I kinda understand is users who want to use a more performant browser (Safari really does sip memory compared to Chrome, I've found), but that's kind of a side point. If your company decides "this is the browser(s) we support", then it makes sense and is the right way to go about it.
Seriously what kind of business today needs to support ES3 browsers? Even banking sites should refuse to run on such old devices out of security concerns.
Upgrading eg Webpack and Babel and polyfill stacks and all that across multiple major versions is a serious mess. Lots of breaking changes all around. Much better to just ship features. If it ain't broke, don't fix it!
> There is a user in the JavaScript community who goes around adding "backwards compatibility" to projects. They do this by adding 50 extra package dependencies to your project, which are maintained by them.
https://news.ycombinator.com/item?id=45447390
or
https://github.com/A11yance/axobject-query/pull/354
This user actively gets paid based on how many downloads their packages get, which explains why there are so many, as well as the attitude of changing others' repositories to use their packages.
Easy enough for y’all with techie salaries, but as one of the millions of poor folks whose paychecks barely (or don’t even) pay the bills, it’d be really nice if we didn't have to junkheap our backbreakingly expensive hardware every few years just cuz y’all are anorexically obsessed with lean code, and find complex dependencies too confusing/bothersome to maintain.
The GP's comment - that we have to upgrade our hardware because devs are "anorexically obsessed with lean code, and find complex dependencies too confusing/bothersome" - is surely the exact opposite of reality? We have to upgrade to faster hardware because the bloat slows everything down!
My knee-jerky reaction to the author’s blithe exhortation to upgrade stems from pain of watching as my prized workhorse (a 2015 MacBook) dies in my arms despite its magnificently healthy and powerful body.
for example, javascript runs in a browser or on microcontrollers. you can write code that works for both natively [1].
TypeScript-- a mechanism that needs to compile first into javascript
React-- a mechanism that needs to compile first into javascript
Configuration-du-jour-- Depending on how you need to string your TypeScript and React together, there's a thing you need to manage your javascript managers. Vite is the best option in this field, since it recognizes that exposing tools to fine-tune how to optimize the javascript that results from your typescript and react is a terrible idea that leads to mass fragmentation on a global scale for what it even means to "spin up a js project"
In conclusion, is javascript a compile target like assembly, or a language that people can hand-code to eke performance out of, like assembly?
> [...]
> Each of these having only one consumer means they’re equivalent of inline code but cost us more to acquire (npm requests, tar extraction, bandwidth, etc.).
It costs FAR more than dep install time. It has a runtime cost too, especially if in frontend code using bundlers where it also costs extra bundlespace and extra build time.
If you're using third-party NPM packages to do "Vanilla", you'll probably run into the same problem.
If you import React directly from a CDN, you won't.
Instead they've elevated it to a cultural pillar and think they've come up with a great innovation. It's like talking to anti-vaxxers.
Atomic packages brings more money to the creators.
If you have two useful packages it's hard to ask for money, even if they're used by Babel or some popular React dependency.
If you have 900 packages that are transitive dependencies of the same couple of deps above, it's way easier to get sponsorship. This is a way to advertise themselves: "I maintain 1000 packages".
The first guy that did this in a not-nice way was a marketing/sales person and has mentioned that they did it on purpose to launch their dev career.
TLDR: This is just some weird ass pyramid thing to get Github sponsors or clout.
The notion that atomic architecture came about because people are stupid and performative is not really useful. It's fairly misanthropic and begs the question of why it became so prevalent in JS specifically.
Of course, like most things, when taken to an extreme it becomes absurd and you end up with isOdd.
The added problem with the atomic approach is that it makes it very easy for these fringes to spread throughout the ecosystem. Mostly through carelessness, and transitive dependencies.
For personal projects I always prompt the AI to write JS directly, never introducing the Node.js stack unless I absolutely have to.
Turns out you don't always need Node.js/React to make a functional SPA.
That’s awesome. Could be hooked as a pre-commit for agents to do the grunt work of migration.
> false = 4
false = 4
^^^^^
Uncaught SyntaxError: Invalid left-hand side in assignment
Fascinatingly enough though, you can assign a value to NaN. It doesn't stick tho.

> NaN
NaN
> NaN = 42
42
> NaN
NaN

(map behaves as described.)

And we're seeing Rust happily going down the same path, especially with the micro packages.
Unless you're talking about an "environment" eg Node or the like
Rather unfortunately, JS has no native precompiler. For the SQLite project we wrote our own preprocessor to deal with precisely that type of thing (not _specifically_ that thing, but filtering code based on, e.g., whether it's vanilla, ESM, or "bundler-friendly" (which can't use dynamically-generated strings because of castrated tooling)).
Also, how is this going to look over time with multiple ES versions?
Is the need for tree-shaking not 100% a side-effect of dependency-mania? Does it not completely disappear once one has ones dependencies reduced to their absolute minimum?
Maybe i'm misunderstanding what tree-shaking is really for.
Someday, packages may just be "utility-shaped holes" which are filled in and published on the fly. Package adoption could come from 80/20 agents [1] exploring these edges (security notwithstanding).
However, as long as new packages inherit dependencies according to a human author's whims, that "voting" cycle has not yet been replaced.
It couldn't inline them, but it could replace ponyfills with wrappers for native impls, and drop the fallback. It could provide simple modern implementations of is-string, and dedupe multiple major versions, tho that begs the question of what breaking change led to a new major version and why.
You think the average site owner plus wix/squarespace is going to spend a lot of money beefing up their CPU and RAM to marginally "improve user experience" when they could and have been offloading rendering client side all these years?
For $client we've taken a very minimal approach to JavaScript, particularly on customer facing pages. An upcoming feature finally replaces the last jquery (+ plugin) dependent component on the sales page, with a custom implementation.
That change shaved off ~100K (jquery plus a plugin removed) and for most projects now that probably seems like nothing.
The sales page after the change is now just 160K of JS.
The combination of not relying on JS for everything and preferring use-case-specific implementations where we do, means we aren't loading 5 libraries and using 1% of each.
I'm aware that telling most js community "developers" to "write your own code" is tantamount to telling fish to "just breathe air".
Updating dependencies is a task a person does, followed by committing the changes to the repo.
I am aware a lot of these ideas are heretical to a lot of software developers these days.
Bundlers handle this by automatically creating bundles for shared modules. But if you optimize to avoid all shared modules, you end up with hundreds of tiny files. So most bundlers enforce a minimum size limit. That's probably fine for a small app. But one or more of these things happens:
1. Over time everybody at the company tends to join one giant SPA because it's the easiest way to add a new page.
2. Code splitting works so well you decide to go ham and code split all of the things - modals, below-the-fold content, tracking scripts, etc.
Now you'll run into situations where 20 different unrelated bundles happen to share a single module, but that module is too small for the bundler to split out, and so you end up downloading it N times.
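If you hit this, the size threshold is tunable; in webpack 5, for example, it's `optimization.splitChunks.minSize`. A sketch of the relevant config fragment (the right trade-off numbers depend on your app):

```javascript
// webpack.config.js (fragment): lowering minSize lets the bundler
// split out small shared modules instead of duplicating them across
// bundles, at the cost of more (smaller) files and more requests.
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      minSize: 0, // default is ~20 KB in webpack 5
    },
  },
};
```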
But the real cause of JS bloat is the so-called "front-end frameworks". Especially React.
First of all, why would you want to abstract away the only platform your app runs on? What for? That just changes the shape of your code but it ends up doing the same thing as if you were calling browser APIs directly, just less efficiently.
Second of all, what's this deal with mutating some model object, discarding the exact change that was made, and then making the "framework" diff the old object with the new one, call your code to render the "virtual DOM", then diff that, and only then update the real DOM tree? This is such an utterly bonkers idea to me. Like, you could just modify your real DOM straight from your networking code, you know?
Seriously, I don't understand modern web development. Neither does this guy who spent an hour and some to try to figure out React from the first principles using much the same approach I myself apply to new technologies: https://www.youtube.com/watch?v=XAGCULPO_DE
You can also use your underparts as a hat. It doesn't mean it's a good idea.
The main issue is the tooling. JSX is nice enough (not required though) to make you want a transpiler, which will also bundle your app. It’s from that point that things get crazy. They want the transpiler to also be a bundler so that it manages their css as well. They also want it to do minification and dead code elimination. They want it to support npm dependencies, etc…
This is how you get weird ecosystems.
I am a core maintainer of Astro, which is largely based around the idea that you don't need to always reach for something like React and can mostly use the web platform. However even I will use something like React (or Solid or Svelte or Vue etc) if I need interactivity that goes beyond attaching some event listeners. I don't agree with all of its design decisions, but I can still see its value.
https://youtu.be/Q9MtlmmN4Q0?t=519&is=Wt3IzexiOX4vMPZf
Also, why do you use SQL and databases? Couldn’t you just modify files on the filesystem?
> Also, why do you use SQL and databases? Couldn’t you just modify files on the filesystem?
Anyone can read a MySQL data file. IIRC the format is pretty straightforward. The whole point of doing it through the real MySQL server is to make use of indexes, the query optimizer, and proper handling of concurrency, at least. Sure you can reimplement those things, but at this point congrats, you've just reimplemented the very database system you were trying to avoid, just worse.
badge.textContent = count > 99 ? '99+' : count
badge.classList.toggle('show', count > 0)
paper.classList.toggle('show', count > 0)
fire.classList.toggle('show', count > 99)
The declarative example also misses the 99+ case. I don't think this example describes the difference between imperative and declarative well.

To be fair, React is an especially wasteful way to solve that problem. If you want to look at the state of the art, something like Solid makes a lot more sense.
It's much easier to appreciate that problem if you actually try to build complex interactive UI with vanilla JS (or something like jQuery). Once you have complex state dependency graph and DOM state to preserve between rerenders, it becomes pretty clear.
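To make the contrast concrete, here's a declarative-style sketch of the badge example from upthread: all of the presentation is derived from `count` in one pure function (which also covers the 99+ case), and applying it to the DOM is a separate, mechanical step. The names are mine, not from any framework.

```javascript
// Derive the complete badge state from count in one place:
function badgeState(count) {
  return {
    text: count > 99 ? '99+' : String(count),
    show: count > 0,
    fire: count > 99,
  };
}

// Applying the derived state has no logic of its own:
function applyBadgeState(state, { badge, paper, fire }) {
  badge.textContent = state.text;
  badge.classList.toggle('show', state.show);
  paper.classList.toggle('show', state.show);
  fire.classList.toggle('show', state.fire);
}

console.log(badgeState(120)); // { text: '99+', show: true, fire: true }
```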
I just render as much as possible on the server and return commands like "hide the element with that ID" or "insert this HTML after element with that ID" in response to some ajax requests. Outside of some very specific interactive components, I avoid client-side rendering.
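A minimal sketch of that command pattern (the command names and shape here are my own invention, not any particular library's):

```javascript
// The server responds to an ajax request with a list of small DOM
// commands; the client just applies them in order.
function applyCommands(commands, doc = document) {
  for (const cmd of commands) {
    const node = doc.getElementById(cmd.id);
    if (!node) continue;
    switch (cmd.op) {
      case 'hide':
        node.hidden = true;
        break;
      case 'insertAfter':
        node.insertAdjacentHTML('afterend', cmd.html);
        break;
    }
  }
}

// e.g. the server might return:
// [{ op: 'hide', id: 'spinner' },
//  { op: 'insertAfter', id: 'list-end', html: '<li>new item</li>' }]
```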
SPAs were meant for UIs that rely mostly on client state, not server data (Figma and other kinds of online editors).
Show me a website where client side interaction is implemented in perl.
There will be almost no bloat to worry about.
The other two, atomic architecture and ponyfills, are simply developer inexperience (or laziness). If you're not looking at the source of a package and considering if you actually need it then you're not working well enough. And if you've added code in the past that the metrics about what browsers your visitors are using show isn't needed any more, then you're not actively maintaining and removing things when you can. That's not putting the user first, so you suck.
People keep telling me the approach I am taking won't scale or will be hard to maintain, yet my experience has been that things stay simple and easy to change in a way I haven't experienced in dependency-heavy projects.
I think this is because the whole web dev knowledge ecosystem of youtubers and tutorial platforms is oriented around big frameworks and big tooling. People think it is much harder than it actually is to build without frameworks or build tools, or that the resulting web app will perform much worse than it actually will. A typical react codebase ported to a fully vanilla codebase ends up just as modular and around 1.5x the number of lines of code, and is tiny in total footprint due to the lack of dependencies so typically performs well.
To be clear though: I’m not arguing the dependencies are bad or don’t have any benefits at all or that vanilla coding is a superior way. Coding this way takes longer and the resulting codebase has more lines of code, and web components are “uglier” than framework components. What I’m saying is that most web developers are trapped in a mindset that these dependencies must be used when in reality they are optional and not always the best choice.
Come to think of it, I should write up the techniques I use, too...e.g. I have simple wrappers around querySelector() and createElement() with a bit of TypeScript gymnastics in a JSDoc annotation to add intellisense + type checking for custom elements.
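A rough sketch of what such wrappers might look like; the helper names, error handling, and JSDoc here are my guesses at the shape, not the commenter's actual code.

```javascript
/**
 * querySelector that throws instead of returning null, so call
 * sites don't need repetitive null checks.
 * @returns {HTMLElement}
 */
function $(selector, root = document) {
  const node = root.querySelector(selector);
  if (!node) throw new Error(`No element matches ${selector}`);
  return node;
}

/** createElement with attributes and children in one call. */
function el(tag, attrs = {}, ...children) {
  const node = document.createElement(tag);
  for (const [key, value] of Object.entries(attrs)) {
    node.setAttribute(key, value);
  }
  node.append(...children);
  return node;
}
```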
Would you be open to a pull request with a page on static analysis/type checking for vanilla JS? (intro to JSDoc, useful patterns for custom elements, etc.) If not, that's totally OK, but I figure it could be interesting to readers of the site.
And agreed on vanilla/dependency-free not being a silver bullet. There aren't really one-size-fits-all solutions in software, but I've found a vanilla approach (and then adding dependencies only if/when necessary) tends to help the software evolve in a natural way and stay simple where possible.
If you do it long enough, presumably you start to develop your own miniature "framework" (most likely really just some libraries, i.e. not inverting control and handing it over to your previous work). After all, it's still programming; JS isn't exceptional even if it has lots of quirks.
Anyway, love the website concept, just a quick petition: would it be possible to apply some progressive enhancement/graceful degradation love to <x-code-viewer> such that there's at least the basic preformatted text of the code displayed without JS?
Depending on the use case, minimizing dependencies can also decrease attack vectors on the page/app.
The client has not had to pay a cent for any sort of migration work.
Let alone having to check all licenses...
Of course, it also means you have to be cautious about problems that dependencies promise to solve (e.g. XSS), but at the same time, bringing in a bunch of third-party code isn't a substitute for fully understanding your own system.
I have worked at an employer where one could have easily done the frontend in a traditional server-side templating language, since most of the pages were static information anyway with very little interactivity. But instead of doing that and having 1 person make an easily accessible and standard-conforming frontend, they decided to go with nextjs, which required 3 people full-time to maintain, including all the usual churn and burn of updating dependencies and changing the "router" and stuff. Porting a menu from one instance of the frontend to another took 3 weeks. Fixing a menu display bug after I reported it took 2 or 3 months.
From human society's PoV, you sound like a 10X engineer and wonderful person.
But from the C-suite's PoV ...yeah. You might want to keep quiet about this.
What do you use for model updates?
It seems best practice to use the component's attributes directly. So the component is subscribed to its attributes change lifecycle and renders updates.
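A minimal sketch of that lifecycle (the tag and method names are mine): the component declares which attributes it observes and re-renders from them on every change. It's wrapped in a function here so the browser-only parts aren't evaluated outside a browser.

```javascript
function defineCounterBadge() {
  class CounterBadge extends HTMLElement {
    // The browser calls attributeChangedCallback only for attributes
    // listed here.
    static get observedAttributes() {
      return ['count'];
    }
    connectedCallback() {
      this.render();
    }
    attributeChangedCallback() {
      this.render();
    }
    // State lives in the attribute; rendering just derives from it.
    render() {
      const count = Number(this.getAttribute('count') ?? 0);
      this.textContent = count > 99 ? '99+' : String(count);
    }
  }
  customElements.define('counter-badge', CounterBadge);
}

// In a browser: defineCounterBadge(), then
// <counter-badge count="7"></counter-badge>; calling
// el.setAttribute('count', '120') re-renders it to "99+".
```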
https://en.wikipedia.org/wiki/Ajax_(programming)
The idea of reactivity started in production in the 1990s.
When Gmail was released, this technology was what made a website behave like a desktop app (plus the huge amount of storage).
If we were to look into today's equivalent of doing this, it might be surprising what exists in the standard libraries.
What's more, given the tools we have today, it fits really well with agentic engineering. It's even easier to create and understand a homegrown version of a dependency you may have used before.
that's in contrast with the sort of stuff that invariably shows up when something falls over somewhere in a dependency:
it's not fun to step through or profile that sort of code either...