Video.js is used by billions of people every month, on sites like Amazon.com, LinkedIn, and Dropbox, and yet it wasn’t in great shape. A skeleton crew of maintainers was doing its best with a dated architecture, but it needed more. So Sam from Plyr, Rahim from Vidstack, and Wes and Christian from Media Chrome jumped in to help me rebuild it better, faster, and smaller.
It’s in beta now. Please give it a try and tell us what breaks.
Recently I've been using a variation that looks a bit like gruvbox but has some tweaks: https://marketplace.visualstudio.com/items?itemName=hrose.am...
1. No playback rates under 1
2. No volume rocker on mobile
3. Would appreciate having seek buttons on mobile too
4. No (easily apparent) way to add an accent color, stuck with boring monochrome
5. Docs lacked clear example/demo/playground so I wasn't sure what it would look like until implemented
- On Mac with Increase Contrast turned on in accessibility settings the control bar ends up being white-on-light-grey
- When focusing the volume control with a keyboard, you can only mute or un-mute, not use up or down to adjust the volume. To do that you have to tab again into the volume slider field
- Don’t seem to be able to enter picture-in-picture mode with the keyboard
- Purely from a first class citizen point of view, it’d be nice to have all the accessibility options (transcripts, etc) shown in the homepage demo
The simplest option is to use some basic object storage service and it'll usually work well out of the box (I use DO Spaces with built-in CDN, that's basically it).
That means when you're encoding the downscaled variants, the encoder needs to know where the segment boundaries fall so it can insert IDR (key) frames there; each segment has to start with one so playback can begin at any segment boundary. It's therefore common to do the encoding and segmentation in a single step (e.g. with ffmpeg's "dash" muxer).
You can have variable-duration or fixed-duration segments. Supposedly some decoders are happier with fixed-duration segments, but it can be fiddly to get the ffmpeg settings just right, especially if you want the audio and video to have exactly the same segment size (here's a useful little calculator for that: https://anton.lindstrom.io/gop-size-calculator/)
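The alignment math behind that calculator can be sketched in a few lines. This assumes AAC audio (1024-sample frames) and a 48 kHz sample rate, which are common defaults but not universal:

```javascript
// Find the shortest segment duration at or above a target that contains a
// whole number of AAC audio frames AND a whole number of video frames,
// so audio and video segments can be cut at exactly the same timestamps.
function alignedSegmentDuration(targetSec, fps, sampleRate = 48000, aacFrame = 1024) {
  // n = number of AAC frames in the candidate segment
  for (let n = Math.ceil((targetSec * sampleRate) / aacFrame); ; n++) {
    // video frames contained in n audio frames (integer math first, to avoid drift)
    const frames = (n * aacFrame * fps) / sampleRate;
    if (Math.abs(frames - Math.round(frames)) < 1e-9) {
      return (n * aacFrame) / sampleRate;
    }
  }
}

// At 25 fps / 48 kHz, a ~2 s target aligns at 2.24 s
// (exactly 56 video frames and 105 audio frames).
console.log(alignedSegmentDuration(2, 25)); // 2.24
```

Note that awkward frame rates (e.g. 29.97) can push the first aligned duration far past the target, which is part of why getting these settings right is fiddly.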
For hosting, a typical setup would be to start with a single high-quality video file, have an encoder/segmenter pipeline that generates a bunch of video and audio chunks and DASH (.mpd) and/or HLS (.m3u8) manifests, and put all the chunks and manifests on S3 or similar. As long as all the internal links are relative they can be placed anywhere. The video player will start with the top-level manifest URL and locate everything else it needs from there.
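To illustrate the relative-links point: players resolve every reference inside a manifest against the manifest's own URL, so the whole tree works from any prefix. The URLs below are made up:

```javascript
// A hypothetical manifest URL on a CDN, and a segment reference as it
// would appear inside the manifest (relative, no host or prefix).
const manifestUrl = 'https://cdn.example.com/videos/abc123/manifest.mpd';
const segmentRef = 'video/720p/seg_0001.m4s';

// Standard URL resolution: the relative path is resolved against the
// manifest's directory, exactly as players do it.
const resolved = new URL(segmentRef, manifestUrl).href;
console.log(resolved);
// https://cdn.example.com/videos/abc123/video/720p/seg_0001.m4s
```

This is why you can move the entire directory to another bucket or CDN without rewriting anything, as long as the top-level manifest URL is updated.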
We learned some tough lessons with media-chrome[1] and Mux Player, where we tried to just write web components. The React side of things was a bit of a thorn, so we created React shims that provided a more idiomatic React experience and rendered the web components...which was mostly fine, but created a new set of issues. The reason we chose web components was to not have to write framework-specific code, and then we found ourselves doing both anyway.
With VJS 10 I think we've landed on a pretty reasonable middle ground. The core library is "headless," and then the rendering layer sits on top of it. Benefit is true React components and nice web components.
If you mean "why do I need React / any kind of bundling; why can't I just include the minified video.js library as a script tag / ES6 module import?" — I'm guessing you can, but nobody should really want to, since half the point here is that the player JS that registers to back the custom elements is now way smaller, because it gets tree-shaken down to just the JS required to back the particular combination of custom elements you happen to use on your site. Doing that requires that, at "compile time", the tree-shaking logic can understand the references from your views into the components of the player library. That's currently possible when your views are React components, but not yet possible (AFAIK) when your view is ordinary HTML containing custom elements.
I guess you could say, if you want to think of it this way, that your buildscript / asset pipeline here ends up acting as a web-component factory to generate the final custom-tailored web-component for your website?
Hope this new iteration is exceptionally successful.
It happened to me personally - LLMs and agentic coding tools enabled me to pick up old side projects and actually finish them. Some of these projects were in the drawer for years, and when Sonnet 4 released I gave them another try and got up to speed really quickly. I suspect this happened to many developers.
In the core JS of Video.js v10 we're building without the assumption that there's even a browser involved, so we can target future JS-based platforms like React Native.
We've also had issues getting frame accuracy when navigating the video stream. There's some sort of "security" measure that randomizes/rounds the returned value of currentTime, and I can't wrap my head around how that is security related. Lots of effort spent on getting the stock HTML5 video element to be frame accurate.
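For what it's worth, the rounding is most likely the reduced-timer-precision mitigation browsers added against timing attacks (Spectre era), which fuzzes many high-resolution clocks, and currentTime seems to be affected in some browsers. A common workaround is `requestVideoFrameCallback`, whose `mediaTime` is generally more trustworthy than `currentTime`. The frame-index math on top of either value can be sketched like this (assuming a known, constant frame rate):

```javascript
// Convert a media time to a frame index, assuming constant frame rate.
// The half-frame epsilon keeps times that land just shy of a frame
// boundary (due to rounding/fuzzing) mapped to the intended frame.
function timeToFrame(mediaTime, fps) {
  return Math.floor(mediaTime * fps + 0.5);
}

// Frame 30 at 29.97 fps starts at 30/29.97 ≈ 1.001 s, so a slightly
// fuzzed reading of 1.0013 still maps to frame 30.
console.log(timeToFrame(1.0013, 29.97)); // 30
```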
For the primary question - this is a tough one, specifically because v10 is a completely new, ground-up architecture. Part of this will be feature parity - v8 does many things/handles many cases that v10 doesn't do yet. That may seem like an unfair comparison, and, in some sense, it is. However, this is in fact part of the ethos of our new architecture: by building a highly composable, loosely coupled player framework with well-defined architectural "seams"/contracts, you can more easily pull in "all and only what you need for your use case" (a phrase I've been bandying about). While v8 allows for some of this, it's still much harder and you still end up pulling in stuff you probably don't need for a lot of use cases.
Another one is the UI layer - v8 ended up building an entire component implementation. At the time of building, it kind of had to. v10, on the other hand, can "stand on the shoulders of giants", building on top of e.g. custom elements, or React, or any future frameworks we decide to target (and our architecture makes that comparatively easy as well).
I do suspect that once we hit true feature parity, the numbers will be much closer for "the kitchen sink." The thing is, few people (if any) need the kitchen sink.
Thanks for the tough question!
I hope the plugin directory gets an overhaul too, and a prominent place on the webpage. The plugin ecosystem was a huge benefit of Video.js for me.
Even though some of them are outdated, they were a good source of inspiration.
In the new version the core player itself is built as many composable components rather than one monolithic player, so we're going to invite more people to contribute their "plugins" to the core repo as more of those composable components. Versioning plugins and keeping them up to date has always been a challenge, so we're thinking this will help keep the whole ecosystem working well together.
Some background: our store[1], which was inspired by Zustand[2], is created and passed down via context too. This is the central state-management piece of our library and what we imagine most devs will build on when extending and customizing to their needs.
Updates are handled via simple store actions like `store.play()`, `store.setVolume(10)`, etc. Those actions are generally called in response to DOM events.
On the events side of things, rather than registering event listeners directly, in v10 you'd subscribe to the store instead. Something like `store.subscribe(callback)`, or in React you'd use our `usePlayer`[3] hook. The store is the single source of truth, so rather than listening to the underlying media element directly, you're observing state changes.
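A minimal sketch of the store shape being described here. This is the generic Zustand-style pattern, not the actual v10 API:

```javascript
// A tiny subscribe-able store: single source of truth, observed state changes.
function createStore(initialState) {
  let state = initialState;
  const listeners = new Set();
  return {
    getState: () => state,
    setState(partial) {
      state = { ...state, ...partial };
      listeners.forEach((fn) => fn(state));
    },
    subscribe(fn) {
      listeners.add(fn);
      return () => listeners.delete(fn); // returns an unsubscribe function
    },
  };
}

const store = createStore({ paused: true, volume: 1 });

// Actions wrap state transitions; in a real player they'd also drive
// the underlying media element (e.g. call video.play()).
const actions = {
  play: () => store.setState({ paused: false }),
  setVolume: (v) => store.setState({ volume: v }),
};

store.subscribe((s) => console.log('paused:', s.paused, 'volume:', s.volume));
actions.play();         // paused: false volume: 1
actions.setVolume(0.5); // paused: false volume: 0.5
```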
---
So far with v10 we haven't been thinking about "plugins" in the traditional sense either. If I had to guess at what it would look like, it'd be three things:
1. Custom store slices[4] so plugins can extend the store with their own state and actions
2. A middleware layer that plugs into the store's action pipeline so a plugin could intercept or react to actions before or after they're applied, similar to Zustand middleware, or even in some ways like Video.js v8 middleware[5]
3. UI components that plugins can ship which use our core primitives for accessing the store, subscribing to state, etc.
I believe that'd cover the vast majority of what plugins needed in v8. We haven't nailed down the exact API yet but that's the direction we're leaning towards. We're still actively working on both the library and our docs so I don't have somewhere I can link to for these just yet (sadly)! We're likely targeting sooner, but GA (end of June) is the deadline.
I should also add... one thing we prototyped early on that may return: tracking end-to-end requests through the store. A DOM event triggers a store action like play, which calls `video.play()`, which then waits for the media event response (play, error, etc.). It worked really well and lines up nicely with the middleware direction.
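A hypothetical sketch of what the middleware idea (point 2) could look like. All names here are invented, not the v10 API:

```javascript
// Wrap a store's actions so a plugin can observe or veto them.
function applyMiddleware(actions, before, after) {
  const wrapped = {};
  for (const [name, fn] of Object.entries(actions)) {
    wrapped[name] = (...args) => {
      if (before(name, args) === false) return; // middleware can veto an action
      const result = fn(...args);
      after(name, args); // runs after the action is applied
      return result;
    };
  }
  return wrapped;
}

// Usage: log every action, and block out-of-range volume changes.
const calls = [];
const actions = applyMiddleware(
  {
    play: () => 'playing',
    setVolume: (v) => v,
  },
  (name, args) => {
    if (name === 'setVolume' && args[0] > 1) return false; // veto
    calls.push(name);
  },
  (name) => calls.push(`${name}:done`),
);

actions.play();
actions.setVolume(2); // vetoed, never reaches the store
console.log(calls); // [ 'play', 'play:done' ]
```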
[1]: https://github.com/videojs/v10/tree/main/packages/store
[2]: https://github.com/pmndrs/zustand
[3]: https://videojs.org/docs/framework/react/reference/use-playe...
[4]: https://zustand.docs.pmnd.rs/learn/guides/slices-pattern#sli...
There are no immediate plans to deprecate React Player and I think it holds a special place in the ecosystem, but there will be overlap with video.js v10, and if there are specific features you care about or feel are missing, or if you think we're doing a bad job, please voice it here.
It was a similar story with Vidstack and Plyr, with Mux first sponsoring the projects. That's how I met Rahim and Sam, and how we got talking about a shared vision for the future of players.
We’re taking a new approach to the library with a lot of new concepts, so your feedback would help us a ton during Beta as we figure out what’s working well and what isn’t.
Granted, my knowledge on the matter is rather limited, but I had some long-running streams (weeks), and with HLS the playlist became quite large while with DASH, the MPD stayed as small as it gets.
Outside of that, though, the standards themselves have different pain points and tradeoffs. Some things are "cleaner"/"less presumptuous" in DASH, but DASH also has a lot of design details that were both "design by committee" (aka different competing interests resulting in a spec that has arguably too many ways to do the same thing) and overrepresented by server-side folks (so many playback complexities/concerns/considerations weren't thought through). It is also sometimes not constrained enough, at least by default (things like not guaranteeing time alignment across media representations). For what it's worth, I think there are lots of pain points from the HLS design decisions as well, but focusing on DASH here just given the framing of your question.
On the flip side, if you stay within certain bounds, the differences between HLS and DASH simply become text files: one XML manifest (MPD) for DASH and a few playlists (M3U8s) for HLS. There's a lot of effort being made to this end, including: https://cdn.cta.tech/cta/media/media/resources/standards/cta... and the CMAF-HAM-inspired model (https://github.com/streaming-video-technology-alliance/commo... from CML and https://github.com/videojs/v10/blob/main/packages/spf/src/co... in our own playback engine library), just to name a few.
HLS also has newer features that address the growing manifest issues you were seeing. [2]
All that said, I think a lot of people would feel more comfortable if the industry's adaptive streaming standard wasn't completely controlled by Apple.
Did the private equity firm buy the domain videojs.org (i.e. did it take control of the project, and you somehow regained control after selling), or was this domain (and the project) always under your control?
I'm a one-man operation. In the order of hundreds of videos served a week. All I want is control over my own destiny. If this and a VPS can do that, that'll be amazing. Thank you for doing this.
Basically a few kB for CSS and a few kB for a thin “framework” layer managing attr-to-prop mapping, a simple lifecycle, context, and so on.
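For the curious, the attr-to-prop mapping part of such a layer is at its core a dash-to-camelCase conversion between HTML attribute names and JS property names. A trivial sketch (not the actual implementation):

```javascript
// Map a dashed HTML attribute name to its camelCase JS property name,
// e.g. the attribute "playback-rate" backs the property "playbackRate".
function attrToProp(attr) {
  return attr.replace(/-([a-z])/g, (_, c) => c.toUpperCase());
}

console.log(attrToProp('playback-rate')); // playbackRate
```

The real layer also has to handle type coercion (attributes are always strings) and reflecting property writes back to attributes, which is where most of the remaining kilobytes go.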
In the meantime, we’re hoping our custom elements will act as a good stopgap. Most frameworks including Svelte support them well, and we’re pouring love into the APIs so they feel good to use regardless of which framework.
If you’re interested in peeking under the hood, architecturally we’re taking a similar approach to TanStack and separating out a shared core from the beginning, but with one added step of splitting out the DOM as well to aid in supporting RN one day.
Throws `Uncaught (in promise) TypeError: AbortSignal.any is not a function` in volume-slider-data-attrs.BOpj3NK1.js
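For anyone hitting this: `AbortSignal.any` is relatively new and missing in older browsers and runtimes, hence the TypeError. A rough fallback sketch of what it does (simplified; a production polyfill needs more edge-case handling):

```javascript
// Combine several AbortSignals into one that aborts when any of them does,
// which is what AbortSignal.any provides natively where supported.
function anySignal(signals) {
  const controller = new AbortController();
  for (const signal of signals) {
    if (signal.aborted) {
      controller.abort(signal.reason);
      break;
    }
    signal.addEventListener('abort', () => controller.abort(signal.reason), {
      once: true,
      signal: controller.signal, // stop listening once we've aborted
    });
  }
  return controller.signal;
}

const a = new AbortController();
const b = new AbortController();
const combined = anySignal([a.signal, b.signal]);
b.abort();
console.log(combined.aborted); // true
```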
https://github.com/Qbix/Platform/blob/main/platform/plugins/...
We currently already use video.js, and our framework is used all over the place, so we’d be the perfect use case for you guys.
How would we use video.js 10 instead, and for what? We would like to load a small video player, for videos, but which ones? Only mp4 files, or can we somehow stream chunks via HTTP without setting up ridiculous streaming servers like Wowza or Red5 in 2026?
What are you supporting today that requires Wowza or Red5? The short answer is Video.js is only the front-end so it won't help the server side of live streaming much. I'm of course happy to recommend services that make that part easier though.
So I'm just wondering whether we can do streaming that way, and video.js can "just work" to play the video as we fetch chunks ahead of it ("buffering" without streaming servers, just basic HTTP range requests or similar).
WebRTC is a more open model for real-time streaming, but nowhere near as easy or scalable as HTTP-based streaming today.
However we can all also start getting excited about MoQ [1][2].
[1] https://moq.dev/
I had one question I couldn't answer reading the site: what makes this different from the native html video element?
AFAICT just the transport controls?
Generally, the video tag is great and has come a very long way from when Video.js was first created. If the way you think about video is basically an image with a play button, then the video tag works well. If at some point you need Video.js, it'll become obvious pretty quick. Notable differences include:
* Consistent, stylable controls across browsers (browsers each change their native controls over time)
* Advanced features like analytics, ABR, ads, DRM, 360 video (not all of those are in the new version yet)
* Configurable features (with browsers UIs you mostly get what you get)
* A common API to many streaming formats (mp4/mp3, HLS, DASH) and services (YouTube, Vimeo, Wistia)
Of course many of those things are doable with the video tag itself, because (aside from the iframe players) video.js uses the video tag under the hood. But to add those features you're going to end up building something like video.js.
Of course, AI explanations often also fail at this unless you give them "ELI5" or other relevant prompting (I'm looking at you Perplexity).
I understand the use case for this, but I find it works against the spirit of free software, which is bringing control back to the user.
If someone providing video content wants to run ads as part of making the video available to you, that's up to them. It's also up to you if you want to attempt to view the video without those ads or skip watching altogether. But to the devs of video.js, your personal choices about consuming AVOD content are irrelevant.
P.S. I built a movie-streaming and TV-broadcasting player for the country of Georgia, supporting environments from 2009 LG Smart TVs to modern browsers.
(And why does that matter? Dynamic bitrate adjustment. The chunks are slightly easier to cache as well.)
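The dynamic-bitrate part boils down to choosing a rendition from the encoded ladder based on measured throughput. A toy sketch with made-up numbers (real ABR algorithms also weigh buffer level, switch history, etc.):

```javascript
// A hypothetical bitrate ladder: one entry per encoded variant.
const ladder = [
  { height: 240, kbps: 400 },
  { height: 480, kbps: 1200 },
  { height: 720, kbps: 3000 },
  { height: 1080, kbps: 6000 },
];

// Pick the highest rendition whose bitrate fits within the measured
// throughput, keeping a safety margin so minor dips don't cause stalls.
function pickRendition(throughputKbps, margin = 0.8) {
  const usable = throughputKbps * margin;
  let best = ladder[0]; // always fall back to the lowest rung
  for (const r of ladder) {
    if (r.kbps <= usable) best = r;
  }
  return best;
}

console.log(pickRendition(4500)); // { height: 720, kbps: 3000 }
```

Because each chunk starts at a keyframe, the player can make this decision fresh for every chunk it fetches, which is the whole point of segmenting.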
Most can, via Media Source Extensions (MSE).