But not being able to "just" load the file into a browser locally seems to defeat a lot of the point.
Hell, HTML is probably what word processor apps should be saving everything as. You can get pixel-level placement of any element if you want that.
Yes, while they're both approximately the same in terms of size on disk, and even in network traffic for a fully loaded page, one is a much better browser experience.
> You can get pixel-level placement of any element if you want that.
You may well be able to, but it is largely anathema to the goals of HTML.
In this case I wonder if the format can be further optimized. For example, .js files can be loaded locally, and although that's a very inefficient way to load assets, it could overcome this local-disk limitation; nobody reads the HTML source code anyway, so it won't need to win any code beauty contests. I'll look into this theory later and ping the author if it works.
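A very rough sketch of what I mean (names invented, and base64 carries roughly 33% overhead, hence "inefficient"): the assets get carried as JS data that a file:// page can load via a plain <script src>, then rehydrated into blob: URLs:

    // hypothetical assets payload, loadable from file:// via <script src="page_assets.js">
    window.PAGE_ASSETS = {
      "logo.png": { type: "image/png", data: "aGVsbG8=" }  // placeholder base64; the real bytes would go here
    };

    // in the page: rebuild each asset as a blob: URL and patch references to it
    for (const [path, asset] of Object.entries(window.PAGE_ASSETS)) {
      const bytes = Uint8Array.from(atob(asset.data), c => c.charCodeAt(0));
      const url = URL.createObjectURL(new Blob([bytes], { type: asset.type }));
      document.querySelectorAll('[src="' + path + '"], [href="' + path + '"]')
        .forEach(el => { if (el.src) el.src = url; else el.href = url; });
    }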
As a final wish-list item, it would be great to have multiple versions/crawls of the same URL with deduplication of static assets (images, fonts), but that is likely stretching this format too far.
I certainly could be missing something (I've thought about this problem for all of a few minutes here), but surely you could host "warcviewer.html" and "warcviewer.js" next to "mycoolwarc.warc" and "mycoolwarc.cdx" with little to no loss of convenience, and call it a day?
And if you choose to require separate files and break single-file, then you have many options.
> surely you could host "warcviewer.html" and "warcviewer.js" next to "mycoolwarc.warc" and "mycoolwarc.cdx"
I'm not familiar with warcviewer.js and Googling isn't showing it. Are you thinking of https://github.com/webrecorder/wabac.js ?
To expand on what I have in mind, it'd be a script like Gwtar, except it loads WARCs through URLs to CDX files. Alternatively, it might also load WARC files fully into memory, where an index could be constructed on the fly. In the latter case, that would allow the same viewer to be used with or without a web server. Though, I can imagine that loading archives without a web server was probably out-of-scope for Gwtar, otherwise something could have been figured out (e.g., putting the tar in a <textarea>'s RCDATA; do browsers support "binary" data in there correctly?).
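Concretely, the CDX route would look something like this (a sketch assuming a CDXJ-style index where each line ends in a JSON blob carrying the record's offset and length; the function and field handling here are illustrative, not any existing tool):

    // Hypothetical viewer: look up a URL in a CDXJ index, then fetch only
    // that WARC record via an HTTP Range request instead of the whole file.
    async function fetchRecord(cdxUrl, warcUrl, targetUrl) {
      const index = await (await fetch(cdxUrl)).text();
      for (const line of index.split("\n")) {
        const json = line.slice(line.indexOf("{"));          // CDXJ: "surt timestamp {json}"
        if (!json.startsWith("{")) continue;
        const entry = JSON.parse(json);
        if (entry.url !== targetUrl) continue;
        const start = Number(entry.offset);
        const end = start + Number(entry.length) - 1;
        const res = await fetch(warcUrl, { headers: { Range: "bytes=" + start + "-" + end } });
        return res.arrayBuffer();                            // the raw (possibly gzipped) WARC record
      }
      return null;
    }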
While the WARC specs are a mess (sometimes quite ambiguous), I've never had much trouble reading or writing them. As for why WARC, having the option to preserve request/response metadata, as well as having interoperability with anything else in the WARC ecosystem, would be nice. Also, a separate viewer would naturally be updateable without changing the archive files themselves.
> I imagine that loading archives without a web server was probably out-of-scope for Gwtar
More that it's just not important to us. I don't even look at the archives 'locally'. They are all archives of public web pages, which I just rehost publicly. When I want to look at them, I open them on Gwern.net like anyone else!
And if I really needed to, for some reason, it's literally a Bash one-liner (already provided inside the Gwtar as well as my writeup) to turn them back into a normal multi-file HTML. (This is a lot more than you can say for a WARC...) So my reaction to the complaints about lacking local viewing is mostly just ¯\_(ツ)_/¯
> (e.g., putting the tar in a <textarea>'s RCDATA; I wonder how well browsers support "binary" data in there?)
I don't know the details but you can just base-encode them, so I suppose that's an option, as long as you rewrote the ranges appropriately, maybe?
(Also worth noting that you can go the other way: if you really desperately want to preserve the raw header responses, you can just use the flexibility of Gwtar to append the WARC to the end of the file. As long as the range requests work, users won't download that part. The duplication is not so great for long-term storage, but you can just XZ them and that should remove duplication and overhead.)
Well, yes. That's why we created Gwtar and I didn't just use SingleFileZ. We would have preferred to not go to all this trouble and use someone else's maintained tool, but if it's not implemented, then I can't use it.
(Also, if it had been obvious to you how to do this window.stop+range-request trick beforehand, and you just hadn't gotten around to implementing it, it would have been nice if you had written it up somewhere more prominent; I was unable to find any prior art or discussion.)
Edit: Actually, SingleFile already calls window.stop() when displaying a zip/html file from HTTP, see https://github.com/gildas-lormeau/single-file-core/blob/22fc...
I implemented this in the simplest way possible: if the zip file is read from the filesystem, window.stop() must not be called immediately, because the file has to be parsed entirely. In my case, calling window.stop() as early as possible would require slightly more complex logic.
Edit: Maybe it's totally useless though, as documented here [1]: "Because of how scripts are executed, this method cannot interrupt its parent document's loading, but it will stop its images, new windows, and other still-loading objects." (you mentioned it in the article)
[1] https://developer.mozilla.org/en-US/docs/Web/API/Window/stop
Edit #2: Since I didn't realize that window.stop() was most likely useless in my case, I now understand your approach much better. Thank you very much for clarifying that with your question!
Edit: I've just implemented the "good enough on my machine" fix, aka the "easy fix": https://github.com/gildas-lormeau/single-file-core/commit/a0....
Edit #2: I've just understood that "parent" in "this method cannot interrupt its *parent* document's loading" from the MDN doc probably means the parent of the frame (when the script is running inside it).
Similar to the window.stop() approach, the response would be truncated after the main HTML, and the rest of the file would be the assets blob that the service worker would then serve up.
The service worker file could be a dataURI to keep this in one file.
Of course, since it's on an HTTP server, it could easily handle doing multiple requests of different files, but sometimes that's inconvenient to manage on the server and a single file would be easier.
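A sketch of the service-worker half (the asset map and message shape are made up; real plumbing would need more care):

    // sw.js (hypothetical): serve archived assets out of an in-memory map
    // that the page fills in after range-requesting the assets blob from
    // the tail of the archive file.
    const assets = new Map();   // path -> { blob, type }

    self.addEventListener("message", (event) => {
      for (const [path, asset] of Object.entries(event.data.assets)) {
        assets.set(path, asset);
      }
    });

    self.addEventListener("fetch", (event) => {
      const asset = assets.get(new URL(event.request.url).pathname);
      if (asset) {
        event.respondWith(new Response(asset.blob, { headers: { "Content-Type": asset.type } }));
      }
      // otherwise fall through to the network; note the worker can be killed
      // and restarted, so a real version would persist the map (e.g. Cache API).
    });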
Maybe this is downstream of Gwern choosing to use MediaWiki for his website?
> Maybe this is downstream of Gwern choosing to use MediaWiki for his website?
This has nothing at all to do with the choice of server. The benefit of being a single-file, with zero configuration or special software required by anyone who ever hosts or rehosts a Gwtar in the future, would be true regardless of what wiki software I run.
(As it happens, Gwern.net has never used MediaWiki, or any standard dynamic CMS. It started as Gitit, and is now a very customized Hakyll static site with a lot of nginx options. I am surprised you thought that because Gwern.net looks nothing like any MediaWiki installation I have seen.)
Works locally, but it does need to decompress everything up front.
How does it bypass the security restrictions which break SingleFileZ/Gwtar in local viewing mode? It's complex enough I'm not following where the trick is and you only mention single-origin with regard to a minor detail (forms).
Beyond that, depending on how badly the server is tampering with stuff, of course it could break the Gwtar; but then, that is true of any web page whatsoever (never mind archiving), which is why servers should be very careful when doing that, and generally shouldn't.
Now you might wonder about 're-archiving': if the IA serves a Gwtar (perhaps archived from Gwern.net), and it injects its header with the metadata and timeline snapshot etc, is this IA Gwtar now broken? If you use a SingleFile-like approach to load it, properly force all references to be static and loaded, and serialize out the final quiescent DOM, then it should not be broken and it should look like you simply archived a normal IA-archived web page. (And then you might turn it back into a Gwtar, just now with a bunch of little additional IA-related snippets.) Also, note that the IA, specifically, does provide endpoints which do not include the wrapper, like APIs or, IIRC, the 'if_/' fragment. (Besides getting a clean copy to mirror, it's useful if you'd like to pop up an IA snapshot in an iframe without the header taking up a lot of space.)
What if a web server on localhost happens to handle the request? Why not request from a guaranteed-unreachable place like http://0.0.0.0/ or http://localhost:0/ (port zero)?
I find it easier to just mass delete assets I don't want from the "pageTitle_files/" directory (js, images, google-analytics.js, etc).
If you really just want the text content you could just save markdown using something like https://addons.mozilla.org/firefox/addon/llmfeeder/.
Yes, I have. I tried MAFF, MHT, SingleFile, and some others over the years. MAFF was actually my go-to for many years because it was just a zip container. It felt future-proof for a long time, until it wasn't (I needed to manually extract the contents to view them once the supporting extension was gone).
I seem to recall that MHT caused me a little more of a conversion problem.
It was my concern for future-proofing that eventually led me back to "Save As..".
My first choice is "Save as..." these days because I just want easy long-term access to the content. The content is always the key and picking and choosing which asset to get rid of is fairly easy with this. Sometimes it's just all the JS/trackers/ads, etc..
If "Save as..." fails, I'll try 'Reader Mode' and attempt "Save as.." again (this works pretty well on many sites). As a last resort I'll use SingleFile (which I like too - I tested it on even DOS browsers from the previous century and it passed my testing).
A locally saved SingleFile page can be loaded into FF, and I can always perform a "Save As..." on it if I want to for some reason (e.g., smaller file, no JS trackers, cleaner HTML, etc.).
I prefer it because it can save without packing the assets into one HTML file. Then it's easy to delete or hardlink common assets.
Yes. A web browser can't just read a .zip file as a web page. (Even if a web browser decided to try to download, and decompress, and open a GUI file browser, you still just get a list of files to click.) Therefore, far from satisfying the trilemma, it just doesn't work.
And if you fix that, you still generally have to choose between staying single-file and being efficient. (You can serve a split-up HTML from a single ZIP file with some server-side software, which gets you efficiency, but now it's no longer single-file; and vice versa. Because if it's a ZIP, how does the browser stop downloading and fetch only the parts it needs?)
Now, maybe you mean something like, 'a web server could additionally run some special CGI software or a plugin or do some fancy Lua scripting in order to munge a ZIP and split it up on the fly so as to do something like serve it to clients as a regular efficient multi-file HTML page'. Sure. I already cover that in the writeup, as we seriously considered this and got as far as writing a Lua nginx script to support special range requests. But then... it's not single-file. It's multi-file - whatever the additional special config file, script, plugin, or executable is.
Tar is sequential. Each entry header sits right before its data. If the JSON manifest in the Gwtar preamble says an asset lives at byte offset N with size M, the browser fires one Range request and gets exactly those bytes.
The other problem is decompression. Zip entries are individually deflate-compressed, so you'd need a JS inflate library in the self-extracting header. Tar entries are raw bytes, so the header script just slices at known offsets. Needing no decompression code keeps the preamble small.
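So the whole "asset server" in the preamble can be on the order of this (illustrative, not the actual Gwtar code):

    // The manifest maps each asset to a byte range in this same file; serving
    // one is a single Range request and a Blob, with no tar parsing or inflate
    // code shipped in the preamble.
    const manifest = { "style.css": { offset: 51200, size: 4096, type: "text/css" } }; // made-up numbers

    async function getAsset(name) {
      const { offset, size, type } = manifest[name];
      const res = await fetch(location.href, {
        headers: { Range: "bytes=" + offset + "-" + (offset + size - 1) },
      });
      return new Blob([await res.arrayBuffer()], { type });  // exactly those bytes
    }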
I've done this before for reading/extracting files inside ISO images from the browser. It was fast and avoided the need to download a whole 2.4GB ISO just to grab a few files inside it.
Would W3C Web Bundles and HTTP SXG Signed Exchanges solve for this use case?
WICG/webpackage: https://github.com/WICG/webpackage#packaging-tools
"Use Cases and Requirements for Web Packages" https://datatracker.ietf.org/doc/html/draft-yasskin-wpack-us...
As far as I know, we do not have any hash verification beyond that built into TCP/IP or HTTPS etc. I included SHA hashes just to be safe and forward compatible, but they are not checked.
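(If a future viewer ever did want to check them, it would only be a few lines of Web Crypto, something like the sketch below; but nothing does this today.)

    // Sketch only: verify a fetched asset against a stored SHA-256 hex digest.
    async function verifyAsset(blob, expectedHex) {
      const digest = await crypto.subtle.digest("SHA-256", await blob.arrayBuffer());
      const hex = [...new Uint8Array(digest)]
        .map(b => b.toString(16).padStart(2, "0")).join("");
      return hex === expectedHex;
    }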
There's something of a question of what hashes are buying you here and what the threat model is. In terms of archiving, we're often dealing with half-broken web pages (any of whose contents may themselves be broken) which may have gone through a chain of a dozen owners, where we have no possible web of trust to the original creator, assuming there is even one in any meaningful sense, and where our major failure modes tend to be total file loss or partial corruption somewhere during storage. A random JPG flipping a bit during the HTTPS range request download from the most recent server is in many ways the least of our problems in terms of availability and integrity.
This is why I spent a lot more time thinking about how to build FEC in, like with appending PAR2. I'm vastly more concerned about files being corrupted during storage or the chain of transmission or damaged by a server rewriting stuff, and how to recover from that instead of simply saying 'at least one bit changed somewhere along the way; good luck!'. If your connection is flaky and a JPEG doesn't look right, refresh the page. If the only Gwtar of a page that disappeared 20 years ago is missing half a file because a disk sector went bad in a hobbyist's PC 3 mirrors ago, you're SOL without FEC. (And even if you can find another good mirror... Where's your hash for that?)
> Would W3C Web Bundles and HTTP SXG Signed Exchanges solve for this use case?
No idea. It sounds like you know more about them than I do. What threat do they protect against, exactly?
- an executable header
- which then fuse mounts an embedded read-only heavily compressed filesystem
- whose contents are delivered when requested (the entire dwarf/squashfs isn't uncompressed at once)
- allowing you to pack as many of the dependencies as you wish to carry in your archive (so, just like an AppImage, any dependency which isn't packed can be found "live")
- and doesn't require any additional, custom infrastructure to run/serve
Neat!
https://gwern.net/doc/philosophy/religion/2010-02-brianmoria...
I will try it on Chrome tomorrow.
I don't know if anyone else gets "unemployed megalomaniacal lunatic" vibes, but I sure do.
The Lighthaven retreat in particular was exceptionally shady, possibly even scam-adjacent; I was shocked that he participated in it.
Apparently every important browser has supported it for well over a decade: https://caniuse.com/mdn-api_window_stop
Here's a screenshot illustrating how window.stop() is used - https://gist.github.com/simonw/7bf5912f3520a1a9ad294cd747b85... - everything after <!-- GWTAR END is tar compressed data.
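My rough mental model of the layout, going by that gist (simplified and hedged - not the exact structure of the real file):

    <!doctype html>
    <html>
      <head>...styles, metadata, the loader script...</head>
      <body>
        ...archived page content...
        <script>
          // once the HTML above is parsed, abort the rest of the download so
          // the appended tar payload is never pulled down in full...
          window.stop();
          // ...individual assets are then fetched with Range requests against
          // this same URL, using offsets from the embedded manifest.
        </script>
      </body>
    </html>
    <!-- GWTAR END
    ...raw tar bytes follow and are never parsed as HTML...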
Posted some more notes on my blog: https://simonwillison.net/2026/Feb/15/gwtar/
But it could be very interesting for use cases where the main logic lives on the server and people try to manually implement some download and/or lazy-loading logic.
Still probably bad unless you're explicitly working on init and redirect scripts.
I made my own bundler skill that lets me publish artifacts https://claude.ai/public/artifacts/a49d53b6-93ee-4891-b5f1-9... that can be decomposed back into the files, but it is just a compressed base64 chunk at the end.
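The decompose step is basically just something like this (a sketch; the marker string and the gzip assumption are mine, not what the skill actually emits):

    // Split the artifact at a marker, base64-decode the tail, gunzip it,
    // and parse the { path: contents } map back out.
    async function unpack(artifactText) {
      const b64 = artifactText.split("/* BUNDLE: */")[1].trim();   // marker is made up
      const bytes = Uint8Array.from(atob(b64), c => c.charCodeAt(0));
      const stream = new Blob([bytes]).stream()
        .pipeThrough(new DecompressionStream("gzip"));
      return JSON.parse(await new Response(stream).text());        // { "index.html": "...", ... }
    }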
I guess the next question is: if it does work in environments that let you share a single file, will they disable this ability once they find out people are using it?
PHP has a similar feature called __halt_compiler(), which I've used for a similar purpose, or sometimes just to put documentation at the end of a file without needing a comment block.