https://varun.ch (at the bottom of the page)
There are also a couple of directories/network graphs: https://matdoes.dev/buttons https://eightyeightthirty.one/
One of the happiest moments of my childhood (I'm exaggerating) was when my button was placed on that website that I loved to visit every day. It was one of the best validations I ever received :)
What confuses me are the reflexive "why would I publish if I'm not getting the ad revenue" and "why would anyone take their time w/o getting paid" type remarks.
Same comments about music: nobody will record songs without getting paid. And games: what's even the point in playing a shooter without dropping loot?
The last one encapsulates the whole problem well.
Over on /r/division2, a majority of players are baffled by a one-month-only "Realism" mode (all March, worth trying!) that turns off loot boxes and loot drops from tangos. You can solo or co-op the Division 2 Warlords of New York expansion, set in Manhattan, receiving a couple of additional base weapons and weapon mods for each mission completed. It's refreshing to enjoy beating scenarios while liberated from opening every scrap pile on the street and then sorting through inventory for hours.
Gamers on reddit seem universally convinced the gameplay loop for a tactical PvE shooter should be about getting the next loot, rather than executing a mission cleanly or enjoying a strategically cooperative evening with friends defeating a zip code and its boss.
"I won't play a game that's not rewarding." "I won't write a song that doesn't make me a millionaire." "I won't capture my thoughts on a subject unless I get $0.003 an eyeball."
Somewhere along the way, we lost just enjoying the play.
There's a story (I can't find the page at the moment) of someone who was getting pranked all the time: his house TPed or egged or something. So he offered the miscreants $1 to do it again tomorrow. He kept this up, and a few days later offered a quarter. By the time he got down to a dime, they said "there's no way we're going to do it for such a measly sum" and left.
Better-sourced examples also exist: fewer citizens supported a decision to build a nuclear waste repository in Switzerland when they were offered compensation: https://www.bsfrey.ch/wp-content/uploads/2021/08/crowding-ef... p. 96 (sixth page of the PDF).
I published free content on the internet during the 90s and early 2000s, so I lived through that era when you wrote something just for the pleasure of it. What I think has changed is that back then, it was you and your keyboard, and that was your only weapon. The best content (that is, the best idea + writing) won. People would share it in forums, on MSN, over email, with friends, etc. It was more democratic in the sense that we were all equal.
Today that doesn't work anymore. You can write a very good piece, but no one will discover it, because behaviour has changed. You will probably have to invest in ads, or already be someone known in the topic, etc. And that was before AI; with all the AI noise/slop/content, it's impossible today. So if I am going to fight against big media, who are also writing shitty content about the same topics, or Instagram influencers who are posting silly memes, and I need to invest money, I may as well try to earn something back.
PS: I may write an article about it.
I remember going through all the blogs linked on Terry Tao's blog - out of like 50, there were only 8-ish still alive :(
I follow the same set of websites with my feed reader too. There is an OPML file at the end of that page that I use with my feed reader. I keep the list intentionally small so that I can realistically read every post that appears on these websites on a regular basis.
Although I usually read new posts in my feed reader, I still visit each website on the list at least, roughly, once a month, just to see these personal sites in their full glory. These are blogs I have been following for years, in fact some of them for a couple of decades now! So when a new post appears on one of these websites, I make time to read it. It is one of the joys of browsing the Web that I have cherished ever since I first got online on the information superhighway.
Keeping the list small also makes it easy for me to notice when a website goes defunct. Over all these years a few websites did indeed sadly disappear, which I then removed from my list.
The frequent posts also let me quickly try out new ways of telling stories, presenting information, or using new techniques. I think this speeds up how often I post larger-effort things, because the frequent posts let me practice those skills.
A good comparison would be a YouTuber with a Patreon: the YouTube channel gets the produced media, whereas the Patreon gets "cell phone in the moment" updates.
But I totally agree that when folks have to go hunting for things to post about, that can be problematic and annoying.
It might be true, but there are exceptions, like ACOUP (history-focused), which is written by an ancient history professor.
There's a lot more to fixing search than prioritizing recency. In fact, I think recency bias sometimes makes search worse.
> Blog has recent posts (<7 days old)
This may be different from the inclusion criteria for websites in general, but on first read it looks like a site has to be very active.
I might have missed something while skimming it, but would assume others would miss it as well.
* The blog must have a recent post, no older than 12 months, to meet the recency criteria for inclusion.
* Criteria for posts to show on the website: Blog has recent posts (<7 days old), The website can appear in an iframe
The latter criteria are for the website / post to appear in Kagi's random Small Web feature, where they display the blog post in an iframe. (So I think only posts from the last week are displayed there.) Being on the list should ensure that any new posts can be displayed in Small Web though, and presumably that the website is indexed in Kagi's Teclis index as well. At least, I really hope that the Teclis index includes all of those old blog posts too, and doesn't discard them.
EDIT: I just realized freediver actually is Vladimir - I'd love to know if Teclis does index all those older blog posts too. I assume it does index everything that is still present in the RSS feeds?
It is kind of sad that the entire size of this small web is only 30k sites these days.
I think that's naive.
But maybe that's just because my blog wasn't on the list :)
Not sure if you've used this as a source too, but there are a lot of tiny personal sites in this directory as well: https://melonland.net/surf-club
I would expect a raw link in the top bar to the page shown, to be able to bookmark it etc.
How would I check if my site is included?
But it currently does not appear in the search results here: <https://kagi.com/smallweb/?search=zahlman>. The reason appears to be this:
"If the blog is included in small web feed list (which means it has content in English, it is informational/educational by nature and it is not trying to sell anything) we check for these two things to show it on the site: • Blog has recent posts (<7 days old) [...]"
(Source: https://github.com/kagisearch/smallweb#criteria-for-posts-to...)
I can't think of a single blog that I read these days (small or not), yet there are loads of small "old school" sites out there that are still going strong.
I am not associated with this project, so this would be a question for the project maintainer. As far as I understand, the project relies on RSS/Atom feeds to fetch new posts and display them in the search results. I believe this is an easier problem to solve than using a full-blown web crawler.
However, as far as I know, Kagi does have its own full-blown crawler, so I am not entirely sure why they could not use it to present the Small Web search results. Perhaps they rely on date metadata in RSS feeds to determine whether a post was published within the last seven days? But having worked on an open-source web crawler myself, many years ago, I know that this is something a web crawler can determine too, if it is crawling frequently enough.
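As an illustration (not the project's actual code), a feed-based recency check might look roughly like this in Python, using the third-party feedparser library; the feed URL and the 7-day window are just stand-ins:

```python
# Rough sketch: decide whether a feed has a "recent" post,
# relying on the feed's own date metadata.
import calendar
import time

import feedparser  # third-party: pip install feedparser

def has_recent_post(feed_url: str, max_age_days: int = 7) -> bool:
    feed = feedparser.parse(feed_url)
    cutoff = time.time() - max_age_days * 86400
    for entry in feed.entries:
        # feedparser normalizes dates into *_parsed struct_time fields
        # (UTC); they are absent when the feed omits dates entirely.
        published = entry.get("published_parsed") or entry.get("updated_parsed")
        if published and calendar.timegm(published) > cutoff:
            return True
    return False

# Placeholder URL; any RSS/Atom feed would work here.
print(has_recent_post("https://example.com/feed.xml"))
```

A crawler could derive the same signal by diffing pages between visits, as the comment above notes, but the feed metadata makes it a few lines of code.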
So yes, I think you have got a good point and only the project maintainer can provide a definitive answer.
Neither choice is right or wrong, but I like the idea of a cool community amidst the enshittification of the rest of the web.
> March 15 there were 1,251 updates [from feed of small websites ...] too active, to publish all the updates on a single page, even for just one day. Well, I could publish them, but nobody has time to read them all.
If the reader accumulates a small set of whitelist keywords, perhaps selected via an optional tag-cloud UI, then that estimated 1,251 likely drops to about a single page (most days) - see the sketch below.
If you wish to serve that as noscript, it would suffice to partition visible/invisible content, e.g. by <section class="keywords ...">, and let the user apply CSS (or a script via extension or bookmarklet) to reveal just their locally known interests.
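A rough sketch of that whitelist filter in Python; the update records and keyword set are hypothetical:

```python
# Toy sketch: reduce ~1,251 daily updates to the handful matching
# a reader's whitelist keywords. The data shapes here are made up.
updates = [
    {"title": "Styling footnotes with plain CSS", "tags": ["css", "indieweb"]},
    {"title": "My sourdough starter, week 3", "tags": ["cooking"]},
]
whitelist = {"css", "eleventy", "indieweb"}  # e.g. picked from a tag-cloud UI

# Keep only updates whose tags intersect the whitelist.
visible = [u for u in updates if whitelist & set(u["tags"])]
for u in visible:
    print(u["title"])
```

The same intersection test could run server-side (emitting only matching sections) or client-side (toggling section visibility), whichever fits the noscript goal.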
I have a blog filter that does something similar (https://alexsci.com/rss-blogroll-network/discover/), but the UI I ended up with isn't great and too many things are uncategorized.
In fact I took your topmost entry with no helpful site/update tags and dove in a little, to try to understand why an RSS-friendly blogger might not be passing along such tags for better reader discovery.
Turns out my scarce-info test-case blogger has a Mastodon profile that immediately lists all these tags about himself [I've stripped it down]:
#FrontEnd Developer #CSS #Halifax #London #Singapore Technical writer and rabbit-hole deep-diver Former Organiser for https://londonwebstandards.org & https://stateofthebrowser.com Interests: #Bushcraft #Outdoors #DnD #Fantasy #SciFi #HipHop #CSS #Eleventy #IndieWeb #OpenSource #OpenWeb
I conclude that if he knew such site and post tags would be useful in RSS, he'd probably make the tiny effort to wire up the descriptions.
Nonetheless I merely crawled links for a minute to find this info, so I imagine something like the free tier of the Cloudflare crawling API might suffice over time as a simplistic automated fix to hint-decorate blog sites.
I mean, given that we're not trying to recreate PageRank, but just trying to tip the balance in favor of desirable initial discovery.
Crawling related sites for tags could work (open graph tags on the website are another good source). I'm wary of mixing data across contexts though. A blog and a Mastodon profile may intend to present a different face to the world or could discuss different topics.
But it doesn't need to be this way: the small web can also be about sustainable monetization. In fact there's a whole page on that at https://indieweb.org/business-models
There's nothing wrong with "publishers" aspiring to get paid.
We should want indie developers, writers, etc. to make money, so that the only game in town doesn't end up being the people who didn't care about being ethical. </rant>
We could say: that's JavaScript. But some JavaScript operates only on the DOM. It's really XHR/fetch and friends that are the problem.
We could say: CSS is OK. But CSS can fetch remote resources, and if JS isn't there, I wonder how long it would take for ad vendors to come up with CSS-only solutions... or maybe they have already?
That would make the Small Web bigger but it would get to the main point. I'd be fine with a site like the New Yorker that has more bells and whistles be included as long as I could experience it without a tracked ad from DoubleClick.
Right now any serious outfit simply cannot be included in the Small Web but we really need companies there.
Interestingly, I've noticed that some users find this suspicious because there's no cookie banner! People may have become so used to seeing them that a site without one can look dubious or unprofessional. And I'm pretty sure some maintainers include them just to conform with common practice, or due to legal uncertainty.
Maybe a simple, community-driven public declaration would help. Something like a "No-Tracking Web Declaration": a short document describing fair practices that websites could reference, such as "only first-party session cookies", "server logs used only for operational purposes", etc.
A website could then display a small statement such as "This site follows the No-Tracking Web Declaration v1.0". This might help legitimize the approach, and give visitors and operators confidence that avoiding the usual bells and whistles can actually be compliant with applicable regulations.
I (and AI) drafted something here, contributions would be highly welcomed: https://github.com/fbilhaut/no-tracking
It's not just JavaScript; it's cookies, it's "auto-loading" resources (e.g. 1x1 pixels with per-request unique URLs), it's third-party HTTP requests to other domains (which might set cookies too).
I think the XKCD comic about encryption-vs-wrench has never been more apt for Gemini the protocol...
Anyone interested in seeing what the web looks like when the search engine selects for real people and not SEO-optimized slop should check out https://marginalia-search.com .
It's a search engine with the goal of finding exactly that - blogs, writings, all by real people. I am always fascinated by what it unearths when using it, and it really is a breath of fresh air.
It's currently funded by NLNet (temporarily) and the project's scope is really promising. It's one of those projects that I really hope succeeds long term.
The old web is not dead, just buried, and it can be unearthed. In my opinion an independent non monetized search engine is a public good as valuable as the internet archive.
So far as I know, Marginalia is the only project that, instead of just taking Google's index and massaging it a bit (like all the other search engines), is truly seeking to be independent and practical in its scope and goals.
Regarding the financials, even though the second nlnet grant runs out in a few weeks, I've got enough of a war chest to work full time probably a good bit into 2029 (modulo additional inflation shocks). The operational bit is self-funding now, and it's relatively low maintenance, so if worse comes to worst I'll have to get a job (if jobs still exist in 2029, otherwise I guess I'll live in the shameful cardboard box of those who were NGMI ;-).
If Google is ranking small web results better than Marginalia, that’s actionable.
If the best result isn’t in the index and it should be, that’s actionable.
There are no PMs breathing down your neck to inject more ads in the search results, you don’t depend on any broken internal bespoke tools that you can’t fix yourself, and you don’t need anybody’s permission to deploy a new ranking strategy if you want to.
I don't think they do that. Instead, "usefulness" is mostly synonymous with commercial intent: searching for <x> often means "I want to buy <x>".
Even for non-commercial queries, I think the sad reality is that most people subconsciously prefer LLM-generated or content-farmed stuff too. It looks more professional, has nice images (never mind that they're stock photos or AI-generated), etc. Your average student looking for an explanation of why the sky is blue is more interested in a TikTok-style short than some white-on-black or black-on-gray webpage that gives them 1990s vibes.
TL;DR: I think that Google gives the average person exactly the results they want. It might just not be what a small minority on HN wants.
The reason Marginalia (for some queries) feels like it shows such refreshing results is that it simply does not take popularity into account.
There is some truth in this, but to me it's similar to saying that a drug dealer gives their customers exactly what they want. People "want" those things because Google and its ilk have conditioned them to want those things.
I don't deny the importance of encryption; it is really what shaped the modern web, allowing for secure payment, private transfer of personal information, etc... See what I am getting at?
Removing encryption means that you can't reasonably do financial transactions, accounts and access restriction, exchange of private information, etc... You only share what you want to share publicly, with no restrictions. It seriously limits commercial potential which is the point.
It also helps technically. If you want to make a tiny web server, like on a microcontroller, encryption is the hardest part. In addition, TLS comes with expiring certificates, requiring regular maintenance; you can't just set up your server and leave it alone for years, still working. Dropping TLS can also bring back simple caching proxies, great for poor connectivity.
Two problems remain with the lack of encryption; the first is authenticity. Anyone can man-in-the-middle and change the web page; TLS prevents that. But what I think is an even better solution is to do it at the content level: sign the content, like a GPG signature, not the server; this way you can guarantee the authenticity of the content no matter where you are getting it from.
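For what it's worth, here is a minimal sketch of that content-level signing idea, using Ed25519 from Python's third-party cryptography package as a stand-in for GPG; the keys and page bytes are made up:

```python
# Minimal sketch of content-level signing: verify the *content*,
# so it stays trustworthy no matter which server or cache served it.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

# Publisher side: sign the page once, then serve page + signature anywhere.
private_key = Ed25519PrivateKey.generate()
page = b"<html><body>My small-web page</body></html>"
signature = private_key.sign(page)

# Reader side: verify against the author's published public key,
# regardless of transport (plain HTTP, a mirror, a caching proxy).
public_key = private_key.public_key()
try:
    public_key.verify(signature, page)
    print("content is authentic")
except InvalidSignature:
    print("content was tampered with")
```

In practice you would distribute the public key out of band (or pin it on first use), which is exactly where the key-distribution questions raised further down this thread come in.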
The other thing is the usual argument about oppressive governments, etc... Well, if you want to protect yourself, TLS won't save you; you will be given away by your IP address. They may not see exactly what you are looking at, but the simple fact that you are connecting to a server containing sensitive data may be evidence enough. Protecting your identity is what networks like Tor are for, and you can hide a plain-text server behind the Tor network, which would act as the privacy layer.
Governments can still track you with little issue since SNI is unencrypted. It's also very likely that Cloudflare and the like are sharing what they see as they MITM 80% of your connections.
Maybe, I suspect not, but even so if we reduce the number of men in the middle that's pretty nice.
How would this work in reality? With the current state of browsers this is not possible, because the ISP can still insert their content into the page, and the browser will still load the modified content even though it does not match the signature. Nothing forces GPG signature verification with current tech.
If you mean that browsers need to be updated to verify GPG signature, I'm not sure how realistic that is. Browsers cannot verify the GPG signature and vouch for it until you solve the problem of key revocation and key expiry. If you try to solve key revocation and key expiry, you are back to the same problems that certificates have.
Some of the same problems. One nice thing about verifying content rather than using an SSL connection is that plain-old HTTP caching works again.
That aside, another benefit of less-centralized and more-fine-grained trust mechanisms would be that a person can decide, on a case-by-case basis, which entities should be trusted/revoked/etc., rather than relying on root CAs that cover huge swaths of the internet. Admittedly, most people would just use "whatever's the default", which would not behave that differently from what we have now. But it would open the door to more ergonomic fine-grained decision-making for those who wish to use it.
Another pro is that no encryption means super low power microcontrollers and retrocomputers can browse freely. The system req's go down by orders of magnitude. I think enforcing TLS in the Gemini protocol was a huge mistake; there are so many retrocomputing enthusiasts that would love to browse Geminispace on their Amigas and 486s -- it might actually have been a significant part of the userbase -- but they're locked out because their CPUs simply cannot reasonably handle modern TLS.
I don't have a lot to say about the technical discussion here, other than "TLS null cipher could be fine but also a lot more infrastructure than desirable", which could subvert your intent here.
Maybe we should normalise Tor usage before it becomes a surefire signal to the FBI to raid one's home.
> Two problems remain with the lack of encryption; the first is authenticity. Anyone can man-in-the-middle and change the web page; TLS prevents that. But what I think is an even better solution is to do it at the content level: sign the content, like a GPG signature, not the server; this way you can guarantee the authenticity of the content no matter where you are getting it from.
If your microcontroller can't do TLS then it probably won't do GPG either. But you can still serve HTTP content on port 80 if you need to support plaintext. I believe a lot of package distribution is still over HTTP.
Edit: Sorry, missed the web server part somehow and was thinking of a microcontroller based client.
> In addition, TLS comes with expiring certificates, requiring regular maintenance; you can't just set up your server and leave it alone for years, still working. Dropping TLS can also bring back simple caching proxies, great for poor connectivity.
Yeah, TLS and DNS are two of the biggest hurdles to a completely distributed Internet. Of course, you go down that road and you get IPFS, which sounds cool to me but doesn't seem to have ever taken off.
It is not a problem if you are only serving static files.
You do realise that "is it technically possible?" is like 1% of the question in computing, at most, yes? HTTP and HTTPS are what we've got.
Even an ESP32 can (just) handle TLS. Given relatively modern designs, you end up on remarkably small chips before TLS is a real blocker.
Can all this performative love for unencrypted HTTP just die already. You’ve all forgotten what it was actually like, and what the drawbacks actually are. This is so tiring.
> The other thing is the usual argument about oppressive governments, etc... Well, if you want to protect yourself, TLS won't save you; you will be given away by your IP address. They may not see exactly what you are looking at, but the simple fact that you are connecting to a server containing sensitive data may be evidence enough. Protecting your identity is what networks like Tor are for, and you can hide a plain-text server behind the Tor network, which would act as the privacy layer.
A huge "citation needed" for this whole paragraph. Just admit that you don't care about this use case and move on. Don't present a contrived and completely unjustified hypothetical where oppressive governments behave exactly in a way that happens to mean there's only room for the technologies that you personally are into.
You’ve completely departed from reality. It’s not 2004 anymore.
I just mentioned that because I expected someone to say "but privacy...", because privacy and encryption go hand in hand. And my argument is that the encryption we usually think of in the context of the web is TLS, and it is not a good fit in that context.
The goal here is to publish information for everyone to see; it is not secret messaging. What you may want to protect is your identity. There are networks especially designed for this, and you are better off using those, but if you are not, then I believe that accessing an HTTP website through an anonymizing proxy (like Tor) is better at protecting your identity than relying on the TLS layer of HTTPS or Gemini.
People will still do financial transactions on an unencrypted web because the utility outweighs the risk. Removing encryption just guarantees the risk is high.
That does not necessarily require TLS to mitigate (although TLS does help anyway). There are other issues with financial transactions, whether or not TLS is used. (I had an idea for, and wrote a draft specification of, a "computer payment file", to try to improve the security of financial transactions and avoid some kinds of dishonesty; it has its own security and does not require TLS (nor any specific protocol), although using TLS with it is still helpful.) (There are potentially other ways to mitigate the problems as well, but this is one way that I think would be helpful.)
I think it should allow but not require encryption.
> Removing encryption means that you can't reasonably do financial transactions, accounts and access restriction, exchange of private information, etc... You only share what you want to share publicly, with no restrictions. It seriously limits commercial potential which is the point.
Note that the article linked to says "the Gemini protocol is so limited that it’s almost incapable of commercial exploitation", even though Gemini does use TLS. (Also, accounts and access restriction can sometimes be used with noncommercial stuff as well; they are not only commercial.)
> It also helps technically. If you want to make a tiny web server, like on a microcontroller, encryption is the hardest part.
This is one of the reasons I think it should not be required. (Neither the client side nor server side should require it. Both should allow it if they can, but if one or both sides cannot (or does not want to) implement encryption for whatever reason, then it should not be required.)
> Anyone can man-in-the-middle and change the web page; TLS prevents that. But what I think is an even better solution is to do it at the content level: sign the content, like a GPG signature
Using TLS only prevents spies (except Cloudflare) from seeing or altering the data in transit; it does not prevent the server operator from doing so (or protect against reassigned domain names, if you are using the standard certificate authorities for the WWW, especially if you are using cookies for authentication rather than client certificates, which would avoid that issue (though the other issues would not entirely be avoided)).
Cryptographic signatures of the files are helpful, especially for static files, and would help even if the files are mirrored, so there are real benefits. However, these are different benefits from those of using TLS.
In other cases, if you already know what the file is and it is not changing, then using a cryptographic hash will help, and a signature might not be needed (although you might have that too); the hash can also be used to identify the file so that you do not necessarily need to access it from one specific server if it is also available elsewhere.
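A quick sketch of that hash-as-identifier idea in Python; the file contents and the expected hash here are made up:

```python
# Sketch: when you already know a file's hash, any source will do;
# the hash both names the file and proves it was not altered.
import hashlib

def verify(data: bytes, expected_sha256_hex: str) -> bool:
    return hashlib.sha256(data).hexdigest() == expected_sha256_hex

# Pretend this blob was fetched over plain HTTP from any mirror.
blob = b"example file contents"
# The known-good hash would normally come from a trusted listing;
# here we compute it from the same bytes just to show the mechanics.
known = hashlib.sha256(b"example file contents").hexdigest()
print(verify(blob, known))  # True; a tampered blob would print False
```

This is essentially how content-addressed systems work: the identifier is the integrity check, so the transport no longer needs to be trusted.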
> Well, if you want to protect yourself, TLS won't save you; you will be given away by your IP address. They may not see exactly what you are looking at, but the simple fact that you are connecting to a server containing sensitive data may be evidence enough.
There is also SNI. Depending on the specific server implementation, using a false SNI might or might not work, but even if it does, the server might not provide a certificate with correct data in that case (my document for the Scorpion protocol mentions this possibility, and suggests what to do about it).
I recently jumped back onto IRC Rizon, and man, what a throwback.
The small web really is MUCH bigger when you start adding other protocols like IRC and onion sites.
We also recently released support for plain-text-lists, which is a Gemini-inspired spec that uses lists as its foundational structure.
That was my understanding before it grew - it's a web of small indie sites.
It has about 1000 blogs in the repo at the moment. Discovery was the most time-consuming part.
The question is how do you take it to a million? There probably are at least that many good personal and non-commercial websites out there, but if you open it up, you invite spam & slop.
For example, I have several non-commercial, personal websites that I think anyone would agree are "small web", but each of them fails the Kagi inclusion criteria for a different reason. One is not a blog, another is a blog but with the wrong cadence of posts, etc.
1) The requirement that it needs to be a blog. There are plenty of small-web sites of people who obsess over really wonderful and wacky stuff (e.g., https://www.fleacircus.co.uk/History.htm) but don't qualify here.
2) The requirement that it needs to be updated regularly. Same as above - I get that infrequently updated websites don't generate a "daily morning" feed, but admitting them wouldn't harm in any way.
3) Blanket ban on Substack-like platforms while allowing Blogspot, Wordpress.com, YouTube, etc. Bloggers follow trends, so you're effectively excluding a significant proportion of personal blogs created in the last six years, including the stuff that isn't monetized or behind interstitials. The outcomes are pretty weird: for example, noahpinionblog.blogspot.com is on your list, but noahpinion.blog is apparently no longer small web.
2) 'Regularly' means posted in the last 2 years to be included
3) Substack has an annoying subscribe popup, and ads/popups are against the spirit of what this represents.
Do you need to take it to a million in the same place? Is that still "small"?
Why not have 2000 hand curated directories instead?
It depends on what you're trying to achieve. If you want a personal feed of stories from interesting people, 50 is probably enough to give you some interesting daily reading. But if you want to build a "small web" search lens, you absolutely need coverage. For example, Kagi is billing this as a "small web" search filter, but it excludes a lot of the small web, because they only allow actively maintained blogs, and only a subset of them.
So a similarity-based graph/network of webpages should cluster good with good, bad with bad. That is what I've seen so far, anyway.
With that, you just need to enter the graph in the right place, something that is fairly trivial.
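As a toy illustration of that clustering idea (not anyone's production ranking code), here is a sketch using TF-IDF cosine similarity from scikit-learn; the page texts are invented:

```python
# Toy sketch: build a similarity matrix over page texts and follow
# edges from a known-good seed page into its neighborhood.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pages = {
    "seed-blog": "personal essay about restoring a vintage analog synth",
    "neighbor":  "personal essay about repairing a vintage drum machine",
    "seo-spam":  "best synth deals 2024 top 10 buy now affiliate links",
}
names = list(pages)
vectors = TfidfVectorizer().fit_transform(pages.values())
sim = cosine_similarity(vectors)

# Similarity scores from the seed page: the genuine neighbor should
# score higher than the spam page, so "entering the graph" at a good
# seed tends to keep you among good pages.
for name, score in zip(names, sim[names.index("seed-blog")]):
    print(f"{name}: {score:.2f}")
```

A real system would use embeddings or link structure rather than raw TF-IDF, but the principle is the same: pick a trusted entry point and walk the high-similarity edges.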
gemini://gemi.dev/
FWIW, Dillo now has plugins for both Gemini and Gopher, and the plugins work fine on the various BSDs.
For a while I hoped that VR would become the new World Wide Web, but it was successfully torpedoed by the Metaverse initiative.
Large companies have helped build the web but they've done at least as much, if not more, to help kill it.
The early web had a lot going on and allowed for a lot of creative experimentation which really caught the eye and the imagination.
Gemini seems designed to allow only long-form text content. You can't even have a table, let alone inline images, which makes it very limited even for dry scientific research papers, which I think would otherwise be an excellent use case for Gemini. But it seems this sort of thing is a deliberate design/philosophical decision by the authors, which is a shame. They could have supported full Markdown, but they chose not to (ostensibly to ease client implementation, but there are a squillion Markdown libraries, so that assertion doesn't hold water for me).
It's their protocol so they can do what they want with it, but it's why I think Gemini as a protocol is a dead end unless all you want to do is write essays (with no images or tables or inline links or table of contents or MathML or SVG diagrams or anything else you can think of in Markdown). It's a shame, as I think the client-cert stuff for auth is interesting.
Note that the Gemini protocol is just a way of moving bytes around; nothing stops you from sending Markdown if you want (and at least some clients will render it - same with inline images).
I can't imagine the backlash if someone tried to normalize Markdown. Isn't the entire point of Gemini that it can never be extended or expanded upon?
Maybe it would be better to create an entirely different protocol/alt web around Markdown that didn't risk running afoul of Gemini's philosophical restrictions?
> The SmolNet consists of content available through alternative protocols outside the web such as gemini:// gopher:// Gopher+ gophers:// finger:// spartan:// text:// SuperText nex:// scorpion:// mercury:// titan:// guppy:// scroll:// molerat:// terse:// fsp://. There is a summary of the main SmolNet protocols.
Of course, as others have said, we could just use HTML without JavaScript or cookies and we'd be a lot of the way there with 95% less effort. But hey, in the future we'll probably just query an AI rather than load a web page ourselves.
https://www.demarkus.io/ https://github.com/latebit-io/demarkus
The Mark protocol, with client tools, a TUI, a server, and MCP (yes I know, hype train, but useful for agents). Markdown-only format. Simple :)
My point is that by using exotic tech, they limit the general public exposure to the "un-commercial web".
I used to use all sorts of small websites in 2005. But by 2015 I used only about 10 large ones.
Like many changes, I cannot pinpoint exactly when this happened. It just occurred to me someday that I do not run into many unusual websites any longer.
It's unfortunate that so much of our behavior is dictated by Google. I don't think it's malicious or even intentional, but at some point they stopped directing traffic to small websites.
And like a highway closure rippling through small-town economies, it was barely noticed by travellers but devastating to the recipients. What were once quaint sites became abandoned.
The second force seems to be video. Because video is difficult and expensive to host, we moved away from websites. Travel blogs were replaced with travel vlogs. Tutorials became videos.
It did seem we had that for a while and now everything funnels back to a handful of big platforms.
Maybe as AI swallows the data of the entire web, it will start to seek out these small sites, small creators, and rare personal content to keep itself interesting, and we'll see more of them?
It's indirectly intentional, in that Google isn't wringing its hands trying to destroy tiny blogs, but they (Google) have deliberately chosen to ignore anything that doesn't play the SEO game, whatever the driver of that game is.
Small websites have small dollars?
If I search for 'astronomy', Google doesn't earn any money whether I go to the Wikipedia page for astronomy or to Joe's astronomy site, right?
A few years ago they upranked all results on a few trusted domains, so many of those domains filled up with advertising and cheap copywritten content. They framed this as 'fighting misinformation.'
Just another place for hackers to go and keep to themselves.
Non-tech people, who needs them! In shell script slab city, we can share cooking recipes in plaintext.
Just spin up your instance, map it to a port, initialize the listener daemon, download the cli viewer…and you’re in shell script slab city! Who needs Twitter
I have no idea if Gemini can help revitalize the small web or not; I am a pessimist by nature, so who knows. I fear so many things have changed in the last 20-30 years that some things may be permanently lost. For instance, in the early 2000s or so, a local university offered personal homepages for every student. This stopped in like 2010 or so, and never came back. At any rate, I welcome any idea to try to rescue the part of the world wide web not already destroyed by commercial interests. People seem to use private entities though; I saw that with Discord, and then compared it to old-school mIRC on Freenode or GalaxyNet. Discord is private. I hate that.
That is my website! To be fair, the hard part is keeping a personal website regularly updated without making people think it's abandoned. I don't have a regular post cadence, so it looks like I don't touch the website at all for months. But I regularly update my posts and other sections even if there aren't any new posts.
I also wrote something similar to OP - https://www.unsungnovelty.org/posts/10/2024/life-of-a-blog-b...
And I'd also like to mention https://marginalia-search.com/, which is a small OSS search engine I have been using more and more these days. I find it great for finding IndieWeb / Small Web content.
Consider adding

<link rel="alternate" type="application/atom+xml" href="https://www.unsungnovelty.org/index.xml" />

in the HEAD of the pages on your website. It makes autodiscovery of the RSS feed a bit easier - not just for crawlers, but also for people with RSS plugins in their browser. It will make the RSS icon appear in their browser's URL field for easy subscription. It took me a while to find the RSS link at the bottom of your pages!
(Also, thanks for reminding me that it was time I donated something to the Marginalia project: https://buymeacoffee.com/marginalia.nu )
1. No, it's not JavaScript-only. https://old-search.marginalia.nu/ is still available. It is also mentioned in https://about.marginalia-search.com/article/redesign/ as going to be there for a very long time.
2. I don't think using JavaScript makes a site bad. It's a very nice site now; I prefer it to the old version. My website doesn't use JS for any functionality yet, but I've never said never either. The reason to use JS just hasn't arisen; the day it does, I will use it.
But I understand the sentiment. I used to be a no-JS guy too, but I've been softened by having to use it professionally, only to think: hmmm, not bad.
I did a Small Web search at Marginalia and was immediately pointed to sites that claim that I and everyone in my political party are literally the spawn of Satan--I really don't think it's my thing.
I helped develop the ARPANET back in 1969-1970 while working for the UCLA Comp Sci dept, got a brief mention in RFC 57, hold several network patents, and was on usenet before the usenix conference where we voted to call it that ... I'm bemused by all the people who claim that boomers are technologically inept (I think they have us mixed up with our parents). Anyway it's been a heck of a wild ride and didn't end up quite how JCR Licklider envisioned it.
Thanks for sharing.
And woe betide thee whose website isn't a blog.
See: https://github.com/kagisearch/smallweb?tab=readme-ov-file#%E...
Honestly the hard part was that a lot of the sites I wanted to submit were already there!
If anyone wants to join up and add our sites together, here's mine:
https://yesteryearforever.xyz/