The other was more interesting. It was a "rise from the dead" post (https://news.ycombinator.com/item?id=46466027), meaning I posted it on Jan 2 (Friday), and then on Jan 5 (Monday) I started to get emails from readers giving me feedback about the post. I had not expected that level of response...
From this experience, I learned nothing about either how the algorithm works or when the best time to post is. IMO just being part of the community and showing your work frequently is the best strategy here.
Launch early, launch often - or something like that.
As a side note: that clock is so cool - I was mesmerized for several minutes!
2016-era HN had its share of negativity, but it also had a lot fewer people - the light green in those charts is misleading.
Except for the last year - things seem dark for a lot of users here lately.
I do think that, as a metric for total reach, a static cutoff actually works reasonably well, though some form of square-root normalization over total users is probably the best balance.
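To make the trade-off concrete, here's a minimal sketch with made-up numbers (my own illustration; the comment doesn't specify an exact formula). A raw count overweights later, larger years, a per-capita rate overweights tiny early years, and dividing by the square root of the user base sits between the two:

    def normalized_score(count: float, total_users: float) -> float:
        """Scale a yearly count by the square root of that year's user base."""
        return count / (total_users ** 0.5)

    # Hypothetical yearly data: (matching comments, total active users)
    years = {
        2016: (1_200, 150_000),
        2020: (3_500, 400_000),
        2024: (6_000, 700_000),
    }

    for year, (count, users) in sorted(years.items()):
        # raw count, per-capita rate, and the square-root-normalized score
        print(year, count, round(count / users, 4), round(normalized_score(count, users), 2))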
My company's core technology extends topic models to enable arbitrary hierarchical graphs, with additional branches beyond the topic and word branches. We expose those annotations through a SQL interface. It's an alternative/complementary approach to embeddings/LLMs for working with text data. In this case, the hierarchy broke submissions down into paragraphs, added a layer to pool them into submissions, and added one more layer to pool those by year (on the topic branch).
Our word branch is a bit more complicated, but we have some extended documentation on our website if you are interested in digging a bit deeper. Always happy to chat more about the technical details of our topic models if you have any questions!
Overview of Our Technology: https://blog.sturdystatistics.com/posts/technology/
Technical Docs: https://docs.sturdystatistics.com
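If it helps to picture the pooling step, here's a rough sketch in plain Python (illustrative only - not our actual SQL interface or API): paragraph-level topic weights get summed into submissions, and submissions get summed into years.

    from collections import defaultdict

    # Hypothetical records: (year, submission_id, topic weights for one paragraph)
    paragraphs = [
        (2016, "sub_a", {"security": 0.7, "ai": 0.1}),
        (2016, "sub_a", {"security": 0.5, "ai": 0.2}),
        (2024, "sub_b", {"security": 0.05, "ai": 0.9}),
    ]

    def pool(records, key_fn):
        """Sum topic weights over a grouping key."""
        pooled = defaultdict(lambda: defaultdict(float))
        for year, sub_id, weights in records:
            key = key_fn(year, sub_id)
            for topic, weight in weights.items():
                pooled[key][topic] += weight
        return pooled

    by_submission = pool(paragraphs, lambda y, s: (y, s))  # paragraphs pooled into submissions
    by_year = pool(paragraphs, lambda y, s: y)             # pooled into years (same result as
                                                           # pooling submissions, since sums compose)

    print({t: round(w, 2) for t, w in by_year[2016].items()})  # {'security': 1.2, 'ai': 0.3}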
Do you have any insights into the Clawd spam ravaging /new and /show?
I'm in there, part of a (down) "voting ring" (not coordinated).
1. We might be drowning under the "tip of the iceberg" at the moment (quickly generated AI slop), but there's a silent crowd of builders doing long-term work (still with the help of AI) that will only surface after months of effort. I expect more of the bottom of the iceberg to show up over time.
2. A lot of the most interesting work in science was done out of sheer curiosity, not given a specific problem to solve. The current generation of AI is good -- and getting better -- at the latter, but genuinely incapable of doing the former in a remotely meaningful way.
In other words, I'm long on truly human-driven innovation.
The good content is still there, but it drowns in noise and I'm not very good at filtering it out. I even suspect Hacker News is one of the prime advertising targets of coding-agent companies.
I would love to see if this is just my perception or if it can be found in the data.
I'm sure someone's done the numbers on HN trending topics over time aaand yup http://varianceexplained.org/r/hn-trends//.
The bigger problem is the effect it's had on "Show HN" posts, which in the past you could depend on having been built by the person submitting them. That's why those posts tended to be more strongly moderated: harsh comments were often seen as attacks on the person's art. Now I feel like most of the credibility has left the room on those posts.
Don't get me wrong - I have no problem with "vibe coding"; I do plenty of it myself these days, for commercial purposes. But I feel it cheapens and waters down the act of presenting work as your own.
tl;dr of the essay: we need to move back to human-to-human recommendations and trust systems, and people are already doing that in a lot of ways by retreating to DMs (iMessage, email, in-person conversations) and personal recommendations rather than relying on Google + the algorithm. What this means for public forums I don't know. I think they're gone and will never come back, probably.