VCs: what is it
Tom Dick & Harry: AI
VCs: get the ** out of here, we already burnt enough money and will never see it back
Tom Dick & Harry: hear me out this is different
VCs: ok you have 5 minutes to explain your product to me
Tom Dick & Harry: I don't have one
VCs: get the ** out of here
Tom Dick & Harry: hear me out
VCs: ok, you have 30 seconds to impress us.
Tom Dick & Harry: I just quit Microslop and still have high level contacts there
VCs: Hot damn!!! you are our lottery ticket to recoup all the money we have lost in other ventures. This is going to be a race against time, before your contacts go stale. Here's 60M for you, wine and dine your friends with it. On your way out you will find some AI generated product names and some vague product descriptions. Pick one and slap it on some website and announce our deal. Now get the ** out of here.
```markdown
# Run NNNN
## First Impressions [What state is the project in? What did the last agent leave?]
## Plan [What will you work on this iteration? Why?]
## Work Log [Fill this in as you work]
## Discoveries [What did you learn? What surprised you? What should the next agent know?]
## Summary [Fill this in before committing]
```
This is surprisingly effective and lets agents easily continue in-progress work and understand past decisions.
So.. yea. Ignore and move on.
https://en.wikipedia.org/wiki/Acquisition_of_Twitter_by_Elon...
I think that's just how rich people play the game. Why use your own money when you can use other people's money?
This sounds like my current "phase" of AI coding. I have had so many project ideas over the years that I can now just spec out: everything I've thought about, all the little ideas and details, things I only had time to think about but never implement. I then feed that to Claude and watch it meet my every specification; I can test it, note any bugs, recompile, and re-test. I can review the code, as you would with a junior you're mentoring, and have it rewrite it in a specific pattern.
Funnily enough, I love Beads, but did not like that it uses git hooks for the DB, and I can't tie tickets back to ticketing systems, so I've been building my own alternative; mine just syncs to and from GitHub issues. I think this is probably overkill for what's been a solved thing: ticketing systems.
And I use git hooks on the tool event to print the current open gate (subtask) from task.md, so the agent never deviates from the plan; this is important if you use yolo mode. It might be an original technique; I've never heard of anyone else using it. A sticky note in the tool response, printed by a hook, that highlights the current task and where the current task.md is located. I have seen stretches of 10 or 15 minutes of good work done this way with no user intervention. Like a "Markdown Turing Machine".
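A minimal sketch of that kind of hook command (the task.md location and the "- [ ]" checkbox format for gates are assumptions, not necessarily what the original setup uses):

```
#!/usr/bin/env python3
# Hypothetical hook command: print the first open gate from task.md so it
# shows up as a "sticky note" in the tool response. The task.md location
# and the "- [ ]" checkbox format for gates are assumptions.
import os
import sys

task_file = os.environ.get("TASK_FILE", "task.md")

try:
    with open(task_file, encoding="utf-8") as f:
        open_gates = [line.strip() for line in f if line.lstrip().startswith("- [ ]")]
except FileNotFoundError:
    sys.exit(0)  # no task file: stay silent so the hook never blocks the agent

if open_gates:
    print(f"CURRENT GATE ({os.path.abspath(task_file)}): {open_gates[0]}")
```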
```
"hooks": {
  "PostToolUse": [
    {
      "matcher": "Edit|Write",
      "hooks": [
        {
          "type": "command",
          "command": "mix format ${file} 2>/dev/null || true"
        }
      ]
    }
  ],
  "TaskCompleted": [
    {
      "matcher": "",
      "hooks": [
        {
          "type": "prompt",
          "prompt": "reminder: run mix test if implementation is complete"
        }
      ]
    }
  ],
  "Stop": [
    {
      "hooks": [
        {
          "type": "prompt",
          "prompt": "Check if all tasks are complete. If not, respond with {\"ok\": false, \"reason\": \"what remains to be done\"}."
        }
      ]
    }
  ]
},
```
Just update it to iterate over your file. It should be a little easier to manage than git hooks and can hammer in testing.
For me a gate is: a dependency that must pass before a task is closed. It could be human verification, unit testing, or even "can I curl this?" or "can I build this?" Gates can be re-used, but every task MUST have one gate.
My issue with git hooks integration at that level is, and I know this sounds crazy, that not everyone is using git. I run into legacy projects, or maybe it's still greenfield as heck and all you have is a POC zip file your manager emailed you for whatever awful reason. I like my tooling to be agnostic to models and external tooling so it can easily integrate everywhere.
Yours sounds pretty awesome for what it's worth, just not for me. Wish you the best of luck.
I'm confused how this is any different to the pretty standard agentic coding workflow?
Beads is a nightmare.
I didn't mention above but I made https://github.com/wedow/ticket as my beads replacement back at the end of December.
I have zero reach compared to Jeff there so I'm considering `ticket` to be a smash hit at this point ;)
The context for every single turn could in theory be nearly 1MB. Since this context is being stored in the repo and constantly changing, won't just doing a "git checkout" start to get really heavy after a thousand turns?
For example, codex-cli stores every single context for a given session in a jsonl file (in .codex). I've easily got that file to hit 4 GB in size, just working for a few days; amusingly, codex-cli would then take many GB of RAM at startup. I ended up writing a script that trims the jsonl history automatically periodically. The latest codex-cli has an optional sqlite store for context state.
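For reference, that kind of trimmer can be tiny. A minimal sketch, assuming the sessions live as .jsonl files somewhere under ~/.codex (the exact path and the line budget are assumptions):

```
#!/usr/bin/env python3
# Hypothetical trimmer for oversized codex-cli session .jsonl files:
# keep only the most recent N entries so startup stops eating RAM.
# Naive: reads each file fully into memory; fine as a periodic cron job sketch.
from pathlib import Path
import shutil

KEEP_LAST = 2000                        # assumed line budget
sessions_dir = Path.home() / ".codex"   # assumed location of the jsonl files

for jsonl in sessions_dir.rglob("*.jsonl"):
    lines = jsonl.read_text(encoding="utf-8").splitlines(keepends=True)
    if len(lines) <= KEEP_LAST:
        continue
    shutil.copy2(jsonl, jsonl.with_suffix(".jsonl.bak"))  # keep a backup first
    jsonl.write_text("".join(lines[-KEEP_LAST:]), encoding="utf-8")
    print(f"trimmed {jsonl}: {len(lines)} -> {KEEP_LAST} lines")
```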
My guess is that by "context", Checkpoints doesn't actually mean the contents of the context window, but just distilled reasoning traces, which are more manageable... but still can be pretty large.
not really? doesn't git checkout only retrieve the current branch? the checkpoint data is in another branch.
we can presume that the tooling for this doesn't expect you to manage the checkpoint branch directly. each checkpoint object is associated with a commit sha (in your working branch, master or whatever). the tooling presumably would just make sure you have the checkpoints for the nearby (in history) commit sha's, and system prompt for the agent will help it do its thing.
i mean all that is trivial. not worth a $60MM investment.
i suspect what is really going on is that the context makes it back to the origin server. this allows _cloud_ agents, independent of your local claude session, to pick up the context. or for developer-to-developer handoff with full context. or to pick up context from a feature branch (as you switch across branches rapidly) later, easily. yes? you'll have to excuse me, i'm not well informed on how LLM coding agents actually work in that way (where the context is kept, how easy it is to pick it back up again). this is just a bit of opining based on why this is worth 20% of $300MM.
if i look at https://chunkhound.github.io it makes me think entire is a version of that. they'll add an MCP server and you won't have to think about it.
finally, because there is a commit sha association for each checkpoint, i would be worried that history rewrites or force pushes MUST use the tooling otherwise you'd end up screwing up the historical context badly.
This is the agent that does the work: https://github.com/ChicagoDave/devarch/blob/main/docs/.claud...
My success with Claude Code is directly related to this agent and its output.
https://github.com/ChicagoDave/sharpee/tree/main/docs/contex...
Note I archive these to a different non-repo folder regularly. It's fun to tell Claude to go through them like a RAG implementation and note the progress of my development over time, which I update semi-regularly at: https://david.cornelson.net/sharpee-status-20260109.html
Now updated to https://david.cornelson.net/sharpee-status-20260211.html
I have hooks in Claude to auto-generate the work summary and read the last work summary on start. Now if I could only get Claude to quit/restart on its own or have it thoroughly flush its resources (even Anthropic says restarting is better), we'd be getting somewhere. I do wish there were automatic hooks that fire all the time (like a heart monitor) where I could query "if context is below 15%" do this: stop all work, write work summary, commit and push all changes, release all resources, read last work summary, continue the work.
That would be awesome.
Just say what your thing does. Or, better yet, show it to me in under 60 seconds.
Web sites are the new banner ads and headings like that are the new `<blink>`.
Edit: Actually it may just be aimed at investors. Who cares about having a product?
The fact that the first image you see has "$60M seed" in big text, I have to agree, this does not feel aimed at devs.
It's almost like an extension of the "if you're not paying for the product, you are the product" idea. If you're assessing a tool like this and the marketing isn't even trying to communicate to you, the user, what the product does, aren't you also kind of "the product" in this case too?
It's been like this since the Dotcom era
Or did you forget that you can do anything at zombo.com?
It appears to be rather slow today, but here's a Wiki link for the uninitiated- https://en.wikipedia.org/wiki/Zombo.com
It's still around, but has been redesigned and it's under "new management". Further proof that the internet is dying.
Yes yes, a Dropbox comment. But the problem here is 1 million people are doing the same thing. For this to be worth a $60M seed I suspect they need to do something more than you can achieve by messing around locally.
"Claude build me a script in bash to implement a Ralph loop with a KV store tied to my git commits for agent memory."
Show me the next blockchain or noSQL, or git!
Everything is "I took a prompt and stapled it to another prompt and added <insert existing OSS> framework"... and then, holds hand out: "MONEY PLEASE!"
That said, nobody knows what the AI future looks like. Entire’s entire thesis is a solution for something we don’t even know we need. It’s a massive bet and uphill battle. Traditionally, dev tool success stories come from grassroots projects of developers solving their own problems and not massive VC funded efforts that tell you what you need to do.
EDIT: Or just keep a proper (technical) changelog.txt file in the repo. A lot of the "agentic/LLM engineering frameworks" boil down to best approaches and proper standards the industry should have been following decades ago.
I don't see the need for a full platform that is separate from where my code already lives. If I'm migrating away, it's to something like tangled, not another VC funded company
It also creates a challenge with respect to the embedding model chosen and how future-proof it turns out to be.
Commit hook > Background agent summarizes (in a data structure) the work that went into the commit > saves to a note
Built similar (with a better name) a week ago at a hackathon: https://github.com/eqtylab/y
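The rough shape of that pipeline as a plain post-commit hook, with the summarizer left as a placeholder (the "agent-context" notes ref and the summarize() call are assumptions; the real version would hand the diff to whatever background agent you use):

```
#!/usr/bin/env python3
# Hypothetical .git/hooks/post-commit: summarize the work that went into the
# commit and attach the result as a git note on that commit.
import json
import subprocess

def summarize(diff: str) -> dict:
    # Placeholder: in practice, hand the diff to a background agent and
    # get a structured summary back.
    return {"files_changed": diff.count("diff --git"), "summary": "TODO"}

sha = subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
diff = subprocess.check_output(["git", "show", "--patch", sha], text=True)

note = json.dumps(summarize(diff), indent=2)
# "agent-context" is an assumed notes ref, kept separate from the default ref.
subprocess.run(
    ["git", "notes", "--ref", "agent-context", "add", "-f", "-m", note, sha],
    check=True,
)
```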
Context management is still an important human skill in working with an agent, and this makes it harder.
Token costs aside, arguably fresh context is also better at problem solving. When it was just me coding by hand, I didn't save all my intermediate thinking work anywhere: instead thinking afresh when a similar problem came up later helped in coming up with better solutions. I did occasionally save my thinking in design docs, but the equivalent to that is CLAUDE.md and similar human-reviewed markdown saved at explicit -umm- checkpoints.
Also, sometimes it gets something very wrong. I don't want to then poison every subsequent session with the wrong thing it learned. This has been a major issue for me at $WORK with AGENTS.md files that my colleagues write: they make my agent coding much worse, so I need to manually delete them often.
Is this the product? I don't want to jump on the detractor wagon, but I read the post and watched the video, and all I gathered is that it dumps the context into the commit. I already do this.
How's your ability to get an enterprise to mandate that their 5000 employees use it? That's what most of these types of rounds are about.
I guess if I had to do it, I'd reject pushes without the requisite commit to entire/checkpoints/v1. I think their storage strategy is a bit clunky for that, but it can be done. I might look to do something more like the way jujutsu colocates its metadata. I don't think this particular implementation detail makes too much of a difference, though. I got along just fine in a regulated environment by setting a policy and enforcing it before git existed. Ideally, we'd do things for a good reason, and if you can't get along in that world, then it's probably not the right job for you. Sometimes you've got to get the change controls in order before we can mainline your contributions because we have to sustain audits. I don't think this is about forcing people to do something that they otherwise wouldn't do if you told them that it's a requirement of the job.
And since you mentioned "enterprise", they won't be able to use it if they do not use GH, and a lot of enterprise corpos use Azure DevOps.
Also there is no user management etc, if anything this is ANTI-enterprise.
(I will give the agent boom a bit of credit: I write a lot more documentation now, because it's essentially instruction: the initial instruction to anything else that works on it. That's a total inversion, and I think it's good.)
The bigger problem is, like others have said, there's no one true flow. I use different agents for different things. I might summarize a lot of reasoning with a cheap model to create a design document, or use a higher reasoning model to sanity check a plan, whatever. It's a lot like programming in English. I don't want my tool to be prescriptive and imposing its technical restrictions on me.
All of that aside: it's impossible that this tool raised $60 million. The problem with this post is that it's supposed to be a hype post about changing the game "entirely", but it doesn't give us a glimpse into whatever we're supposed to be hyped about.
Then later, if it goes off piste in another session, tell it to re-read the ADDs for x, y and z.
If someone could make that process less clunky, that would be great. However, it's very much not just funnelling every turd uttered in the prompt onto a git branch and trying to chug the lot down every session.
For Claude Code, this is literally a JSONL file in .claude/projects/[path with - instead of /]/[uuid].jsonl... You can trivially have Claude Code write a commit hook to do this for you if you find it useful.
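For what it's worth, the copy step really is small. A minimal sketch of such a commit hook, assuming the session files live under ~/.claude/projects/ with the path munged as described above (the .agent-sessions destination folder is an assumption):

```
#!/usr/bin/env python3
# Hypothetical post-commit hook: copy the newest Claude Code session JSONL
# for this project into the repo so it travels with the code.
import shutil
import subprocess
from pathlib import Path

repo = Path(subprocess.check_output(
    ["git", "rev-parse", "--show-toplevel"], text=True).strip())

# Sessions are stored under a directory named after the project path
# with "/" replaced by "-" (per the comment above).
project_dir = Path.home() / ".claude" / "projects" / str(repo).replace("/", "-")
sessions = sorted(project_dir.glob("*.jsonl"), key=lambda p: p.stat().st_mtime)

if sessions:
    dest = repo / ".agent-sessions"   # assumed destination folder
    dest.mkdir(exist_ok=True)
    shutil.copy2(sessions[-1], dest / sessions[-1].name)
    print(f"archived {sessions[-1].name} -> {dest}")
```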
I'm sure their vision is wider than that, but they will need to iterate fast for this not to be made obsolete before they can even release something.
FWIW, I had an agent adding archiving of the JSONL for changesets linked to the work they're doing right as I started looking at this article, as when you start automating a non-interactive agent flow it's such an obviously necessary step to be able to retrospectively improve the workflows.
>FWIW, I had an agent adding archiving of the JSONL for changesets linked to the work they're doing
Would love to know more! Sounds interesting.
Putting it in a branch so it doesn't pollute your checked out copy may well be a good idea in the longer run. For now, I keep all the plans available, as I then have a review stage that reviews all the plans and writes things like "the user got increasingly exasperated as the agent kept ignoring direction" :D and helps propose improvements to the tool and workflow to reduce the number of those exasperated moments...
I'm thinking of packaging it up and open-sourcing it. It's all very experimental and likely to totally change every day for now, but I find it helpful. It's built me a personal dashboard, and keeps adding stuff to it with relatively minimal direction beyond "spying" on my notes and journal at this point. At one point a plan specifically called me out for procrastinating and planned for how to "work around" that with tooling (I wish it'd succeed at that).
There's nothing really fancy here, just feedback loops that ensure the wild claims the agents sometimes make are tested and rejected.
To the original JSONL bit, the uuid you need to look it up is also the UUID you need to call "claude --resume [uuid]", so extracting it also allows for e.g. having the verification agent (that checks if the implementation agent was truthful when ticking off the quality gates - spoiler: it very often isn't) feed its report back into the original implementation conversation if rejected, instead of having the implementation agent "start over" without the full context. I haven't tested that yet, but I'm hopeful.
Though even if you don't have it restart, you can point it to the snapshot of the previous conversation as a source of additional info, as another option.
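Sketched out, that feedback idea is roughly the following (untested, as I said; the report filename, the munged project path, and the exact flag combination are assumptions):

```
#!/usr/bin/env python3
# Hypothetical glue: find the newest session UUID for a project and feed a
# verification agent's rejection report back into that same conversation.
import subprocess
from pathlib import Path

# Assumed munged project path, as in ~/.claude/projects/[path with - instead of /]
project_dir = Path.home() / ".claude" / "projects" / "-home-me-myrepo"
newest = max(project_dir.glob("*.jsonl"), key=lambda p: p.stat().st_mtime)
session_uuid = newest.stem   # the filename is the session UUID

report = Path("review-report.md").read_text(encoding="utf-8")  # hypothetical report file
subprocess.run(
    ["claude", "-p", "--resume", session_uuid,
     f"A verification agent rejected the quality gates. Address this report:\n\n{report}"],
    check=True,
)
```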
My experience is that Cursor's reliance on VS Code's clunky panel-based UI and jack-of-all-trades harness is holding it back. Likewise, Claude Code shoe-horning a GUI into a TUI and perma-binding to a single model family is not the ideal end-state.
The VC play here? The git context CLI thing is a foundational step that lays the groundwork for a full IDE/workflow tool, I guess.
You can see the turns, tools called, outputs, changes. Similar to what he's trying to achieve.
I tried a similar(-ish) thing last year at https://github.com/imjasonh/cnotes (a Claude hook to write conversations to git notes) but ended up not getting much out of it. Making it integrated into the experience would have helped, I had a chrome extension to display it in the GitHub UI but even then just stopped using it eventually.
It's legit mania in VC world, even as they're looking at each other going "is this mania? Is it mania if you're asking if it's mania?" The only rule right now is the music is playing, so no one wants to grab a chair. There's a sense this might come crashing down, but what's a player gonna do, sit on the side while paper markups are producing IRR that is practically unprecedented?
This kinda has to end badly at this point.
So is this just a few context.md files that you tell the agent to update as you work and then push it when you are done???
I welcome more innovation in the code forge space but if you’re looking for an oss alternative just for tracking agent sessions with your commits you should checkout agentblame
I wanted to more or less build Jira for agents and track the context there.
If I had to guess 60 million is just enough to build the POC out. I don't see how this can compete though, Open AI or Anthro could easily spin up a competitor internally.
Eventually you'll find a way that works for you that needs none of it, and if any piece of all the extras IS ever helpful, Anthropic adds it themselves within a month or two.
For debugging you could translate it out to English, but if these agents can do stuff without humans in the loop, why do they need to take notes in English?
I can't imagine creating this without hundreds of millions if not billions. I think the future is specialized models
You're kidding.
Two paragraphs later:
> Entire will be based on three key components: a git-compatible database [...]
So which one is it?
Whether or not useful for agent collaboration, the data here will be more valuable than gold for doing RL training later on.
If you're approaching this problem-space from the ground up, there are just so many fundamental problems to solve that it seems to me that no amount of money or quality of team can increase your likelihood of arriving at enough right answers to ensure success. Pulling off something like this vision in the current red-ocean market would require dozens of brilliant ideas and hundreds of correct bets.
Claude Code supports hooks. This allows me to run an agent skill at the end of every agent execution to automatically determine if there were any lessons worth learning from the last session. If there were, new agent skills are automatically created or existing ones automatically updated as appropriate.
I continue to find it painfully ironic that the Claude Code team is unable to leverage their deep expertise and unlimited token budget to keep the docs even close to up-to-date automatically. Either that or they have decided accurate docs aren't important.
[0] https://code.claude.com/docs/en/interactive-mode#built-in-co...
Some weeks ago we launched this:
https://github.com/Legit-Control/monorepo/tree/main/examples...
The idea was simple: keep AI prompts, intents, and conversations alongside your code and commits, basically treating AI interaction as first-class development artifacts. Everything in just plain Git.
We struggled to get momentum. Things happen.
Now, less than 24 hours ago, Thomas announced Entire.io:
“Entire CLI hooks into your git workflow to capture AI agent sessions on every push. Sessions are indexed alongside commits, a searchable record of how code was written.”
That’s… very, very close to what we tried to build.
Honestly, I love the vision and think this will matter a lot in the AI age. It’s validating to see someone with that reach betting big on it.
Best of luck to Thomas and the team behind Entire.
greg brockman https://x.com/gdb/status/2019566641491963946
> 6. Work on basic infra [...] there's a lot of infrastructure that currently go around the tools, such as observability, tracking not just the committed code but the agent trajectories that led to them, and central management of the tools that agents are able to use.
peter steinberger (clawdbot/openclaw) https://newsletter.pragmaticengineer.com/p/the-creator-of-cl...
> Peter now views PRs as “prompt requests” and is more interested in seeing the prompts that generated code than the code itself
If I were an employee looking to join Entire, or a developer evaluating the durability of Entire for my needs over the long-term, I'd ask things like —
- What's the terminal value of a DevTool in the AI era? Is it closer to a standard 10x ARR? or maybe 100x...perhaps 1000x?
- Is there value in the asset attributable to the network? If so, is it defensible? What is the likelihood protocols emerge that simply disintermediate the network moat of an AI agent memory company like Entire?
- What kind of AI data are developers willing to silo with a single vendor? What kind of controls do Enterprises expect of an AI agent memory system? Can Entire reasonably provide them to grow in the next 12-24 months?
- As a potential employee...if you join a company with a $60M seed funding and 19 employees, what is the ARR they need to achieve based on the first product in market in roughly ~12 months? $6M...$12M...20M? Is there a DevTools company that's ever done that!? What is the thesis for "this time is different"? Getting developers to pay for things is tough 'doncha know?
Only then can you ask basic technical diligence questions —
- Is Git a system that can scale to be the memory system and handle the kind of tasks that AI agents are expected to handle?
- Are the infrastructure costs for these early platform decisions here reasonable over the long-term? Or do they just run out of money?
I wish them the best, and I hope employees get liquidity, and people take money off the table in a responsible way.
As for SDLC, you can do some good automations if you're very opinionated, but people have diverse tastes in the way they want to work, so it becomes a market selection thing.
Say what you want about LLM-assisted software development, the chances are high that it will stay, meaning a non-trivial part of code will be written by an LLM.
So is it better to have
- git commit (mostly only code)
- magic or black box (meaning back and forth by developer with LLM, before commit)
- git commit (mostly only code)
...
(rinse and repeat)
or
- git commit (prompt and/or code)
- git commit (prompt and/or code)
...
(rinse and repeat)
Obviously for that to work the LLM output has to be deterministic and a commit chain has to be pinpointed to a specific weight blob.
Productizing the building blocks of the platform seems like the smart play in today's environment honestly.
The readme is a bit more to the point.
https://github.com/re-cinq/claudit
What's amusing is that I specifically made a comedy license for it, because I thought trying to protect the IP of such a thing is madness in the AI-native era. Then within days I find out it's a massively funded startup!
https://github.com/re-cinq/ai-native-application-license/
I've got Claude Code adding Gemini and OpenCode support to my tool currently.
There is no Composer 2.0. There is Cursor 2.0 and Composer 1.5.
I see zero reason for a person to care about the checkpoints.
And for agents, full sessions just needlessly fill context.
So not sure what is being solved by this.
It's not like $60m in funding was given as charity.
No, it hasn’t. No, it isn’t.
I couldn't find any references of Composer 2.0 anywhere. When did that come out?
I am already overloaded with information (generated by AI and humans) in my day-to-day job, why do I need this additional context, unless the company I work for just wants to spend more money to store more slop?
How is it different from reversing it: given a PR -> generate a prompt based on business context relevant to the repo or mentioned issues -> preserve it as part of the PR description?
I barely look at git commit history, why should I look at even higher-cardinality data, in this case: WTF are you doing, idiot, I said don't change the logic to make tests pass, I said properly write tests!
But seriously, $300M valuation for a CLI tool that adds some metadata to Git commits. I don't know what to say.
Personally, I don't let LLMs commit directly. I git add -p and write my own commit messages -- with additional context where required -- because at the end of the day, I'm responsible for the code. If something's unclear or lacks context, it's my fault, not the robot's.
But I would like to see a better GitHub, so maybe they will end up there.
I tested it with multiple PRs and I see nothing in GH nor Entire dashboard.
https://www.youtube.com/watch?v=aJUuJtGgkQg
* This is snarky. Yes. But seriously.
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
It's like complaining about the availability of the printing press because it proliferated tabloid production, while preferring beautifully hand-crafted tomes. It's reactively trendy to hate on it because of the vulgar production it enables and to elevate the artisanal extremes that escape its apparent influence
"Essentially all software is augmented with Stack Overflow now, or if not, built with technology or on platforms that is."
Agentic development isn't a panacea, nor is it as widespread as you claim. I'd wager that the vast majority of developers treat AI as a more specific search engine to point them in the direction they're looking for.
AI hallucination is still a massive problem. Can't tell you the number of times I've used agentic prompting with a top model that writes code for a package based on the wrong version number or flat out invents functionality that doesn't exist.
If I do it myself, I get the added bonus of actually understanding what the code is doing, which makes debugging any issues down the line way easier. It's also generally better for teams b/c you can ask the 'owner' of a part of the codebase what their intuition is on an issue (trying to have AI fill in for this purpose has been underwhelming for me so far).
Trying to maintain a vibecoded codebase essentially involves spelunking through an unfamiliar codebase every time manual action is needed to fix an issue (including reviewing/verifying the output of an AI tool's fix for the issue).
(For small/pinpointed things, it has been very good. e.g.: write a python script to comb through this CSV and print x details about it/turn this into a dashboard)
Opus 4.5 and 4.6 is where those instances have gone down, waaay down (though still true). Two personal projects I had abandoned after sonnet built a large pile of semi working cruft it couldn’t quite reason about, opus 4.6 does it in almost one shot.
You are right about learning, but consider: you can educate yourself along the way. In some cases it's no substitute for writing the code yourself, and in many cases you learn a ton more, because it's an excellent teacher and you can try out ideas to see which work best or get feedback on them. I feel I have learned a TON about the space, though unlike when I code it myself I may not be extremely comfortable with the details. I would argue we are about 30% of the way to the point where writing things yourself is not just no longer relevant, it's a disservice to your company.
Surely if all software is augmented with agentic development now, our most important space probes have had their software augmented too, right?
What about my blog that I serve static pages on? What about the xray machine my dentist uses? What about the firmware in my toaster? Does the New York Stock Exchange use AI to action stock trades? What about my telescope's ACSOM driver?
Blog: I use AI to make mine, and blog developers are using agentic tools
X-ray machine: again a little late here, plus if you want to start dragging in places that likely have a huge amount of bureaucracy I don't know that that's very fair
Firmware in your toaster: cmon these are old basic things, if it’s new firmware maybe? But probably not? These are not strong examples
NYSE actioning stock trades: no, they don't use AI to action stock trades (that would be dumb and slow and horribly inefficient and non-deterministic), but they may very well now be using AI to work on the codebase that does
Let's try to find maybe more impactful examples than small embedded components in toasters and telescopes, 1970s-era telescopes that are already past our solar system.
The denial runs deep
I am afraid however that with these tools Claude Code will just copy this in 3 months and have it as standard functionality within itself as a plugin.
Also I find it ironic that this domain is blocking all AI tools from accessing it; I tried to ask AI to explain what the product is and it is blocking Claude/GPT access to the website.
Gotta bully that thing man. There's probably room in the market for a local tool that strips the superfluous niceties from instructions. Probably gonna save a material amount of tokens in aggregate.
Tech marketing has become a lot like dating: no technical explanation or intellectual honesty, just words, words, words and unreasonable expectations.
People usually cannot be honest in their romantic affairs, and here it is the same. Nobody can state: we just want to be between you and whatever you want to accomplish, rent seeking forever!
Will they ever care to elaborate HOW things work and the rationale behind stating this provides any benefit whatsoever? Perhaps this is not intended for the type of humans that care about understanding and logic?
Oh, nevermind, it’s some MS dude.
I guess when you are Ex-Github CEO, it is that easy raising a $60M seed. I wonder what the record for a seed round is. This is crazy.
The AI fatigue is real, and the cooling-off period is going to hurt. We’re deep into concept overload now. Every week it’s another tool (don’t get me started on Gas Town) confidently claiming to solve… something. “Faster development”, apparently.
Unless you’re already ideologically committed to this space, I don’t see how the average engineer has the energy or motivation to even understand these tools, never mind meaningfully compare them. That’s before you factor in that many of them actively remove the parts of engineering people enjoy, while piling on yet another layer of abstraction, configuration, and cognitive load.
I’m so tired of being told we’re in yet another “paradigm shift”. Tools like Codex can be useful in small doses, but the moment it turns into a sprawling ecosystem of prompts, agents, workflows, and magical thinking, it stops feeling like leverage and starts feeling like self-inflicted complexity.
This is why I use the copilot extension in VS code. They seem to just copy whatever useful thing climbs to the surface of the AI tool slop pile. Last week I loaded up and Opus 4.6 was there ready to use. Yesterday I found it has a new Claude tool built in which I used to do some refactoring... it worked fine. It's like having an AI tool curator.
I also keep getting job applications for AI-native 'developers' whatever that means.
The “I’m so tired of being told we’re in another paradigm shift” comments are widely heard and upvoted on HN and are just so hard to comprehend today. They are not seeing the writing on the wall and following where the ball is going to be even in 6-12 months. We have scaling laws, multiple METR benchmarks, internal and external evals of a variety of flavors.
“Tools like codex can be useful in small doses”: the best and most prestigious engineers I know inside and outside my company virtually do not code at all. I'm not one of them, but I also do not code at all whatsoever. Agents are sufficiently powerful to justify and explain themselves and walk you through as much of the code as you want them to.
My issue is that we’ve now got a million secondary “paradigm shifts” layered on top: agent frameworks, orchestration patterns, prompt DSLs, eval harnesses, routing, memory, tool calling, “autonomous” workflows… all presented like you’re behind if you’re not constantly replatforming your brain.
Even if the end-state is “engineers code less”, the near-term reality for most engineers is still: deliver software, support customers, handle incidents, and now also become competent evaluators of rapidly changing bot stacks. That cognitive tax is brutal.
So yes, follow where the ball is going. I am. I’m just not pretending the current proliferation is anything other than noisy and expensive to keep up with.
"As a result, every change can now be traced back not only to a diff, but to the reasoning that produced it."
This is a good idea, but I just don't see how you build an entire platform around this. This feels like a feature that should be added to GitHub. Something to see in the existing PR workflow. Why do I want to go to a separate developer platform to look at this information?
In my case I don't want my tools to assume git; my tools should work whether I open SVN, TFS, Git, or a zip file. It should also sync back into my 'human' tooling, which is what I do currently. Still working on it, but it's also free, just like Beads.
On the one hand they think these things provide 1337x productivity gains, can be run autonomously, and will one day lead to "the first 1 person billion dollar company".
And in complete cognitive dissonance also somehow still have fantasies of future 'acquisition' by their oppressors.
Why acquire your trash dev tool?
They'll just have the agents copy it. Hell, you could even outright steal it, because apparently laundering any licensing issues through LLMs short circuits the brains of judges to protohuman clacking rocks together levels.
As to why those companies would acquire a startup instead of having an agent generate it for them: why has big tech ever acquired tech startups when they could have always funded it in house? It's not always a technical answer. Sometimes it's internal political fights, time to market, reducing competition, PR reasons, or they just want to hire the founder to lead a team for that internally and the only way he'll agree is if there is an exit plan for his employees. I sat in "acquire or build" discussions before. The "how hard would it be to just do that?" was just one of many inputs into the discussion. Ever wondered why big, big companies acquire a smaller one, not invest in it, then shut it down a few years later?
1. Tom Preston-Werner (Co-founder). 2008 – 2014 (Out for, eh... look it up)
2. Chris Wanstrath (Co-founder). 2014 – 2018
(2018: Acquisition by Microsoft: https://news.ycombinator.com/item?id=17227286)
3. Nat Friedman (Gnome/Ximian/Microsoft). 2018 – 2021
4. Thomas Dohmke (Founder of HockeyApp, some A/B testing thing, acquired by Microsoft in 2014). 2021 - 2025
There is no Github CEO now, it's just a team/org in Microsoft. (https://mrshu.github.io/github-statuses/)
Nat's company Xamarin was acquired by Microsoft in 2016.
HockeyApp wasn't A/B testing, but a platform for iPhone, Mac, Android, and Windows Phone developers to distribute their beta version (like what TestFlight is today to the App Store), collect crash reports (like what Sentry is today), user feedback, and basic analytics for developers.
The Ximian thing I wrote from obviously faulty memory (I now wonder if it was influenced by early 2000s Miguel's bonobo obsession), the rest from various google searches. Should have gone deeper.
https://en.wikipedia.org/wiki/Ximian
Ximian, Inc. (previously called Helix Code and originally named International Gnome Support) was an American company that developed, sold and supported application software for Linux and Unix based on the GNOME platform. It was founded by Miguel de Icaza and Nat Friedman in 1999 and was bought by Novell in 2003
...
Novell was in turn acquired by The Attachmate Group on 27 April 2011. In May 2011 The Attachmate Group laid off all its US staff working on Mono, which included De Icaza. He and Friedman then founded Xamarin on 16 May 2011, a new company to continue the development of Mono. On 24 February 2016, Microsoft announced that they had signed an agreement to acquire Xamarin.
The same very online group endlessly hyping messy techs and frontend JS frameworks, oblivious to the Facebook and Google sized mechanics driving said frameworks, are now 100x-ing themselves with things like “specs” and “tests” and dreaming big about type systems and compilers we’ve had for decades.
I don't wanna say this cycle is us watching Node jockeys discover systems programming in slow motion through LLMs, but it feels like that sometimes.
Although this isn't stored in git, I don't see any particular need to, since it's too detailed. Instead I have the agent write design docs (as an alternative to plan mode) and check those in. That seems like enough.
The answer is, in case anyone wonders: because OpenAI is providing a general purpose tool that has potential to subsume most of the software industry; "We" are merely setting up toll gates around what will ultimately become a bunch of tools for LLM, and trying to pass it off as a "product".
Github has always been mediocre and forgettable outside of convenience that you might already have an account on the site. Svn was just shitty compared to git, and cvs was a crime against humanity.
I mean, git was '05 and GitHub was '08, so it's not like the stats will say much one way or another. StackOverflow only added it to their survey in 2015. No source of truth, only anecdotes.
It's interesting to me that the only thing that made me vastly prefer using Github over bitbucket is that Github prioritised showing the readme over showing the source tree. Such a little thing, but it made all the difference.
"Buy my fancy oil for your coal shovel and the coal will turn into gold. If you pay for premium, you don't have to shovel yourself."
If everything goes right, there won't be a coal mine needed.
https://web.archive.org/web/20090531152951/http://www.survey...
Versioning and tracking the true source code, my thoughts, or even the thoughts of other agents and their findings, seems like a logical next step. A hosted central place for it and the infrastructure required to store the immense data created by constantly churning agents that arrive at a certain result seems like the challenge many seem to be missing here.
I wish you the best of luck with your startup.
I'm not just running it on code, but on my daily journal, and it produces actionable plans for building infrastructure to help me plan and execute better as a result.
This lesson has been learned over and over (see AppleScript) but it seems people need to keep learning it.
We use simple programming languages composed of logic and maths not just to talk to the machine but to codify our thoughts within a strict internally consistent and deterministic system.
So in no sense are the vague imprecise instructions fed to LLMs the true source code.
I agree - at least with the thesis - that the more we "encode" the fuzzy ideas (as translated by an engineer) into the codebase the better. This isn't the same thing as an "English compiler". It'd be closer to the git commit messages, understanding why a change was happening, and what product decisions and compromises were being designed against.
You could ask that question about all the billions that went into crypto projects.
At the time, there were multiple code hosting platforms like Sourceforge, FSF Savannah, Canonical's Launchpad.net, and most development was still done in SVN, with Git, Bazaar, Mercurial the upstart "distributed" VCSes with similar penetration.
A DVCS was definitely required. And I would say git won out due to Linus inventing and then backing it, not because of a platform that would serve it.
Yes, the kernel and Linus used it, but before that he used a proprietary VCS, which did not go anywhere anyway, really.
SVN didn't need checkouts to edit that I recall? Perforce had that kind of model.
Nobody cares if it makes sense, it just has to appear futuristic and avant-garde.
This is the point of that post, and helpfully it was added at the top in a TL;DR, and was half of that two-sentence TL;DR. Will it succeed or not? Well, that's a coin toss; always has been.
The described situation for human-written code isn't much better. What actually works is putting a ticket (or project) number in the commit message, and making sure everything relevant gets written up and saved to that centralized repository.
And once you have that, the level of detail you'd get from saving agent chats won't add much. Maybe unless you're doing deliberate study of how to prompt more effectively (but even then, the next iteration of models is just a couple months away)?
I think the "provenance gap", or temporal history, can be helped by understanding what you have asked agentic systems to write, understanding what was written, and verifying it.
We aren't yet at a point where something large or extended is easily pushed to agentic coding management - your point of provenance and memory is key here.
doesn’t that presume no value is being delivered by current models?
I can understand applying this logic to building a startup that solves today’s ai shortcomings… but value delivered today is still valuable even if it becomes more effective tomorrow.
The problem is that so many of these things are AI instructing AI and my trust rating for vibe coded tools is zero. It's become a point of pride for the human to be taken out of the loop, and the one thing that isn't recorded is the transcript that produced the slop.
I mean, you have the creator of openclaw saying he doesn't read code at all, he just generates it. That is not software engineering or development, it's brogrammer trash.
> That is not software engineering or development, it's brogrammer trash.
Yes, but it's working. I'm still reading the code and calling out specific issues to Claude, but it's less and less.
It's when you take yourself out of the loop and trust the process that it goes in the wrong direction.
On the other hand, deeply understanding how models work and where they fall short, how to set up, organize, and maintain context, and which tools and workflows support that tends to last much longer. When something like the “Ralph loop” blows up on social media (and dies just as fast), the interesting question is: what problem was it trying to solve, and how did it do it differently from alternatives? Thinking through those problems is like training a muscle, and that muscle stays useful even as the underlying technology evolves.
Now because of models improving, context sizes getting bigger, and commercial offerings improving I hardly hear about them.
Sounds to me like accidental complexity. The essential problem is to write good code for the computer to do its task?
There's an issue if you're (general you) more focused on fixing the tool than on the primary problem, especially when you don't know if the tool is even suitable.
How do you both hold that the technology is so revolutionary because of its productivity gains, but at the same time so esoteric that you'd better be on top of everything all the time?
This stuff is all like a weird toy compared to other things I have taken the time to learn in my career; the sense of expertise people claim comes off to me like a guy who knows the Taco Bell secret menu, or the best set of coupons to use at Target. It's the opposite of intimidating!
This is just wrong. A) It doesn’t promise improvement B) Even if it does improve, that doesn’t say anything about skill investment. Maybe its improvements amplify human skill just as they have so far.
I kinda regret going through the SeLU paper lol back in the late 2010s.
I don’t mean to be so presumptuous as to teach Grampa how to suck eggs, but I think Amazon’s working backwards process is instructive.
His use of bombastic language in this announcement suggests that he has never personally worked on serious software. The deterioration of GitHub under his tenure is not confidence inspiring either, but that of course may have been dictated by Nadella.
If you are very generous, this is just another GitHub competitor dressed up in AI B.S. in order to get funding.
In this case I think the root problem is that the OP (https://entire.io/blog/hello-entire-world/) is the wrong genre for HN. It's a fine fundraising announcement, but that sort of enthusiastic general announcement rubs the HN audience the wrong way because what they really want is technical details. Spectacular non technical details like high valuations, etc., tend to accentuate this gap.
I mention this because if you or someone on your team wants to write a technical post about what you're building, with satisfying details and all that, then we could do a take 2 (whenever would be a good time for this).
Imagine being so intellectually lazy that you can't even be bothered to form your own opinion about a product. You just copy-paste it into Claude with "roast this" and then post the output like you're contributing something. That's not criticism, that's outsourcing your personality to an API call. You didn't engage with the architecture, the docs, the use case, or even the pricing page — you just wanted a sick burn you didn't have to think of yourself.
2026: The year everyone fried their brain with Think for Me SaaS.
For any new piece of technology, there is a subset of people whom it will completely and utterly destroy.
I personally rarely need to use Google Maps, and if I do it's a glance at it at the beginning of a trip, and I can find my way there through normal navigation. I might look again if I get lost, whereas I have friends that use it to get directions to go five blocks. I don't think sense of direction is innate either, but it's a muscle you build, and some people choose to not work on that muscle and they suffer the consequences, albeit minor consequences.
I think we are seeing something similar with LLMs with the development and maintenance of reading, planning, creative and critical thinking skills. While some people might have a higher baseline, I think everyone has the ability to strengthen those muscles and the world implores us to do that in many situations, however, now we can pay Altman $0.0010 cents to offload that workout onto a GPU much like people do with navigation and maps. Tech companies love to exploit the dopamine driven response from taking shortcuts, getting somewhere quickly, its no different here.
I think (/know) the implications of this are much more hazardous than the consequences of not exercising your navigational abilities, and at least with navigation there are fallbacks to assist people (signs, landmarks, etc). There are no societal fallbacks for LLM-assisted thinking once someone becomes dependent on it for all aspects of analysis, planning and creativity. Once it is taken away (or they can't afford the quality of output they previously did), where do those natural abilities stand? The implications are very terrifying in my opinion.
I'm personally trying to stay as far away as possible from these things; I see where this is heading and it's not as inconsequential as needing Maps to navigate 5 blocks. I do not want my critical thinking skills correlated 1:1 to the quality and quantity of tokens I can afford or have access to, any more than I want my navigational abilities correlated 1:1 to the quality of the Maps service available to me.
People will say that this is cope, it's the new calculator, whatever. Have fun; I promise you that not knowing trigonometry but having access to an LLM does not give you the ability to write CAD software. I actually think not using these will give you a huge competitive advantage in the future. Someone who has great navigation skills will likely win a navigational competition in the mountains, or survive longer in certain situations. While the scope of those skills is narrow, it still proves a point[0]. The scope of your reading, critical thinking, creativity and planning skills is not limited.
[0]: It should be noted that some of the world's most high-agency and successful people actually participate in navigation as a sport called orienteering, and spend boatloads of money on it. I wonder why that is?
Most agent frameworks (LangChain, Swarm, etc.) obsessed over orchestration. But the actual pain point isn't "how do I chain prompts"—it's "what did the agent do, why, and how do I audit/reproduce it?"
The markdown-files-in-git crowd is right that simple approaches work. But they work at small scale. Once you have multiple agents across multiple sessions generating code in production, you hit the same observability problems every other distributed system hits: tracing, attribution, debugging failures across runs.
The $60M question is whether that problem is big enough to justify a platform vs. teams bolting on their own logging. I'm skeptical—but the underlying insight (agent observability > agent orchestration) seems directionally correct.
EDIT: I suspect the current "solution" is to just downvote (which I do!), but I think people who don't chat with LLMs daily might not recognize their telltale signs so I often see them highly upvoted.
Maybe that means people want LLM comments here, but it severely changes the tone and vibe of this site and I would like to at least have the community make that choice consciously rather than just slowly slide into the slop era.
@dang I would welcome a small secondary button that one can use to community-flag a comment as AI, just so we know.
It's not just the em dashes - it's the cadence, tone and structure of the whole comment.
The actual insight isn't C, it's D.
I suppose it was just a matter of time before this kind of slop started taking over HN.
This has been the story for every trend empowering developers since year dot. Look back and you can find exactly the same said about CD, public cloud, containers, the works. The 'orchestration' (read compliance) layers always get routed around. Always.
Instead of just wiring agents together, I require stake and structured review around outputs. The idea is simple: coordination without cost trends toward noise.
Curious how entire.io thinks about incentives and failure modes as systems scale.
I guess I could not comment at all but that feels like just letting the platform sink into the slopacolypse?
This thread is extremely negative - if you can't see the value in this, I don't know what to tell you.
It's somewhat strange to regularly read HN threads confidently asserting that the cost of software is trending towards zero and software engineering as a profession is dead, but also that an AI dev tool that basically hooks onto Git/Claude Code/terminal session history is worth multiples of $60+ million dollars.
I do see value in this, but like you I think it’s too trivial to implement to capture the value unless they can get some kind of lead on a model that can consume these artifacts more effectively. It feels like something Anthropic will have in Claude Code in a month.
And it was sold to Microsoft at $7B.
I’m sure there’d be some value to extract from the agent produced code in this thing, but I doubt it’s anywhere near as much.
This is not their offering, this is a tool to raise interest.
github for agent code is dropbox final_final2.zip
You are correct, that isn't the moat. Writing the software is the easy part
I definitely see the potential of AI-native version control, it will take a bit more to convince me this is a similar step-level improvement though.
"pfft! I could set all this up myself with a NAS xyz".
https://news.ycombinator.com/item?id=8863
I have never seen any thread that unanimously asserts this. Even if they do, taking HN/reddit asserting something as evidence is the wrong way to look at things.
Same thing it’s always been. Convenience. Including maintenance costs. AI tools have lowered the bar significantly on some things, so SaaS offerings will need to be better, but I don’t want to reinvent every wheel. I want to build the thing I want to build, not the things to build that.
Just like I go to restaurants instead of making every meal myself.
In my experience LLMs tend to touch everything all of the time and don't naturally think about simplification, centralization and separation of concerns. They don't care about structure, they're all over the place. One needs to breathe on their shoulders to produce anything organized.
Maybe there's a way to give them more autonomy by writing the whole program in pseudo-code with just function signatures and let them flesh it out. I haven't tried that yet but it may be interesting.
My mental model is that LLMs are obedient but lazy. The laziness shows in the output matching the letter of the prompt but with as high "code entropy" as possible.
What I mean by "code entropy" is, for example, copy-paste-tweak (high entropy) is always easier (on the short term) for LLMs (and humans) to output than defining a function to hold concepts common across the pastes with the "tweak" represented by function arguments.
LLMs will produce high entropy output unless constrained to produce lower entropy ("better") code.
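To make "code entropy" concrete, here's a toy contrast between the copy-paste-tweak shape and the factored shape of the same logic (not from any real prompt or codebase):

```
# High-entropy output: the same block pasted twice with one value tweaked.
def report_us_sales(rows):
    total = sum(r["amount"] for r in rows if r["region"] == "US")
    print(f"US sales: {total:.2f}")

def report_eu_sales(rows):
    total = sum(r["amount"] for r in rows if r["region"] == "EU")
    print(f"EU sales: {total:.2f}")

# Lower-entropy equivalent: the shared concept pulled into one function,
# with the "tweak" expressed as an argument.
def report_sales(rows, region):
    total = sum(r["amount"] for r in rows if r["region"] == region)
    print(f"{region} sales: {total:.2f}")
```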
Until/unless LLMs are trained to actually apply craft learned by experienced humans, we must be explicit in our prompts.
For example, I get good results from, say, Claude Sonnet when my instructions include:
- Statements of specific file, class, function names to use.
- Explicit design patterns to apply. ("loop over the outer product of lists of choices for each category")
- Implementation hints ("use itertools.product() to iterate over the combinations")
- And, "ask questions if you are uncertain" helps trigger an iteration to quickly clarify something instead of fixing the resulting code.
This specificity makes prompting a lot more work but it pays off. I only go this far when I care about the resulting code. And, I still often "retouch" as you also describe.
OTOH, when I'm vibing I'll just give end goals and let the slop flow.
> The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man. - George Bernard Shaw
The Dropbox take was wrong because they didn't understand the market for the product. This time the people commenting are the target audience. You even get the secondary way this product will lose even if it turns out to be a good idea: existing git forges won't want to lose users and so will standardize and support attaching metadata to commits.
Nah. People post about k8s on here all the time, but that doesn't mean I'm the target audience. Just because _someone_ on HN has a bad take doesn't mean they're the person who needs this. Nor does it mean they even understand it.
How is LangChain doing? How about OpenAI's Swarm or their Agent SDK or whatever they called it? AWS' agent-orchestrator? The crap ton of Agent Frameworks that came out 8-12 months ago? Anyone using any of these things today? Some poor souls built stuff on it, and the smart ones moved away, and some are stuck figuring out how to do complex sub-agent orchestration and handoffs when all you need apparently is a bunch of markdown files.
The dropbox-weekend take wasn't made by the intended target for the product.
This is.
I still remember the reaction when Dropbox was created: "It's just file sharing; I can build my own with FTP. What value could it possibly create".
We forget that human consumption doesn't increase with manufacturing complexity (it can be correlated, but not cause and effect). At the end of day, it's about human connection, which is dependent on emotion, usefulness, and availability.
I've also seen examples of it before. I've got opencode running right now and it has a share-session feature. That whole idea is just a spinoff of the same parent concept that led to this one.
But yes, I would totally love to invest in startups with people's pension funds. It seems like the perfect scam where the only losers are the public that allows such actions.
I LOVE THIS FOUNDER - I am a 10 out of 10 - YES!!!
Take my (investors') money
It's because of everybody there.
Currently no one is on Entire; the investors are betting they will be.
Everything else about the featureset was copy pasted from Slack. No one cares about that part.
If it were also their last, I would be inclined to agree.
I use AI a ton, but there are just way too many grifters right now, and their favorite refrain is to dismiss any amount of negativity with "oh you're just mad/scared/jealous/etc. it replaces you".
But people who actually build things don't talk like that, grifters do. You ask them what they've built before and after the current LLM takeoff and it's crickets or slop. Like the Inglourious Basterds fingers meme.
There's no way that someone complaining about coding agents not being there yet can't simultaneously be someone who'd look forward to the day they could just will things into existence. For the grifters it's not actually about what AI might build for them: it's about "line will go up and I've attached myself to the line like a barnacle, so I must proselytize everyone into joining me in pushing the line ever higher up".
These people have no understanding of what's happening, but they invent one completely divorced from any reality other than the reality they and their ilk have projected into thin air via clout.
It looks like mental illness and hoarding Mac Minis, and it's distasteful to people who know better, especially since their nonsense is so overwhelmingly loud and noisy that it starts to drown out any actual signal.
You could perhaps start by telling what value you see in this? And what this company does that someone can't easily do themselves while committing to GH?
Runs git checkpoint every time an agent makes changes?
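For what it's worth, the roll-your-own version looks something like the sketch below, assuming "checkpoint" just means an automatic commit after each agent edit; the script name, commit message format, and the hook that would invoke it are my guesses, not anything the product documents:

```python
#!/usr/bin/env python3
# checkpoint.py: commit the working tree after an agent edit.
# Wire it up from whatever post-edit hook your agent tool exposes.
import subprocess
import sys
from datetime import datetime, timezone

def checkpoint(note: str) -> None:
    subprocess.run(["git", "add", "-A"], check=True)
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    # --no-verify keeps other commit hooks from firing on every checkpoint.
    subprocess.run(
        ["git", "commit", "--no-verify", "-m", f"checkpoint: {note} ({stamp})"],
        check=False,  # "nothing to commit" is not an error here
    )

if __name__ == "__main__":
    checkpoint(sys.argv[1] if len(sys.argv) > 1 else "agent edit")
```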
But I think commenting on someone's bio is the kinda harshness you only do in the moment, the kinda thing I'd approach differently in hindsight (one that isn't an attempt to be cruel).
> This thread is extremely negative - if you can't see the value in this, I don't know what to tell you.
Just an opinion and not ridiculing or attacking someone specifically
It's not 1:1 with checkpoints, but I find such things to be useful.
E.g., if you’ve ever wondered why code was written in a particular way X instead of Y then you’ll have the context to understand whether X is still relevant or if Y can be adopted.
E.g., easier to prompt AI to write the next commit when it knows all the context behind the current/previous commit’s development process.
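If you mainly want context attached to commits without a new product, plain git notes can carry it; here's a rough Python sketch, where the transcript file and the idea of feeding it back into a prompt are assumptions on my part:

```python
import subprocess
from pathlib import Path

def attach_context(commit: str, transcript: Path) -> None:
    """Attach the agent conversation that produced a commit as a git note."""
    subprocess.run(
        ["git", "notes", "add", "-F", str(transcript), commit],
        check=True,
    )

def show_context(commit: str) -> str:
    """Read the attached context back, e.g. to feed into the next prompt."""
    out = subprocess.run(
        ["git", "notes", "show", commit],
        check=True, capture_output=True, text=True,
    )
    return out.stdout
```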
That's how a trillion dollar company also does it, turns out.
0: https://github.com/karthink/gptel
I have a lot of concurrent agents working on things at the same time, so I'm not always sure why a piece of code is the way it is months later.
- It's nice to see conversation context alongside the change itself.
- I wasn't able to see Claude Code utilise past commit context in understanding code.
- It's a tad unclear (and possibly unreliable) what is called 'checkpointing'.
- It mucked up my commit messages by replacing the first line with a sort of AI request title or similar.
Sadly, because of the last point (we use semantic release and git-cz) I've had to uninstall it.
I find the framing of the problem to be very accurate, which is very encouraging. People saying "I can roll my own in a weekend" might be right, but they don't have $60M in the bank, which makes all the difference.
My take is this product is getting released right now because they need the data to build on. The raw data is the thing; then they can crunch numbers and build some analysis to produce dynamic context, possibly using shared patterns across repos.
Despite what HN thinks, $60M doesn't just fall in your lap without a clear plan. The moat is the trust people will have to upload their data, not the code that runs it. I expect to see some interesting things from this in the coming months.
This sounds a lot like that line from Microsoft's AI CEO about "not understanding the negativity towards AI", and Satya instructing us not to use the term "slop" any more. Yes, we don't see value in taking a git primitive like "commit" and renaming it to "checkpoint". I wonder whether branches are going to be renamed to something like "parallel history" :)
It's almost a meme: whenever a commercial product is criticized on HN, a prominent thread is started with a classic tone-policing "why are you guys so negative".
(Well, we explained why: their moat is trivial to replicate.)
I’m happy to believe they may make something useful with $60M (quite a lot for a seed round, though), but maybe don't get all lyrical about what they have now.
I still have kinks to work out in mine but it's already usable for building software. Once I get to v1 I think it will provide enough value to be useful for me in particular. I don't have enough data to speak about months on yet, but if I think the experiment is a success then I will do a Show HN or something.
The gist is you can clone a repo or start a project from scratch, each engineering agent gets a worktree, and you work with the manager agent, which dispatches and manages other agents. There are playbooks which agents contextually turn into specific tasks, each of which is tracked much like CI/CD. You can see all the tool calls, and all of the communication between both agents and humans.
The application model is ticket-based. Everything revolves around the all-holy ticket. It's like a prompt, but it becomes a foundation for tying together every bit of work related to the process of developing the feature. So you can see the progress of the ticket through the organization kanban style, or watch from a dashboard, or look at specific tickets.
There are multiple review steps where human review and intervention are required. Agents are able to escalate to humans whenever they think they need to. There is a permission system, where agents have to seek permissions from other agents or humans in a chain of command in order to do certain tasks. Everything is audited and memoized, allowing for extreme accountability and process refinement stages.
Additionally, every agent "belongs" to either another agent or a human, so there is always a human somewhere in the chain of command who is responsible and accountable for the actions of his agent team. This team includes the manager agent, engineering agents, test agents, QA agents, etc, each loaded with different context, motivations and tools to keep them on track and attempt to minimize the common failure modes I experience while working closely with these tools all day.
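Stripped down, and with every name invented for the example, the ownership and ticket model reduces to something like this sketch:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Agent:
    name: str
    role: str                        # "manager", "engineer", "test", "qa", ...
    owner: Optional["Agent"] = None  # None means a human owns this agent directly

    def ownership_chain(self) -> list[str]:
        """Walk up the chain of command; a human always sits above the top agent."""
        chain = [self.name]
        node = self.owner
        while node is not None:
            chain.append(node.name)
            node = node.owner
        return chain

@dataclass
class Ticket:
    title: str
    assignee: Agent
    gates: list[str] = field(default_factory=list)      # e.g. "unit tests pass", "human review"
    audit_log: list[str] = field(default_factory=list)  # every action gets recorded here

    def record(self, event: str) -> None:
        self.audit_log.append(event)

# Example usage
manager = Agent("manager-1", "manager")
eng = Agent("eng-3", "engineer", owner=manager)
ticket = Ticket("Add login page", assignee=eng, gates=["unit tests pass", "human review"])
ticket.record("eng-3 opened worktree")
```

The point of the explicit owner chain is that accountability always terminates at a human.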
The fact that you haven't offered a single counterargument to any other posters' points and have to resort to pearl-clutching is pretty good proof that you can't actually respond to any points and are just emotionally lashing out.
https://news.ycombinator.com/newsguidelines.html
We can articulate it, but why should we bother when it's so obvious?
We are at an inflection point where discussion about this, even on HN, is useless until the people in the conversation are on a similar level again. Until then we have a very large gap in a bimodal distribution, and it’s fruitless to talk to the other population.
You could have someone collect and analyze a bunch of them, to look for patterns and try to improve your shared .md files, but that's about it.