Show HN: A Karpathy-style LLM wiki your agents maintain (Markdown and Git)
50 points by najmuzzaman 2 hours ago | 17 comments
I shipped a wiki layer for AI agents that uses markdown + git as the source of truth, with a bleve (BM25) + SQLite index on top. No vector or graph db yet.

It runs locally in ~/.wuphf/wiki/ and you can git clone it out if you want to take your knowledge with you.

The shape is the one Karpathy has been circling for a while: an LLM-native knowledge substrate that agents both read from and write into, so context compounds across sessions rather than getting re-pasted every morning. Most implementations of that idea land on Postgres, pgvector, Neo4j, Kafka, and a dashboard.

I wanted to go back to the basics and see how far markdown + git could go before I added anything heavier.

What it does:

-> Each agent gets a private notebook at agents/{slug}/notebook/.md, plus access to a shared team wiki at team/.

-> Draft-to-wiki promotion flow. Notebook entries are reviewed (agent or human) and promoted to the canonical wiki with a back-link. A small state machine drives expiry and auto-archive.

-> Per-entity fact log: append-only JSONL at team/entities/{kind}-{slug}.facts.jsonl. A synthesis worker rebuilds the entity brief every N facts. Commits land under a distinct "Pam the Archivist" git identity so provenance is visible in git log.

-> [[Wikilinks]] with broken-link detection; broken links render in red.

-> Daily lint cron for contradictions, stale entries, and broken wikilinks.

-> /lookup slash command plus an MCP tool for cited retrieval. A heuristic classifier routes short lookups to BM25 and narrative queries to a cited-answer loop.
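A minimal sketch of what such a heuristic router could look like. The actual classifier's features aren't described in the post; the token-count threshold and cue words below are illustrative assumptions:

```python
def route_query(query: str) -> str:
    """Route a lookup either to BM25 or to the cited-answer loop.

    Heuristic sketch: short, keyword-like queries go to BM25;
    longer, sentence-like or question-shaped queries go to the
    cited-answer loop. Thresholds and cue words are guesses.
    """
    tokens = query.strip().split()
    narrative_cues = {"why", "how", "explain", "compare", "summarize"}
    if len(tokens) <= 4 and not (narrative_cues & {t.lower() for t in tokens}):
        return "bm25"
    return "cited-answer"

# Usage: a bare keyword lookup vs. a narrative question
route_query("redis connection pool")           # -> "bm25"
route_query("why did we drop the v1 schema?")  # -> "cited-answer"
```

As jimmypk notes downthread, the hard cases are agent-issued queries that are long but still lookups; any real router would need more signal than length alone.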

Substrate choices: Markdown for durability, since the wiki outlives the runtime and a user can walk away with every byte. Bleve for BM25. SQLite for structured metadata (facts, entities, edges, redirects, and supersede relations). No vectors yet. The current benchmark (500 artifacts, 50 queries) clears 85% recall@20 on BM25 alone, which is the internal ship gate. sqlite-vec is the pre-committed fallback if a query class drops below that.
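For concreteness, the recall@20 metric behind the ship gate can be computed like this (a minimal sketch; the actual benchmark harness isn't shown in the post, and the function and field names are mine):

```python
def recall_at_k(retrieved: list[str], relevant: set[str], k: int = 20) -> float:
    """Fraction of the relevant artifacts that appear in the top-k results."""
    if not relevant:
        return 0.0
    hits = sum(1 for doc_id in retrieved[:k] if doc_id in relevant)
    return hits / len(relevant)

def mean_recall(results: dict[str, list[str]],
                truth: dict[str, set[str]], k: int = 20) -> float:
    """Mean recall@k over all queries; the post's gate is 0.85 at k=20."""
    scores = [recall_at_k(results[q], truth.get(q, set()), k) for q in truth]
    return sum(scores) / len(scores)
```

The "per query class" framing matters: an 85% mean can hide a class (e.g. paraphrased narrative queries) sitting well below the gate, which is where the sqlite-vec fallback would kick in.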

Canonical IDs are first-class. Fact IDs are deterministic and include sentence offset. Canonical slugs are assigned once, merged via redirect stubs, and never renamed. A rebuild is logically identical, not byte-identical.
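A sketch of what a deterministic fact ID keyed on sentence offset, feeding the append-only JSONL log, could look like. The hash choice, ID length, and record fields are assumptions for illustration, not the project's actual scheme:

```python
import hashlib
import json

def fact_id(kind: str, slug: str, source: str, sentence_offset: int) -> str:
    """Deterministic fact ID: the same inputs always yield the same ID,
    so a rebuild reproduces identical IDs (logically identical)."""
    key = f"{kind}/{slug}|{source}|{sentence_offset}"
    return hashlib.sha256(key.encode()).hexdigest()[:16]

def append_fact(path: str, kind: str, slug: str,
                source: str, sentence_offset: int, text: str) -> dict:
    """Append one fact record to the per-entity JSONL log (append-only)."""
    record = {
        "id": fact_id(kind, slug, source, sentence_offset),
        "kind": kind,
        "slug": slug,
        "source": source,
        "sentence_offset": sentence_offset,
        "text": text,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Because IDs derive only from stable inputs, replaying the log after a rebuild produces the same IDs without storing a counter anywhere.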

Known limits:

-> Recall tuning is ongoing. 85% on the benchmark is not a universal guarantee.

-> Synthesis quality is bounded by agent observation quality. Garbage facts in, garbage briefs out. The lint pass helps. It is not a judgment engine.

-> Single-office scope today. No cross-office federation.

Demo. 5-minute terminal walkthrough that records five facts, fires synthesis, shells out to the user's LLM CLI, and commits the result under Pam's identity: https://asciinema.org/a/vUvjJsB5vtUQQ4Eb

Script lives at ./scripts/demo-entity-synthesis.sh.

Context. The wiki ships as part of WUPHF, an open source collaborative office for AI agents like Claude Code, Codex, OpenClaw, and local LLMs via OpenCode. MIT, self-hosted, bring-your-own keys. You do not have to use the full office to use the wiki layer. If you already have an agent setup, point WUPHF at it and the wiki attaches.

Source: https://github.com/nex-crm/wuphf

Install: npx wuphf@latest

Happy to go deep on the substrate tradeoffs, the promotion-flow state machine, the BM25-first retrieval bet, or the canonical-ID stability rules. Also happy to take "why not an Obsidian vault with a plugin" as a fair question.


armcat 4 minutes ago
Any particular reason for BM25? Why not just a table of contents or index structure (json, md, whatever) that is updated automatically and fed in context at query time? I know bag of words is great for speed but even at 1000s of documents, the index can be quite cheap and will maximise precision
reply
imafish 2 minutes ago
Cool idea. But is anyone actually building real stuff like this with any kind of high quality?

Every time I hear someone say "I have a team of agents", what I hear is "I'm shipping heaps of AI slop".

reply
jimmypk 40 minutes ago
The BM25-first routing bet is interesting. You mention 85% recall@20 on 500 artifacts, but the heuristic classifier routing "short lookups to BM25 and narrative queries to cited-answer" raises a practical question: what does the classifier key on to decide a query is narrative vs short? Token count? Syntactic structure? The reason I ask is that in agent-generated queries, the boundary is often blurry - an agent doing a dependency lookup might issue a surprisingly long, well-formed sentence. If the classifier routes those to the more expensive cited-answer loop it could negate the latency advantage of BM25 being first.
reply
hyperionultra 46 minutes ago
For some reason I dislike Karpathy’s fanaticism towards LLMs. I don’t know why.
reply
mirekrusin 30 minutes ago
Feels like disliking a musician for fanaticism towards musical instruments.
reply
newsicanuse 18 seconds ago
You must be fun at parties
reply
William_BB 37 minutes ago
I have the same feeling ever since his infamous LLM OS post
reply
spiderfarmer 43 minutes ago
Probably just envy.
reply
wiseowise 31 minutes ago
Obviously it is envy, and not scepticism over a guy who practically lives on Twitter and has an unhinged[1] follower base.

[1] https://x.com/__endif/status/2039810651120705569

reply
dhruv3006 2 hours ago
I love that so many people are building with markdown !

But I'd also like to understand how markdown helps with durability - if I understand correctly, markdown has an edge over other formats for LLMs.

Also I too am building something similar on markdown which versions with git but for a completely different use case : https://voiden.md/

reply
left-struck 5 minutes ago
I read the durability thing as: markdown files are very open, easy to find software for, simple, and widely used. All of this together almost guarantees that they will be viewable/usable in the far future.
reply
Unsponsoredio 27 minutes ago
love the bm25-first call over vector dbs. most teams jump to vectors before measuring anything
reply
goodra7174 56 minutes ago
I was looking for something similar to try out. Cool!
reply
davedigerati 55 minutes ago
why not an Obsidian vault with a plugin?
reply
tomtomistaken 24 minutes ago
what plugin are you using?
reply
davedigerati 53 minutes ago
srsly tho this looks slick & love the office refs / will go play with it :)
reply
agentminds 27 minutes ago
[dead]
reply