Open-source implementation of Karpathy's LLM Wiki
Your LLM compiles and maintains a structured wiki from raw sources.
12 sources · Last updated 2 hours ago
This wiki tracks research on transformer architectures and their scaling properties. It synthesizes findings from 12 sources across 47 pages.
The relationship between model size and performance follows predictable scaling laws: loss decreases as a power law of compute, dataset size, and parameter count.
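As a sketch of what such a wiki page might contain, the power-law forms below follow Kaplan et al. (2020); the specific symbols and constants are an assumption, not content from this page:

```latex
% Loss as a power law of parameters N, dataset size D, and compute C
% (N_c, D_c, C_c and the exponents are empirically fitted constants).
L(N) = \left(\frac{N_c}{N}\right)^{\alpha_N} \qquad
L(D) = \left(\frac{D_c}{D}\right)^{\alpha_D} \qquad
L(C) = \left(\frac{C_c}{C}\right)^{\alpha_C}
```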
You rarely write the wiki yourself — the wiki is the domain of the LLM.
Articles, papers, notes, transcripts. Your immutable source of truth. The LLM reads from them but never modifies them.
LLM-generated markdown pages with summaries, entity pages, and cross-references. The LLM owns this layer. You read it; the LLM writes it.
A config file that tells the LLM how the wiki is structured, what conventions to follow, and what workflows to run on ingest.
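A minimal sketch of what such a config might look like. The filename, section names, and directory layout here are all illustrative assumptions; the project may use a different format entirely:

```
# wiki-config — hypothetical example; every field name is illustrative

structure:
  raw/    immutable sources (the LLM reads, never writes)
  wiki/   LLM-owned markdown pages

conventions:
  - every wiki page links back to the raw sources it cites
  - entity pages under wiki/entities/, concepts under wiki/concepts/

on_ingest:
  - summarize the new source into its own wiki page
  - update every entity and concept page the source touches
  - flag claims that contradict existing pages
```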
Drop a source into raw/. The LLM reads it, writes a summary, updates entity and concept pages across the wiki, and flags anything that contradicts existing knowledge. A single source might touch 10–15 wiki pages.
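The ingest pass could be driven by a loop like the one below. This is a sketch under stated assumptions: the directory layout, the one-summary-per-source convention, and the `llm()` stub (a placeholder for a real model call) are all hypothetical, not this project's actual API.

```python
from pathlib import Path


def llm(prompt: str) -> str:
    """Placeholder for a real model call; an actual implementation
    would send the prompt to an LLM API and return its response."""
    return f"Summary of source ({len(prompt)} chars)."


def ingest(raw_dir: Path, wiki_dir: Path) -> list[Path]:
    """Summarize each raw source that has no wiki page yet.

    Sources in raw/ are treated as immutable; only wiki/ is written.
    Returns the list of newly written wiki pages.
    """
    written = []
    wiki_dir.mkdir(parents=True, exist_ok=True)
    for src in sorted(raw_dir.iterdir()):
        page = wiki_dir / f"{src.stem}.md"
        if page.exists():  # already ingested on a previous pass
            continue
        text = src.read_text(encoding="utf-8")
        page.write_text(f"# {src.stem}\n\n{llm(text)}\n", encoding="utf-8")
        written.append(page)
    return written
```

A real pass would also fan out to the entity and concept pages the source touches; this sketch only covers the per-source summary step.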
Ask complex questions against the compiled wiki. Knowledge is already synthesized — not re-derived from raw chunks each time. Good answers get filed back as new pages, so your explorations compound.
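The "file good answers back as pages" step could be as simple as the helper below. The `qa/` subdirectory and the slug convention are assumptions for illustration, not the project's actual layout:

```python
import re
from pathlib import Path


def file_answer(wiki_dir: Path, question: str, answer: str) -> Path:
    """Write a synthesized answer into the wiki as its own page,
    so later questions can build on it instead of re-deriving it."""
    qa_dir = wiki_dir / "qa"
    qa_dir.mkdir(parents=True, exist_ok=True)
    # Hypothetical convention: one page per question, slugified filename.
    slug = re.sub(r"[^a-z0-9]+", "-", question.lower()).strip("-")
    page = qa_dir / f"{slug}.md"
    page.write_text(f"# {question}\n\n{answer}\n", encoding="utf-8")
    return page
```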
Run health checks over the wiki. Find inconsistent data, stale claims, orphan pages, missing cross-references. The LLM suggests new questions to ask and new sources to look for.
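One of these checks, orphan-page detection, can be sketched by walking the wiki's internal link graph. The markdown-link convention assumed here (`[text](page.md)` links between pages in one directory) is an illustration; the real audit workflow may track links differently:

```python
import re
from pathlib import Path

# Matches markdown links whose target is a local .md page, e.g. [text](page.md)
LINK = re.compile(r"\[[^\]]*\]\(([^)]+\.md)\)")


def orphan_pages(wiki_dir: Path) -> set[str]:
    """Return wiki pages that no other wiki page links to."""
    pages = {p.name for p in wiki_dir.glob("*.md")}
    linked = set()
    for p in wiki_dir.glob("*.md"):
        for target in LINK.findall(p.read_text(encoding="utf-8")):
            linked.add(Path(target).name)
    return pages - linked
```

Stale claims and contradictions are harder to detect mechanically and are where the LLM itself does the auditing; this sketch covers only the purely structural check.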
“The tedious part of maintaining a knowledge base is not the reading or the thinking — it's the bookkeeping. LLMs don't get bored, don't forget to update a cross-reference, and can touch 15 files in one pass.”
Andrej Karpathy
An incredible product instead of a hacky collection of scripts.
Get started free