# Requirements: PKMS (Personal Knowledge Management System)
Based on Karpathy's LLM Wiki pattern. Served at https://pkms.hermesbillpay.com.
## Core Concept
A personal knowledge base where the LLM does the bookkeeping. Three layers:
| Layer | What | Who owns it |
|-------|------|-------------|
| Layer 1 — Raw Sources | PDFs, articles, notes. Immutable. | Human curates |
| Layer 2 — The Wiki | Structured, interlinked markdown pages. | LLM writes |
| Layer 3 — The Schema | AGENTS.md rules for how the LLM behaves. | Defined once |
Three operations:
- Ingest — drop a source → LLM reads, summarizes, updates 10-15 wiki pages, cross-links
- Query — ask a question → LLM reads the pre-synthesized wiki, answers with citations
- Lint — LLM audits wiki for contradictions, orphans, stale data
## MVP Scope (v0.1)
### MUST have
#### Wiki Viewer (web UI)
- Browse wiki as rendered markdown pages
- Navigate via `[[wikilinks]]` between pages
- See index page listing all pages with structure
- Dark theme, clean design (consistent with Pankaj's preferences)
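Rendering `[[wikilinks]]` as navigable links only needs a small pass over the markdown before it is served. A minimal sketch, assuming a `/wiki/<slug>` route and lowercase hyphenated slugs (both are assumptions, not part of this spec):

```python
import re

# Matches [[Page Name]] and [[Page Name|display label]]
WIKILINK = re.compile(r"\[\[([^\]|]+)(?:\|([^\]]+))?\]\]")

def render_wikilinks(markdown: str) -> str:
    """Rewrite wikilinks as HTML anchors under an assumed /wiki/ route."""
    def repl(match: re.Match) -> str:
        target = match.group(1).strip()
        label = (match.group(2) or target).strip()
        slug = target.lower().replace(" ", "-")
        return f'<a href="/wiki/{slug}">{label}</a>'
    return WIKILINK.sub(repl, markdown)
```

Running this after the markdown-to-HTML conversion (or as a preprocessing step on the raw markdown) keeps the wiki files themselves in plain `[[wikilink]]` form, which also preserves the Obsidian-compatibility option listed for v0.2.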
#### Source Upload
- Upload files to `raw/` directory via web UI
- List uploaded sources with metadata (name, date, size)
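The source listing can come straight from the filesystem, with no database round-trip needed for v0.1. A sketch of the metadata read (the function name and field names are assumptions):

```python
from datetime import datetime, timezone
from pathlib import Path

def list_sources(raw_dir: Path) -> list[dict]:
    """Return name/date/size metadata for every file in raw/, newest first."""
    entries = []
    for p in raw_dir.iterdir():
        if p.is_file():
            st = p.stat()
            entries.append({
                "name": p.name,
                "size": st.st_size,
                # Use mtime as the upload timestamp, in ISO 8601 UTC
                "uploaded": datetime.fromtimestamp(
                    st.st_mtime, tz=timezone.utc
                ).isoformat(),
            })
    return sorted(entries, key=lambda e: e["uploaded"], reverse=True)
```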
#### Ingest Operation
- After uploading a source, trigger LLM to compile it
- LLM reads the source, reads existing wiki, generates/updates wiki pages
- Shows progress/result in web UI
- Uses existing LLM API key (from Hermes config)
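One way to satisfy "reads the source, reads existing wiki" is to assemble the source text plus an index of existing pages into a single prompt, so the LLM can update pages rather than blindly create duplicates. A sketch of that assembly step (the prompt wording and helper name are assumptions; the actual OpenRouter call is omitted):

```python
from pathlib import Path

def build_ingest_prompt(source_name: str, source_text: str, wiki_dir: Path) -> str:
    """Combine the new source with a listing of existing pages so the
    LLM can update, not just create, wiki pages."""
    existing = "\n".join(
        f"- {p.stem}" for p in sorted(wiki_dir.glob("**/*.md"))
    ) or "(wiki is empty)"
    return (
        "You maintain a personal wiki of interlinked markdown pages.\n"
        f"Existing pages:\n{existing}\n\n"
        f"New source: {source_name}\n---\n{source_text}\n---\n"
        "Summarize the source and emit updated markdown pages with [[wikilinks]]."
    )
```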
#### Query Operation
- Ask a natural language question
- LLM reads wiki pages (not raw sources) and synthesizes answer
- Answer includes citations to wiki pages
- Option to "file" a good answer back into the wiki
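Whether an answer's citations are grounded can be checked mechanically: every `[[citation]]` in the LLM's answer should resolve to an existing wiki page. A sketch, assuming page filenames are lowercase hyphenated slugs (an assumption, not part of the spec):

```python
import re
from pathlib import Path

CITATION = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]*)?\]\]")

def check_citations(answer: str, wiki_dir: Path) -> tuple[list[str], list[str]]:
    """Split citations in an LLM answer into resolved and dangling,
    assuming page filenames are lowercase hyphenated slugs."""
    existing = {p.stem for p in wiki_dir.glob("**/*.md")}
    resolved, dangling = [], []
    for cite in CITATION.findall(answer):
        slug = cite.strip().lower().replace(" ", "-")
        (resolved if slug in existing else dangling).append(cite.strip())
    return resolved, dangling
```

Dangling citations could be surfaced in the UI as a warning, which also gives a concrete handle on the "not hallucinated" success criterion below.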
#### Lint Operation
- Trigger LLM audit of entire wiki
- Reports: contradictions, orphan pages, missing concept pages
- Displays results in web UI
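Part of the lint pass is mechanical and needs no LLM call at all: orphan pages fall out of simple link-graph inspection, leaving only contradictions and missing-concept detection to the model. A sketch (same assumed slug convention as above):

```python
import re
from pathlib import Path

LINK = re.compile(r"\[\[([^\]|]+)")

def find_orphans(wiki_dir: Path) -> set[str]:
    """Pages that no other page links to (the index page is exempt)."""
    pages = {p.stem: p for p in wiki_dir.glob("**/*.md")}
    linked: set[str] = set()
    for stem, path in pages.items():
        for target in LINK.findall(path.read_text()):
            slug = target.strip().lower().replace(" ", "-")
            if slug != stem:  # a page linking to itself doesn't count
                linked.add(slug)
    return {stem for stem in pages if stem != "index" and stem not in linked}
```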
### NICE to have (v0.2+)
- Graph view (node-link diagram of wiki)
- Obsidian vault compatibility (same markdown format)
- Multiple LLM provider support
- Batch ingest (upload multiple sources at once)
- Edit wiki pages via web UI (with LLM assistance)
- Version history / rollback
### OUT of scope (v0.1)
- Multi-user support (this is Pankaj's personal PKMS)
- Real-time collaboration
- Full-text search across raw sources (LLM compiles them first)
- Export to other formats
## Technical Constraints
- Language: Python 3.11+
- Web framework: FastAPI
- Frontend: HTMX + Alpine.js (no JS framework)
- Storage: Markdown files on disk (the wiki), SQLite for metadata
- LLM: Any provider via OpenRouter API (reuses Hermes credentials)
- Deployment: systemd service behind nginx, same VPS
- Directory: `/home/pankaj/pkms/` (already exists with placeholder)
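Since the wiki itself lives as markdown on disk, SQLite only has to track source metadata and ingest state. One possible schema (table and column names are assumptions, not decided here):

```python
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS sources (
    id          INTEGER PRIMARY KEY,
    filename    TEXT NOT NULL UNIQUE,
    size_bytes  INTEGER NOT NULL,
    uploaded_at TEXT NOT NULL,     -- ISO 8601
    ingested_at TEXT               -- NULL until the ingest pass runs
);
"""

def open_db(path: str = ":memory:") -> sqlite3.Connection:
    """Open (or create) the metadata database and apply the schema."""
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    return conn
```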
## User Flow
1. Pankaj opens https://pkms.hermesbillpay.com
2. Sees wiki index with all pages listed
3. Clicks "Upload" → selects a PDF → file lands in raw/
4. Clicks "Ingest" on the uploaded file
5. LLM processes: reads source → generates wiki pages → updates index
6. Pankaj browses the new pages, clicks [[links]] between them
7. Types a question → LLM reads wiki → returns synthesized answer with [[citations]]
8. Periodically clicks "Lint" → LLM reports wiki health
## Success Criteria
- Pankaj can ingest a source and see wiki pages generated within 30 seconds
- Wiki pages are interlinked with working [[wikilinks]]
- Query returns answers grounded in wiki content (not hallucinated)
- System survives reboot (systemd)
- All operations work over HTTPS
## Open Questions (for approval)
- LLM backend: Use Pankaj's existing OpenRouter API key, or configure a separate one? Using the same key means PKMS inherits the same model/provider Pankaj uses for Hermes.
- Directory layout for wiki pages: Karpathy's demo uses `wiki/sources/`, `wiki/concepts/`, `wiki/people/`, and `wiki/examples/`. Do we adopt this, or let the LLM decide the structure based on content?
- Lint frequency: Manual trigger only, or auto-lint on every ingest?