████████╗██╗    ██╗██╗███╗   ██╗
╚══██╔══╝██║    ██║██║████╗  ██║
   ██║   ██║ █╗ ██║██║██╔██╗ ██║
   ██║   ██║███╗██║██║██║╚██╗██║
   ██║   ╚███╔███╔╝██║██║ ╚████║
   ╚═╝    ╚══╝╚══╝ ╚═╝╚═╝  ╚═══╝
# your ai's perfect project memory — portable, private, project-scoped.
# point twin at your work; your projects, decisions, and people travel with you across cursor, claude, and chatgpt.
// 01 the work
twin remembers the people, projects, and decisions that already define your work, so your ai doesn't have to ask. point it at a repo or your inbox; the same person across email, meetings, and code resolves to one canonical node. nothing to maintain. project-scoped by default, so your work doesn't bleed into your personal chats.
// 02 philosophy
you switch between chatgpt, claude, and cursor every day. each one starts from scratch, forgets your project mid-thought, and bleeds context between unrelated work. built-in memories help inside one tool but never travel with you. twin is the layer underneath: a project-scoped, source-backed work-graph that follows you into whichever ai tool you open next.
1. ingest → repos, notes, email, calendar. zero-friction, no oauth required.
2. organize → resolve people + projects, extract decisions, audit every action.
3. retrieve → project-scoped context block, injected into the ai tool you're in.
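the retrieve step above ends in a pasteable context block. a minimal sketch of what assembling that 3-section block could look like; the section names, field names, and function are illustrative assumptions, not twin's actual api:

```python
# hypothetical sketch: render project, people, and decisions into one
# markdown block ready to paste into any ai tool's prompt.
def build_context_block(project: dict, people: list[dict], decisions: list[dict]) -> str:
    lines = [f"## project: {project['name']}", project.get("summary", "")]
    lines.append("## people")
    lines += [f"- {p['name']} ({p['role']})" for p in people]
    lines.append("## decisions")
    lines += [f"- {d['text']}" for d in decisions]
    return "\n".join(lines)
```

the point of the single-string output: one clipboard action carries the whole project scope, and nothing outside that project leaks in.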
// 02.5 cold start, solved
# most memory products die in the empty state. twin doesn't: point it at a local git repo and you have a project graph in 90 seconds, no oauth, no scary permissions.
✓ scanning git log // 200 commits ingested
✓ resolving people from author emails → 4 contributors as person nodes
✓ extracting decisions from commit messages → 12 decisions captured (kind=decision)
✓ resolving project node → [twin] linked to github.com/Arsh-S/twin
✓ ready for browser-ext // mcp injection
inside cursor, claude, or chatgpt, twin's sidebar shows the project you're working on, the people involved, and the decisions on record. one click pastes a project-scoped context block into the prompt. read it before you send it.
# repo connector lives at POST /connectors/git-repo // ext arrives next.
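the cold-start steps above, parsing authors and decision-like commit subjects out of `git log`, can be sketched roughly like this. the real connector's heuristics are unknown; the decision-keyword filter and return shape here are assumptions for illustration:

```python
import re

AUTHOR_RE = re.compile(r"^Author: (.+?) <(.+?)>$")
DECISION_WORDS = ("decide", "switch to", "drop", "adopt")  # assumed heuristic

def bootstrap(git_log: str) -> dict:
    """Parse raw `git log` text into contributor and decision candidates."""
    people: dict[str, str] = {}
    decisions: list[str] = []
    for raw in git_log.splitlines():
        line = raw.strip()
        m = AUTHOR_RE.match(line)
        if m:
            name, email = m.groups()
            people[email] = name  # dedupe contributors by author email
        elif any(w in line.lower() for w in DECISION_WORDS):
            decisions.append(line)  # commit subject that reads like a decision
    return {"people": people, "decisions": decisions}
```

keying contributors by email is what lets the same person from commits, mail, and meetings later collapse into one canonical node.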
// 03 architecture
claude code (mcp) [live]
web chat (twin.arshsingh.net) [live]
cli [live]
// queued: ios app, browser ext, telegram
POST /ingest/raw           raw content in
POST /ingest/note          structured note
POST /connectors/git-repo  bootstrap from local repo
GET  /context/working      3-section inject payload
GET  /recall               hybrid vector search
GET  /walk                 n-hop entity neighbors
GET  /who_is               canonical entity + facts
GET  /graph/snapshot       full graph export
POST /mcp                  model context protocol
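GET /recall is described as hybrid vector search. a toy sketch of one common hybrid scheme, cosine similarity blended with keyword overlap; the 0.7/0.3 split and the scoring shape are assumptions, not twin's actual values:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def hybrid_score(query_vec, doc_vec, query_terms: set[str], doc_terms: set[str]) -> float:
    """Blend dense similarity with sparse keyword overlap (weights assumed)."""
    keyword = len(query_terms & doc_terms) / max(len(query_terms), 1)
    return 0.7 * cosine(query_vec, doc_vec) + 0.3 * keyword
```

the hybrid part matters for work-graphs: exact names ("pgvector", "Helix") rank on the keyword term even when embeddings alone would miss them.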
extract-facts   claude sonnet → triples
resolve-entity  id + alias + embed + prefix
embed           openai 3-large → vector
audit-log       every auto-decision logged
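resolve-entity lists four strategies: id, alias, embed, prefix. a hedged sketch of that cascade over an in-memory entity table; the ordering and data shapes are assumptions, and the embedding fallback is omitted:

```python
def resolve(mention: str, entities: dict[str, dict]) -> str | None:
    """Return the canonical entity id for a raw mention, or None."""
    low = mention.lower()
    if mention in entities:                            # 1. exact id match
        return mention
    for eid, e in entities.items():
        if low in (a.lower() for a in e["aliases"]):   # 2. alias table
            return eid
    for eid, e in entities.items():
        if e["name"].lower().startswith(low):          # 3. prefix ("Maya" → "Maya Patel")
            return eid
    return None                                        # 4. embed fallback not sketched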
postgres + pgvector  docs, facts, history
audit log            every action, undoable
r2 (attachments)     [planned]
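"every action, undoable" implies each auto-action records how to revert it. a minimal append-only sketch of that idea; the storage schema and callable-inverse design are assumptions, not twin's implementation:

```python
class AuditLog:
    def __init__(self):
        self._entries: list[dict] = []

    def record(self, action: str, inverse) -> int:
        """Log an auto-action with a callable that reverts it; return its audit id."""
        self._entries.append({"action": action, "inverse": inverse, "undone": False})
        return len(self._entries) - 1

    def undo(self, audit_id: int) -> None:
        entry = self._entries[audit_id]
        if not entry["undone"]:
            entry["inverse"]()   # replay the recorded inverse action
            entry["undone"] = True
```

an undo that is a logged inverse (rather than a delete) keeps the history itself intact, which is what makes auto-merges safe to trust.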
claude sonnet   extract + chat
openai 3-large  embeddings
zero-retention flag on every call
pii redaction   [queued for p2]
<queued> gmail, gcal
<queued> imessage, notion, browser ext
<queued> discord, slack, github, rss
<queued> voice (whisper), screenshots
// 04 surface
# five mcp tools live for any agent that speaks mcp (claude code, cursor, claude desktop, cline). browser ext for cursor + claude + chatgpt shipping next.
| tool | synopsis | purpose |
|---|---|---|
| twin_recall | query [limit] | vector hits across the personal graph |
| twin_who_is | name_or_id | canonical entity + recent mentions |
| twin_remember | text [occurred_at] | manual ingest, auto-extracted to graph |
| twin_walk | node_id [depth] | n-hop neighbors via fact edges |
| twin_inject_context | query [limit] | markdown context blob for any llm |
| entity.merge | src dst | <planned> collapse duplicate canonical nodes |
| entity.undo | audit_id | <planned> revert any auto-action from audit log |
| connector.add | kind [filters] | <planned> authorize gmail, gcal, notion, etc. |
// 05 phases
/brief/today assembly. daily digest email and pre-meeting heads-up cards. [gmail oauth verification submitted]
docker compose up single-user self-host. markdown + sqlite export. anti-lock-in by default; investigating portable-ai-memory compat.
// 06 join waitlist
# closed beta for ai-heavy technical founders. drop your email; we ship to operators who already feel the cross-tool context pain.
// 07 field log
$ tail -n 20 /var/log/twin/ingest.log
[just now]   /context/working project_id=01J... → 3-section block (218 tok)
[2 min ago]  /connectors/git-repo → 183 commits ingested, 12 decisions captured
[5 min ago]  resolve-entity: merged "Maya" → [[Maya Patel]]
[9 min ago]  extract-facts: 11 triples → graph (claude sonnet, zero-retention)
[15 min ago] mcp/twin_remember → raw_item created, extraction queued
[22 min ago] graph-walk 2 hops → 12 nodes, 22 edges
[28 min ago] audit-log: 3 auto-decisions logged, undoable
[1 hr ago]   mcp/twin_inject_context → markdown blob assembled (cursor)
[4 hr ago]   mcp/twin_who_is "Helix" → canonical entity resolved
[today]      graph density 3.4 edges/node [OK]