Memory Vault
The Memory Vault is Sanctum’s shared knowledge base, inspired by how human memory works — which is to say, it forgets things on purpose and considers this a feature. It uses a three-layer architecture — working memory, short-term, and long-term — with a daily consolidation process that acts like sleep: distilling raw observations into lasting knowledge.
At some point during development, we realized we’d built a system that remembers where you left your SSH keys, forgets trivial session data, and dreams at 4 AM. The horror movie writes itself.
Memory Stack
Sanctum runs a four-layer memory stack. Each layer does one thing well. Together they give the house a memory system more sophisticated than most startups’ data architecture.
| Layer | Role | Backend | Cost |
|---|---|---|---|
| Mem0 | Primary cross-agent semantic memory | mem0.ai via Rube/Composio | Free |
| Memory Vault | Git-native versioned source of truth | sanctum-memory repo | Free |
| Neo4j / Graphiti | Entity relationships & graph queries | Local, port 31416 | Free (self-hosted) |
| Supermemory | Legacy cross-LLM bridge (secondary) | supermemory.ai MCP | Connected but secondary |
When Mem0 and the vault disagree, the vault wins — because Git has receipts. When the vault and the graph disagree, someone has a long night ahead of them. This has happened twice.
Architecture
```
┌─────────────────────────────────────────────────────┐
│ LAYER 1: WORKING MEMORY (per-session, ephemeral)    │
│ Agent's context window. Dies when session ends.     │
├─────────────────────────────────────────────────────┤
│ LAYER 2: MEM0 (cross-agent, persistent, free)       │
│ Semantic search + structured queries via Rube.      │
│ project: sanctum-memory / user: Ogilthorp3          │
├─────────────────────────────────────────────────────┤
│ LAYER 3: SHORT-TERM MEMORY (inbox/, days-weeks)     │
│ inbox/{agent}/ — raw observations, session notes    │
│ TTL: 7-30 days. Consolidated or discarded daily.    │
├─────────────────────────────────────────────────────┤
│ LAYER 4: LONG-TERM MEMORY (consolidated, permanent) │
│ knowledge/{topic}/ — semantic facts (timeless)      │
│ events/YYYY/MM/ — significant episodes              │
│ procedures/ — how-to, runbooks                      │
│ Neo4j/Graphiti — entity relationship graph          │
└─────────────────────────────────────────────────────┘
```
Directory Structure
```
~/.sanctum/memory/
├── inbox/          # Short-term, per-agent
│   ├── claude-code/
│   ├── gemini-cli/
│   ├── openclaw/
│   └── home-assistant/
├── knowledge/      # Long-term semantic
│   ├── devices/
│   ├── network/
│   ├── systems/
│   ├── people/
│   └── preferences/
├── events/         # Long-term episodic
│   └── 2026/03/
├── procedures/     # Long-term procedural
│   ├── troubleshooting/
│   ├── maintenance/
│   └── automation/
├── archive/        # Expired, retained 90 days
└── meta/           # Schema, consolidation logs
```
Note Format
Every note has standardized YAML frontmatter. Opinions may vary on whether your house needs a metadata schema for its thoughts. Opinions are wrong.
```markdown
---
type: semantic | episodic | procedural | observation | session_summary
source_agent: claude-code | gemini-cli | openclaw | user | system
created: 2026-03-20T17:00:00Z
updated: 2026-03-20T17:00:00Z
last_accessed: 2026-03-20T17:00:00Z
access_count: 3
importance: 0.85
consolidation_status: raw | reviewed | consolidated | archived
ttl_days: null        # null = permanent, or integer days
superseded_by: null   # path to newer version
tags: [network, infrastructure]
related_entities: [mac-mini, ubuntu-vm]
---

# Note Title

Content with [[wikilinks]] to related notes...
```
Importance Scoring
Every note has a computed importance score (0.0 to 1.0) that determines its TTL. The vault is, in essence, deciding what’s worth remembering. A power you’d think we’d reserve for sentient beings, but here we are.
| Importance | TTL | Notes |
|---|---|---|
| > 0.8 | Permanent | Core knowledge, user-stated facts |
| 0.5 - 0.8 | 90 days | Agent-observed patterns |
| 0.3 - 0.5 | 30 days | Single observations |
| < 0.3 | 7 days | Ephemeral session data |
Score formula: source_weight × recency × access_frequency × link_density
Notes accessed 5+ times are protected from expiry regardless of score. If the system keeps coming back to a memory, there’s probably a reason. Even if that reason is paranoia.
Consolidation
The consolidation engine runs daily at 4:17 AM (LaunchAgent: com.sanctum.memory-consolidate). Like sleep for the brain, except it runs on schedule, never hits snooze, and files a report when it’s done.
- Scan inbox for notes older than 24 hours
- Deduplicate against existing long-term knowledge
- Classify and move to the appropriate long-term directory
- Recompute importance scores for all active notes
- Expire notes that have exceeded their TTL
- Push to Mem0 — consolidated knowledge synced to Mem0 for cross-agent access
- Clean archive — delete archived notes older than 90 days
- Generate report in meta/consolidation-log.md
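The steps above can be sketched as a single pipeline function. The step order follows the list; the `vault` adapter and every one of its method names are hypothetical stand-ins, not the real implementation.

```python
from datetime import datetime, timedelta, timezone

def consolidate(vault, dry_run: bool = True) -> dict:
    """One pass of the nightly pipeline, mirroring the documented steps.
    `vault` is a hypothetical adapter exposing one operation per step;
    none of these method names come from the actual codebase."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=24)
    report = {
        "scanned":      vault.scan_inbox(older_than=cutoff),
        "deduplicated": vault.dedup_against_longterm(),
        "promoted":     vault.classify_and_move(),
        "rescored":     vault.recompute_importance(),
        "expired":      vault.expire_past_ttl(),
        "synced":       vault.push_to_mem0(),
        "purged":       vault.clean_archive(older_than_days=90),
    }
    if not dry_run:
        vault.write_report("meta/consolidation-log.md", report)
    return report
```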
Any agent can trigger consolidation manually via the memory_consolidate MCP tool.
Hard Caps
To prevent bloat, each layer has a maximum note count. Without limits, a system that remembers everything eventually remembers nothing useful — a problem familiar to anyone who’s ever hoarded browser tabs.
| Layer | Max Notes | Action at Cap |
|---|---|---|
| Inbox | 300 | Emergency consolidation |
| Knowledge | 1000 | Merge related notes |
| Events | 500 | Summarize old episodes |
| Procedures | 200 | Merge similar runbooks |
MCP Tools (8 total)
| Tool | Description |
|---|---|
| memory_search | Full-text search with tag and folder filters |
| memory_read | Read a note (auto-tracks access) |
| memory_write | Create/update with schema enforcement |
| memory_delete | Remove a note |
| memory_list | List notes by folder, tag, or type |
| memory_links | Traverse the wikilink graph |
| memory_consolidate | Trigger consolidation (dry-run by default) |
| memory_health | Vault health metrics dashboard |
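All eight tools are exposed over MCP, so any client reaches them with a standard JSON-RPC 2.0 tools/call request. A sketch of building one; the argument names here are illustrative assumptions, since the real schemas come from the server’s tools/list response.

```python
import json

def mcp_call(tool: str, arguments: dict, req_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request, the standard MCP shape
    for invoking a named tool with a dict of arguments."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# e.g. a search scoped to long-term knowledge (argument names assumed):
request = mcp_call("memory_search", {"query": "ssh keys", "folder": "knowledge/"})
```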
Mem0 Integration
Mem0 is the primary cross-agent memory layer, accessed via Rube/Composio. It’s free, and every recipe and automation that needs Sanctum context queries it first.
| Setting | Value |
|---|---|
| Provider | mem0.ai |
| Access | Rube/Composio (MEM0_* tools) |
| Project | sanctum-memory |
| User ID | Ogilthorp3 |
| Cost | Free |
Mem0 Tools (via Rube)
| Tool | Description |
|---|---|
| MEM0_ADD_NEW_MEMORY_RECORDS | Store new memories with user/project scope |
| MEM0_PERFORM_SEMANTIC_SEARCH_ON_MEMORIES | Natural language search across all memories |
| MEM0_RETRIEVE_MEMORY_LIST | List memories with pagination and filters |
| MEM0_SEARCH_MEMORIES_WITH_QUERY_FILTERS | Structured search with metadata filters |
| MEM0_UPDATE_MEMORY_DETAILS_BY_ID | Update existing memory content |
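For a feel of what a store call carries, here is a rough payload shape for MEM0_ADD_NEW_MEMORY_RECORDS, built from the scoping values in the settings table. The field names are assumptions on our part; check the schema Rube reports for the tool before relying on them.

```python
# Illustrative payload for MEM0_ADD_NEW_MEMORY_RECORDS via Rube/Composio.
# Field names are guesses based on the user/project scoping documented
# above, not the tool's verified schema.
payload = {
    "user_id": "Ogilthorp3",
    "project": "sanctum-memory",
    "messages": [
        {"role": "user",
         "content": "The Mac mini hosts the memory MCP server on port 42069."},
    ],
}
```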
Tool Integration
| Tool | Vault Access | Mem0 Access |
|---|---|---|
| Claude Code | MCP stdio (port 42069) | Via Rube/Composio |
| Gemini CLI | MCP stdio | Via Rube/Composio |
| OpenClaw agents | SSH skill (vault.sh) | Via API |
| Rube recipes | GitHub tools | Native (Composio MEM0_*) |
| Obsidian | File system (~/.sanctum/memory/) | N/A |
Anti-Bloat Rules
- If derivable from code, logs, or sensors — don’t store it
- If the same fact is stated multiple ways — dedup into one
- If never accessed in 30 days and importance < 0.5 — archive
- Raw data goes to time-series DBs, not memory
- Every episodic memory needs context (who, what, when, why)
- Every semantic memory should be self-contained