A Compiled Memory System for AI Agents
EIDARA is a file-based persistent memory system for AI agents. Unlike existing approaches that rely on vector databases, graph structures, or cloud APIs, EIDARA uses a compiler-based architecture: AI agents write structured markdown files to a source directory, and a compiler transforms them into a single optimized document (BRAIN.md) that any AI can read. The system is governed by a 15-rule constitution (W1–W15) that standardizes how any AI—Claude, GPT, DeepSeek, or others—interacts with the memory. It includes self-healing mechanisms, distributed content governance via consensus flags (3/3 votes), SHA256 tamper detection, git-based version control, and two reading modes (Standard and Light). Stress-tested across 4 models with 52 autonomous tests each: 92.6% average, zero system failures.
Every AI conversation starts from zero. Large language models are stateless by design—they process a context window and produce a response, but nothing persists between sessions. This creates a fundamental limitation: every new session begins with no memory of what came before.
Mem0 uses a hybrid vector + graph architecture. It extracts facts from conversations, stores them in a searchable format, and retrieves relevant context at query time. Requires a cloud API or self-hosted vector database.
Letta / MemGPT takes an OS-inspired tiered memory approach with core/recall/archival storage tiers. Requires PostgreSQL and a runtime server.
Zep builds temporal knowledge graphs using their Graphiti library. Requires a knowledge graph infrastructure.
Karpathy's LLM-Wiki concept describes markdown files that AI agents maintain as persistent memory. No compiler, validation, governance, or consensus mechanisms.
Common limitations: require infrastructure, are platform-specific, lack content governance, and store raw facts rather than compiling structured knowledge.
Any AI can read and write if it follows the protocol. Security lies in the rules of entry (a constitution with two gates), not in who has access.
"Controls create bottlenecks when they're about WHO, not about HOW."
~10% imprecision is acceptable in exchange for scalability. If an AI writes something incorrect, the next AI that reads it and knows it's wrong will fix it. The system cleans itself through use.
The structure doesn't change whether there are 25 memories or 2,000. No folders to reorganize, no hierarchies to break. Just new memories, a growing index, and connections between them.
```
DARA.md (Constitution)
|
+-- Gate 1: VAULT/ (Write)
|   +-- NEURONS/  -> Project and context knowledge
|   +-- ENABLERS/ -> Agents, tools, credentials
|   +-- INBOX/    -> Feedback channel
|   +-- BACKUP/   -> Timestamped snapshots
|
+-- Gate 2: LIBRARY/ (Read)
    +-- BRAIN.md  -> Compiled output, single file
```
Gate 1 — VAULT (Write): Any AI writes information as individual markdown files. Each file represents one topic. Files follow a compressed encoding format designed for AI consumption.
Gate 2 — LIBRARY (Read): Any AI reads a single compiled file (BRAIN.md). Typically ~3,500 tokens for an active VAULT with 20+ entries.
| Element | Format | Example |
|---|---|---|
| Dates | YYYY-MM-DD | 2026-04-24 |
| Money | €NNK | €15K |
| Status | [A/H], [C/L], [P/M] | [A/H] = Active + High |
| People | First.Last(role) | Alice.M(Marketing) |
| References | →refs: | →refs: project-alpha |
| Consensus | →flag: | →flag: "reason" \| 2/3 \| voters |
Standard Mode: AI reads full BRAIN.md and opens VAULT files as needed. For Claude, GPT with plugins, and models with reliable tool access.
Light Mode: AI reads only BRAIN.md (INDEX + summaries). For DeepSeek, ChatGPT web, and models without filesystem access. The protocol adapts to the model—not the other way around.
The constitution defines 15 writing rules that govern every AI interaction with the system:
| Rule | Description |
|---|---|
| W1 | Check first—scan BRAIN.md INDEX before creating a new neuron |
| W1(b) | External-session writes—read DARA.md fully before writing from any external context |
| W2 | Fix errors—self-healing. Spot it, fix it, log it |
| W3(a) | Never delete files without explicit owner instruction |
| W3(b) | 5 protected files (Architect-only) |
| W4 | Update existing when: new info, data changed, error detected |
| W5 | Create new only when topic doesn't fit anywhere existing |
| W6 | After any write: regenerate BRAIN.md (auto-compiled by watcher in ~8s) |
| W7 | Log everything in changelog.md—append only |
| W8 | File naming: kebab-case, always |
| W9 | Session closing trigger—"anything DARA should remember?" |
| W10 | Live memory detection—flag significant info as it arises |
| W11 | Write lean, review everything—consensus flags for ambiguous content |
| W12 | Verify writes via shell after edits |
| W13 | Large neurons start with →brain: summary |
| W14 | Every agent file starts with summary: line |
| W15 | Platform-aware reading—Standard vs Light mode |
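Some rules are mechanical enough to check in code. As a sketch of what a W8 check could look like (the regex and function name are assumptions, not the compiler's actual validator):

```python
import re

# Kebab-case: lowercase words separated by single hyphens, .md extension.
KEBAB_CASE = re.compile(r'[a-z0-9]+(-[a-z0-9]+)*\.md')

def is_valid_neuron_name(filename: str) -> bool:
    """Check rule W8: neuron file names must be kebab-case markdown files."""
    return KEBAB_CASE.fullmatch(filename) is not None

print(is_valid_neuron_name("project-alpha.md"))   # True
print(is_valid_neuron_name("Project_Alpha.md"))   # False
```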
Distributed governance: when an AI is unsure whether content still adds value, it adds a flag. Another AI votes. At 3/3 votes, content is removed. Disagreement resets the flag. Stale flags (>30 days, <3 votes) are removed—consensus didn't form, content stays.
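The voting logic described above can be sketched as a small state machine. This is an illustrative reconstruction under stated assumptions—the function and outcome names are invented for the example:

```python
from datetime import date, timedelta

def resolve_flag(votes: int, needed: int, flagged_on: date,
                 today: date, disagreement: bool) -> str:
    """Illustrative consensus logic: remove content at full consensus,
    reset on any disagreement, drop stale flags after 30 days without quorum."""
    if disagreement:
        return "reset"            # any dissent restarts the count
    if votes >= needed:
        return "remove-content"   # e.g. 3/3 votes reached
    if today - flagged_on > timedelta(days=30):
        return "drop-flag"        # consensus never formed; content stays
    return "keep-flag"           # still collecting votes
```

The asymmetry is deliberate: removal needs unanimous, active agreement, while keeping content is the default outcome of inaction.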
An executable document (agent-librarian.md) that any AI can read and become. Runs every 3 days with a 7-phase protocol: Orientation, System backup, Health diagnostics (11 checks), Separate durable from dated, Execute corrections, System verification, Report.
The Librarian triages INBOX items as SIMPLE (resolves directly) or STRUCTURAL (escalates to owner for Architect activation).
The only role that can modify the 5 protected files. Requires explicit activation ("Golden Door"). This dual gating prevents any AI from spontaneously modifying the Constitution.
The compiler transforms raw VAULT files into a single, optimized, verified BRAIN.md via a 10-step pipeline.
A populated VAULT holds 30K+ characters, yet BRAIN.md stays under 4K tokens. Compression ratio: ~27K source characters → ~3.5K BRAIN tokens (approximately 7:1).
Every compile produces SYSTEM/health.json with metrics and alerts. Token budget alerts fire at 50K (INFO), 100K (WARNING), 150K (CRITICAL). Auto-feedback files are generated in INBOX when warnings are detected.
```
python compile.py                  # Full compilation
python compile.py --dry-run        # Validate without writing
python compile.py --stats          # Quick health check
python compile.py --validate-only  # Validate structure only
python compile.py --no-git         # Compile without git commit
```
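As a rough sketch of how flags like these are typically wired with stdlib `argparse` (a hypothetical reconstruction, not the actual contents of compile.py):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Hypothetical CLI wiring for the flags listed above."""
    parser = argparse.ArgumentParser(
        description="Compile VAULT/ into LIBRARY/BRAIN.md")
    parser.add_argument("--dry-run", action="store_true",
                        help="validate without writing")
    parser.add_argument("--stats", action="store_true",
                        help="quick health check")
    parser.add_argument("--validate-only", action="store_true",
                        help="validate structure only")
    parser.add_argument("--no-git", action="store_true",
                        help="compile without git commit")
    return parser

args = build_parser().parse_args(["--dry-run"])
print(args.dry_run)  # True
```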
Agents are documents, not programs. A file agent-librarian.md contains complete instructions for vault maintenance. Any AI that reads that file becomes the librarian—it doesn't matter if it's Claude, DeepSeek, or GPT. The intelligence is in the file, not in who executes it.
To create a new agent: write a markdown file. To share an agent: share the file. To modify an agent: edit the file. This eliminates platform dependency entirely.
4 models, 52 autonomous tests each, 7 evaluation blocks, zero human intervention:
| Model | Platform | Score | % |
|---|---|---|---|
| Opus 4.6 | Claude Cowork | 670/700 | 95.7% |
| Sonnet 4.6 | Claude Cowork | 652/700 | 93.1% |
| DeepSeek V4 | TypingMind (no shell) | 650/700 | 92.9% |
| Opus 4.7 | Claude Cowork | 613/700 | 87.6% |
Average: 92.6% · Zero system failures · Zero data corruption
The spread reflects interpretive choices on ambiguous edge cases. That the variance stays within an 8-point band across four different models is evidence that the protocol, not the model, carries the system.
103 pytest tests across 3 files, 100% pass rate. Covers all 16 validators, checksums, noise stripping, backups, and the full pipeline.
v1 — The Org Chart Trap: Six specialized agents with a supervisor. Killed because task experts don't understand domain context.
v2 — The Roles Trap: Domain agents with write permissions. Killed because it creates bottlenecks. Gatekeeping by identity is less efficient than gatekeeping by protocol.
v3 — The Breakthrough: Eliminate all hierarchical folders. Design for the AI, not for the human. A flat document lake + a compiler that creates structure.
Javier comes from strategy consulting—13 years total, 9 at Bain & Company—where MECE is the central dogma. The realization: MECE is a thinking tool, not a memory tool.
"That's how I think but not how I remember."
"If the human never opens the folder, who are you organizing for?"
Required: Python 3.10+ and a folder on the local filesystem.
Recommended: Git (for automatic version history) and an AI assistant with filesystem access.
Not required: GitHub account, cloud sync, a specific AI platform, or any Python package beyond stdlib.
```
cd /path/to/your/EIDARA/folder
python3 --version                  # Check Python 3.10+
git init && git add -A && git commit -m "EIDARA v1.0 - initial install"
python3 SYSTEM/compile.py          # First compile -> LIBRARY/BRAIN.md
python3 SYSTEM/watcher.py          # Auto-watcher
```
EIDARA demonstrates that persistent AI memory does not require vector databases, cloud infrastructure, or complex graph systems. A compiler-based approach with a clear constitution, self-healing mechanisms, and distributed governance can produce a system that is simpler, more transparent, and more portable than existing alternatives.
The system is production-ready, tested across multiple AI models, and available as open-source software.
EIDARA v1.0 — MIT License — Created by Javier Rotllant Miras