
How it works

A two-gate system: write structured files, compile them into one, read anywhere.

Two-gate architecture

DARA.md (Constitution) · 15 rules · entry point for all AIs
    ↓
VAULT/ (Gate 1: WRITE) · NEURONS/ (projects), ENABLERS/ (tools), INBOX/ (feedback) · any AI writes here
    ↓
compile.py
    ↓
LIBRARY/ (Gate 2: READ) · BRAIN.md, a single compiled file

Writers deal with individual, well-structured source files. Readers get a single file that fits in a context window.

What's a Neuron?

A Neuron is one topic. One file. One concern.

# Examples
project-alpha.md — everything about your main project
dara-system.md — the memory system's own documentation
profile-claude.md — how Claude should interact with Javier

Each starts with →brain: — a compressed summary line that goes directly into BRAIN.md. The rest is detail for when an AI needs depth.

What's an Enabler?

An Enabler is a tool, agent, or credential.

agent-architect.md — instructions for the system architect role
cred-cloudflare.md — how to access Cloudflare (no secrets in plaintext)
tool-typingmind.md — how TypingMind is configured

Same encoding, same rules. Just a different category.

The compiler (compile.py v3.1)

1,170 lines. Python stdlib only. No dependencies.

10-step pipeline:

1. Backup current BRAIN.md
2. Prune old backups (max 60, max age 180 days)
3. Read all VAULT files
4. Auto-purge archived entries (60-day soft-delete)
5. Deduplicate (name overlap + Jaccard similarity)
6. Validate + auto-fix (16 validator functions)
7. Check INBOX (count pending feedback)
8. Compile BRAIN.md (summaries + SHA256 checksum)
9. Delta report (per-entry size comparison)
10. Git auto-commit
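Step 5 is worth a closer look. A sketch of name-overlap plus Jaccard-similarity deduplication, assuming word-set similarity on file bodies; the real compile.py thresholds and exact criteria aren't published on this page:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |A ∩ B| / |A ∪ B|; 1.0 for two empty sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def probable_duplicate(name_a: str, body_a: str,
                       name_b: str, body_b: str,
                       threshold: float = 0.8) -> bool:
    """Hypothetical duplicate test: the kebab-case name words overlap
    AND the bodies' word sets are at least `threshold` similar."""
    name_overlap = set(name_a.split("-")) & set(name_b.split("-"))
    body_sim = jaccard(set(body_a.lower().split()),
                       set(body_b.lower().split()))
    return bool(name_overlap) and body_sim >= threshold
```

Requiring both signals keeps the check cheap and conservative: two neurons about different topics that happen to share phrasing won't be merged.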

Run it manually: python compile.py
Or let the watcher run it automatically every time a file changes.
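A stdlib-only watcher can be as simple as polling file modification times. A sketch under that assumption; whether the real watcher polls or uses OS file events isn't described on this page:

```python
import subprocess
import time
from pathlib import Path

def vault_snapshot(vault: Path) -> dict[str, float]:
    """Map each VAULT markdown file to its last-modified time."""
    return {str(p): p.stat().st_mtime for p in sorted(vault.rglob("*.md"))}

def watch(vault: Path, interval: float = 2.0) -> None:
    """Recompile BRAIN.md whenever any VAULT file changes (polling loop)."""
    last = vault_snapshot(vault)
    while True:
        time.sleep(interval)
        current = vault_snapshot(vault)
        if current != last:
            subprocess.run(["python", "compile.py"], check=False)
            last = current
```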

The writing protocol (W1–W15)

Rule · Description
W1 · Check first — look up in BRAIN.md INDEX before creating a new neuron
W1(b) · External-session writes — read DARA.md fully before writing from any external context
W2 · Fix errors — self-healing. Spot it, fix it, log it.
W3(a) · Never delete files without explicit owner instruction
W3(b) · 5 protected files (Architect-only)
W3(c) · Other files — any AI may fix errors; structural changes need permission
W4 · Update existing when: new info on same topic, data changed, error detected
W5 · Create new ONLY when: topic doesn't fit anywhere existing
W6 · After any write: regenerate BRAIN.md (watcher does it in ~8s)
W7 · Log everything in changelog.md — append only
W8 · File naming — kebab-case, always
W9 · Session closing trigger — ask "anything DARA should remember?"
W10 · Live memory detection — flag significant info as it arises
W11 · Write lean, review everything — consensus flags for ambiguous removals
W12 · Verify writes via shell after edits
W13 · Structure for BRAIN efficiency — large neurons start with a summary
W14 · Enabler summaries — every agent file starts with summary:
W15 · Platform-aware reading — Standard vs Light mode
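W1 in particular is mechanical enough to sketch. A hypothetical lookup, assuming the compiled INDEX mentions each neuron's kebab-case filename (the exact INDEX layout isn't shown on this page):

```python
from pathlib import Path

def neuron_listed(topic: str, brain: Path = Path("LIBRARY/BRAIN.md")) -> bool:
    """W1 sketch: check BRAIN.md before creating a new neuron.
    Normalizes the topic to kebab-case and searches the compiled file."""
    kebab = topic.strip().lower().replace(" ", "-")
    return kebab in brain.read_text(encoding="utf-8").lower()
```

If the lookup hits, the AI updates the existing neuron (W4); only a miss justifies creating a new file (W5).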

Fix or flag, never ask

The cardinal rule for AI behavior inside DARA:

If certain → fix it.

If uncertain → flag it (INBOX).

Never ask the human "should I update this?"

This is what makes it self-maintaining. Every AI leaves the system slightly better than it found it.
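The decision itself fits in a few lines. A sketch of the routing rule; the callable and the flags.md filename are assumptions, since DARA's actual INBOX entry format isn't shown here:

```python
from pathlib import Path

def fix_or_flag(certain: bool, apply_fix, note: str,
                inbox: Path = Path("VAULT/INBOX")) -> str:
    """The cardinal rule as code: if certain, fix; if uncertain,
    append a flag to the INBOX. Never ask the human."""
    if certain:
        apply_fix()
        return "fixed"
    inbox.mkdir(parents=True, exist_ok=True)
    with (inbox / "flags.md").open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")
    return "flagged"
```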

Protection mechanisms

1. SHA256 checksums — 5 critical files are integrity-locked. Any unexpected change is caught.
2. Warning header — BRAIN.md starts with a machine-readable "don't edit directly" block.
3. Consensus flags — deleting content requires 3/3 AI confirmations. No unilateral removals.
4. Git auto-commit — every compile triggers a commit. Full history, always recoverable.
5. Timestamped backups — automatic snapshots, auto-rotated (max 60 kept, max age 180 days).
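The integrity lock is plain hashing. A sketch assuming a JSON manifest mapping each protected path to its expected digest; the format compile.py actually uses to store checksums isn't shown on this page:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hex SHA256 digest of a file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def tampered(manifest: Path) -> list[str]:
    """Return the protected files whose current digest no longer
    matches the manifest (missing files count as tampered)."""
    expected = json.loads(manifest.read_text(encoding="utf-8"))
    return [name for name, digest in expected.items()
            if not Path(name).exists() or sha256_of(Path(name)) != digest]
```

An empty list means all 5 protected files are exactly as last compiled; anything else is flagged before the compile proceeds.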

Works with every AI, even without tool calls

Standard Mode

AI reads full BRAIN.md, opens VAULT files as needed. For Claude, GPT with plugins, and models with reliable tool access.

Light Mode

AI reads only BRAIN.md (INDEX + summaries). For DeepSeek, ChatGPT web, and models without filesystem access. Same data, smaller context.

The protocol adapts to the model's capabilities — not the other way around.