
Your AI remembers everything
between conversations.

You use Claude, GPT, and DeepSeek. Every session starts from zero.
DARA is a file they all share — write once, read everywhere.
Open source. Local. No cloud required.

You've been working with Claude for two hours. Real progress. Then the session fills up — or you need to switch to GPT, or start a new thread.

Everything you built disappears. You start over. Paste context. Re-explain decisions. Lose nuance.

Your AI session is a room. When it fills up, you leave everything behind.
DARA is the door that lets you carry everything with you.

How it works

A shared memory file that any AI can read and write. A compiler keeps it clean.

WRITE

VAULT/

Any AI writes.
Any platform.
Follow the rules.

COMPILE

compile.py

10-step pipeline.
Validates. Dedupes.
Auto-fixes. Locks.

READ

BRAIN.md

Any AI reads.
One file. Current.
SHA256 verified.

Write anywhere. Compile once. Read everywhere.
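The three stages above can be sketched in a few lines of Python. This is an illustrative toy, not DARA's actual 10-step pipeline: the file names (`VAULT/`, `BRAIN.md`) come from the diagram, while the dedupe logic and the checksum-comment format are assumptions.

```python
# Toy sketch of WRITE -> COMPILE -> READ. Illustrative only:
# the real compile.py runs a 10-step pipeline; this shows the shape.
import hashlib
from pathlib import Path

def compile_brain(vault_dir: str = "VAULT", out_file: str = "BRAIN.md") -> str:
    """Merge every vault neuron into one BRAIN.md, drop duplicate
    lines, and append a SHA256 digest so any reader can verify it."""
    seen, lines = set(), []
    for neuron in sorted(Path(vault_dir).glob("*.md")):
        for line in neuron.read_text(encoding="utf-8").splitlines():
            if line.strip() and line not in seen:   # naive dedupe
                seen.add(line)
                lines.append(line)
    body = "\n".join(lines) + "\n"
    digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
    Path(out_file).write_text(body + f"<!-- sha256:{digest} -->\n",
                              encoding="utf-8")
    return digest
```

Any AI that can write a markdown file into `VAULT/` participates; any AI that can read one file gets the compiled result.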

See full architecture →

What you actually do: two actions.

No terminals. No file management. Just talk to your AI.

→ Consult DARA

Open any AI and say:
"Check DARA for context on project X."

Seconds later, the AI knows everything.

→ Write to DARA

When something new happens:
"Update DARA with what we just decided."

The AI writes it. Next session, it's there.

What makes it different

Not another chatbot wrapper. A compiled, self-healing, consensus-driven memory protocol.

It compiles, not accumulates

A 10-step pipeline validates, deduplicates, auto-fixes, and checksums your memory with SHA256. Every compile is git-committed. No other system compiles — they just pile up.
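As a sketch of what the SHA256 checksum buys a reader: recompute the digest over the body and compare it to the recorded one before trusting the file. The trailing-comment format assumed here is illustrative, not DARA's documented layout.

```python
# Reader-side sketch: verify BRAIN.md before trusting it.
# Assumes the compiler appended "<!-- sha256:... -->" as the last
# line -- a hypothetical convention for illustration.
import hashlib
from pathlib import Path

def verify_brain(path: str = "BRAIN.md") -> bool:
    """Return True only if the body's SHA256 matches the recorded digest."""
    text = Path(path).read_text(encoding="utf-8")
    body, _, tail = text.rpartition("<!-- sha256:")
    recorded = tail.split(" -->")[0].strip()
    return hashlib.sha256(body.encode("utf-8")).hexdigest() == recorded
```

A single flipped byte anywhere in the body makes verification fail, which is what turns "one file" into "one file you can trust".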

Self-healing by design

Errors are expected (~10% imprecision). The next AI that spots one fixes it automatically. The system gets better with every interaction — no manual cleanup needed.


Governance by consensus

Deleting or changing content requires 3 independent AI votes. No single AI makes unilateral changes. If you disagree with a flag, you reset it. Built-in democracy.
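The 3-vote rule can be sketched as a tiny data structure. Everything here (the field names, the `reset` method's signature) is an illustrative assumption; only the threshold mirrors the rule described above.

```python
# Toy sketch of consensus-gated deletion: 3 independent AI votes
# to delete, and the human can always reset a flag. Field names
# are assumptions, not DARA's actual schema.
from dataclasses import dataclass, field

@dataclass
class Entry:
    text: str
    votes: set = field(default_factory=set)  # AIs that voted to delete

    def vote_delete(self, ai_name: str) -> bool:
        """Record one AI's vote; returns True only at 3 independent votes."""
        self.votes.add(ai_name)
        return len(self.votes) >= 3

    def reset(self) -> None:
        """Human override: disagree with a flag, clear all votes."""
        self.votes.clear()
```

Because votes are a set keyed by AI, one model voting twice still counts once — the consensus has to be independent.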

A constitution, not an API

15 rules (W1-W15) govern every interaction. Any AI that reads the rules can participate. No training, no fine-tuning, no API keys per agent. The protocol IS the access control.

Zero infrastructure

Markdown files + Python standard library. No database, no Docker, no API keys, no cloud account. Your files never leave your machine. Runs anywhere Python 3.10+ exists.

Two reading modes

Standard Mode: full BRAIN.md + VAULT files. Light Mode: BRAIN.md only (index + summaries). The protocol adapts to the model — not the other way around.
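A minimal sketch of how a reader might honor the two modes, assuming the file layout from the architecture diagram; the function name and paths are illustrative.

```python
# Sketch of the two reading modes: Light loads only BRAIN.md
# (index + summaries), Standard also loads the vault neurons.
# Paths and naming are assumptions for illustration.
from pathlib import Path

def load_context(mode: str = "standard", vault_dir: str = "VAULT") -> str:
    """Return the memory context sized to the model's capacity."""
    parts = [Path("BRAIN.md").read_text(encoding="utf-8")]
    if mode == "standard":  # Light Mode stops at the compiled index
        parts += [p.read_text(encoding="utf-8")
                  for p in sorted(Path(vault_dir).glob("*.md"))]
    return "\n\n".join(parts)
```

A small-context model asks for `"light"` and still gets a current, verified index; a large-context model asks for `"standard"` and gets everything.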

"That's how I think but not how I remember."

Javier Rotllant — on why DARA doesn't use folders

It took three wrong answers to find this

Before DARA worked, it failed three times — as task experts, role permissions, and folder hierarchies.

v1

Task experts

A mail agent doesn't understand your business. A domain expert does. Specialized agents fail because context is cross-cutting.

v2

Role permissions

Only authorized AIs can write. But any AI might have the right info. Gatekeeping creates bottlenecks, not quality.

v3

Hierarchies

Organizing memory for humans who browse folders. But humans never open the folder — the AI does. Organize for the reader.

"If the human never opens the folder, who are you organizing for?"

That question changed everything.
Read the full story →

Share knowledge with your team

Each topic lives in its own file — called a neuron. One project = one file. One tool = one file. Plain markdown that any AI understands.

Drop a neuron in shared Drive. Your colleague opens it with their AI — Claude, GPT, DeepSeek, doesn't matter. No install needed. It's just a text file.

Agents work the same way. A file contains the protocol. Any AI that reads it becomes that agent. The intelligence lives in the file, not in who runs it.

# agent-librarian.md
summary: Vault maintenance agent

## Protocol
1. Scan INBOX for pending items
2. Route each to correct neuron
3. Flag ambiguous entries
4. Log actions in changelog

Does it actually work?

4 AI models. 52 tests each. No human intervention. No cherry-picking.

92.6%
Average score across 4 models
670/700
Best: Opus 4.6 (95.7%)
0
System failures
0
Data corruption
103
Unit tests (100% pass)

No other memory system publishes cross-model test results.
We do because it has to work with your AI — not just ours.

See full results →

Who is this for

If you checked 2 or more, EIDARA will save you time starting today.

Get started in 10 seconds

$ git clone https://github.com/jrotllant/eidara.git
$ cd eidara
$ python compile.py

Or give the INSTALL file to any AI. It guides you through 11 steps — personalization, git setup, and watcher configuration. You don't type a single terminal command.

Built in the open

EIDARA is free. MIT licensed. No telemetry. No cloud. Your data never leaves your machine.

Built by Javier Rotllant — pragmatic founder, Bain background, not an engineer. Someone with a problem and the stubbornness to solve it properly.

Star on GitHub →

The foundation is solid. Now it evolves.

Semantic search. MCP server for Cursor and Cline. Conflict detection. Web dashboard. Agent marketplace. Each feature builds on a system that already works — tested, proven, stable.

Cloud sync will come, but it will never be required. Local-first, always.

Read the whitepaper →