Your AI Memory, Your Rules: How Ditto Handles Privacy and Data Ownership

A memory-first AI raises a fair question: who controls the memory? Ditto's answer: you do. Here's how we handle privacy, data ownership, and user control.

There’s an obvious tension at the heart of what Ditto does. We built an AI that remembers everything — every conversation, every subject, every connection in your knowledge graph. That memory is what makes Ditto useful. It’s also what makes the next question inevitable:

Who controls all that data?

If an AI remembers your debugging sessions, your project architecture, your half-formed startup ideas, your personal reflections — that’s sensitive information. Persistent memory is only valuable if you trust the system holding it. And trust requires more than a privacy policy buried in a footer. It requires design decisions.

Here’s how Ditto handles it.

You Own Your Data. Actually.

“You own your data” has become a meaningless phrase in tech. Every service says it. Few mean it.

Here’s what it means concretely in Ditto:

  • Your memories are yours. Every conversation pair, every extracted subject, every knowledge graph connection — it belongs to you, not us. We don’t license it. We don’t sell it. We don’t use it to train models.
  • Delete means delete. When you remove a memory, it’s gone. Not “archived,” not “retained for 90 days,” not “anonymized and kept for analytics.” Deleted.
  • Nothing is public by default. Your memories, your knowledge graph, your personality assessments — all private until you explicitly choose to share something. And if you do share, you can revoke access at any time.
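Here’s a client-side sketch of what “delete means delete” implies in practice: a hard delete with no retention window. The host, endpoint, and response handling below are illustrative assumptions, not Ditto’s documented API.

```typescript
// Hypothetical sketch of a hard delete from a client. The host,
// endpoint, and auth scheme are illustrative assumptions, not
// Ditto's documented API.
async function deleteMemory(memoryId: string, apiKey: string): Promise<void> {
  const res = await fetch(`https://api.example-ditto.app/v1/memories/${memoryId}`, {
    method: "DELETE",
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`Delete failed: ${res.status}`);
  // Nothing comes back to restore: no tombstone, no soft-delete
  // flag, no 90-day retention window.
}
```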

Ditto Does Not Train Models on Your Conversations

This is worth stating plainly because the industry norm is the opposite.

ChatGPT trains on your conversations unless you opt out. Google’s Gemini uses conversations to “improve products and services” unless you turn off activity tracking. Most AI companies treat your conversations as training data first and your property second.

Ditto doesn’t train models. We don’t have our own foundation model. We use models from providers like OpenAI, Anthropic, and Google via their APIs — and API usage is excluded from training by every major provider’s terms. Your conversations pass through these models for inference, but they aren’t stored or learned from by the model providers.

What we do store is your memory: the conversation pairs, subject extractions, and vector embeddings that power Ditto’s memory system. That data lives in your account and nowhere else.
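To make that concrete, here is a rough sketch of what one stored record could look like. The field names and types are assumptions for illustration, not Ditto’s actual schema.

```typescript
// Illustrative sketch of a stored memory record: a conversation
// pair plus the extractions that power retrieval. Field names and
// types are assumptions, not Ditto's actual schema.
interface MemoryPair {
  id: string;
  userId: string;       // lives in your account and nowhere else
  prompt: string;       // your message
  response: string;     // the model's reply
  subjects: string[];   // extracted subjects, linked into the knowledge graph
  embedding: number[];  // vector used for semantic search
  createdAt: string;    // ISO-8601 timestamp
}
```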

You Control What the AI Sees

Most AI memory systems are black boxes. The AI silently stores things about you, silently retrieves them, and you have no idea what’s influencing its responses.

Ditto does the opposite. We built transparent memory as a core design principle:

  • See what Ditto retrieves. Every response shows which memories were used to generate it. Expandable cards show the exact context that informed the AI’s answer.
  • Edit and delete memories. If Ditto remembered something incorrectly — or something you’d rather it forgot — delete it. The knowledge graph updates immediately.
  • Steer context with Threads. Ditto Threads let you attach specific subjects, memories, and notes to a conversation. You decide what the AI focuses on, not an opaque retrieval algorithm.

This is context steering, not context guessing. You’re in the loop.
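At the API level, transparent memory implies a response that carries its own provenance. The shape below is a hypothetical illustration of that idea, not Ditto’s actual payload.

```typescript
// Hypothetical shape for a transparent-memory chat turn. The reply
// carries the memories that informed it, so a UI can render them as
// expandable cards. Illustrative only, not Ditto's actual payload.
interface ChatTurn {
  reply: string;
  retrievedMemories: Array<{
    memoryId: string;   // links back to the editable, deletable record
    snippet: string;    // the exact context shown in the expandable card
    relevance: number;  // retrieval score in [0, 1]
  }>;
  threadAttachments: string[]; // subjects and notes you pinned via Threads
}
```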

MCP: You Decide Who Accesses Your Memory

Ditto’s MCP integration lets you use your memory across other AI tools — Claude, Cursor, and any MCP-compatible client. But connecting your memory to external tools raises a privacy question: who can access what?

Here’s how it works:

  • You create API keys explicitly. No tool gets automatic access to your memory. You generate a key, configure the connection, and can revoke it at any time.
  • OAuth requires your approval. If you connect via OAuth (like in Claude.ai), you see exactly what permissions you’re granting and approve them interactively.
  • Per-tool isolation. Each MCP connection uses its own credentials. There’s no shared access between tools. Revoking one key doesn’t affect others.
  • Ditto as MCP client is opt-in too. When you connect external MCP servers to extend Ditto’s capabilities, each server is a separate, revocable connection that you enable explicitly.

The principle: your memory is a resource you grant access to, not a resource that’s available by default.
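To illustrate the grant-and-revoke model, here is a hypothetical sketch of per-tool key management. The endpoints and shapes are assumptions, not Ditto’s documented API.

```typescript
// Hypothetical per-tool key lifecycle for MCP access. The host,
// endpoints, and payload shapes are illustrative assumptions.
const API = "https://api.example-ditto.app/v1"; // hypothetical host

async function createMcpKey(toolLabel: string, sessionToken: string): Promise<string> {
  const res = await fetch(`${API}/mcp/keys`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${sessionToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ label: toolLabel }), // one key per tool, no shared access
  });
  const { key } = await res.json();
  return key; // paste into Claude, Cursor, or another MCP client
}

async function revokeMcpKey(keyId: string, sessionToken: string): Promise<void> {
  // Revoking this key cuts off one tool; other connections keep working.
  await fetch(`${API}/mcp/keys/${keyId}`, {
    method: "DELETE",
    headers: { Authorization: `Bearer ${sessionToken}` },
  });
}
```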

What We Send to Model Providers

When you chat with Ditto, your message and relevant context (retrieved memories, thread attachments, system instructions) are sent to whichever AI model you’ve selected for that thread. This is how every AI assistant works — the model needs your input to generate a response.

What’s different with Ditto:

  • You choose the provider. If you trust Anthropic more than OpenAI (or vice versa), use that provider’s models. Ditto is provider-agnostic by design.
  • API usage isn’t training data. Every major model provider (OpenAI, Anthropic, Google) excludes API usage from model training. Your conversations through Ditto are processed for inference only.
  • Ditto doesn’t add tracking. We don’t inject analytics, identifiers, or metadata into your prompts. What you send is what the model sees.
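Conceptually, the request is just your message, the retrieved context, and system instructions, with nothing else attached. The sketch below assumes an OpenAI-style chat-completions shape; the exact wire format depends on the provider you selected.

```typescript
// Conceptual sketch of an inference payload, assuming an OpenAI-style
// chat-completions shape. Note what is absent: no analytics fields,
// no user identifiers, no injected tracking metadata.
function buildInferenceRequest(
  userMessage: string,
  retrievedMemories: string[],
  systemInstructions: string,
) {
  return {
    model: "gpt-4o", // or whichever model the thread has selected
    messages: [
      { role: "system", content: systemInstructions },
      { role: "system", content: `Relevant memories:\n${retrievedMemories.join("\n")}` },
      { role: "user", content: userMessage },
    ],
  };
}
```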

The Privacy Paradox of AI Memory

There’s a genuine paradox here, and we’d rather address it honestly than pretend it doesn’t exist.

The more Ditto remembers, the more valuable it becomes. A knowledge graph with 500 subjects and thousands of memories provides dramatically better context than a blank-slate conversation. That’s the whole point.

But the more Ditto remembers, the more there is to protect. A rich memory system is a rich target. A detailed knowledge graph is a detailed portrait of your thinking.

We don’t think the answer is to store less. That defeats the purpose. The answer is to give you clear, granular control over what’s stored, how it’s used, who can access it, and how to remove it. Privacy isn’t the absence of data — it’s the presence of control.

What This Means in Practice

If you use Ditto today, here’s what your privacy posture looks like:

  1. Your conversations are stored as memory pairs in your account. They power semantic search, subject extraction, and context retrieval.
  2. Your knowledge graph is derived from your conversations. Subjects, connections, and co-occurrences are extracted automatically. You can view, search, and delete any of it.
  3. Nothing is shared externally unless you explicitly create an MCP key, share a profile, or share content.
  4. Model providers receive your messages for inference but don’t retain or train on them (per their API terms).
  5. You can delete any memory, any subject, or your entire account at any time.

Why This Matters for Memory-First AI

Other AI assistants can afford to be vague about privacy because they don’t store much. Conversations disappear. Context resets. There’s not much to protect because there’s not much to keep.

Ditto keeps everything — and that’s a feature, not a bug. But it only works if you trust the system. Trust requires transparency, control, and honest answers to hard questions.

We’re building the AI that remembers you. You should always be able to see what it remembers, control who else can see it, and delete any of it whenever you want.

Your memory. Your rules.


Ready to try a memory-first AI you can actually trust? Start with Ditto — your data stays yours.
