Frequently Asked Questions
General
What is mippi?
A layer between you and every AI tool you use.
Mıppı remembers everything: your preferences, your research, your projects, the connections between all of it. And it makes that context available to any AI you connect. Start a conversation in Claude, continue it in ChatGPT. Same context, same you.
It does more than remember. Mıppı delivers, connects, and executes. Send your research to your Kindle. Pipe data between services. Run workflows while you're away. The infrastructure that makes all your AI tools smarter.
Is mippi an AI?
No. Mıppı doesn't contain an AI. It doesn't chat with you, doesn't generate text, doesn't have opinions.
It uses the AI you already have. Claude, ChatGPT, Gemini, whatever you prefer. No second AI subscription. Mıppı is the infrastructure underneath: memory, delivery, connections. Your AI provides the intelligence. Mıppı provides the context and the tools.
What AI tools does it work with?
Anything that supports MCP (Model Context Protocol): Claude, ChatGPT, Cursor, Claude Code, Codex, and open-source tools.
Start a conversation in one, continue it in another. Same context. Mıppı is model-agnostic, client-agnostic, and provider-agnostic.
Do I need to be technical?
No. If you use Claude, you can use mıppı.
Your AI tool handles the connection. You don't need to know what MCP is, what a knowledge graph is, or how any of it works underneath. The technical depth lives on /for-developers for people who want it.
Is this like ChatGPT’s memory?
ChatGPT's memory stores conversation snippets inside ChatGPT. It stays there. You can't use it in Claude, Cursor, or any other tool. And you can't export it or search it meaningfully.
Mıppı stores structured knowledge (connections, tags, collections) across every AI tool you use. Your accountant connects to your tax documents connects to the deductions you researched. Portable, searchable, yours.
Plus delivery, connectors, and execution: things built-in AI memory can't do.
Short version: a notepad inside one app vs. a knowledge graph across all your apps, plus power tools.
How is my data stored? Is it safe?
Your data lives in its own isolated storage. Nobody else's data is in there. Privacy is architectural.
What happens if mippi shuts down?
You leave with more than you came with.
Full export: structured files with Schema.org metadata, in a human-readable format. Your knowledge graph is rebuildable from your files alone. The metadata is in the files themselves.
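As a sketch of what that can look like, a single exported item might carry Schema.org metadata roughly like this. The exact fields, values, and file layout below are illustrative assumptions, not the actual export format:

```json
{
  "@context": "https://schema.org",
  "@type": "DigitalDocument",
  "name": "Home-office deduction research",
  "dateCreated": "2025-01-15",
  "keywords": ["tax", "deductions"],
  "about": { "@type": "Thing", "name": "2024 tax return" }
}
```

Because each file carries its own metadata like this, the graph can be reconstructed from the files alone, without any server-side database.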
This is the walkaway guarantee. It's a design constraint, not a feature we might remove.
Can my team use it?
Coming soon. Every team member will have their own mıppı with their own agents.
Shared context is overlapping memory. Your agent sees what it needs from a colleague's work, without seeing everything. When your colleague saves meeting notes, your AI gets the parts relevant to your project.
What is MCP?
MCP (Model Context Protocol) is an open standard that lets AI tools connect to external services. It's how Claude, ChatGPT, and other AI tools talk to mıppı.
You don't need to understand MCP to use mıppı. Your AI tool handles the connection automatically.
Is mippi ready for production use?
No. Mıppı is in early alpha. Things can and probably will break.
Your data is safe. Isolated storage, walkaway guarantee, you can export everything anytime. But the product is still being built. Features will change. Rough edges are expected.
You're joining at the ground floor. If that sounds exciting, welcome.
What does it cost?
Free during alpha. Paid tiers start in beta. See /pricing for an indication of what's included.
Mıppı doesn't sell AI intelligence. You bring your own. You pay for infrastructure: memory, delivery, connectors, execution.
How do I get started?
Connect your AI tool in about 30 seconds with a config snippet. No separate sign-up. Just connect your AI and sign in with your existing account.
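For illustration, an MCP config snippet in a client like Claude Desktop generally looks something like the following. The server name and URL here are placeholders, not mıppı's actual endpoint, and the exact keys vary by client (some use `command`/`args` for local servers instead of a `url`):

```json
{
  "mcpServers": {
    "mippi": {
      "url": "https://example.com/mcp"
    }
  }
}
```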
For Developers
How is mippi different from Mem0 / Letta / Cognee / Zep?
Every competitor in the AI memory space runs LLMs server-side: embedding, summarization, or retrieval. Mıppı doesn't. The pipeline is deterministic. That's the fundamental architectural bet.
| Dimension | mippi | Mem0, Letta, Cognee, Zep |
|---|---|---|
| Server-side LLM | None. Your AI does the thinking. mippi stores and serves the result. | Required. Ingestion runs LLM extraction, embedding, and summarization. |
| Switching providers | No re-processing. Context is structured text, not vectors. | Multi-provider, but switching embedding models means re-processing your entire memory store. |
| Your cost at scale | Zero on mippi's side. Use your existing AI subscription. | LLM inference on every write. Cost grows with your memory. |
| Data isolation | Per-user database. Your own storage, your own files. | Shared infrastructure with logical tenant separation. |
Full comparison on /for-developers.
Can I self-host mippi?
Aspiration, not shipped.
The architecture is being built with self-hosting as a design constraint: per-user isolated storage, flat files on S3-compatible storage.
The sovereignty stack direction: self-hostable, end-to-end encrypted, decentralized storage, rebuildable from your S3 bucket. Per-user isolation and flat-file backup are already built. The rest is where we're headed.
What’s the data model?
Four primitives:
- Records: Everything you store is a Record, with a type, tags, and relations to other records. Notes, research, settings, all Records.
- Relations: Typed connections between records. Your accountant links to your tax documents links to the deductions. The graph is the connections.
- Events: What happened and when. Deliveries, searches, agent actions. The audit trail.
- Blobs: Binary content. Files, documents, attachments.
Collections are records that govern other records. Three layers of governance (instructions, policy, schema validation) that scale from fully open to strictly validated. Same primitives power memory, delivery receipts, settings, and everything else.
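A minimal sketch of the four primitives in TypeScript. All names and field shapes below are illustrative assumptions, not mıppı's actual schema or API; the point is how typed relations turn flat records into a traversable graph:

```typescript
type RecordId = string;

// "Records" -- named Rec here only because Record is a built-in TypeScript utility type.
interface Rec {
  id: RecordId;
  type: string;       // e.g. "note", "contact", "setting"
  tags: string[];
  relations: Rel[];   // typed edges to other records
}

// "Relations" -- typed connections between records.
interface Rel {
  kind: string;       // e.g. "supports", "linked-to"
  target: RecordId;
}

// "Events" -- what happened and when: the audit trail.
interface Evt {
  at: string;         // ISO timestamp
  action: string;     // e.g. "delivery", "search", "agent-action"
  record?: RecordId;
}

// "Blobs" -- binary content (named Bin to avoid the built-in Blob type).
interface Bin {
  id: string;
  contentType: string;
  bytes: Uint8Array;
}

// The chained example from the FAQ: accountant -> tax documents -> deductions.
const deductions: Rec = { id: "deductions", type: "note", tags: ["tax"], relations: [] };
const taxDocs: Rec = {
  id: "tax-docs", type: "document", tags: ["tax"],
  relations: [{ kind: "supports", target: "deductions" }],
};
const accountant: Rec = {
  id: "accountant", type: "contact", tags: [],
  relations: [{ kind: "linked-to", target: "tax-docs" }],
};

// Depth-first walk: every record reachable from a starting record.
function reachable(start: Rec, graph: Map<RecordId, Rec>): RecordId[] {
  const seen = new Set<RecordId>();
  const stack: Rec[] = [start];
  while (stack.length > 0) {
    const rec = stack.pop()!;
    if (seen.has(rec.id)) continue;
    seen.add(rec.id);
    for (const rel of rec.relations) {
      const next = graph.get(rel.target);
      if (next !== undefined) stack.push(next);
    }
  }
  return [...seen].sort();
}

const graph = new Map<RecordId, Rec>(
  [accountant, taxDocs, deductions].map((r) => [r.id, r]),
);
console.log(reachable(accountant, graph)); // ["accountant", "deductions", "tax-docs"]
```

In this sketch a collection would itself be a `Rec` whose relations point at the records it governs, which is how one set of primitives can cover memory, receipts, and settings alike.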