Switching memory providers usually means writing a migration script, waiting for vectors to re-embed, and hoping nothing breaks.
Not here. retainr has a bulk import endpoint that accepts Mem0's own export format directly — paste your JSON and it's done.
This guide shows the exact steps for three sources: Mem0, Zep, and a raw Postgres table. Each path takes under two minutes once you have the export file.
Before you start
You need a retainr workspace. Register here — the free plan has 1,000 ops/month, which is plenty for a migration test run. No credit card.
If you want to follow along with the API directly, grab your API key from /dashboard.
Path 1 — Migrate from Mem0
Mem0 exposes a Python SDK method to export all memories for a user or agent. Run this in a Python script or notebook:

```python
import json

from mem0 import MemoryClient

client = MemoryClient(api_key="your-mem0-key")

# Export all memories for a user
memories = client.get_all(user_id="alice")

with open("mem0-export.json", "w") as f:
    json.dump(memories, f, indent=2)
```

This produces a JSON array like:
```json
[
  {
    "memory": "User prefers email over Slack for async comms",
    "user_id": "alice",
    "metadata": { "source": "crm" }
  },
  {
    "memory": "User is in Berlin timezone (UTC+1)",
    "user_id": "alice"
  }
]
```

That's exactly the format retainr's import endpoint accepts. No transformation needed.
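Optionally, you can sanity-check the export before uploading it. This is just a local helper (not part of either SDK) that counts entries and flags any with empty text, which would be skipped on import:

```python
def check_export(memories):
    """Return the entry count and the indexes of entries whose text is empty."""
    empty = [i for i, m in enumerate(memories)
             if not m.get("memory", "").strip()]
    return {"total": len(memories), "empty_indexes": empty}

# Inline sample for illustration; in practice, json.load your mem0-export.json.
sample = [
    {"memory": "User prefers email over Slack for async comms", "user_id": "alice"},
    {"memory": "   ", "user_id": "alice"},  # whitespace-only: would be skipped
]
print(check_export(sample))  # {'total': 2, 'empty_indexes': [1]}
```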
Import via the dashboard:

- Go to retainr.dev/dashboard/import
- Paste the contents of `mem0-export.json` into the textarea (or upload the file)
- Click Import memories
Or via curl:

```bash
curl -X POST https://api.retainr.dev/v1/memories/import \
  -H "Authorization: Bearer rec_live_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d @mem0-export.json
```

Response:

```json
{ "imported": 47, "skipped": 0 }
```

Namespace mapping
retainr maps Mem0's identity fields automatically:
| Mem0 field | retainr namespace |
|---|---|
| `user_id: "alice"` | `user:alice` |
| `agent_id: "support-bot"` | `agent:support-bot` |
| `session_id: "sess-123"` | `session:sess-123` |
| (none) | `global` |
Your existing search and context calls stay unchanged — just swap `user_id="alice"` for `namespace="user:alice"`.
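The mapping above is mechanical, so if you're scripting the migration you can reproduce it locally. A minimal sketch (the helper name is ours, not part of any SDK; the precedence when an entry carries more than one identity field is an assumption, so check the import docs if your data mixes them):

```python
def to_namespace(user_id=None, agent_id=None, session_id=None):
    """Map Mem0 identity fields to a retainr namespace string."""
    if user_id:
        return f"user:{user_id}"
    if agent_id:
        return f"agent:{agent_id}"
    if session_id:
        return f"session:{session_id}"
    return "global"

print(to_namespace(user_id="alice"))         # user:alice
print(to_namespace(agent_id="support-bot"))  # agent:support-bot
print(to_namespace())                        # global
```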
Path 2 — Migrate from Zep
Zep stores memories in sessions. Export them via the REST API:
```bash
# List all sessions
curl -s "https://api.getzep.com/api/v1/sessions?limit=100" \
  -H "Authorization: Bearer YOUR_ZEP_KEY" > sessions.json

# Export facts for a session (keep .facts as an array so json.load works below)
SESSION_ID="session-uuid-here"
curl -s "https://api.getzep.com/api/v1/sessions/${SESSION_ID}/memory" \
  -H "Authorization: Bearer YOUR_ZEP_KEY" | jq '.facts' > zep-facts.json
```

Zep facts are plain strings. Convert them to retainr's native format with a short script:
```python
import json

with open("zep-facts.json") as f:
    facts = json.load(f)  # list of strings

payload = {
    "memories": [
        {
            "content": fact,
            "namespace": "session:SESSION_ID_HERE"
        }
        for fact in facts
        if fact.strip()
    ]
}

with open("retainr-import.json", "w") as f:
    json.dump(payload, f, indent=2)
```

Then import the resulting file exactly as shown above.
Tip: Zep's `facts` are already deduplicated on Zep's side. retainr re-embeds them on import and applies its own deduplication pass within 24 hours (the daily River job). Duplicates won't show up in search results even before that.
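If you have many Zep sessions, the single-session script above generalizes: build one payload per session, tagging each fact with its session's namespace. A sketch with the conversion factored into a function (the function name is ours):

```python
def facts_to_payload(facts, session_id):
    """Wrap a list of Zep fact strings into retainr's import format."""
    return {
        "memories": [
            {"content": fact, "namespace": f"session:{session_id}"}
            for fact in facts
            if fact.strip()  # drop empty / whitespace-only facts
        ]
    }

payload = facts_to_payload(
    ["User asked about invoicing", "  ", "User is on the Pro plan"],
    "sess-123",
)
print(payload["memories"])  # two entries; the blank fact is dropped
```

Loop it over the session IDs from `sessions.json`, writing one import file per session.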
Path 3 — Migrate from a raw Postgres table
If you rolled your own memory store, you probably have a table like:
```sql
CREATE TABLE agent_memories (
  id UUID PRIMARY KEY,
  user_id TEXT,
  content TEXT,
  created_at TIMESTAMPTZ
);
```

Export it as JSON with psql:
```bash
psql "$DATABASE_URL" -c \
  "COPY (
    SELECT json_agg(row_to_json(t))
    FROM (SELECT content, user_id FROM agent_memories) t
  ) TO STDOUT" > raw-export.json
```

Then wrap it into retainr's native format:
```python
import json

with open("raw-export.json") as f:
    rows = json.load(f) or []  # json_agg yields null for an empty table

payload = {
    "memories": [
        {
            "content": row["content"],
            "namespace": f"user:{row['user_id']}" if row.get("user_id") else "global"
        }
        for row in rows
        if (row.get("content") or "").strip()  # skip NULL or empty content rows
    ]
}

with open("retainr-import.json", "w") as f:
    json.dump(payload, f, indent=2)
```

Import the file. retainr will embed every memory automatically using Voyage AI on the server — you don't need to pre-compute vectors.
Batch size limit
The import endpoint accepts up to 500 memories per request. For larger datasets, split into chunks:
```python
import json
import math

with open("retainr-import.json") as f:
    all_memories = json.load(f)["memories"]

BATCH = 500
for i in range(0, len(all_memories), BATCH):
    chunk = all_memories[i:i + BATCH]
    with open(f"chunk-{i // BATCH}.json", "w") as f:
        json.dump({"memories": chunk}, f)

print(f"Created {math.ceil(len(all_memories) / BATCH)} chunk files")
```

Then import each chunk:
```bash
for f in chunk-*.json; do
  echo "Importing $f..."
  curl -s -X POST https://api.retainr.dev/v1/memories/import \
    -H "Authorization: Bearer rec_live_YOUR_KEY" \
    -H "Content-Type: application/json" \
    -d @"$f" | jq .
done
```

Verify the migration
After import, check the memory count in your workspace:
```bash
curl -s "https://api.retainr.dev/v1/workspace" \
  -H "Authorization: Bearer rec_live_YOUR_KEY" | jq .memory_count
```

Or open your dashboard and look at the total on the Overview tab.
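If you imported in chunks, you can also tally the per-chunk responses and cross-check the total against the workspace count. A small local helper (the name is ours):

```python
def tally(responses):
    """Sum imported/skipped counts across per-chunk import responses."""
    return {
        "imported": sum(r.get("imported", 0) for r in responses),
        "skipped": sum(r.get("skipped", 0) for r in responses),
    }

# e.g. collected from each curl call's JSON output
print(tally([{"imported": 500, "skipped": 0}, {"imported": 231, "skipped": 2}]))
# {'imported': 731, 'skipped': 2}
```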
Run a quick search to confirm semantic recall is working:
```bash
curl -s -X POST https://api.retainr.dev/v1/memories/search \
  -H "Authorization: Bearer rec_live_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query":"communication preferences","namespace":"user:alice","limit":3}' | jq .
```

You should see relevant memories ranked by cosine similarity within a few seconds of import.
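For reference, cosine similarity between two embedding vectors is just their dot product normalized by the vector lengths. A plain Python illustration of the metric, not retainr's internal code:

```python
import math

def cosine_similarity(a, b):
    """Dot product of a and b divided by the product of their norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0 (identical direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```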
Note on embeddings: retainr embeds memories synchronously during import. If Voyage AI is unavailable, the memory is stored without a vector and picked up by the background embedding retry job, which runs every 15 minutes — so it becomes searchable within 15 minutes at most.
What you get after migrating
Once your memories are in retainr:
- Semantic search — `POST /v1/memories/search` with a natural language query
- Context injection — `POST /v1/memories/context` returns a pre-formatted block ready for your system prompt
- Memory decay — stale memories lose importance over time automatically; no manual cleanup
- Auto-deduplication — near-duplicate memories are merged daily (no more "user is in Berlin" × 12)
- Webhooks — get notified when a new memory matches a threshold (e.g., fire a Slack message when a user mentions churn)
- EU hosting — Hetzner Germany, data never leaves Europe
The import is not quota-counted, so a 500-memory import costs 0 ops against your monthly limit.
Dashboard import wizard
Prefer a UI? The import wizard in your dashboard handles both the Mem0 export format and retainr's native format — paste JSON or upload a .json file. It shows exactly how many memories were imported and which were skipped (empty content, oversized entries) with per-item error details.
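If you'd rather catch skips before uploading, you can pre-filter locally. This sketch mirrors the skip rules the wizard reports; the exact size limit here is an assumption, so treat it as a placeholder and check the import docs for the real value:

```python
MAX_CONTENT_BYTES = 8_000  # assumed limit, not confirmed; placeholder only

def prefilter(memories):
    """Split entries into importable and skipped (empty or oversized content)."""
    ok, skipped = [], []
    for m in memories:
        content = m.get("content", "")
        if not content.strip():
            skipped.append((m, "empty content"))
        elif len(content.encode("utf-8")) > MAX_CONTENT_BYTES:
            skipped.append((m, "oversized"))
        else:
            ok.append(m)
    return ok, skipped

ok, skipped = prefilter([
    {"content": "User is in Berlin timezone"},
    {"content": ""},
    {"content": "x" * 20_000},
])
print(len(ok), [reason for _, reason in skipped])  # 1 ['empty content', 'oversized']
```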
Next steps
- API reference — full endpoint docs
- n8n integration — 3-node workflow to connect retainr to any n8n agent
- Make.com integration — scenario blueprints
- Pricing — free plan, no credit card