This is the complete tutorial for AI agent memory — from installing the retainr node in n8n to running a production agent that remembers thousands of users. The patterns here work on n8n, Make.com, Zapier, or any platform with HTTP support.
If you want the quick version, see Why AI Agents Forget Everything. This tutorial goes deeper.
Prerequisites
- n8n v1.0+, Make.com, Zapier, or any platform with HTTP requests
- A retainr API key (get one free — takes 30 seconds)
- Basic familiarity with your chosen workflow platform
Part 1: Installing retainr on Your Platform
On n8n Cloud
- Open your n8n workspace
- Go to Settings (gear icon, bottom left)
- Select Community Nodes
- Click Install a community node
- Enter: n8n-nodes-retainr
- Click Install
The node appears in your node list within 30 seconds.
On self-hosted n8n
cd /path/to/n8n
npm install n8n-nodes-retainr
Restart n8n after installation.
Add your credentials
- In any workflow, add a retainr node
- In the Credentials dropdown, click Create new
- Enter your API key from retainr.dev/dashboard
- Click Save — credential is now reusable across all workflows
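If your platform has no native retainr node, you can call the HTTP API directly. The sketch below builds the request options for a store call; the base URL, path, and Bearer auth scheme are assumptions, so check your retainr dashboard for the real values:

```javascript
// Minimal sketch of calling retainr over plain HTTP, for platforms
// without a native node. The endpoint URL and header names below are
// ASSUMPTIONS -- verify them against your retainr dashboard.
const RETAINR_BASE = "https://api.retainr.dev/v1"; // hypothetical base URL

// Build the fetch() options for a "store memory" request.
function buildStoreRequest(apiKey, memory) {
  return {
    url: `${RETAINR_BASE}/memories`, // hypothetical path
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`, // assumed auth scheme
      "Content-Type": "application/json",
    },
    body: JSON.stringify(memory),
  };
}

const req = buildStoreRequest("rk_test_123", {
  content: "User prefers technical explanations",
  user_id: "[email protected]",
});
```

Pass `req` to your platform's generic HTTP node; the same shape works for search by changing the payload.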
Part 2: Understanding Memory Types
retainr supports two memory scoping patterns. Choose based on your use case:
User Memory (permanent)
Memories scoped to a user_id. Persists forever (or until you delete it). Best for:
- Customer service bots that remember users across months
- Personal AI assistants that build user profiles over time
- CRM enrichment that accumulates interaction history
{
  "content": "User mentioned they are a developer and prefer technical explanations",
  "user_id": "[email protected]"
}
Session Memory (temporary)
Memories scoped to a session_id. Useful for:
- Multi-step tasks where you want a clean slate per task
- Workflow state management within a complex job
- Temporary context that should expire
{
  "content": "Step 1 complete: scraped 150 product pages. Next: categorize by type.",
  "user_id": "automation-job",
  "session_id": "job-2026-03-08-001"
}
Set ttl_seconds to auto-expire session memories:
{
  "ttl_seconds": 86400
}
Part 3: Core Workflow Patterns
Pattern 1: Basic Conversational Agent
The simplest memory pattern — works on all platforms:
Webhook (receives message + user_id)
↓
Search Memory (query = user message, user_id = their ID, limit = 5)
↓
AI Node (system prompt includes memory context)
↓
Store Memory (content = user message + AI response)
↓
Respond
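The steps above can be sketched as plain functions, independent of any platform. `buildSearchPayload`, `buildSystemPrompt`, and `buildStorePayload` are illustrative names, not part of the retainr node itself:

```javascript
// Pattern 1 as plain functions. The search/store payloads mirror the
// JSON shown in Part 2; the function names are illustrative only.

// 1. Search: use the raw user message as the query.
function buildSearchPayload(userId, message) {
  return { operation: "search", query: message, user_id: userId, limit: 5 };
}

// 2. Inject the results into the AI node's system prompt.
function buildSystemPrompt(memories) {
  const context = memories.length
    ? memories.map((m) => `- ${m.content}`).join("\n")
    : "(no prior context)";
  return `You are a helpful assistant.\nKNOWN CONTEXT:\n${context}`;
}

// 3. Store the full exchange after responding.
function buildStorePayload(userId, message, reply) {
  return {
    operation: "store",
    content: `User: ${message} | Assistant: ${reply}`,
    user_id: userId,
  };
}
```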
Pattern 2: Progressive User Profiling
Build a user profile that grows richer over time. After each interaction, extract and store key facts separately:
AI Node (extract facts):
"Extract any preferences, facts, or important context.
Output as JSON: { facts: ['...', '...'] }"
↓
Store each fact separately with tag "profile-fact"
Later, search with tags: ["profile-fact"] to retrieve only profiling data.
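A sketch of the extraction step, assuming the AI node is prompted to emit the `{ facts: [...] }` JSON shown above. Each fact becomes its own store payload tagged "profile-fact":

```javascript
// Sketch: turn the extractor's JSON output into one store payload per
// fact, each tagged "profile-fact" so it can be filtered later.
function factsToStorePayloads(userId, extractorOutput) {
  // The extractor is prompted to emit: { "facts": ["...", "..."] }
  const { facts = [] } = JSON.parse(extractorOutput);
  return facts.map((fact) => ({
    operation: "store",
    content: fact,
    user_id: userId,
    tags: ["profile-fact"],
  }));
}
```

Storing facts individually keeps each memory small and lets semantic search match a single preference without dragging in the whole conversation.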
Pattern 3: Long-Running Task State
For multi-step jobs that span multiple workflow executions:
Trigger: New task created
↓
Store Memory: "Task started: [description]. Status: IN_PROGRESS"
with sessionId = taskId
--- later ---
Trigger: Task continuation
↓
Search Memory (query = "task status", sessionId = taskId)
↓
AI decides next step based on what has been done
↓
Store Memory: "Step 3 complete: [result]. Remaining: [steps]"
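A minimal sketch of the pattern: every state update is a session-scoped memory keyed by the task ID, so a later execution can reconstruct progress with a session-scoped search. The "automation-job" user ID mirrors the Part 2 example:

```javascript
// Sketch of Pattern 3: state updates are session-scoped memories,
// keyed by the task ID so later executions can find them.
function taskStateMemory(taskId, status, note) {
  return {
    operation: "store",
    content: `Status: ${status}. ${note}`,
    user_id: "automation-job", // shared worker identity, as in Part 2
    session_id: taskId,        // scope the state to this task
  };
}

function taskStateQuery(taskId) {
  return {
    operation: "search",
    query: "task status",
    session_id: taskId,
    limit: 10,
  };
}
```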
Part 4: Production Configuration
Always fail gracefully
Wrap retainr nodes in an Error Handler sub-workflow:
retainr: Search Memory
└─ On Error → Set default: { memories: [] }
↓
AI node runs with or without memories
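The fail-open behavior can be sketched as a wrapper, where `searchFn` stands in for whatever actually calls retainr on your platform:

```javascript
// Fail-open wrapper: if the memory search throws (timeout, rate limit,
// bad credentials), fall back to an empty result so the AI node still runs.
async function searchMemorySafe(searchFn, payload) {
  try {
    return await searchFn(payload);
  } catch (err) {
    console.warn("memory search failed, continuing without context:", err.message);
    return { memories: [] };
  }
}
```

An agent that answers without context is degraded; an agent that errors out is broken. Always prefer the former.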
Async memory storage
Don't block your response on memory storage — store asynchronously after responding to the user. On n8n, use the Execute Workflow node to run memory storage in the background. This shaves 50-100ms off your response time.
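In code terms, the idea is simply to not await the store call before returning the reply. A sketch, with `storeFn` standing in for the actual retainr call:

```javascript
// Sketch of async storage: return the reply first, then let the store
// call settle in the background instead of awaiting it inline.
function respondThenStore(reply, storeFn, payload) {
  // Fire and forget: errors are logged, and never block the response.
  Promise.resolve()
    .then(() => storeFn(payload))
    .catch((err) => console.warn("background store failed:", err.message));
  return reply; // the user sees this immediately
}
```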
Memory cleanup
Set up a scheduled workflow to clean up old session memories:
Cron trigger (weekly)
↓
retainr: Delete Memories
└─ Tag: "session"
└─ Created before: 30 days ago
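If your platform makes you filter candidates yourself, the selection logic looks like this. The `tags` and `created_at` field names are assumptions about the memory objects retainr returns:

```javascript
// Sketch of the cleanup filter: compute a cutoff 30 days back and
// select session-tagged memories older than it for deletion.
// The `tags` and `created_at` field names are assumptions.
function selectExpired(memories, now = new Date(), maxAgeDays = 30) {
  const cutoff = now.getTime() - maxAgeDays * 24 * 60 * 60 * 1000;
  return memories.filter(
    (m) => m.tags?.includes("session") && new Date(m.created_at).getTime() < cutoff
  );
}
```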
Part 5: Real Production Example — Support Ticket Handler
The workflow handles:
- Incoming support tickets via webhook
- Semantic search for customer history
- AI classification + response generation
- Memory storage with categorization
- Escalation routing
retainr Search Memory configuration:
{
  "operation": "search",
  "query": "the customer message",
  "user_id": "customer email",
  "limit": 5,
  "tags": ["support"]
}
AI Agent system prompt:
You are a support agent for [Company Name].
CUSTOMER HISTORY (relevant past interactions):
[memories injected here]
INSTRUCTIONS:
- Reference past interactions when relevant
- Don't repeat solutions that did not work before (visible in history)
- Classify urgency: LOW / MEDIUM / HIGH
- End response with: URGENCY: [level]
retainr Store Memory configuration:
{
  "operation": "store",
  "content": "Ticket: [message] | Response: [ai response] | Urgency: [urgency]",
  "user_id": "customer email",
  "tags": ["support", "urgency-level"]
}
Troubleshooting
"No memories returned": Check that you're using the same user_id in Search as you used in Store. User IDs are case-sensitive.
"Memory content too long": retainr has a 10,000 character limit per memory. Split long content into multiple memories with the same session_id.
"Search returning irrelevant results": Use the actual user message as the query, not a summary. Natural language queries work best.
"Rate limit exceeded": You're exceeding your plan's ops/month. Upgrade to Builder or Pro, or reduce the number of memories stored per interaction.
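For the character-limit issue above, splitting can be sketched as a simple chunker that keeps the pieces grouped under one session_id:

```javascript
// Sketch: split content over the 10,000-character limit into chunks
// that share a session_id so they stay grouped on retrieval.
const MAX_CHARS = 10000;

function splitLongMemory(content, userId, sessionId) {
  const payloads = [];
  for (let i = 0; i < content.length; i += MAX_CHARS) {
    payloads.push({
      operation: "store",
      content: content.slice(i, i + MAX_CHARS),
      user_id: userId,
      session_id: sessionId,
    });
  }
  return payloads;
}
```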
Give your AI agents a real memory
Free plan includes 1,000 memory operations/month. No credit card required.
Install the retainr node for n8n →
Summary
You now have everything needed to build production-grade AI agents with persistent memory on any platform:
- Install on your platform of choice (n8n community node, Make.com app, Zapier webhooks, or direct API)
- Choose between user memory (permanent) and session memory (temporary)
- Implement the search-before/store-after pattern
- Handle failures gracefully — always fail open
- Monitor memory quality over time
The platform + retainr combination turns stateless automations into learning agents. Every interaction makes the next one better.
Frequently Asked Questions
Which platform is best for AI agent memory? All three work equally well with retainr. n8n is best for developers who want flexibility and self-hosting. Make.com is best for visual no-code builders. Zapier is best for connecting many apps quickly.
Can I migrate memories from one platform to another? Yes. Your retainr memories are stored on retainr's servers, not on your workflow platform. Switch from n8n to Make.com and your memories are unchanged — just connect with the same API key.
What's the memory search speed? Under 50ms for 100k memories, under 120ms for 1M memories. Memory retrieval is never the bottleneck — the LLM call always takes longer.