n8n is stateless by design. Every workflow execution starts with a clean slate — no memory of previous runs, no knowledge of what the same user asked last time, no context from anything that happened before this exact trigger fired.
This is intentional for most automation tasks. But for AI agents and customer-facing workflows, it is a fundamental limitation. This post explains every approach to n8n workflow memory between executions, and which one is right for your situation.
Why n8n Forgets Everything
n8n processes each workflow execution in isolation. When a trigger fires, n8n loads the workflow definition, runs the nodes in sequence, and terminates. No state is passed to the next execution.
This is correct behavior for stateless automation — sending an email, updating a spreadsheet, processing an order. Each run is independent and idempotent.
For AI agents, this becomes a problem. A user chats with your AI assistant today. They come back tomorrow. The agent has zero memory of the previous conversation. It asks the same onboarding questions. It forgets the user's name, preferences, and history.
The 5 Approaches Compared
1. Static Data (Built-in, Single-User Only)
n8n has a Static Data feature accessible in the Code node:
```javascript
const workflowStaticData = $getWorkflowStaticData('global')
workflowStaticData.lastMessage = "Hello from last run"
```

What it does: Persists data between executions of the same workflow.
The problem: Static Data is shared across ALL executions of the workflow. It is not scoped per user. If two users trigger your workflow simultaneously, their data collides.
Use only for: single-user personal automations, counters, rate limiting logic.
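A minimal Node.js sketch of why this fails for multiple users. Here `staticData` stands in for the object `$getWorkflowStaticData('global')` returns: one shared object for ALL executions of the workflow, with no per-user scoping.

```javascript
// staticData stands in for the shared object n8n's Static Data returns.
const staticData = {};

function handleMessage(userId, message) {
  // Every user writes to the same key -- there is no per-user scoping.
  staticData.lastMessage = message;
  return `Replying to ${userId}, remembering: "${staticData.lastMessage}"`;
}

handleMessage("alice", "My order is #1001");
handleMessage("bob", "Cancel everything");

// Alice's next execution now sees Bob's message -- her context was overwritten.
const aliceSees = staticData.lastMessage; // "Cancel everything"
```

With concurrent triggers the interleaving is nondeterministic, so the collision is not just possible but unavoidable.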
2. Postgres Chat Memory Node (Built-in, Requires DB)
n8n's AI Agent module includes a Postgres Chat Memory node. It stores conversation history in a Postgres table you manage.
What it does: Stores full message history per session_id. When the AI Agent runs, it loads the last N messages for that session.
Setup required:
- Running Postgres instance (Supabase, Neon, or self-hosted)
- n8n Postgres credential configured
- The node creates its own table automatically on first run
Limitations:
- Returns last N messages — not semantic search
- No relevance ranking: a trivial message from five minutes ago is always returned; a relevant message from 3 weeks ago is not
- You manage the database (backups, storage, cleanup)
- No TTL: old messages accumulate forever unless you write cleanup jobs
- No multi-channel: only works within n8n, not shareable with Make.com or API workflows
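The "last N messages, no relevance ranking" limitation is easy to see in a small sketch. The sample messages are hypothetical; the retrieval logic mirrors what a recency-window memory does.

```javascript
// Recency-window retrieval: return the last N messages, regardless of relevance.
const history = [
  { text: "My shipping address is 12 Elm St", daysAgo: 21 }, // the one relevant fact
  { text: "Thanks!", daysAgo: 1 },
  { text: "What's the weather like?", daysAgo: 1 },
  { text: "Tell me a joke", daysAgo: 0 },
];

function lastN(messages, n) {
  return messages.slice(-n).map(m => m.text);
}

const context = lastN(history, 3);
// The shipping address falls outside the window, so the agent never sees it.
const hasAddress = context.some(t => t.includes("Elm St")); // false
```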
3. Redis / External KV Store (Requires Infrastructure)
Some teams use n8n's Redis node (or HTTP Request calls against a hosted Redis API) to store a key-value pair per user: key user_id → value: last conversation.
What it does: Fast key-value lookup for simple session state.
Limitations: No semantic search, no history beyond whatever you explicitly overwrite, requires Redis infrastructure. Not designed for AI memory retrieval.
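The core limitation in one sketch: a key-value store only supports lookup by exact key, never by meaning. A `Map` stands in for Redis here.

```javascript
// A Map standing in for Redis: fast exact-key lookup, nothing else.
const kv = new Map();

kv.set("user:alice:last", "Prefers email over phone; timezone is CET");

const exact = kv.get("user:alice:last");                 // works
const semantic = kv.get("how should I contact alice?");  // undefined -- no retrieval by meaning
```

There is no way to ask "what do I know that is relevant to this question?" without building an embedding layer on top, which is approach 4.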
4. External Postgres with pgvector (Requires Engineering)
Store memories as vector embeddings in a Postgres table with the pgvector extension. Run semantic search queries before each AI call.
This is exactly what retainr does under the hood — but you would need to:
- Provision a Postgres instance
- Install and configure pgvector
- Write embedding generation code
- Design the schema with proper namespacing
- Write the search and insert queries
- Handle HNSW index tuning
- Implement TTL and cleanup jobs
- Manage multi-tenant isolation
Cost: 1-3 days of engineering. Reasonable if you need deep customization.
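The core of what you would be building is a similarity search. A minimal sketch, with tiny hand-made vectors standing in for model-generated embeddings (in production the embeddings come from an embedding model and the ranking runs inside Postgres via pgvector's distance operators):

```javascript
// Cosine similarity between two vectors of equal length.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Hypothetical memories with toy 3-dimensional embeddings.
const memories = [
  { content: "User prefers dark mode", embedding: [0.9, 0.1, 0.0] },
  { content: "User's invoice #4412 is overdue", embedding: [0.1, 0.9, 0.2] },
];

// Rank all memories by similarity to the query embedding, keep the top `limit`.
function search(queryEmbedding, limit) {
  return memories
    .map(m => ({ ...m, score: cosine(queryEmbedding, m.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, limit);
}

const top = search([0.0, 1.0, 0.1], 1); // a query "about billing"
```

Everything else on the list above (schema, namespacing, indexing, TTL, tenancy) is the operational work of running this at scale.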
5. retainr Community Node (Recommended)
Install once, use everywhere. No infrastructure, no schema, no embeddings code.
Settings → Community Nodes → Install → n8n-nodes-retainr
What it does: Gives your n8n workflow persistent semantic memory in two nodes — Store Memory and Search Memory.
How it works:
- You store memories with a `namespace` (e.g., `user:alice`) and `content` (the text to remember)
- You search memories with a `namespace` and `query` — returns the most semantically relevant results
- Namespaces isolate users completely: `user:alice` never sees `user:bob` results
Implementing Cross-Execution Memory with retainr
Here is the exact node setup for any AI workflow that needs to remember things between runs.
The Pattern
Trigger (webhook, cron, form, etc.)
→ retainr: Search Memory (retrieve relevant past context)
→ AI Node (generate response with context injected)
→ retainr: Store Memory (save this exchange)
→ Response / Action
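The flow above, sketched end-to-end as plain code. `memory` is an in-memory stand-in for the two retainr nodes (real semantic search replaces the crude word-overlap scoring), and `generate` stands in for the AI node:

```javascript
const store = []; // stand-in backing store: { namespace, content }

const memory = {
  // Stand-in for Search Memory: rank by shared-word overlap with the query.
  search(namespace, query, limit) {
    const words = new Set(query.toLowerCase().split(/\W+/));
    return store
      .filter(m => m.namespace === namespace)
      .map(m => ({
        ...m,
        score: m.content.toLowerCase().split(/\W+/).filter(w => words.has(w)).length,
      }))
      .sort((a, b) => b.score - a.score)
      .slice(0, limit);
  },
  // Stand-in for Store Memory.
  save(namespace, content) { store.push({ namespace, content }); },
};

// Stand-in for the AI node.
function generate(context, userMessage) {
  return `reply (memories=${context.length}): ${userMessage}`;
}

function handleTurn(userId, userMessage) {
  const ns = `user:${userId}`;
  const context = memory.search(ns, userMessage, 3);               // 1. retrieve
  const reply = generate(context, userMessage);                    // 2. generate
  memory.save(ns, `User: ${userMessage} / Assistant: ${reply}`);   // 3. persist
  return reply;
}

handleTurn("alice", "my name is Alice");
const second = handleTurn("alice", "what is my name");
```

Note the ordering: search runs before the AI node so the context can be injected, and store runs after so the new exchange is available on the next execution.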
Search Memory Configuration
- Namespace: The identifier that scopes this user or session
  - User workflows: `user:{{ $json.userId }}`
  - Customer support: `customer:{{ $json.email }}`
  - Project tracking: `project:{{ $json.projectId }}`
- Query: The current input — what do you want to find relevant context for?
- Limit: 3-5 results
Store Memory Configuration
- Namespace: Same as the search namespace
- Content: What happened this execution — the input + output, e.g. `User asked: {{ $json.userMessage }} / Assistant responded: {{ $json.aiResponse }}`
- TTL: Days until this memory expires automatically (optional)
Injecting Memory into the AI Prompt
In your AI system prompt:
```
Relevant context from previous interactions:
{{ $node["retainr_search"].json.results?.map(r => r.content).join("\n") ?? "No prior history." }}

Current task: answer the user's question using this context where relevant.
```
The AI now has cross-execution memory.
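The context-building expression as a plain function. This is a slightly hardened variant: `??` only catches `null`/`undefined`, so an empty results array would otherwise produce an empty string rather than the fallback text. A length check covers both cases:

```javascript
// Join retrieved memory contents into a context block for the prompt,
// falling back when the search returned nothing (or no results object at all).
function buildContextBlock(results) {
  // ?? alone misses the empty-array case ([] maps/joins to ""), hence ?.length.
  return results?.length
    ? results.map(r => r.content).join("\n")
    : "No prior history.";
}

buildContextBlock([{ content: "Prefers concise answers" }, { content: "Works in UTC+2" }]);
// "Prefers concise answers\nWorks in UTC+2"
buildContextBlock([]); // "No prior history."
```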
Choosing the Right Approach
| Situation | Recommended approach |
|---|---|
| Single-user, simple state (counter, flag) | Static Data |
| Multi-user chat history, last N messages | Postgres Chat Memory node |
| Multi-user, semantic search, no infra | retainr community node |
| Custom AI app, full control needed | pgvector + custom implementation |
| n8n Cloud Free (no community nodes) | retainr via HTTP Request node |
Common Mistakes
Using Static Data for multi-user workflows: Data collides across users. Use a proper namespaced store.
Storing too much in a single memory entry: Keep each memory entry focused — one conversation turn, one event, one fact. Large monolithic entries reduce retrieval quality.
Not using TTL: Memories accumulate. Set a TTL of 30-180 days depending on your use case. Stale memories degrade search quality.
Searching with the workflow trigger payload instead of the user message: The query should be the user's actual message, not the full webhook payload. Search retrieves what is most relevant to the user's current intent.
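The difference in one sketch, using a hypothetical webhook payload for a chat integration:

```javascript
// A typical (hypothetical) webhook body for an inbound chat message.
const payload = {
  event: "message.created",
  timestamp: "2024-06-01T10:00:00Z",
  user: { id: "u_123", name: "Alice" },
  message: { text: "Has my refund for order 4412 gone through?" },
};

// Wrong: the query is mostly noise (event names, IDs, timestamps)
// that drowns out the user's intent in the similarity ranking.
const badQuery = JSON.stringify(payload);

// Right: query with the user's actual message.
const goodQuery = payload.message.text;
```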
For workflows that run on a cron schedule (not user-triggered), use a fixed namespace like workflow:weekly-report to store state between cron runs. The semantic search becomes less useful for this pattern — consider using a simple key-value store or Postgres instead.
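A sketch of that cron pattern: state lives under one fixed key per workflow, and each run reads the previous run's value before overwriting it. `kv` stands in for whatever store the workflow uses (Static Data, Redis, or a memory API):

```javascript
const kv = new Map(); // stand-in for the workflow's state store
const NAMESPACE = "workflow:weekly-report";

function runWeeklyReport(now) {
  // Read where the previous run left off (null on the very first run).
  const lastRun = kv.get(`${NAMESPACE}:lastRun`) ?? null;
  // ... generate the report covering (lastRun, now] ...
  kv.set(`${NAMESPACE}:lastRun`, now); // persist for the next cron run
  return { since: lastRun, until: now };
}

const first = runWeeklyReport("2024-06-01");  // { since: null, until: "2024-06-01" }
const second = runWeeklyReport("2024-06-08"); // { since: "2024-06-01", until: "2024-06-08" }
```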
Memory on n8n Cloud Free (Without Community Nodes)
n8n Cloud Free plan does not support community nodes. Use two HTTP Request nodes instead:
Search Memory (HTTP Request):

- Method: GET
- URL: `https://api.retainr.dev/v1/memories/search`
- Query parameters: `namespace=user:{{ $json.userId }}&q={{ $json.message }}&limit=5`
- Headers: `Authorization: Bearer YOUR_API_KEY`

Store Memory (HTTP Request):

- Method: POST
- URL: `https://api.retainr.dev/v1/memories`
- Headers: `Authorization: Bearer YOUR_API_KEY`, `Content-Type: application/json`
- Body: `{"namespace": "user:{{ $json.userId }}", "content": "{{ $json.content }}"}`
The REST API provides the same capabilities as the community node — the node just pre-fills the configuration.
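For reference, the two node configurations above expressed as plain fetch-style requests. Endpoint paths and parameters are taken directly from the configs in this section; the API key is a placeholder. Note that `JSON.stringify` handles escaping of the content, which a hand-templated body string would not:

```javascript
const API_KEY = "YOUR_API_KEY"; // placeholder
const BASE = "https://api.retainr.dev/v1";

// Equivalent of the Search Memory HTTP Request node.
function searchRequest(userId, message) {
  const params = new URLSearchParams({ namespace: `user:${userId}`, q: message, limit: "5" });
  return {
    url: `${BASE}/memories/search?${params}`,
    options: { method: "GET", headers: { Authorization: `Bearer ${API_KEY}` } },
  };
}

// Equivalent of the Store Memory HTTP Request node.
function storeRequest(userId, content) {
  return {
    url: `${BASE}/memories`,
    options: {
      method: "POST",
      headers: { Authorization: `Bearer ${API_KEY}`, "Content-Type": "application/json" },
      body: JSON.stringify({ namespace: `user:${userId}`, content }), // safely escaped
    },
  };
}

// Usage: const { url, options } = searchRequest("alice", "what did I order?");
// const res = await fetch(url, options);
```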
Give your AI agents a real memory
Free plan includes 1,000 memory operations/month. No credit card required.
Add memory to your n8n workflow — free plan →

Frequently Asked Questions
Does retainr work with n8n self-hosted? Yes. Install the community node in your self-hosted n8n instance. The node communicates with the retainr cloud API — no infrastructure to add to your VPS.
Can I store non-conversation data in retainr? Yes. Store any text: user preferences, project notes, support ticket summaries, product feedback. The semantic search retrieves whatever is most relevant to your query.
What is the difference between n8n Postgres Chat Memory and retainr? Postgres Chat Memory stores the last N messages for a session. retainr stores any content and retrieves the most semantically relevant entries regardless of when they were stored. Use Postgres Chat Memory for simple chatbots; use retainr when relevance matters more than recency.
How do I clear a user's memory? Call DELETE /v1/memories?namespace=user:{id} via HTTP Request node. This deletes all memories in that namespace. Useful for GDPR deletion requests or user-initiated resets.
Add memory to any n8n workflow
Working with a specific app? These guides cover the exact setup:
- n8n WhatsApp AI Memory — Persistent memory for WhatsApp bots
- n8n Telegram AI Memory — Long-term memory for Telegram bots
- n8n Slack AI Memory — Persistent memory for Slack bots
- n8n Discord AI Memory — Persistent memory for Discord bots
- n8n Gmail AI Memory — Per-sender memory for Gmail AI workflows
- n8n HubSpot AI Memory — CRM-aware AI memory for HubSpot
- n8n Zendesk AI Memory — AI memory for support ticket workflows
- n8n Airtable AI Memory — Persistent memory for Airtable workflows