
n8n LangChain Memory Node: Why It Fails and the Fix for n8n, Make.com & Zapier

By the retainr team · 10 min read · Updated Mar 10, 2026

If you've built an AI agent in n8n, you've probably tried the built-in memory nodes. They look like the right solution. They're called "memory." They're right there in the LangChain node group.

And then you trigger your workflow a second time and realize: it forgot everything.

This is the same problem on every automation platform. Make.com has no built-in AI memory at all. Zapier's AI steps start fresh every run. This post explains exactly why that happens and how to fix it on all three platforms.

What n8n's LangChain Memory Nodes Actually Do

n8n ships with several memory nodes in the AI category:

  • Window Buffer Memory — keeps the last N messages in a sliding window
  • Token Buffer Memory — keeps messages up to a token limit
  • Simple Memory — basic key-value store for conversation context

These nodes are wrappers around LangChain's memory abstractions. They're useful in a specific context: within a single workflow execution.

If you're building a multi-step agent that calls tools, loops, and needs to remember earlier steps in the same run, these nodes work fine. The conversation history is maintained in memory as the execution progresses.

The problem is what happens when the execution ends.
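To make the within-execution behavior concrete, here is a minimal sketch of what a sliding-window message buffer does. This is plain JavaScript illustrating the idea, not n8n's actual implementation: the buffer keeps only the last N messages, and the whole thing lives in process memory.

```javascript
// Minimal sketch of a sliding-window message buffer.
// Illustrates the idea behind n8n's Window Buffer Memory,
// not its actual implementation.
class WindowBufferMemory {
  constructor(windowSize) {
    this.windowSize = windowSize;
    this.messages = []; // lives in RAM only; gone when the process ends
  }

  add(role, content) {
    this.messages.push({ role, content });
    // Evict the oldest messages once the window overflows.
    if (this.messages.length > this.windowSize) {
      this.messages = this.messages.slice(-this.windowSize);
    }
  }

  history() {
    return this.messages;
  }
}

const memory = new WindowBufferMemory(2);
memory.add('user', 'Hi');
memory.add('assistant', 'Hello!');
memory.add('user', 'Remember me?');
console.log(memory.history().length); // 2 -- the first message was evicted
```

Within one run, this works exactly as expected. The catch is the comment on line two of the class: the array exists only as long as the process that created it.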

Why Memory Resets Between Executions

n8n is a workflow orchestrator, not a stateful application server. When a workflow execution completes:

  1. All in-memory data is freed
  2. The execution record is written to the database (just logs and metadata)
  3. No business data from that run persists anywhere

The LangChain memory nodes store their data in the execution's memory space. When the execution ends, that memory is gone.

⚠️ This applies even to n8n's "Chat Memory Manager" nodes. The memory they manage is scoped to the execution. There is no built-in persistence layer that survives across separate webhook calls or scheduled runs.

Here's the execution model:

Execution 1: [User: "Hi"] → [Memory stored in RAM] → Execution ends → RAM freed
Execution 2: [User: "Remember me?"] → Fresh start → Memory = empty

There's no bridge between executions. Each webhook call starts a new, isolated execution.

The Window Buffer Memory Misconception

The Window Buffer Memory node has a session_id field, which misleads many people into thinking it provides cross-execution persistence.

It doesn't.

The session_id is used within an execution to partition memory if you're running multiple agent chains in the same workflow. It's a namespace, not a persistence key.

When you set session_id to the user's ID and then trigger the workflow again, the new execution has no access to the previous execution's Window Buffer Memory. The session is gone.
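The namespace-vs-persistence distinction is easy to sketch. In the toy model below (an illustration, not n8n's code), a Map partitions memory per session inside one process; a new execution is a new process, so every partition starts empty no matter which session_id you pass.

```javascript
// Sketch: session_id partitions memory *within* one process.
// A new execution is a new process, so the Map starts empty again.
const sessions = new Map();

function remember(sessionId, message) {
  if (!sessions.has(sessionId)) sessions.set(sessionId, []);
  sessions.get(sessionId).push(message);
}

function recall(sessionId) {
  return sessions.get(sessionId) ?? [];
}

remember('user-42', 'I am on the Pro plan');
console.log(recall('user-42').length); // 1 -- works within this run
// After the execution ends, `sessions` is garbage-collected.
// The next run constructs a fresh, empty Map -- same session_id, no data.
```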

The Same Problem on Make.com and Zapier

Make.com and Zapier have an even simpler story: there is no built-in AI memory at all.

Make.com scenarios receive data, process it, and terminate. The OpenAI module sees only what you pass it in that single run.

Zapier Zaps are trigger → action → done. The AI step has no concept of previous runs.

Neither platform has a memory node, a session ID, or any mechanism to persist AI context across runs. It's not a bug — it's the same stateless architecture that makes these platforms reliable.

What "Persistent" Memory Actually Requires

To have memory that genuinely persists across executions, you need to store it somewhere outside of the execution's memory space.

| Storage | Queryable? | Vector Search? | Setup Effort |
|---|---|---|---|
| n8n Static Data | No | No | Low (but fragile) |
| Google Sheets | Sort of | No | Medium |
| Airtable | Yes (limited) | No | Medium |
| Postgres (external) | Yes | With pgvector | High |
| retainr | Yes | Yes (built-in) | Low |

The Real Cost of Fake Memory

Support bot scenario: User contacts support, explains they're on the Pro plan, mentions they have three team members. The AI responds helpfully. Next week, the user asks a follow-up question. The bot asks: "What plan are you on?"

The user experience is broken. The bot appears dumb. Users lose trust.

Sales assistant scenario: Prospect has three conversations about pricing. On conversation four, the AI starts the pricing explanation from scratch. The prospect concludes the AI is a basic FAQ bot, not a real assistant.

In each case, the agent is worse than useless — it creates the illusion of being smart while behaving in ways that feel random and frustrating.

Setting Up Real Persistent Memory: Choose Your Platform

Step 1: Remove the LangChain Memory Nodes

If you have Window Buffer Memory or Simple Memory nodes, remove them. They'll cause confusion once you have real memory working.

Step 2: Install the retainr Community Node

Settings → Community Nodes → Install → search n8n-nodes-retainr → Install.

Create a retainr API credential with your key from retainr.dev/dashboard.

Step 3: Add Memory Retrieval Before the AI Node

{
  "operation": "searchMemory",
  "scope": "user",
  "user_id": "={{ $json.sessionId }}",
  "query": "={{ $json.chatInput }}",
  "limit": 5
}
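If you'd rather use a plain HTTP Request node than the community node, the equivalent call can be sketched as below. Note the endpoint path and response shape here are assumptions (only DELETE /v1/memories is mentioned later in this post); check the retainr dashboard docs for the exact route.

```javascript
// Hedged sketch: building the same search call for an HTTP Request node.
// The URL path is an assumption, not a documented endpoint.
function buildSearchRequest(apiKey, userId, query) {
  return {
    url: 'https://api.retainr.dev/v1/memories/search', // assumed route
    options: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        scope: 'user',
        user_id: userId,
        query,
        limit: 5,
      }),
    },
  };
}

const req = buildSearchRequest('sk-test', 'user-42', 'what plan am I on?');
console.log(JSON.parse(req.options.body).limit); // 5
```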

Use the sessionId from the n8n webhook input as your user_id — it's the stable identifier n8n uses to track conversations.

Step 4: Format the Memories for the AI

Add a Set node:

Field name: memoryContext

Field value (JavaScript expression):

{{ $json.memories.length > 0
  ? "Previous conversation context:\n\n" + $json.memories.map(m => m.content).join('\n\n')
  : "" }}
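Stripped of the {{ }} expression wrapper, that Set-node logic is ordinary JavaScript. Here is a standalone version you can sanity-check before wiring it into the workflow (the content field on each memory matches the search response used above):

```javascript
// Same logic as the Set-node expression, as a standalone function:
// join retrieved memories into one context block, or return '' if none.
function formatMemoryContext(memories) {
  return memories.length > 0
    ? 'Previous conversation context:\n\n' +
        memories.map((m) => m.content).join('\n\n')
    : '';
}

console.log(formatMemoryContext([])); // '' -- no context block when nothing is found
console.log(
  formatMemoryContext([
    { content: 'User is on the Pro plan' },
    { content: 'User has three team members' },
  ])
);
```

Returning an empty string (rather than a header with no body) matters: it keeps the system prompt clean on a user's very first message.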

Step 5: Update Your AI Agent Node

In the system prompt:

You are a helpful assistant.

{{ $('Format Memories').item.json.memoryContext }}

Answer based on this context. Reference past conversations naturally.

Step 6: Store the Interaction After Response

{
  "operation": "storeMemory",
  "scope": "user",
  "user_id": "={{ $('Webhook').item.json.sessionId }}",
  "content": "={{ 'User: ' + $('Webhook').item.json.chatInput + '\n\nAssistant: ' + $json.output }}",
  "tags": ["conversation"]
}
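The content expression above just concatenates the user turn and the assistant turn into one string. A standalone version of that step, useful for verifying the formatting before you wire it into the node (the function name is mine, not part of the retainr API):

```javascript
// Builds the storeMemory payload the same way the node expression does:
// one memory record containing both sides of the exchange.
function buildConversationMemory(userId, chatInput, output) {
  return {
    operation: 'storeMemory',
    scope: 'user',
    user_id: userId,
    content: `User: ${chatInput}\n\nAssistant: ${output}`,
    tags: ['conversation'],
  };
}

const mem = buildConversationMemory(
  'user-42',
  'What plan am I on?',
  'You are on the Pro plan.'
);
console.log(mem.content); // "User: What plan am I on?\n\nAssistant: You are on the Pro plan."
```

Storing both turns in a single record keeps question and answer together, so a later vector search retrieves the full exchange rather than an orphaned reply.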

This completely replaces LangChain memory — 6 nodes, all visual, no code required.

Comparison: n8n LangChain Memory vs retainr

| Feature | n8n Window Buffer | retainr |
|---|---|---|
| Persists between executions | No | Yes |
| Cross-device/channel | No | Yes |
| Vector similarity search | No | Yes |
| Filter by user | No | Yes |
| Filter by tags | No | Yes |
| Scale to millions of records | No | Yes |
| Works on Make.com and Zapier | No | Yes |
| Setup complexity | None (but broken) | 15 minutes |

The n8n built-in memory is simpler to set up but fundamentally broken for any real AI agent use case. retainr takes slightly more setup but actually works — on n8n, Make.com, Zapier, or any platform.

Migration Guide: Converting Existing n8n Agents

If you have a working (but amnesiac) n8n AI agent and want to add persistent memory:

1. Map your existing execution flow

Identify: where does user input enter? Where does the AI respond? What's the user_id/session_id?

2. Keep LangChain nodes for intra-execution context (optional)

If you have a complex agent that calls multiple tools in one execution, keep the Window Buffer Memory for that within-execution context. It still works for that purpose.

3. Add retainr as the inter-execution layer

At the start: search for past context. At the end: store the current interaction.

4. Gradually phase out LangChain memory

Once retainr is working, you can often remove the LangChain memory nodes entirely. The vector search retrieves what's relevant, so you don't need a sliding window.

Give your AI agents a real memory

Free plan includes 1,000 memory operations/month. No credit card required.

Add persistent memory to your n8n agent

Frequently Asked Questions

Does this work with n8n Cloud? Yes. The retainr community node is compatible with n8n Cloud v1.0+.

What is the difference between sessionId and userId in retainr? user_id scopes memories permanently to a user. session_id scopes memories to a specific conversation thread within a user's history. Use user_id for customer memory, session_id for project-specific memory.

Can I import existing n8n conversation logs into retainr? Yes — export your n8n execution history, format as memory objects, and POST them in batches to the retainr API. Useful for seeding memory from historical data.
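The batching step is the part that needs care when seeding from history: POSTing thousands of records in one request will usually hit payload limits. A minimal chunking helper is sketched below; the batch size of 100 is an arbitrary choice, and the POST itself is left as a comment since the exact bulk endpoint isn't documented here.

```javascript
// Split an array of exported memory objects into fixed-size batches
// so each import request stays within payload limits.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Example: 250 exported execution-log entries -> batches of 100, 100, 50.
const logs = Array.from({ length: 250 }, (_, i) => ({ content: `entry ${i}` }));
const batches = chunk(logs, 100);
console.log(batches.length); // 3
console.log(batches[2].length); // 50
// Each batch would then be POSTed to the retainr API in turn,
// ideally with a short delay between requests to respect rate limits.
```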

What if I want to clear memory for a specific user? Call DELETE /v1/memories with the user_id parameter. All memories for that user are removed immediately. Important for GDPR right-to-erasure requests.

Does this work on Make.com and Zapier too? Yes. The same API key and the same memory pool work on all three platforms. An agent on n8n and another on Make.com can share user memories.


Example blueprint: Lead Qualification Agent that Remembers Context

Qualify inbound leads with an AI agent that builds a persistent profile across multiple touchpoints. Each interaction enriches the lead record — no CRM field mapping required.

n8n workflow · Intermediate · ~30 memory ops per lead · Blueprint file: n8n-lead-qualification-agent-memory.json (full importable workflow JSON, download below)

Free API key required — 1,000 memory ops/month, no credit card.
