
Make.com ChatGPT Persistent Memory: Complete Setup (2026)

By retainr team · 6 min read · Updated Mar 26, 2026

Make.com's OpenAI and ChatGPT modules are stateless. Every scenario execution starts fresh — no memory of previous runs, no customer context, no knowledge of past conversations. This is fine for one-shot tasks. For AI scenarios that interact with the same customers repeatedly, it is a fundamental limitation.

This guide covers the complete fix: adding persistent memory to any Make.com ChatGPT scenario using retainr.

Why Make.com ChatGPT Scenarios Forget Everything

Make.com runs each scenario execution in isolation. When an execution finishes, all data from that run is gone. The next execution starts completely fresh.

The OpenAI module accepts a prompt and returns a response. It has no access to previous executions, no built-in user session, and no way to retrieve what was discussed in the last run. If you want context-aware AI responses, you must supply the context yourself.

retainr is where you store and retrieve that context.

The Pattern: Search Before, Store After

Every Make.com ChatGPT memory scenario follows this structure:

  1. Trigger (any trigger — new email, form submission, webhook, schedule)
  2. HTTP GET — search retainr for memory relevant to this user/contact
  3. OpenAI module — generate response with memory context injected into the prompt
  4. HTTP POST — store this interaction in retainr for next time

This pattern works with any Make.com AI module: OpenAI, Anthropic Claude, Google Gemini. The memory layer is model-agnostic.
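The four-step loop above can be sketched outside Make.com in a few lines. This is a minimal illustration, not a retainr SDK: the `search`, `generate`, and `store` callables are hypothetical stand-ins for the HTTP GET module, the AI module, and the HTTP POST module.

```python
def run_with_memory(user_id, query, search, generate, store):
    """Search-before, store-after: the loop a memory-enabled scenario runs.
    search/generate/store are injected callables (hypothetical, not a real
    retainr client), which is why the pattern stays model-agnostic."""
    namespace = f"customer:{user_id}"
    memories = search(namespace, query)             # step 2: HTTP GET
    context = "\n".join(m["content"] for m in memories)
    reply = generate(query, context)                # step 3: AI module
    store(namespace, f"{query}: {reply[:300]}")     # step 4: HTTP POST
    return reply
```

Swapping `generate` for a different model changes nothing in the memory steps, which is the point of keeping the memory layer separate.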

Step-by-Step Setup

Step 1: Get your retainr API key

Create a free account at retainr.dev/dashboard. Copy your API key. The free plan includes 1,000 memory operations per month.

Step 2: Add the HTTP GET module

After your trigger module, add an HTTP → Make a request module:

  • URL: https://api.retainr.dev/v1/memories/search
  • Method: GET
  • Query String:
    • Name: namespace / Value: customer:{{triggerEmail}} (map the identifier from your trigger)
    • Name: q / Value: {{currentQuery}} (the current subject or question)
    • Name: limit / Value: 5
  • Headers: Add Authorization with value Bearer YOUR_API_KEY

Map {{triggerEmail}} from your trigger — this scopes memory per customer. Use any stable unique identifier: email address, user ID, contact ID.
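For reference, the request the HTTP GET module assembles looks like this. A sketch using Python's standard library, with the endpoint and parameter names taken from the module config above:

```python
from urllib.parse import urlencode

def build_search_url(namespace, query, limit=5):
    """Assemble the search URL the HTTP GET module sends: namespace scopes
    memory per customer, q is the current question, limit caps results."""
    params = urlencode({"namespace": namespace, "q": query, "limit": limit})
    return f"https://api.retainr.dev/v1/memories/search?{params}"
```

Note that `urlencode` percent-encodes the `:` and `@` in the namespace; Make.com's HTTP module does the same encoding for you.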

Step 3: Inject memory into the OpenAI module

In the OpenAI → Create a Completion module, build the system prompt dynamically:

You are a helpful assistant.

Customer memory:
{{join(map(1.data.results; "content"); "\n")}}

Respond based on this context. If no context exists, respond normally.

The map() function extracts the content field from each result. The join() function combines them into a single text block. Replace 1 with the actual module number of your HTTP GET module.

If you use the Chat endpoint instead of Completions, put the memory block in the System message field.
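The prompt assembly is easier to reason about as plain code. A Python equivalent of the `map()`/`join()` formula above, assuming each search result is an object with a `content` field as described:

```python
def build_system_prompt(results):
    """Equivalent of {{join(map(1.data.results; "content"); "\n")}}:
    extract each result's content field, join with newlines, and wrap
    in the system prompt template from this guide."""
    memory = "\n".join(r["content"] for r in results)
    return (
        "You are a helpful assistant.\n\n"
        f"Customer memory:\n{memory}\n\n"
        "Respond based on this context. If no context exists, respond normally."
    )
```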

Step 4: Add the HTTP POST module

After the OpenAI module, add another HTTP → Make a request module:

  • URL: https://api.retainr.dev/v1/memories
  • Method: POST
  • Headers: Authorization: Bearer YOUR_API_KEY, Content-Type: application/json
  • Body type: Raw
  • Content type: JSON
  • Request content:
{
  "namespace": "customer:{{triggerEmail}}",
  "content": "{{topic}}: {{substring(openAIResponse; 0; 300)}}"
}

Store a concise summary — the topic and the first 300 characters of the AI response. Avoid storing the full response; summaries retrieve better semantically.
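The store request can be sketched the same way. A Python rendering of the POST body above, with the 300-character truncation mirroring `substring(openAIResponse; 0; 300)`:

```python
import json

def build_store_body(email, topic, ai_response, max_chars=300):
    """Build the HTTP POST request content: the customer namespace plus a
    concise summary (topic + first 300 chars of the AI response)."""
    return json.dumps({
        "namespace": f"customer:{email}",
        "content": f"{topic}: {ai_response[:max_chars]}",
    })
```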

Namespace Design

The namespace field determines whose memory you read and write. Choose a pattern and use it consistently in both HTTP modules:

  • Per customer email: customer:{email}
  • Per HubSpot contact: hubspot:{contact_id}
  • Per form respondent: form:{email}
  • Per Airtable record: airtable:{record_id}
  • Per Gmail sender: email:{from_email}

Common Make.com Scenarios with Memory

Customer support automation

  • Trigger: Gmail → Watch Emails (new support email)
  • Memory scope: support:email:{from_email}
  • AI task: draft reply that references past ticket history
  • Store: issue category + resolution summary

Lead nurture

  • Trigger: HubSpot → Watch Contacts (contact stage change)
  • Memory scope: hubspot:{contact_id}
  • AI task: personalize follow-up based on past interactions
  • Store: interaction summary + objections raised

Knowledge base Q&A

  • Trigger: Typeform → Watch Responses (new form submission)
  • Memory scope: user:{respondent_email}
  • AI task: answer question, referencing what this user has asked before
  • Store: question + AI answer summary

Content generation

  • Trigger: Airtable → Watch Records (record update)
  • Memory scope: airtable:{record_id}
  • AI task: generate content that builds on past drafts and feedback
  • Store: draft summary + feedback notes

Handling the Map/Join Pattern

The HTTP GET module returns a JSON response with a data.results array. Each result has a content field. To convert this to a string for the OpenAI prompt:

{{join(map(httpModuleNumber.data.results; "content"); "\n---\n")}}

If the array is empty (new customer), join() returns an empty string — no error. Your system prompt should handle empty context gracefully.
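The empty-array behavior is the same in plain code: joining over an empty list yields an empty string, not an error. A one-function illustration with the `---` separator used above:

```python
def join_contents(results):
    """join(map(results; "content"); "\n---\n") in Make.com terms.
    An empty results array (new customer) produces an empty string."""
    return "\n---\n".join(r["content"] for r in results)
```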

If map() is not available in your Make.com plan, use the Tools → Set Variable module to process the array, or use a single concatenated string from the first result: {{httpModuleNumber.data.results[].content}}.


Frequently Asked Questions

Does this work with Claude or Gemini modules in Make.com? Yes. The HTTP GET and POST modules are the same regardless of which AI module you use. Swap OpenAI for Anthropic, Google, or any other AI module — the retainr memory steps are identical.

What Make.com plan do I need? Any paid Make.com plan. The scenario needs at least 3 modules: trigger, HTTP GET, HTTP POST (4 with the AI module). Make.com Free only supports 2 modules per scenario.

Can I use retainr across multiple Make.com scenarios? Yes. retainr memory is keyed by namespace, not by scenario. A customer's memory stored in a support scenario is accessible in a sales scenario if you use the same namespace. Use different prefixes to keep pools separate: support:email:{addr} vs sales:email:{addr}.

How do I handle the first run for a new customer? The HTTP GET returns an empty results array for new customers. Make.com handles empty arrays gracefully — join(map(results; "content"); "\n") returns an empty string. Your system prompt should include a fallback: "If no context exists, respond as if meeting this customer for the first time."

What is the memory search latency in Make.com? Under 50ms for 100k memories. The HTTP GET module adds negligible time to your scenario — the OpenAI module call is always the bottleneck.
