Customer Service Chat with Persistent Context
Make.com scenario that gives an AI chat widget full access to the user's support history. Every new message is answered with complete prior context retrieved from retainr.
Usage estimate
Each incoming chat message triggers 1 memory search (top 5 past interactions) and 1 memory write (current exchange). A 10-message support conversation uses ~20 ops.
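The estimate is simple arithmetic — one search plus one write per incoming message. A throwaway check:

```python
# Each incoming chat message triggers 1 memory search + 1 memory write.
OPS_PER_MESSAGE = 2

def estimate_ops(messages: int) -> int:
    """Approximate retainr memory ops used by a support conversation."""
    return messages * OPS_PER_MESSAGE

print(estimate_ops(10))  # a 10-message conversation -> 20 ops
```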
When customers open a chat widget and ask a question, they shouldn't have to re-explain their situation. This Make.com scenario intercepts each incoming chat message, retrieves the customer's full support history from retainr, and feeds it to the AI before generating a reply.
The scenario runs on a custom webhook so it integrates with any chat platform — Crisp, Intercom, Tawk.to, or your own front-end — by simply pointing the outgoing webhook at Make.com.
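Whatever the platform, the outgoing webhook just POSTs a JSON body to the Make.com webhook URL. As a sketch, the payload might look like the one below — every field name here is an illustrative assumption, not a fixed schema; map your platform's actual fields inside the scenario:

```python
import json

# Hypothetical payload a chat platform's outgoing webhook might POST to
# the Make.com webhook URL. Field names are assumptions -- Crisp,
# Intercom, and Tawk.to each use their own.
incoming_message = {
    "customer_email": "jane@example.com",   # key used to look up memory
    "customer_id": "cus_1234",
    "message": "My last invoice charged me twice",
    "conversation_id": "conv_789",
}

body = json.dumps(incoming_message)
print(body)
```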
What's included:
- Trigger: custom webhook receiving chat messages
- retainr HTTP module: semantic search over customer support history by email or customer ID
- OpenAI module: generate a context-aware reply with full history in the system prompt
- HTTP module: POST the reply back to the originating chat platform
- retainr HTTP module: append the current exchange to the customer's memory
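The two retainr HTTP modules reduce to a search call before the reply and a write call after it. A minimal sketch of the request shapes, assuming a hypothetical REST layout — the endpoint paths and field names below are illustrative, not confirmed retainr routes; copy the real ones from the blueprint:

```python
RETAINR_BASE = "https://api.retainr.example"  # illustrative base URL

def search_request(api_key: str, customer_email: str, query: str) -> dict:
    """First retainr HTTP module: top-5 semantic search over this
    customer's support history. Path and fields are assumptions."""
    return {
        "url": f"{RETAINR_BASE}/v1/memories/search",
        "headers": {"Authorization": f"Bearer {api_key}"},
        "json": {"user": customer_email, "query": query, "limit": 5},
    }

def write_request(api_key: str, customer_email: str,
                  question: str, reply: str) -> dict:
    """Last retainr HTTP module: append the current exchange to memory."""
    return {
        "url": f"{RETAINR_BASE}/v1/memories",
        "headers": {"Authorization": f"Bearer {api_key}"},
        "json": {
            "user": customer_email,
            "content": f"Customer: {question}\nAgent: {reply}",
        },
    }

req = search_request("rk_test", "jane@example.com", "double invoice charge")
print(req["json"]["limit"])  # top 5 past interactions
```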
Setup steps
1. Import the scenario blueprint into Make.com
2. Add your retainr API key to the Authorization header of both retainr HTTP modules
3. Point your chat platform's outgoing webhook at the Make.com webhook URL
4. Add your OpenAI API key to the OpenAI module
5. Customise the system prompt in the OpenAI module to match your brand voice
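When customising the system prompt, the retrieved history is typically concatenated ahead of your brand-voice instructions so the model sees both. A sketch, with the prompt wording purely illustrative:

```python
def build_system_prompt(brand_voice: str, history: list[str]) -> str:
    """Merge retrieved support history into the OpenAI system prompt.
    The wording is illustrative -- edit it in the OpenAI module."""
    context = "\n".join(f"- {item}" for item in history) or "- (no prior history)"
    return (
        f"{brand_voice}\n\n"
        "Relevant past support interactions for this customer:\n"
        f"{context}\n\n"
        "Use this history so the customer never has to repeat themselves."
    )

prompt = build_system_prompt(
    "You are Acme's friendly support agent.",
    ["2024-03-02: refunded duplicate invoice #1182"],
)
print(prompt.splitlines()[0])
```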
Free download
Get a free API key first — 1,000 memory ops/month included. No credit card required.