Telegram bots in n8n are straightforward to build — but they forget everything between conversations. A user who told your bot their preferences last Tuesday gets the same generic response today as if they had never spoken before.
This guide shows you how to add long-term memory to an n8n Telegram AI agent. When a user messages your bot, it searches their entire conversation history, generates a context-aware response, and stores the exchange for future recall.
What You Will Build
A Telegram AI agent that:
- Responds to any message in a Telegram chat or group
- Retrieves the user's full conversation history via semantic search
- Generates a personalized, context-aware response
- Stores every exchange for long-term recall
- Works across sessions and bot restarts
Prerequisites
- n8n (self-hosted or Cloud)
- Telegram bot token (create one via @BotFather in Telegram — free and instant)
- retainr account (free at retainr.dev/dashboard)
- OpenAI or Anthropic API key
Step 1: Create the Telegram Bot
In Telegram, open a chat with @BotFather and run:
/newbot
Follow the prompts. BotFather gives you a token like 7123456789:AAFxyz.... Copy it — you will add this to n8n as a Telegram credential.
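If you pass the token through environment variables or scripts, a quick shape check catches copy-paste mistakes before n8n ever calls Telegram. This is a loose pattern sketched in Python; Telegram does not publish an exact token format, so the regex is an assumption, not an official spec:

```python
import re

# Loose sanity check for a BotFather token: numeric bot id, a colon,
# then the secret. The secret's exact length and alphabet are not
# documented by Telegram, so this pattern is deliberately permissive.
TOKEN_RE = re.compile(r"^\d+:[A-Za-z0-9_-]+$")

def looks_like_bot_token(token: str) -> bool:
    """Return True if the string has the rough shape of a bot token."""
    return TOKEN_RE.fullmatch(token) is not None

print(looks_like_bot_token("7123456789:AAFxyzExampleSecret"))  # True
print(looks_like_bot_token("missing-colon"))                   # False
```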
Step 2: Install the retainr Community Node
Settings → Community Nodes → Install → n8n-nodes-retainr
Step 3: Configure the Telegram Trigger
Add a Telegram Trigger node:
- Credential: your Telegram token
- Updates: Message (includes all messages sent to the bot)
The trigger fires on every incoming message. Key fields from the payload:
{
  "message": {
    "from": {
      "id": 123456789,
      "username": "alice",
      "first_name": "Alice"
    },
    "text": "Can you remind me what project we were working on?",
    "chat": {
      "id": 123456789
    }
  }
}

Use message.from.id as your memory namespace — it is stable and unique per Telegram user.
Step 4: Search Long-Term Memory
Add a retainr: Search Memory node:
- Namespace:
  =tg:{{ $json.message.from.id }}
- Query:
  ={{ $json.message.text }}
- Limit: 5
The tg: prefix scopes Telegram users separately from any other channels you have connected to the same retainr account. Useful if you also have WhatsApp or email flows writing to the same workspace.
For group bots, use tg:{{ $json.message.from.id }}:{{ $json.message.chat.id }} as the namespace. This scopes memory per user per group, so the same user in two different groups gets separate context.
Step 5: Generate the Response
Add an OpenAI node:
- Resource: Chat
- Model: gpt-4o
- System prompt:
You are a helpful assistant. The user is messaging you via Telegram.
Their name is {{ $node["Telegram Trigger"].json.message.from.first_name }}.
Previous context from past conversations:
{{ $node["retainr_search"].json.results?.length > 0
? $node["retainr_search"].json.results.map(r => r.content).join("\n---\n")
: "No prior history." }}
Use this context to give personalized, helpful responses.
- User message:
={{ $node["Telegram Trigger"].json.message.text }}
Including the user's first name in the system prompt makes the AI feel genuinely personal, not generic.
Step 6: Store the Exchange
Add a retainr: Store Memory node:
- Namespace:
  =tg:{{ $node["Telegram Trigger"].json.message.from.id }}
- Content:
  {{ $node["Telegram Trigger"].json.message.from.first_name }}: {{ $node["Telegram Trigger"].json.message.text }}
  Assistant: {{ $node["OpenAI"].json.choices[0].message.content }}
- Tags: telegram, chat (optional, for filtering later)
- TTL: 180 (days — 6 months is usually right for personal assistants)
Step 7: Send the Reply
Add a Telegram node:
- Operation: Send Message
- Chat ID:
  ={{ $node["Telegram Trigger"].json.message.chat.id }}
- Text:
  ={{ $node["OpenAI"].json.choices[0].message.content }}
For long responses, set Parse Mode to Markdown — the AI naturally produces formatted text and Telegram renders it well. Be aware that unbalanced Markdown entities (for example a stray asterisk) cause the Telegram API to reject the message, so consider retrying without a parse mode on failure.
The Complete Workflow
Telegram Trigger
→ retainr: Search Memory
→ OpenAI: Chat
→ retainr: Store Memory
→ Telegram: Send Message
Advanced: Handling Commands
Add an IF node after the trigger to route commands:
IF $json.message.text starts with "/"
→ YES: Handle command (/start, /forget, /history)
→ NO: Normal memory-based AI response
Useful commands to implement:
- /forget — delete the user's memory namespace (DELETE /v1/memories?namespace=tg:{id})
- /history — list the last 5 stored memories for debugging
- /start — welcome message for new users
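The IF-node branching can be sketched as a plain function. The names here are illustrative; the @BotName suffix handling matters in groups, where Telegram clients send commands as /forget@YourBot:

```python
def route(text: str) -> str:
    """Return which branch of the workflow a message should take."""
    if not text.startswith("/"):
        return "ai_response"          # normal memory-based AI response
    # First word is the command; strip a trailing "@BotName" (group syntax)
    command = text.split()[0].split("@")[0]
    if command in ("/start", "/forget", "/history"):
        return command.lstrip("/")
    return "unknown_command"

print(route("hello there"))           # ai_response
print(route("/forget@MyMemoryBot"))   # forget
```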
Memory Across Bot Restarts
One key advantage of retainr over n8n's built-in Static Data: memory persists across bot restarts, n8n upgrades, and server migrations. The data lives in retainr's managed database, not in your n8n instance's local state.
Restart n8n, redeploy your workflow, switch from self-hosted to n8n Cloud — the Telegram users' memory is always there when they message again.
Performance Considerations
For bots with more than 500 active daily users:
- Keep the search limit at 3-5 results — enough context, minimal latency
- Use TTL to expire old memories automatically (90-180 days recommended)
- Consider storing summaries rather than raw exchanges for very active users: after every 10 turns, summarize the last 10 into 1 "session summary" memory
The semantic search latency is typically 20-50ms for most namespaces. Your Telegram bot response time is dominated by the OpenAI call (1-3 seconds), not the memory retrieval.
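The session-summary strategy above can be sketched as follows, assuming a hypothetical summarize() callback (one extra LLM call) and a per-user turn list that you would track in the workflow, for example via workflow data or a retainr tag:

```python
SUMMARY_EVERY = 10  # matches the "every 10 turns" rule of thumb above

def maybe_summarize(turns: list[str], summarize) -> list[str]:
    """Collapse the last 10 raw turns into one session-summary memory.

    `summarize` is a placeholder for an LLM call that condenses a list
    of exchanges into a short paragraph.
    """
    if len(turns) < SUMMARY_EVERY:
        return turns  # not enough turns yet; keep storing raw exchanges
    summary = summarize(turns[-SUMMARY_EVERY:])
    # Replace the raw turns with a single compact memory
    return turns[:-SUMMARY_EVERY] + [f"Session summary: {summary}"]

turns = [f"turn {i}" for i in range(10)]
out = maybe_summarize(turns, lambda t: f"{len(t)} turns condensed")
print(out)  # ['Session summary: 10 turns condensed']
```

Storing one summary instead of ten raw exchanges cuts both storage and the noise in future search results for very active users.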
Frequently Asked Questions
Can I use this with Anthropic Claude instead of OpenAI? Yes. Replace the OpenAI node with the HTTP Request node calling the Anthropic API, or use n8n's Anthropic node if your n8n version includes it. The retainr nodes are model-agnostic.
Does this work in Telegram groups? Yes. Use the full namespace pattern tg:{user_id}:{chat_id} to scope memory per user per group, or tg:group:{chat_id} if you want shared group memory.
What happens if retainr returns no results? The search node returns an empty results array. The system prompt shows "No prior history." The bot responds normally without context. New users and inactive users are handled gracefully.
How many Telegram users can I serve on the free plan? The free plan includes 1,000 memory operations per month. Each conversation turn uses 2 operations (1 search + 1 store). That is 500 conversation turns per month across all users — enough for a personal bot or small community. Paid plans start at 20,000 ops/month.
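The free-plan math, for reference, assuming the quoted figures of 1,000 operations per month and 2 operations per turn:

```python
FREE_OPS_PER_MONTH = 1_000
OPS_PER_TURN = 2  # 1 search + 1 store per conversation turn

def monthly_turn_budget(ops_per_month: int = FREE_OPS_PER_MONTH) -> int:
    """Total conversation turns the plan supports per month."""
    return ops_per_month // OPS_PER_TURN

def turns_per_user_per_day(users: int, days: int = 30) -> float:
    """Daily turns each user gets if the budget is split evenly."""
    return monthly_turn_budget() / (users * days)

print(monthly_turn_budget())        # 500
print(turns_per_user_per_day(10))   # ~1.7 turns per user per day
```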
Add memory to other n8n workflows
Building with a different app? These guides cover the same pattern:
- n8n WhatsApp AI Memory — Persistent memory for WhatsApp bots
- n8n Slack AI Memory — Persistent memory for Slack bots
- n8n Discord AI Memory — Persistent memory for Discord bots
- n8n Gmail AI Memory — Per-sender memory for Gmail AI workflows
- n8n HubSpot AI Memory — CRM-aware AI memory for HubSpot
- n8n Zendesk AI Memory — AI memory for support ticket workflows
- n8n Airtable AI Memory — Persistent memory for Airtable workflows
Give your AI agents a real memory
Free plan includes 1,000 memory operations/month. No credit card required.
Build your Telegram memory bot — free API key →