If your chatbot forgets everything after each message, it doesn’t seem very smart. What makes a chatbot truly intelligent is memory: the ability to remember past conversations and use that information in future replies. In this article, I’ll show you how to add memory to your chatbot with LangChain.
Why Chatbot Memory Matters
Without memory, the bot treats every message as a new conversation. It forgets user preferences, past questions, and context.
With memory, it recalls past interactions. It feels human-like and context-aware.
LangChain makes this process very simple if you use the right methods. Let’s get started.
Step 1: Install Everything You Need
You’ll need just a few libraries:
pip install langchain langchain-huggingface transformers sentence-transformers
We’ll use a local model from Hugging Face, so you won’t have to pay for any API calls.
Step 2: Import the Required Classes
Start with the key LangChain modules:
from langchain.memory import ConversationBufferMemory
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain_huggingface import HuggingFacePipeline
Step 3: Load a Local Model
We’ll use FLAN-T5-small, a free and lightweight model for text generation:
from transformers import pipeline
# Load the model locally
generator = pipeline("text2text-generation", model="google/flan-t5-small")
# Wrap it with LangChain
llm = HuggingFacePipeline(pipeline=generator)
This model will run on your computer’s CPU, or on a GPU if you have one available.
Step 4: Add Memory to Your Chatbot
LangChain gives you several types of memory:
- ConversationBufferMemory – stores everything, like a full chat log.
- ConversationBufferWindowMemory – stores only the last k interactions.
- ConversationSummaryMemory – summarizes the chat to save space.
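To see what the “last k interactions” behaviour of ConversationBufferWindowMemory means in practice, here is a minimal plain-Python sketch of a sliding window over chat turns. It needs no LangChain; the WindowMemory class and its methods are illustrative stand-ins, not LangChain’s actual API:

```python
from collections import deque

class WindowMemory:
    """Toy sliding-window memory: keeps only the last k exchanges."""

    def __init__(self, k=2):
        self.turns = deque(maxlen=k)  # older turns fall off automatically

    def add(self, human, ai):
        self.turns.append((human, ai))

    def history(self):
        # Render the kept turns in the same format our prompt expects
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)

memory = WindowMemory(k=2)
memory.add("My name is Aman.", "Hi, Aman.")
memory.add("I like Python.", "Noted!")
memory.add("What is my name?", "Aman.")  # the first exchange is now dropped
print(memory.history())
```

With k=2, the earliest exchange is discarded once a third one arrives, which is exactly why a small window can “forget” facts mentioned early in the chat.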
Let’s start simple:
# Create memory
memory = ConversationBufferMemory()
# Define a prompt that includes the chat history
prompt = PromptTemplate(
    input_variables=["history", "input"],
    template="This is a conversation between a human and an AI assistant.\n\n{history}\nHuman: {input}\nAI:"
)
# Create the LLMChain with memory
chain = LLMChain(llm=llm, prompt=prompt, memory=memory)Step 5: Chat with Context
Now test it interactively:
while True:
    user_input = input("You: ")
    if user_input.lower() in ["exit", "quit"]:
        break
    response = chain.run(user_input)
    print("AI:", response)
Here’s a sample run:
You: My name is Aman.
AI: Hi, Aman.
You: What’s my name?
AI: Aman.
You: exit
The bot will remember your name due to the memory buffer.
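Under the hood, ConversationBufferMemory simply keeps the full transcript and substitutes it into the {history} slot of the prompt before each call. Here is a hedged, plain-Python simulation of that mechanism (the names build_prompt, record, and history_lines are illustrative, not LangChain internals):

```python
TEMPLATE = ("This is a conversation between a human and an AI assistant.\n\n"
            "{history}\nHuman: {input}\nAI:")

history_lines = []  # grows with every exchange, like ConversationBufferMemory

def record(user_input, ai_reply):
    """Append one finished exchange to the running history."""
    history_lines.append(f"Human: {user_input}")
    history_lines.append(f"AI: {ai_reply}")

def build_prompt(user_input):
    """Inject the accumulated history into the template, as the chain does."""
    return TEMPLATE.format(history="\n".join(history_lines), input=user_input)

record("My name is Aman.", "Hi, Aman.")
print(build_prompt("What's my name?"))
```

The printed prompt contains the earlier “My name is Aman.” line, which is precisely how the model is able to answer “Aman.” on the next turn.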
A Tip from Experience
If your chatbot starts forgetting context after 4–5 messages, switch to:
from langchain.memory import ConversationSummaryMemory
memory = ConversationSummaryMemory(llm=llm)
This keeps only the essence of the conversation, saving compute while retaining context, which makes it ideal for long chats.
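The key property of summary memory is that the stored context stays bounded no matter how long the chat runs. The real ConversationSummaryMemory asks the LLM itself to compress the history; in this toy sketch, a stub summarize function that simply truncates stands in for the LLM, purely to illustrate the bounded-size idea:

```python
def summarize(summary, new_exchange):
    """Stand-in for the LLM summarizer: keep at most the last 200 characters.
    The real ConversationSummaryMemory prompts the LLM to compress the history."""
    combined = (summary + " " + new_exchange).strip()
    return combined[-200:]

summary = ""
for i in range(50):
    summary = summarize(summary, f"Human: message {i}. AI: reply {i}.")

print(len(summary))  # stays bounded no matter how many turns occurred
```

A full chat log would grow linearly with the number of turns; the summary does not, which is the compute saving the tip above refers to.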
Summary
So, adding memory transforms a basic chatbot into an intelligent conversational agent.
Now that you’ve mastered memory, the next step is to make your chatbot context-aware using RAG (Retrieval-Augmented Generation), so it can recall facts from documents too. You can learn about RAG from my guided project here.
I hope you liked this article on how to add memory to your chatbot with LangChain. Feel free to ask valuable questions in the comments section below. You can follow me on Instagram for many more resources.