AI Agent with Multi-Session Memory


Intro

In Computer Science, just as in human cognition, there are different levels of memory:

  • Primary Memory (like RAM) is the active, temporary memory used for reasoning and decision-making on current tasks. It holds the data you are currently working with. It’s fast but volatile, meaning it loses its data when the power is off.
  • Secondary Memory (like physical storage) refers to the long-term storage of learned knowledge that doesn’t need to be immediately active in working memory. It isn’t always accessed during real-time decision-making but can be retrieved when needed. Consequently, it’s slower but more persistent.
  • Tertiary Memory (like backups of historical data) refers to archival memory, where information is stored for backup purposes and disaster recovery. It’s characterized by high capacity and low cost, but slower access time. Consequently, it’s rarely used.

AI Agents can leverage all of these types of memory. First, they use Primary Memory to handle your current query. Then, they may access Secondary Memory to bring in knowledge from recent conversations. And, if needed, they can even retrieve older information from Tertiary Memory.

In this tutorial, I’m going to show how to build an AI Agent with memory across multiple sessions. I’ll present some useful Python code that can be easily applied to other similar cases (just copy, paste, run) and walk through every line of code with comments, so that you can replicate this example (link to the full code at the end of the article).

Setup

Let’s start by setting up Ollama (pip install ollama==0.5.1), a library that allows users to run open-source LLMs locally, without needing cloud-based services, giving more control over data privacy and performance. Since it runs locally, any conversation data never leaves your machine.
First of all, you need to download Ollama from the website. 

Then, on the prompt shell of your laptop, use the command ollama pull qwen2.5 to download the chosen LLM. I’m going with Alibaba’s Qwen 2.5, because it’s both smart and lightweight.

After the download is completed, you can move on to Python and start writing code.

import ollama
llm = "qwen2.5"

Let’s test the LLM:

stream = ollama.generate(model=llm, prompt='''what time is it?''', stream=True)
for chunk in stream:
    print(chunk['response'], end='', flush=True)

Database

An Agent with multi-session memory is an Artificial Intelligence system that can remember information from one interaction to the next, even when those interactions happen at different times or over separate sessions. For example, a personal assistant AI that remembers your daily schedule and preferences, or a customer service Bot that knows your issue history without needing you to re-explain it every time.

Basically, the Agent must access the chat history. Depending on how old the past conversations are, this can be classified as Secondary or Tertiary Memory.

Let’s get to work. We can store conversation data in a vector database, which is the best solution for efficiently storing, indexing, and searching unstructured data. Currently, the most used vector db is Microsoft’s offering, while the best open-source one is ChromaDB, which is useful, simple, and free.

After a quick pip install chromadb==0.5.23, you can interact with the db using Python in three different ways:

  • chromadb.Client() to create a db that stays temporarily in memory without occupying physical space on disk.
  • chromadb.PersistentClient(path) to save and load the db from your local machine (see the sketch after this list).
  • chromadb.HttpClient(host='localhost', port=8000) to have a client-server mode in your browser.
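
For instance, here is a minimal sketch of the persistent mode, assuming a hypothetical local folder "chroma_db":

import chromadb

## persist the db to a specific local folder (the path is just an example)
db = chromadb.PersistentClient(path="chroma_db")

## the same path can be passed again later to reload the stored collections
print(db.list_collections())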

When storing documents in ChromaDB, data are saved as vectors, so that one can search with a query vector to retrieve the closest matching records. Please note that, if not specified otherwise, the default embedding function is a sentence transformer model (all-MiniLM-L6-v2).
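
If you want to use a different model, you can build a custom embedding function and pass it as embedding_function= when creating the collection. A minimal sketch (the model name below is only an example, and it requires the sentence-transformers package):

from chromadb.utils import embedding_functions

## hypothetical example: a custom sentence-transformers embedding function
emb_fn = embedding_functions.SentenceTransformerEmbeddingFunction(model_name="all-mpnet-base-v2")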

import chromadb

## connect with db
db = chromadb.PersistentClient()

## check existing collections
db.list_collections()

## select a set
collection_name = "chat_history"
collection = db.get_or_create_collection(name=collection_name, 
    embedding_function=chromadb.utils.embedding_functions.DefaultEmbeddingFunction())
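
As a quick sanity check (purely optional), you can inspect what the collection currently contains:

## number of stored chats
print(collection.count())

## preview of the first stored records
print(collection.peek())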

To store your data, first you need to extract the chat and save it as one text document. In Ollama, there are 3 roles in the interaction with an LLM:

  • system — used to pass core instructions to the model on how the conversation should proceed (i.e. the main prompt)
  • user — used for the user’s questions, and also for memory reinforcement (i.e. “remember that the answer must have a specific format”)
  • assistant — the reply from the model (i.e. the final answer)
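
For reference, here is a minimal sketch of how a full exchange looks with these roles (the content strings are just placeholders):

messages = [
    {"role":"system", "content":"You are a helpful assistant."},
    {"role":"user", "content":"Remember that I prefer short answers."},
    {"role":"assistant", "content":"Noted, I will keep my answers short."}
]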

Make sure that each document has a unique id, which you can generate manually or let the db auto-generate. One important thing to mention is that you can add additional information as metadata (i.e., titles, tags, links). It’s optional but very useful, as metadata enrichment can significantly enhance document retrieval. For instance, here I’m going to use the LLM to summarize each document into a few keywords.

from datetime import datetime

def save_chat(lst_msg, collection):
    print("--- Saving Chat ---")
    ## extract chat
    chat = ""
    for m in lst_msg:
        chat += f'{m["role"]}: <<{m["content"]}>>' + '\n\n'
    ## get idx
    idx = str(collection.count() +1)
    ## generate info
    p = "Describe the next conversation using only 3 keywords separated by a comma (for instance: 'finance, volatility, stocks')."
    tags = ollama.generate(model=llm, prompt=p+"n"+chat)["response"]
    dic_info = {"tags":tags,
                "date": datetime.today().strftime("%Y-%m-%d"),
                "time": datetime.today().strftime("%H:%M")}
    ## write db
    collection.add(documents=[chat], ids=[idx], metadatas=[dic_info])
    print(f"--- Chat num {idx} saved ---","n")
    print(dic_info,"n")
    print(chat)
    print("------------------------")

We need to start and save a chat to see it in action.

Run basic Agent

To start, I shall run a very basic LLM chat (no Tools needed) to save the first conversation in the database. During the interaction, I’m going to mention some important information, not included in the LLM knowledge base, that I want the Agent to remember in the next session.

prompt = "You're an intelligent assistant, provide one of the best possible answer to user's request."
messages = [{"role":"system", "content":prompt}]

while True:    
    ## User
    q = input('🙂 >')
    if q == "quit":
        ### save chat before quitting
        save_chat(lst_msg=messages, collection=collection)
        break
    messages.append( {"role":"user", "content":q} )
   
    ## Model
    agent_res = ollama.chat(model=llm, messages=messages, tools=[])
    res = agent_res["message"]["content"]
   
    ## Response
    print("👽 >", f"x1b[1;30m{res}x1b[0m")
    messages.append( {"role":"assistant", "content":res} )

At the end, the conversation was saved with enriched metadata.
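
If you want to double-check what was stored, you can fetch the record back from the collection. A quick sketch, assuming this was the first chat saved (id "1"):

## retrieve the saved chat by id and inspect metadata and document
stored = collection.get(ids=["1"])
print(stored["metadatas"])
print(stored["documents"])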

Tools

I want the Agent to be able to retrieve information from previous conversations. Therefore, I need to provide it with a Tool to do so. To put it another way, the Agent must do Retrieval-Augmented Generation (RAG) from the history. It’s a technique that combines retrieval and generative models by adding to the LLM’s knowledge facts fetched from external sources (in this case, the chat history stored in ChromaDB).

def retrieve_chat(query:str) -> str:
    res_db = collection.query(query_texts=[query])["documents"][0][0:10]
    history = ' '.join(res_db).replace("\n", " ")
    return history
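
Before wiring it up as a Tool, you can call the function directly to see what the Agent will receive (the query string is just a hypothetical example):

## direct call, outside of the Agent loop
print(retrieve_chat(query="user's daily schedule"))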

tool_retrieve_chat = {'type':'function', 'function':{
  'name': 'retrieve_chat',
  'description': 'When your knowledge is NOT enough to answer the user, you can use this tool to retrieve chat history.',
  'parameters': {'type': 'object', 
                 'required': ['query'],
                 'properties': {
                    'query': {'type':'str', 'description':'Input the user query or the topic of the current chat'},
}}}}

After fetching data, the AI must process all the information and give the final answer to the user. Sometimes, it can be more effective to treat the “final answer” as a Tool. For example, if the Agent does multiple actions to generate intermediate results, the final answer can be considered the Tool that integrates all of this information into a cohesive response. By designing it this way, you have more customization and control over the results.

def final_answer(text:str) -> str:
    return text

tool_final_answer = {'type':'function', 'function':{
  'name': 'final_answer',
  'description': 'Returns a natural language response to the user',
  'parameters': {'type': 'object', 
                 'required': ['text'],
                 'properties': {'text': {'type':'str', 'description':'natural language response'}}
}}}

We’re finally ready to test the Agent and its memory.

dic_tools = {'retrieve_chat':retrieve_chat, 
             'final_answer':final_answer}

Run Agent with memory

I shall add a couple of utility functions for Tool usage and for running the Agent.

def use_tool(agent_res:dict, dic_tools:dict) -> dict:
    ## use tool
    if agent_res["message"].tool_calls shouldn't be None:
        for tool in agent_res["message"].tool_calls:
            t_name, t_inputs = tool["function"]["name"], tool["function"]["arguments"]
            if f := dic_tools.get(t_name):
                ### calling tool
                print('🔧 >', f"\x1b[1;31m{t_name} -> Inputs: {t_inputs}\x1b[0m")
                ### tool output
                t_output = f(**tool["function"]["arguments"])
                print(t_output)
                ### final res
                res = t_output
            else:
                print('🤬 >', f"\x1b[1;31m{t_name} -> NotFound\x1b[0m")
    ## don't use tool
    else:
        res = agent_res["message"].content
        t_name, t_inputs = '', ''
    return {'res':res, 'tool_used':t_name, 'inputs_used':t_inputs}

When the Agent is trying to solve a task, I want to keep track of the Tools that have been used and the results it gets. The model should try each Tool only once, and the iteration shall stop only when the Agent is ready to give the final answer.

def run_agent(llm, messages, available_tools):
    ## use tools until final answer
    tool_used, local_memory = '', ''
    while tool_used != 'final_answer':
        ### use tool
        try:
            agent_res = ollama.chat(model=llm, messages=messages, tools=[v for v in available_tools.values()])
            dic_res = use_tool(agent_res, dic_tools)
            res, tool_used, inputs_used = dic_res["res"], dic_res["tool_used"], dic_res["inputs_used"]
        ### error
        except Exception as e:
            print("⚠️ >", e)
            res = f"I attempted to make use of {tool_used} but didn't work. I'll try something else."
            print("👽 >", f"x1b[1;30m{res}x1b[0m")
            messages.append( {"role":"assistant", "content":res} )       
        ### update memory
        if tool_used not in ['','final_answer']:
            local_memory += f"\n{res}"
            messages.append( {"role":"user", "content":local_memory} )
            available_tools.pop(tool_used)
            if len(available_tools) == 1:
                messages.append( {"role":"user", "content":"now activate the tool final_answer."} ) 
        ### tools not used
        if tool_used == '':
            break
    return res

Let’s start a new interaction, and this time I want the Agent to activate all the Tools for retrieving and processing old information.

prompt = '''
You are an intelligent assistant, provide the best possible answer to the user's request. 
You must return a natural language response.
When interacting with a user, first you should use the tool 'retrieve_chat' to recall previous chat history.  
'''
messages = [{"role":"system", "content":prompt}]

while True:
    ## User
    q = input('🙂 >')
    if q == "quit":
        ### save chat before quitting
        save_chat(lst_msg=messages, collection=collection)
        break
    messages.append( {"role":"user", "content":q} )
   
    ## Model
    available_tools = {"retrieve_chat":tool_retrieve_chat, "final_answer":tool_final_answer}
    res = run_agent(llm, messages, available_tools)
   
    ## Response
    print("👽 >", f"x1b[1;30m{res}x1b[0m")
    messages.append( {"role":"assistant", "content":res} )

I gave the Agent a task not directly correlated to the topic of the last session. As expected, the Agent activated the Tool and looked into previous chats. Now, it should use the “final answer” Tool to process the information and reply to me.

Conclusion

This article has been a tutorial to demonstrate how to build AI Agents with Multi-Session Memory from scratch using only Ollama and ChromaDB. With these building blocks in place, you are already equipped to start developing your own Agents for different use cases.

Full code for this article: GitHub

I hope you enjoyed it! Feel free to contact me with questions and feedback, or just to share your interesting projects.

👉 Let’s Connect 👈

(All images, unless otherwise noted, are by the author)
