Title: How I Build an Agent with Long-Term, Personalized Memory
URL Source: https://readmedium.com/how-i-build-an-agent-with-long-term-personalized-memory-54b7f4272d5f
Markdown Content:
Artificial intelligence (AI) chatbots often struggle to remember things, both across different conversations and within a single conversation. However, an open-source framework might change this.
If you chat with a large language model like OpenAI’s ChatGPT for a long time, it starts to forget important details, especially once the conversation exceeds the model’s context window. When this happens, its performance quickly degrades.
For instance, medical AI assistants rely on information from past conversations to make accurate clinical diagnoses. If an LLM cannot retain long-term details, it may miss important disease symptoms and produce less accurate diagnoses.
To address LLMs’ lack of long-term memory and personalization, this article introduces Mem0. It is suited to AI applications that require long-term memory and context retention, such as chatbots and smart assistants.
In this story, I will give an easy-to-understand overview of Mem0: what makes it unique, how it differs from RAG, and how to build an actual application with it.
Before we start! 🦸🏻♀️
If you like this topic and you want to support me:
- Clap my article 50 times; that will really help me out.👏
- Follow me on Medium and subscribe to get my latest articles🫶
- Follow me on my YouTube channel
What is Mem0?
Mem0 is an open-source project that provides an innovative memory layer for large language models (LLMs). It enhances personalized AI experiences across applications by letting AI chatbots remember user preferences and the context of previous interactions, making conversations more relevant and engaging over time, and it makes sophisticated AI tools easier for developers and users around the world to adopt.
Mem0’s memory layer combines LLMs, such as GPT-4, with efficient data structures, such as vector databases, to store and retrieve information quickly.
Mem0 is a cutting-edge long-term memory solution for LLMs, enabling personalization across the entire GenAI stack: it allows models to recall past interactions and tailor their responses to each user.
What makes Mem0 unique
Put simply, Mem0 provides an intelligent, self-improving memory layer for large language models, enabling personalized AI experiences across applications. Key features include:
- Self-improving memory: The system learns from user interactions and improves over time.
- Cross-platform consistency: It provides a consistent experience across different AI platforms and apps.
- Centralized memory management: It lets you easily store and manage different types of memory (long-term, short-term, semantic, and episodic) for individual users, agents, and sessions through APIs.
RAG vs. Mem0
Mem0 is the next step in RAG’s evolution, with key differences:
- Entity relationships: Mem0 understands and connects entities in different interactions, unlike RAG which just retrieves information from static documents. This gives Mem0 a deeper understanding of context and relationships.
- Relevance and decay: Mem0 prioritizes recent interactions and gradually forgets outdated information, ensuring accurate and up-to-date responses.
- Contextual continuity: Mem0 retains information across multiple sessions, maintaining conversation continuity, crucial for long-term engagement, like virtual companions or personalized learning assistants.
- Adaptive learning: Mem0 improves its personalization based on user interactions and feedback, making memories more accurate and relevant over time.
- Dynamic updates: Mem0 can update its memories based on new information and interactions, unlike RAG, which relies on static data. This allows for real-time adjustments and improvements, enhancing the user experience.
These advanced memory features make Mem0 a powerful tool for developers to create personalized and context-aware AI applications.
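The relevance-and-decay idea above can be sketched with a toy scoring function: each memory’s effective relevance combines its semantic similarity to the query with an exponential recency decay, so fresh interactions outrank equally similar but stale ones. This is purely an illustration of the concept, not Mem0’s actual algorithm; the half-life parameter is an arbitrary choice.

```python
def score(similarity: float, age_days: float, half_life_days: float = 30.0) -> float:
    """Combine semantic similarity with an exponential recency decay.

    A memory's effective relevance halves every `half_life_days`,
    so recent interactions outrank equally similar but stale ones.
    """
    decay = 0.5 ** (age_days / half_life_days)
    return similarity * decay

# An old, highly similar memory vs. a fresh, slightly less similar one:
old = score(similarity=0.9, age_days=90)  # 0.9 * 0.125 = 0.1125
new = score(similarity=0.7, age_days=1)   # ~0.7 * 0.977 ≈ 0.684
assert new > old  # the fresh memory wins despite lower similarity
```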
How to initialize Mem0
Installing required packages
To use Mem0, first install the necessary Python packages: mem0ai and openai. These are required to take full advantage of Mem0’s functionality.

```bash
pip install mem0ai openai
```
The power of Mem0 lies not only in its simple API but also in its flexible configuration options. Let’s see how to perform advanced configuration:
```python
import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "your_api_key"
os.environ["OPENAI_BASE_URL"] = "your_base_url"

config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4o-mini",
            "temperature": 0.2,
            "max_tokens": 1500,
        }
    },
    "history_db_path": "history.db"
}

m = Memory.from_config(config)
```
This configuration code demonstrates several key features of Mem0:
- Flexible LLM integration: Mem0 integrates with different LLM providers; here we use OpenAI’s gpt-4o-mini.
- Custom model parameters: You can adjust model parameters such as `temperature` and `max_tokens` to your needs.
- Local history storage: By specifying `history_db_path`, Mem0 saves history records in a local database, ensuring data persistence and privacy.
Memory Storage and Retrieval
```python
# Store a memory with contextual metadata
result = m.add("I am working on improving my tennis skills. Suggest some online courses.",
               user_id="alice", metadata={"category": "hobbies"})

# Retrieve memories
all_memories = m.get_all()
memory_id = all_memories[0]["id"]
specific_memory = m.get(memory_id)

# Semantic search over a user's memories
related_memories = m.search(query="What are Alice's hobbies?", user_id="alice")
```
Mem0’s memory system can store simple text information and add contextual information through metadata, making retrieval more accurate and meaningful.
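To see why metadata helps retrieval, here is a minimal in-memory sketch (a conceptual stand-in, not Mem0’s implementation) of storing text with metadata and filtering on it:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryRecord:
    text: str
    user_id: str
    metadata: dict = field(default_factory=dict)

class ToyStore:
    """A simplified stand-in for a memory store with metadata filtering."""
    def __init__(self):
        self.records = []

    def add(self, text, user_id, metadata=None):
        self.records.append(MemoryRecord(text, user_id, metadata or {}))

    def get_all(self, user_id=None, category=None):
        # Filter by user and, optionally, by a metadata category tag
        return [r for r in self.records
                if (user_id is None or r.user_id == user_id)
                and (category is None or r.metadata.get("category") == category)]

store = ToyStore()
store.add("Improving my tennis skills", "alice", {"category": "hobbies"})
store.add("Order #123 hasn't arrived", "alice", {"category": "support"})
print(len(store.get_all("alice", category="hobbies")))  # 1
```

Tagging each record at write time lets reads narrow to the relevant slice of a user’s history instead of scanning everything.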
Memory Update and History Tracking
```python
# Update an existing memory, then inspect its change history
result = m.update(memory_id=memory_id, data="Likes to play tennis on weekends")
history = m.history(memory_id=memory_id)
```
Mem0 not only allows the memory to be updated but also keeps the historical version of the memory. This feature is crucial for understanding the changes in user preferences or tracking the decision-making process of AI systems.
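The value of keeping prior versions can be illustrated with a toy versioned record (again, a conceptual sketch rather than Mem0’s storage format): every update appends the old and new values to an audit trail.

```python
class VersionedMemory:
    """Keeps every past value of a memory so changes can be audited."""
    def __init__(self, data):
        self.data = data
        self._history = []  # list of (old_value, new_value) pairs

    def update(self, new_data):
        # Record the transition before overwriting the current value
        self._history.append((self.data, new_data))
        self.data = new_data

    def history(self):
        return list(self._history)

mem = VersionedMemory("Improving tennis skills")
mem.update("Likes to play tennis on weekends")
print(mem.history())  # [('Improving tennis skills', 'Likes to play tennis on weekends')]
```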
Memory Management
```python
m.delete(memory_id=memory_id)   # delete a single memory
m.delete_all(user_id="alice")   # delete all of one user's memories
m.reset()                       # reset the entire store
```
Mem0 provides fine-grained memory management capabilities, which can delete a single memory, clear all memories of a specific user, or even reset the entire system. This provides developers with great flexibility, especially when dealing with privacy-sensitive data.
Case demo: AI Agent assistant
We use Mem0 to create a personalized customer-support AI agent. The agent uses Mem0 to retain information across interactions, enabling a personalized and efficient support experience.
Configuration:
We create a CustomerSupportAIAgent class that sets up the memory and the OpenAI client. The constructor defines the configuration parameters for OpenAI and the memory database, then creates a Memory object from that configuration.
```python
class CustomerSupportAIAgent:
    def __init__(self):
        """
        Initialize the CustomerSupportAIAgent, configuring memory and the OpenAI client.
        """
        config = {
            "llm": {
                "provider": "openai",
                "config": {
                    "model": "gpt-4o-mini",
                    "temperature": 0.2,
                    "max_tokens": 1500,
                }
            },
            "history_db_path": "history.db"
        }
        self.memory = Memory.from_config(config)
        self.client = OpenAI()
        self.app_id = "customer-support"
```
Handling queries:
```python
    # (method of CustomerSupportAIAgent)
    def handle_query(self, query, user_id=None):
        """
        Handle a customer query and store the related information in memory.

        :param query: The customer query to process.
        :param user_id: Optional user ID to associate with the memory.
        """
        # Stream the model's answer back to the user
        stream = self.client.chat.completions.create(
            model="gpt-4",
            stream=True,
            messages=[
                {"role": "system", "content": "You are a customer support AI agent."},
                {"role": "user", "content": query},
            ]
        )
        # Persist the query so future sessions can recall it
        self.memory.add(query, user_id=user_id, metadata={"app_id": self.app_id})
        for chunk in stream:
            if chunk.choices[0].delta.content is not None:
                print(chunk.choices[0].delta.content, end="")
```
The handle_query method processes customer queries, taking query and an optional user_id as parameters. It sends a request to OpenAI with a predefined system message, stores the query in memory with associated metadata, and then prints the response in real time by iterating over the stream, printing each chunk’s content when it is not None.
get_memories
```python
    # (method of CustomerSupportAIAgent)
    def get_memories(self, user_id=None):
        """
        Retrieve all memories associated with a given customer ID.

        :param user_id: Optional user ID to filter memories.
        :return: List of memories.
        """
        return self.memory.get_all(user_id=user_id)
```
You can get all your memories at any time using the following code:
```python
memories = support_agent.get_memories(user_id=customer_id)
for m in memories:
    print(m["text"])
```
Key points
- Initialization: The CustomerSupportAIAgent class is initialized with the necessary memory configuration and OpenAI client settings.
- Handling queries: The handle_query method sends a query to the AI and stores the relevant information in memory.
- Get Memories: The get_memories method gets all stored memories associated with a customer.
Putting it all together, the complete agent script:

```python
import os
from openai import OpenAI
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "your_api_key"
os.environ["OPENAI_BASE_URL"] = "https://api.openai.com/v1"

class CustomerSupportAIAgent:
    def __init__(self):
        """
        Initialize the CustomerSupportAIAgent, configuring memory and the OpenAI client.
        """
        config = {
            "llm": {
                "provider": "openai",
                "config": {
                    "model": "gpt-4o-mini",
                    "temperature": 0.2,
                    "max_tokens": 1500,
                }
            },
            "history_db_path": "history.db"
        }
        self.memory = Memory.from_config(config)
        self.client = OpenAI()
        self.app_id = "customer-support"

    def handle_query(self, query, user_id=None):
        """
        Handle a customer query and store the related information in memory.

        :param query: The customer query to process.
        :param user_id: Optional user ID to associate with the memory.
        """
        stream = self.client.chat.completions.create(
            model="gpt-4",
            stream=True,
            messages=[
                {"role": "system", "content": "You are a customer support AI agent."},
                {"role": "user", "content": query},
            ]
        )
        self.memory.add(query, user_id=user_id, metadata={"app_id": self.app_id})
        for chunk in stream:
            if chunk.choices[0].delta.content is not None:
                print(chunk.choices[0].delta.content, end="")

    def get_memories(self, user_id=None):
        """
        Retrieve all memories associated with a given customer ID.

        :param user_id: Optional user ID to filter memories.
        :return: List of memories.
        """
        return self.memory.get_all(user_id=user_id)

support_agent = CustomerSupportAIAgent()
customer_id = "learning-AI-from-scratch"
support_agent.handle_query("I need help with my recent order. It hasn't arrived yet.",
                           user_id=customer_id)

memories = support_agent.get_memories(user_id=customer_id)
for m in memories:
    print(m["text"])
```
And the complete memory-management example from earlier:

```python
import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "your_api_key"
os.environ["OPENAI_BASE_URL"] = "https://api.openai.com/v1"

config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4o-mini",
            "temperature": 0.2,
            "max_tokens": 1500,
        }
    },
    "history_db_path": "history.db"
}

m = Memory.from_config(config)

# Store a memory with metadata
result = m.add("I am working on improving my tennis skills. Suggest some online courses.",
               user_id="alice", metadata={"category": "hobbies"})

# Retrieve and search
all_memories = m.get_all()
memory_id = all_memories[0]["id"]
specific_memory = m.get(memory_id)
related_memories = m.search(query="What are Alice's hobbies?", user_id="alice")

# Update and inspect history
result = m.update(memory_id=memory_id, data="Likes to play tennis on weekends")
history = m.history(memory_id=memory_id)

# Clean up
m.delete(memory_id=memory_id)
m.delete_all(user_id="alice")
m.reset()
```
Conclusion
Mem0 provides a powerful and flexible memory system for AI application development. Through reasonable configuration and innovative applications, developers can build AI systems with persistent memory, contextual understanding, and emotional intelligence. This not only improves the user experience but also opens up new possibilities for AI applications.
As AI technology continues to develop, intelligent memory systems like Mem0 will become a key component in building the next generation of AI applications.
🧙♂️ I am an AI application expert! If you are looking for a Generative AI Engineer, drop an inquiry here or Book a 1-on-1 Consulting Call With Me.