Enhancing AI agents with long-term memory: Insights into LangMem SDK, Memobase and the A-MEM Framework
Source: Venture Beat


AI agents can automate many tasks that enterprises want to perform. One downside, though, is that they tend to be forgetful. Without long-term memory, agents must either finish a task in a single session or be constantly re-prompted. 

So, as enterprises continue to explore use cases for AI agents and how to implement them safely, the companies enabling development of agents must consider how to make them less forgetful. Long-term memory will make agents much more valuable in a workflow, able to remember instructions even for complex tasks that require several turns to complete.

Manvinder Singh, VP of AI product management at Redis, told VentureBeat that memory makes agents more robust. 

“Agentic memory is crucial for enhancing [agents’] efficiency and capabilities since LLMs are inherently stateless — they don’t remember things like prompts, responses or chat histories,” Singh said in an email. “Memory allows AI agents to recall past interactions, retain information and maintain context to deliver more coherent, personalized responses, and more impactful autonomy.”

Companies like LangChain have begun offering options to extend agentic memory. LangChain’s LangMem SDK helps developers build agents with tools “to extract information from conversation, optimize agent behavior through prompt updates, and maintain long-term memory about behaviors, facts, and events.”

Other options include Memobase, an open-source tool launched in January to give agents “user-centric memory” so apps remember and adapt. CrewAI also has tooling around long-term agentic memory, while OpenAI’s Swarm requires users to bring their own memory model. 

Mike Mason, chief AI officer at tech consultancy Thoughtworks, told VentureBeat in an email that better agentic memory changes how companies use agents.

“Memory transforms AI agents from simple, reactive tools into dynamic, adaptive assistants,” Mason said. “Without it, agents must rely entirely on what’s provided in a single session, limiting their ability to improve interactions over time.” 

Better memory 

Longer-lasting memory in agents could come in different flavors. 

LangChain works with the most common memory types: semantic and procedural. Semantic refers to facts, while procedural refers to processes or how to perform tasks. The company said agents already have good short-term memory and can respond in the current conversation thread. LangMem stores procedural memory as updated instructions in the prompt. Banking on its work on prompt optimization, LangMem identifies interaction patterns and updates “the system prompt to reinforce effective behaviors. This creates a feedback loop where the agent’s core instructions evolve based on observed performance.”
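The feedback loop LangMem describes can be pictured in a few lines of code. This is only an illustrative sketch with hypothetical names, not LangMem's actual API: effective behaviors observed during interactions are folded back into the system prompt, so the agent's core instructions evolve over time.

```python
# Illustrative procedural-memory feedback loop -- hypothetical helper
# names, NOT the LangMem SDK's real API.

class ProceduralMemory:
    """Stores agent instructions and revises them from observed feedback."""

    def __init__(self, system_prompt: str):
        self.system_prompt = system_prompt
        self.feedback_log: list[tuple[str, bool]] = []

    def record(self, behavior: str, effective: bool) -> None:
        # Log whether a behavior pattern worked in this interaction.
        self.feedback_log.append((behavior, effective))

    def optimize(self) -> str:
        # Reinforce behaviors that worked. A real optimizer would use an
        # LLM to rewrite the prompt; here we simply append reinforcements.
        for behavior, effective in self.feedback_log:
            if effective:
                self.system_prompt += f"\nAlways: {behavior}"
        self.feedback_log.clear()
        return self.system_prompt

memory = ProceduralMemory("You are a billing-support agent.")
memory.record("cite the invoice number when answering", effective=True)
memory.record("guess the refund amount", effective=False)
print(memory.optimize())
```

Each call to `optimize` plays the role of the prompt-optimization step: the updated system prompt is what persists between sessions, which is why LangMem treats it as procedural memory rather than a separate store.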

Researchers working on ways to extend the memories of AI models and, consequently, AI agents have found that agents with long-term memory can learn from mistakes and improve. A paper from October 2024 explored the concept of AI self-evolution through long-term memory, showing that models and agents actually improve the more they remember. Models and agents begin to adapt to more individual needs because they remember more custom instructions for longer. 

In another paper, researchers from Rutgers University, the Ant Group and Salesforce introduced a new memory system called A-MEM, based on the Zettelkasten note-taking method. In this system, agents create knowledge networks that enable “more adaptive and context-aware memory management.”
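A toy version of the Zettelkasten idea makes the "knowledge network" concrete. The sketch below is an assumption-laden illustration, not the A-MEM implementation: each memory is a note tagged with keywords, new notes link to existing notes that share a keyword, and retrieval returns both direct hits and their linked neighbors.

```python
# Toy Zettelkasten-style memory network -- illustrative only, not A-MEM's code.
from dataclasses import dataclass, field

@dataclass
class Note:
    content: str
    keywords: set[str]
    links: list["Note"] = field(default_factory=list)

class ZettelMemory:
    def __init__(self):
        self.notes: list[Note] = []

    def add(self, content: str, keywords: set[str]) -> Note:
        note = Note(content, keywords)
        # Link the new note to existing notes that share a keyword,
        # growing the knowledge network as memories arrive.
        for other in self.notes:
            if keywords & other.keywords:
                note.links.append(other)
                other.links.append(note)
        self.notes.append(note)
        return note

    def retrieve(self, keyword: str) -> list[str]:
        # Return notes matching the keyword, then their linked neighbors,
        # so related context surfaces even without an exact keyword match.
        hits = [n for n in self.notes if keyword in n.keywords]
        hit_texts = [n.content for n in hits]
        related = {l.content for n in hits for l in n.links} - set(hit_texts)
        return hit_texts + sorted(related)

mem = ZettelMemory()
mem.add("User prefers weekly summary emails", {"email", "preference"})
mem.add("Send reports by email on Fridays", {"email", "schedule"})
print(mem.retrieve("schedule"))
```

The linking step is what makes the memory "context-aware": a query about scheduling also pulls in the email-preference note it is connected to.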

Redis’s Singh said that agents with long-term memory function like hard drives, “holding lots of information that persists across multiple task runs or conversations, letting agents learn from feedback and adapt to user preferences.” When agents are integrated into workflows, that kind of adaptation and self-learning allows organizations to keep the same set of agents working on a task long enough to complete it without the need to re-prompt them.

Memory considerations

But it is not enough to make agents remember more; Singh said organizations must also make decisions on what the agents need to forget. 

“There are four high-level decisions you must make as you design a memory management architecture: Which type of memories do you store? How do you store and update memories? How do you retrieve relevant memories? How do you decay memories?” Singh said. 

He stressed that enterprises must answer those questions because making sure an “agentic system maintains speed, scalability and flexibility is the key to creating a fast, efficient and accurate user experience.” 

LangChain also said organizations must be clear about which behaviors humans must set and which should be learned through memory; what types of knowledge agents should continually track; and what triggers memory recall.

“At LangChain, we’ve found it useful first to identify the capabilities your agent needs to be able to learn, map these to specific memory types or approaches, and only then implement them in your agent,” the company said in a blog post.

The recent research and these new offerings represent just the start of the development of toolsets to give agents longer-lasting memory. And as enterprises plan to deploy agents at a larger scale, memory presents an opportunity for companies to differentiate their products. 
