---
title: "Announcing MemMachine v0.2.0: The Next Evolution of AI Memory"
date: 2025-12-09T21:30:00Z
featured_image: "featured_image.png"
tags: ["Release", "MemMachine", "AI Agent", "SDK", "MCP", "Semantic Memory"]
author: "Steve Scargall"
description: "Unlock the full potential of your AI agents with MemMachine v0.2.0. Discover our complete rearchitecture, powerful new SDKs, enhanced MCP integration, and the game-changing shift to Episodic and Semantic AI Agent Memory."
aliases:
---

We are thrilled to announce the release of **MemMachine v0.2.0**, a major milestone that brings a complete redesign and rearchitecture of our memory system. This release introduces powerful new capabilities for AI Agent developers, including a shift to **Episodic and Semantic Memory**, native **MCP support**, and robust **Python SDKs**.

## Highlights

- **Episodic and Semantic Memory**: "Profile" memory is now "Semantic" memory, reflecting its broader capabilities.
- **New Architecture**: A reimagined ingestion and search pipeline for better performance and accuracy.
- **Python SDKs**: Official Client and Server SDKs for seamless integration.
- **MCP Support**: Native implementation of the Model Context Protocol.
- **API v2**: A cleaner, more powerful REST API.

---

## From "Profile" to "Episodic and Semantic" Memory

In v0.2.0, we have renamed "Profile" memory to **Semantic Memory**. While "Profile" implied a focus on user attributes, our system has evolved to capture a much wider range of semantic information: facts, world knowledge, and complex relationships derived from interactions. This rename aligns with our vision of providing a comprehensive long-term memory store that goes beyond simple user profiling.

## A Reimagined Architecture

We've completely rewritten our core architecture to address the limitations of the previous DeclarativeMemory system. The new design focuses on simplicity, performance, and scalability.

### 1. Ingestion Pipeline
Our new ingestion process is designed to maximize context and retrieval quality (a code sketch follows the list):
- **Derivative Extraction**: We extract raw sentences from message-type episodes using NLTK.
- **Context Augmentation**: Sentences are augmented with timestamps and source information.
- **Derivative Embedding**: These augmented sentences are embedded into vectors and stored in a vector database, pointing back to their originating episodes.
- **2-Tier Persistence**: We now persist data in two tiers: **Episodes** (raw content) and **Derivatives** (embedded chunks linked to episodes).
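
To make the flow concrete, here is a minimal sketch of the two-tier ingestion idea. It is purely illustrative: the `episode_store`, `vector_store`, and `embed` objects are hypothetical placeholders, not MemMachine's internal interfaces.

```python
# Illustrative sketch only; store and embedding interfaces are placeholders.
import uuid

from nltk.tokenize import sent_tokenize  # pip install nltk && python -m nltk.downloader punkt


def ingest_episode(episode: dict, episode_store, vector_store, embed) -> str:
    """Persist a raw episode, then derive, augment, and embed its sentences."""
    # Tier 1 (Episodes): persist the raw content with its metadata.
    episode_id = str(uuid.uuid4())
    episode_store.save(episode_id, episode)

    # Derivative extraction: split the message into raw sentences.
    for sentence in sent_tokenize(episode["content"]):
        # Context augmentation: attach timestamp and source information.
        augmented = f"[{episode['timestamp']}] [{episode['source']}] {sentence}"

        # Tier 2 (Derivatives): embed and store with a pointer back to the episode.
        vector_store.add(
            vector=embed(augmented),
            payload={"text": augmented, "episode_id": episode_id},
        )

    return episode_id
```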

### 2. Advanced Search Workflow
Search is now more intelligent and context-aware (see the sketch after this list):
- **Vector Similarity**: Queries are embedded as-is to find matches in the derivative vector database.
- **Context Expansion**: Matched derivatives trigger a context expansion, pulling in 1 episode backward and 2 episodes forward to reconstruct the full narrative.
- **Reranking**: Expanded contexts are reranked to ensure the most relevant information surfaces first.
- **Smart Limits**: If the search limit is reached, we prioritize episodes closest to the vector-matched nucleus.
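
Again as a rough illustration rather than MemMachine's actual interfaces, the retrieval side looks something like this, with `vector_store`, `episode_store`, `embed`, and `rerank` standing in for the real components:

```python
# Illustrative sketch only; all component interfaces are placeholders.
def search_memory(query: str, vector_store, episode_store, embed, rerank, limit: int = 10):
    # Vector similarity: embed the query as-is and match against derivatives.
    matches = vector_store.search(vector=embed(query), top_k=limit)

    # Context expansion: for each matched derivative ("nucleus"), pull in
    # 1 episode backward and 2 episodes forward to rebuild the narrative.
    contexts = []
    for match in matches:
        nucleus_id = match.payload["episode_id"]
        window = episode_store.get_neighbors(nucleus_id, before=1, after=2)
        contexts.append({"nucleus": nucleus_id, "episodes": window})

    # Reranking: surface the most relevant expanded contexts first.
    ranked = rerank(query, contexts)

    # Smart limits: keep the top contexts; a fuller version would also trim
    # episodes farthest from each vector-matched nucleus when space runs out.
    return ranked[:limit]
```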

### Why This Matters
This new architecture solves several critical pain points:
- **Performance**: Optimized database queries and efficient vector search.
- **Simplicity**: Configuration is now straightforward, removing the complexity of the old DeclarativeMemory.
- **Robustness**: The system is no longer sensitive to insertion order, making batch processing easier.
- **First-Class Properties**: Timestamps and sources are now first-class properties, simplifying filtering and indexing.

---

## New Python SDKs

We are introducing two new Python SDKs to make building with MemMachine easier than ever.

### Client Python SDK
The new **Client SDK** (`memmachine.rest_client`) allows you to integrate MemMachine into your applications with just a few lines of code. It handles authentication, project management, and memory operations seamlessly.

```python
from memmachine import MemMachineClient

client = MemMachineClient(base_url="http://localhost:8080")
project = client.create_project(org_id="my_org", project_id="my_agent", description="Memory store for customer support agent")
memory = project.memory(user_id="user123", agent_id="support_bot_01", session_id="session_555")

# Add a memory
memory.add(content="I am strictly vegetarian and I love spicy food.", role="user", metadata={"topic": "food_preference"})

# Search memory
results = memory.search("What should I suggest for dinner?")
print(results)
```

For more information, see the [Client SDK documentation](https://docs.memmachine.ai/api_reference/python/client).

### Python Server SDK
For developers who want to embed MemMachine directly or build custom server implementations, the **Server SDK** (`memmachine-server`) provides direct access to the core memory logic and storage engines.

---

## Model Context Protocol (MCP) Support

MemMachine v0.2.0 includes native support for the **Model Context Protocol (MCP)**. This means MemMachine can now be instantly used as a memory tool by any MCP-compliant agent or IDE.

We expose two core tools via MCP:
- `add_memory`: Store important information, facts, and preferences.
- `search_memory`: Retrieve relevant context and long-term knowledge.

This allows agents to automatically manage their own memory without custom integration code.
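
For example, any generic MCP client can call these tools directly. The snippet below uses the official MCP Python SDK (`mcp` package); the endpoint URL and the tool argument names (`content`, `query`) are assumptions for illustration, so check the MemMachine MCP documentation for the exact schema.

```python
# Illustrative MCP client call; the endpoint and argument names are assumptions.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client


async def main() -> None:
    # Point this at your running MemMachine MCP endpoint (assumed URL).
    async with streamablehttp_client("http://localhost:8080/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Store a fact via the add_memory tool.
            await session.call_tool("add_memory", {"content": "User prefers dark mode."})

            # Retrieve relevant context via the search_memory tool.
            result = await session.call_tool("search_memory", {"query": "UI preferences"})
            print(result)


asyncio.run(main())
```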

---

## Integrations

We are committed to making MemMachine available wherever you build your agents. We are excited to announce integrations with leading platforms:

- **Claude Code**: Seamlessly give your Claude agents long-term memory.
- **GPT Store**: Enhance your custom GPTs with persistent context.
- **LangGraph**: Easily plug MemMachine into your LangGraph workflows (see the sketch below).
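
As a rough, unofficial sketch of the LangGraph case, the graph below recalls relevant memories before answering and writes the exchange back afterwards. The state shape and node logic are assumptions for this post; only the MemMachine client calls mirror the SDK example above.

```python
# Illustrative LangGraph wiring; node logic and state shape are assumptions.
from typing import TypedDict

from langgraph.graph import END, START, StateGraph
from memmachine import MemMachineClient

client = MemMachineClient(base_url="http://localhost:8080")
project = client.create_project(org_id="my_org", project_id="my_agent", description="LangGraph demo")
memory = project.memory(user_id="user123", agent_id="graph_bot", session_id="session_1")


class AgentState(TypedDict, total=False):
    question: str
    context: str
    answer: str


def recall(state: AgentState) -> dict:
    # Pull relevant long-term memory before the model is called.
    return {"context": str(memory.search(state["question"]))}


def respond(state: AgentState) -> dict:
    # Call your LLM of choice here, grounding it in state["context"].
    answer = f"(answer grounded in: {state['context']})"
    # Persist the exchange so future sessions can recall it.
    memory.add(content=state["question"], role="user")
    return {"answer": answer}


graph = StateGraph(AgentState)
graph.add_node("recall", recall)
graph.add_node("respond", respond)
graph.add_edge(START, "recall")
graph.add_edge("recall", "respond")
graph.add_edge("respond", END)
app = graph.compile()

print(app.invoke({"question": "What should I suggest for dinner?"}))
```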

And this is just the beginning: we have plans to add support for many more platforms soon!

---

## Get Started

MemMachine v0.2 delivers significant advancements in conversational memory and efficiency, establishing itself as one of the highest-scoring AI memory systems on the LoCoMo benchmark.

**Ready to experience the benefits of MemMachine v0.2?**

- 👉 [Download and try MemMachine on GitHub](https://github.com/MemMachine/MemMachine). Get started today and see the performance firsthand.
- 📖 [Explore the comprehensive documentation](https://docs.memmachine.ai) for integration guides, workflows, and advanced features.
- 💬 [Join our Discord community](https://discord.gg/usydANvKqD) to connect with fellow developers, share feedback, and collaborate with teams already building on top of MemMachine.

Don’t miss the opportunity to join a fast-growing ecosystem of organizations and engineers leveraging MemMachine for state-of-the-art conversational AI. Your feedback and contributions are welcome!

We can't wait to see what you build with this new foundation!