feat(memory): integrate mem0 long-term memory for Chat Shell #295
base: main
Conversation
Add mem0 long-term memory integration for Chat Shell type conversations, enabling user-level cross-session memory that complements the existing Redis short-term session management.

Backend changes:
- Add memory_service.py for mem0 API integration (add, search, update, delete)
- Add MEM0_BASE_URL, MEM0_API_KEY, MEM0_ENABLED config options
- Modify chat_service.py to retrieve memories before chat and save after
- Create /api/memories endpoints for memory management (CRUD operations)

Frontend changes:
- Add memory.ts API client for memory management
- Add MemoryPanel component as sidebar for viewing/editing memories
- Integrate memory button in ChatArea header

The system gracefully degrades when mem0 is not configured, ensuring normal chat functionality without long-term memory features.
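For orientation, a rough sketch of what the /api/memories CRUD surface described above could look like. The import paths (`app.core.security`, `app.models.user`), response shapes, and the exact `memory_service` method signatures are assumptions for illustration, not the PR's actual code.

from typing import Any, Dict, Optional

from fastapi import APIRouter, Depends, HTTPException

from app.core import security                               # assumed import path
from app.models.user import User                            # assumed import path
from app.services.chat.memory_service import memory_service

router = APIRouter()


@router.get("")
async def get_memories(
    keyword: Optional[str] = None,
    current_user: User = Depends(security.get_current_user),
) -> Dict[str, Any]:
    """List (or keyword-search) the current user's memories."""
    if not memory_service.is_configured:
        # Graceful degradation: mem0 not configured, return an empty list.
        return {"memories": [], "total": 0}
    memories = await memory_service.search_memories(
        user_id=current_user.id, query=keyword or ""
    )
    return {"memories": memories, "total": len(memories)}


@router.put("/{memory_id}")
async def update_memory(
    memory_id: str,
    content: str,
    current_user: User = Depends(security.get_current_user),
) -> Dict[str, Any]:
    """Update the text of a single memory."""
    updated = await memory_service.update_memory(memory_id=memory_id, content=content)
    if not updated:
        raise HTTPException(status_code=404, detail="Memory not found")
    return updated


@router.delete("/{memory_id}")
async def delete_memory(
    memory_id: str,
    current_user: User = Depends(security.get_current_user),
) -> Dict[str, bool]:
    """Delete a single memory."""
    await memory_service.delete_memory(memory_id=memory_id)
    return {"success": True}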
Walkthrough
This PR integrates long-term memory capabilities into the chat application via a new mem0-backed memory service. It adds memory CRUD endpoints, extends the chat stream with user context and memory injection, introduces configuration settings for mem0 integration, and includes a frontend memory management panel component.

Changes
Sequence Diagram (a Python sketch of these phases follows the diagram)
sequenceDiagram
participant User as User/Client
participant Chat as Chat Endpoint
participant ChatSvc as Chat Service
participant MemSvc as Memory Service
participant mem0 as mem0 Backend
participant LLM as LLM Provider
User->>Chat: POST /chat with message + user_id
Chat->>ChatSvc: chat_stream(user_id, message)
rect rgb(200, 220, 255)
Note over ChatSvc,mem0: Memory Retrieval Phase
ChatSvc->>MemSvc: search_memories(user_id, query)
MemSvc->>mem0: GET /v1/memories/search/
mem0-->>MemSvc: [memories]
MemSvc-->>ChatSvc: formatted memory context
end
rect rgb(220, 255, 220)
Note over ChatSvc,LLM: Chat Generation Phase
ChatSvc->>ChatSvc: _build_messages(history, message, memory_context)
ChatSvc->>LLM: stream completion with messages
LLM-->>ChatSvc: streamed response chunks
ChatSvc-->>Chat: StreamingResponse
Chat-->>User: streaming chunks
end
rect rgb(255, 230, 200)
Note over ChatSvc,mem0: Memory Save Phase (async)
ChatSvc->>MemSvc: add_memory(user_id, messages) [async, fire-and-forget]
MemSvc->>mem0: POST /v1/memories/ with conversation
mem0-->>MemSvc: memory saved
end
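To make the three phases concrete, here is a minimal Python sketch of the flow, assuming a `memory_service` exposing the methods named in this review (`is_configured`, `search_memories`, `format_memories_for_context`, `add_memory`) and a generic streaming LLM client; the exact signatures are assumptions, not the PR's code.

import asyncio
import logging
from typing import AsyncIterator, Optional

logger = logging.getLogger(__name__)

# Keep fire-and-forget tasks alive until they finish (see the RUF006 note later in this review).
_background_tasks: set = set()


async def chat_stream_with_memory(
    user_id: Optional[str],
    message: str,
    llm,             # assumed: async streaming LLM client
    memory_service,  # assumed: mem0-backed service as described in this PR
) -> AsyncIterator[str]:
    """Retrieve memories, stream the reply, then save the exchange."""
    # 1. Memory retrieval phase (skipped when mem0 is not configured).
    memory_context = ""
    if user_id and memory_service.is_configured:
        try:
            memories = await memory_service.search_memories(user_id=user_id, query=message)
            memory_context = memory_service.format_memories_for_context(memories)
        except Exception:
            logger.warning("Memory retrieval failed; continuing without context")

    # 2. Chat generation phase: stream chunks back to the caller.
    chunks = []
    async for chunk in llm.stream(message, system_context=memory_context):
        chunks.append(chunk)
        yield chunk

    # 3. Memory save phase: fire-and-forget so it never blocks the response.
    if user_id and memory_service.is_configured:
        task = asyncio.create_task(
            memory_service.add_memory(
                user_id=user_id,
                messages=[
                    {"role": "user", "content": message},
                    {"role": "assistant", "content": "".join(chunks)},
                ],
            )
        )
        _background_tasks.add(task)
        task.add_done_callback(_background_tasks.discard)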
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 7
🧹 Nitpick comments (8)
frontend/src/features/tasks/components/ChatArea.tsx (1)
1148-1151: Consider adding memory button to the floating input area for consistency.
The memory panel button (Brain icon) is only present in the initial input area (no messages view) at Lines 926-935, but not in the floating input area when `hasMessages` is true. Users viewing an existing conversation cannot access the memory panel.
Consider adding the memory button here for feature parity:
 <div className="ml-auto flex items-center gap-2 flex-shrink-0">
+  {/* Memory Panel Button */}
+  <Button
+    variant="ghost"
+    size="icon"
+    onClick={() => setIsMemoryPanelOpen(true)}
+    className="h-6 w-6 rounded-full flex-shrink-0 translate-y-0.5"
+    title="View memories"
+  >
+    <Brain className="h-4 w-4 text-text-secondary" />
+  </Button>
   {!shouldHideQuotaUsage && (

frontend/src/features/tasks/components/MemoryPanel.tsx (2)
142-165: Consider replacing `confirm()` with a custom confirmation dialog.
The native `confirm()` dialog works but provides a basic browser-native experience that may feel inconsistent with the rest of the UI. Consider using an AlertDialog from shadcn/ui for better UX consistency.
184-195: Panel accessibility: consider adding keyboard trap and focus management.
The fixed positioning panel overlays content but doesn't trap keyboard focus. Users navigating with keyboard can tab out of the panel to elements behind it. Consider:
- Adding `aria-modal="true"` and `role="dialog"`
- Trapping focus within the panel when open
- Returning focus to the trigger button on close
This is an accessibility enhancement that could be addressed in a follow-up.
backend/app/services/chat/memory_service.py (2)
12-14: Remove unused import.
The `datetime` import on line 13 is not used anywhere in this file.
 import logging
-from datetime import datetime
 from typing import Any, Dict, List, Optional
42-62: Consider adding client lifecycle management for application shutdown.
The `close()` method exists but there's no automatic cleanup mechanism. If the application shuts down without explicitly calling `close()`, the HTTP client connection pool may not be properly released.
Consider registering this with FastAPI's lifespan context or implementing `__aenter__`/`__aexit__` for async context manager support, though this is a minor concern since httpx handles cleanup reasonably well on garbage collection.
backend/app/api/endpoints/memory.py (2)
71-71: Extract magic number to a named constant.
The hard-coded limit `100` should be extracted to a module-level constant for maintainability.
Apply this diff:
+# Maximum number of memories to return in search results
+MAX_SEARCH_RESULTS = 100
+
@router.get("")
async def get_memories(
    keyword: Optional[str] = None,
    current_user: User = Depends(security.get_current_user),
) -> MemoryListResponse:
    """
    Get all memories for the current user.

    Args:
        keyword: Optional keyword to search/filter memories

    Returns:
        List of user's memories
    """
    if not memory_service.is_configured:
        return MemoryListResponse(memories=[], total=0)

    try:
        if keyword:
            # Search memories by keyword
            raw_memories = await memory_service.search_memories(
                user_id=current_user.id,
                query=keyword,
-                limit=100,
+                limit=MAX_SEARCH_RESULTS,
            )

As per coding guidelines, magic numbers must be extracted to named constants.
82-93: Extract duplicate memory transformation logic to a helper function.
The memory transformation logic (extracting `id`, `content`, `created_at`, `updated_at` from raw memory dictionaries) is duplicated across multiple endpoints (lines 82-93, 130-132, 184-191).
Extract this logic into a helper function:
def _normalize_memory_response(mem: dict[str, Any], memory_id: Optional[str] = None) -> Optional[MemoryResponse]:
    """
    Normalize a raw memory dict to MemoryResponse.

    Args:
        mem: Raw memory dictionary from mem0
        memory_id: Optional override for memory ID

    Returns:
        MemoryResponse if valid, None otherwise
    """
    if not mem:
        return None

    mem_id = memory_id or mem.get("id", mem.get("memory_id", ""))
    content = mem.get("memory", mem.get("text", mem.get("content", "")))
    created_at = mem.get("created_at", mem.get("createdAt"))
    updated_at = mem.get("updated_at", mem.get("updatedAt"))

    if not mem_id or not content:
        return None

    return MemoryResponse(
        id=str(mem_id),
        content=content,
        created_at=str(created_at) if created_at else None,
        updated_at=str(updated_at) if updated_at else None,
    )

Then use it in endpoints:
# In get_memories:
memories = [
    normalized
    for mem in raw_memories
    if (normalized := _normalize_memory_response(mem))
]

# In get_memory:
return _normalize_memory_response(mem, memory_id)

# In update_memory:
return _normalize_memory_response(result, memory_id)

This follows the DRY principle and improves maintainability.
frontend/src/apis/memory.ts (1)
64-76: Extract duplicate error handling logic to a helper function.
The error parsing logic (extracting `detail` from JSON or falling back to raw text) is duplicated across all four API functions (lines 64-76, 98-110, 134-146, 168-180).
Extract to a helper function:
/**
 * Parse error response and extract error message.
 *
 * @param response - Failed fetch response
 * @returns Error message string
 */
async function parseErrorResponse(response: Response): Promise<string> {
  const errorText = await response.text()
  let errorMsg = errorText
  try {
    const json = JSON.parse(errorText)
    if (json && typeof json.detail === 'string') {
      errorMsg = json.detail
    }
  } catch {
    // Not JSON, use raw text
  }
  return errorMsg
}

Then simplify each function:
if (!response.ok) {
  const errorMsg = await parseErrorResponse(response)
  throw new Error(errorMsg)
}

This eliminates significant code duplication and follows the DRY principle.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- backend/app/api/api.py (2 hunks)
- backend/app/api/endpoints/adapter/chat.py (1 hunks)
- backend/app/api/endpoints/memory.py (1 hunks)
- backend/app/core/config.py (1 hunks)
- backend/app/services/chat/__init__.py (1 hunks)
- backend/app/services/chat/chat_service.py (7 hunks)
- backend/app/services/chat/memory_service.py (1 hunks)
- frontend/src/apis/memory.ts (1 hunks)
- frontend/src/features/tasks/components/ChatArea.tsx (5 hunks)
- frontend/src/features/tasks/components/MemoryPanel.tsx (1 hunks)
🧰 Additional context used
📓 Path-based instructions (9)
**/*.{py,ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
All code comments, inline comments, block comments, docstrings, TODO/FIXME annotations, and type hints descriptions MUST be written in English
Files:
- backend/app/services/chat/__init__.py
- frontend/src/features/tasks/components/ChatArea.tsx
- frontend/src/features/tasks/components/MemoryPanel.tsx
- backend/app/api/endpoints/memory.py
- backend/app/services/chat/memory_service.py
- frontend/src/apis/memory.ts
- backend/app/api/api.py
- backend/app/api/endpoints/adapter/chat.py
- backend/app/services/chat/chat_service.py
- backend/app/core/config.py
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Python code MUST be PEP 8 compliant with Black formatter (line length: 88) and isort for import organization
Python code MUST include type hints for all functions and variables
Python functions SHOULD NOT exceed 50 lines (preferred maximum)
Python functions and classes MUST have descriptive names and public functions/classes MUST include docstrings
Python code MUST extract magic numbers to named constants
Files:
- backend/app/services/chat/__init__.py
- backend/app/api/endpoints/memory.py
- backend/app/services/chat/memory_service.py
- backend/app/api/api.py
- backend/app/api/endpoints/adapter/chat.py
- backend/app/services/chat/chat_service.py
- backend/app/core/config.py
**/backend/app/services/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Backend business logic MUST be placed in `app/services/` directory
Files:
- backend/app/services/chat/__init__.py
- backend/app/services/chat/memory_service.py
- backend/app/services/chat/chat_service.py
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx}: TypeScript code MUST use strict mode with type checking enabled
TypeScript/React code MUST use Prettier formatter with single quotes, no semicolons
TypeScript/React code MUST pass ESLint with Next.js configuration
React component names MUST use PascalCase convention
Files:
- frontend/src/features/tasks/components/ChatArea.tsx
- frontend/src/features/tasks/components/MemoryPanel.tsx
- frontend/src/apis/memory.ts
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx,js,jsx}: React components MUST use functional components with hooks instead of class-based components
Use `const` over `let`, never use `var` in TypeScript/JavaScript code
Files:
- frontend/src/features/tasks/components/ChatArea.tsx
- frontend/src/features/tasks/components/MemoryPanel.tsx
- frontend/src/apis/memory.ts
**/components/**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
React component files MUST use kebab-case naming convention
Files:
- frontend/src/features/tasks/components/ChatArea.tsx
- frontend/src/features/tasks/components/MemoryPanel.tsx
**/src/**/*.{tsx,jsx,css}
📄 CodeRabbit inference engine (AGENTS.md)
Frontend Tailwind CSS MUST use provided CSS variables for color system (e.g., --color-bg-base, --color-text-primary, --color-primary)
Files:
- frontend/src/features/tasks/components/ChatArea.tsx
- frontend/src/features/tasks/components/MemoryPanel.tsx
**/src/**/*.{tsx,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/src/**/*.{tsx,jsx}: Frontend responsive design MUST follow mobile-first approach with Tailwind breakpoints
Frontend React forms MUST use react-hook-form with zod validation schema
Frontend components MUST use shadcn/ui component library from `frontend/src/components/ui/`
Files:
- frontend/src/features/tasks/components/ChatArea.tsx
- frontend/src/features/tasks/components/MemoryPanel.tsx
**/backend/app/api/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Backend API route handlers MUST be created in `app/api/` directory
Files:
- backend/app/api/endpoints/memory.py
- backend/app/api/api.py
- backend/app/api/endpoints/adapter/chat.py
🧬 Code graph analysis (5)
frontend/src/features/tasks/components/ChatArea.tsx (1)
frontend/src/features/tasks/components/MemoryPanel.tsx (1)
- `MemoryPanel` (36-330)
frontend/src/features/tasks/components/MemoryPanel.tsx (1)
frontend/src/apis/memory.ts (4)
- `Memory` (18-23)
- `getMemories` (47-79)
- `updateMemory` (122-149)
- `deleteMemory` (157-183)
backend/app/api/endpoints/memory.py (1)
backend/app/services/chat/memory_service.py (6)
- `is_configured` (38-40)
- `get_all_memories` (172-209)
- `get_memory` (211-240)
- `update_memory` (242-277)
- `delete_memory` (279-308)
- `health_check` (64-80)
backend/app/services/chat/memory_service.py (2)
frontend/src/apis/client.ts (2)
- `post` (86-91)
- `put` (93-98)
backend/app/api/endpoints/memory.py (3)
- `get_memory` (103-145)
- `update_memory` (149-198)
- `delete_memory` (202-240)
backend/app/services/chat/chat_service.py (2)
backend/app/services/chat/session_manager.py (1)
- `get_chat_history` (56-82)
backend/app/services/chat/memory_service.py (4)
- `is_configured` (38-40)
- `search_memories` (128-170)
- `format_memories_for_context` (310-335)
- `add_memory` (82-126)
🪛 Ruff (0.14.7)
backend/app/api/endpoints/memory.py
51-51: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
97-97: Do not catch blind exception: Exception
(BLE001)
98-98: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
99-99: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
105-105: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
123-123: Abstract raise to an inner function
(TRY301)
128-128: Abstract raise to an inner function
(TRY301)
143-143: Do not catch blind exception: Exception
(BLE001)
144-144: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
145-145: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
152-152: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
171-171: Abstract raise to an inner function
(TRY301)
175-175: Abstract raise to an inner function
(TRY301)
181-181: Abstract raise to an inner function
(TRY301)
196-196: Do not catch blind exception: Exception
(BLE001)
197-197: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
198-198: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
204-204: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
222-222: Abstract raise to an inner function
(TRY301)
226-226: Abstract raise to an inner function
(TRY301)
232-232: Abstract raise to an inner function
(TRY301)
234-234: Consider moving this statement to an else block
(TRY300)
238-238: Do not catch blind exception: Exception
(BLE001)
239-239: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
240-240: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
backend/app/services/chat/memory_service.py
77-77: Consider moving this statement to an else block
(TRY300)
78-78: Do not catch blind exception: Exception
(BLE001)
124-124: Do not catch blind exception: Exception
(BLE001)
168-168: Do not catch blind exception: Exception
(BLE001)
207-207: Do not catch blind exception: Exception
(BLE001)
238-238: Do not catch blind exception: Exception
(BLE001)
275-275: Do not catch blind exception: Exception
(BLE001)
306-306: Do not catch blind exception: Exception
(BLE001)
backend/app/services/chat/chat_service.py
120-120: Do not catch blind exception: Exception
(BLE001)
189-195: Store a reference to the return value of asyncio.create_task
(RUF006)
196-196: Do not catch blind exception: Exception
(BLE001)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: e2e-tests
🔇 Additional comments (15)
backend/app/core/config.py (1)
116-120: Configuration defaults look reasonable.
The mem0 integration settings follow the established pattern. The combination of `MEM0_ENABLED: bool = True` with `MEM0_BASE_URL: str = ""` effectively means the service is disabled until `MEM0_BASE_URL` is configured, since `is_configured` in `MemoryService` checks `bool(self._base_url) and self._enabled`.
backend/app/api/endpoints/adapter/chat.py (1)
555-562: LGTM!
The `user_id` parameter is correctly passed from the authenticated `current_user` context to enable memory retrieval and persistence in the chat stream.
backend/app/api/api.py (1)
5-5: LGTM!
The memory router is correctly imported and registered at `/memories` following the existing API structure conventions.
Also applies to: 37-37
backend/app/services/chat/__init__.py (1)
10-22: LGTM!
The `memory_service` is correctly exported following the established module pattern, making it available through the chat package namespace.
frontend/src/features/tasks/components/ChatArea.tsx (2)
926-935: LGTM!
The memory panel button is well-integrated with appropriate styling and the Brain icon provides clear visual indication of the feature.
1247-1249: LGTM!
The MemoryPanel is correctly rendered at the component root level and properly controlled via the `isOpen` prop and `onClose` callback.
frontend/src/features/tasks/components/MemoryPanel.tsx (2)
1-25: LGTM!
The component follows React best practices with proper TypeScript interfaces, uses functional components with hooks, and correctly imports from the shadcn/ui component library as per coding guidelines.
36-180: LGTM overall!
The component implements memory management features well:
- Health check on open with graceful degradation
- Search with Enter key support
- Inline editing with keyboard shortcuts
- Proper loading and error states
- Clean separation of concerns with callbacks
backend/app/services/chat/memory_service.py (3)
74-80: Broad exception handling is acceptable here.
The static analysis hints flag `except Exception` (BLE001), but for external HTTP service calls, catching broad exceptions is reasonable since various network, timeout, and parsing errors can occur. The current approach of logging warnings and returning safe defaults (`False`, `[]`, `None`) provides graceful degradation.
310-335: LGTM!
The `format_memories_for_context` method uses a clean XML-like tag format (`<user_memories>`) which is effective for LLM context injection and clearly delineates the memory section from other prompt content.
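A rough sketch of this formatting pattern, assuming each mem0 result exposes its text under a `memory` key (the key name and exact wrapper wording are assumptions, not the PR's code):

from typing import Any, Dict, List


def format_memories_for_context(memories: List[Dict[str, Any]]) -> str:
    """Wrap retrieved memories in an XML-like block for LLM context injection."""
    items = [m.get("memory", "") for m in memories if m.get("memory")]
    if not items:
        return ""
    body = "\n".join(f"- {text}" for text in items)
    return f"<user_memories>\n{body}\n</user_memories>"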
23-40: Overall implementation is solid.
The MemoryService provides a clean abstraction over the mem0 API (a skeleton sketch follows this list) with:
- Proper configuration gating via `is_configured`
- Consistent error handling with graceful fallbacks
- Clear method signatures with type hints and docstrings
- User isolation via `user_id` parameter
Also applies to: 82-126, 128-170, 172-209, 211-277, 279-308
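A minimal skeleton illustrating those three properties. Attribute and method names (`_base_url`, `_enabled`, `is_configured`, `search_memories`) follow this review; the constructor, auth header, and response shape are assumptions rather than the PR's actual code.

import logging
from typing import Any, Dict, List

import httpx

logger = logging.getLogger(__name__)


class MemoryService:
    """Thin mem0 client sketch: configuration gating, graceful fallbacks, user isolation."""

    def __init__(self, base_url: str = "", api_key: str = "", enabled: bool = True) -> None:
        self._base_url = base_url
        self._api_key = api_key
        self._enabled = enabled
        self._client = httpx.AsyncClient(timeout=10.0)

    @property
    def is_configured(self) -> bool:
        # Active only when a base URL is set AND the feature flag is on.
        return bool(self._base_url) and self._enabled

    async def search_memories(self, user_id: str, query: str, limit: int = 10) -> List[Dict[str, Any]]:
        """Search memories scoped to one user; return [] on any failure."""
        if not self.is_configured:
            return []
        try:
            resp = await self._client.get(
                f"{self._base_url}/v1/memories/search/",  # endpoint per the sequence diagram
                params={"user_id": user_id, "query": query, "limit": limit},
                headers={"Authorization": f"Bearer {self._api_key}"} if self._api_key else {},
            )
            resp.raise_for_status()
            return resp.json().get("results", [])  # "results" key is an assumption
        except Exception:
            logger.warning("mem0 search failed; returning no memories")
            return []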
backend/app/services/chat/chat_service.py (3)
105-122: Memory retrieval integration looks good.
The memory search uses the user's message as query text and gracefully handles:
- Missing user_id (skips memory retrieval)
- Unconfigured memory service (checked via `is_configured`)
- Failures (logs warning, continues without memory context)
This ensures memory features don't block or break the core chat functionality.
271-302: LGTM!
The `_build_messages` method correctly combines the system prompt with memory context. The logic handles all cases (a rough sketch follows the list):
- Memory context only
- System prompt only
- Both combined with newline separation
- Neither (empty combined_system skips system message)
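A rough sketch of that combination logic; the message dict shapes and the ordering of memory context versus system prompt are assumptions, not the PR's implementation.

from typing import Dict, List


def build_messages(
    history: List[Dict[str, str]],
    user_message: str,
    system_prompt: str = "",
    memory_context: str = "",
) -> List[Dict[str, str]]:
    """Combine memory context and system prompt into a single system message."""
    combined_system = "\n".join(part for part in (memory_context, system_prompt) if part)
    messages: List[Dict[str, str]] = []
    if combined_system:  # empty combined system prompt -> no system message at all
        messages.append({"role": "system", "content": combined_system})
    messages.extend(history)
    messages.append({"role": "user", "content": user_message})
    return messages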
52-70: LGTM!
The `chat_stream` method signature is cleanly extended with the optional `user_id` parameter, maintaining backward compatibility. The docstring is appropriately updated.
frontend/src/apis/memory.ts (1)
190-207: LGTM! Health check has appropriate error handling.
The health check function correctly returns a safe default when the service is unavailable, which is appropriate for a health check endpoint.
except Exception as e:
    logger.error(f"Error fetching memories for user {current_user.id}: {e}")
    raise HTTPException(status_code=500, detail="Failed to fetch memories")
🛠️ Refactor suggestion | 🟠 Major
Improve exception handling with traceback logging and exception chaining.
The current exception handling loses the traceback and doesn't chain the exception properly.
Apply this diff:
except Exception as e:
- logger.error(f"Error fetching memories for user {current_user.id}: {e}")
- raise HTTPException(status_code=500, detail="Failed to fetch memories")
+ logger.exception(f"Error fetching memories for user {current_user.id}: {e}")
+ raise HTTPException(status_code=500, detail="Failed to fetch memories") from e

Using `logging.exception` automatically includes the traceback, and `raise ... from e` preserves the exception chain for better debugging.
🧰 Tools
🪛 Ruff (0.14.7)
97-97: Do not catch blind exception: Exception
(BLE001)
98-98: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
99-99: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
🤖 Prompt for AI Agents
In backend/app/api/endpoints/memory.py around lines 97 to 99, the except block
currently logs the error with logger.error and raises an HTTPException without
preserving the original traceback; replace the logger.error call with
logger.exception(...) to include the traceback automatically and change the
raise to "raise HTTPException(status_code=500, detail='Failed to fetch
memories') from e" so the original exception is chained and debuggable.
except Exception as e:
    logger.error(f"Error fetching memory {memory_id}: {e}")
    raise HTTPException(status_code=500, detail="Failed to fetch memory")
🛠️ Refactor suggestion | 🟠 Major
Improve exception handling with traceback logging and exception chaining.
Same issue as in get_memories: use logging.exception and chain exceptions.
Apply this diff:
except Exception as e:
- logger.error(f"Error fetching memory {memory_id}: {e}")
- raise HTTPException(status_code=500, detail="Failed to fetch memory")
+ logger.exception(f"Error fetching memory {memory_id}: {e}")
+ raise HTTPException(status_code=500, detail="Failed to fetch memory") from e

🧰 Tools
🪛 Ruff (0.14.7)
143-143: Do not catch blind exception: Exception
(BLE001)
144-144: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
145-145: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
🤖 Prompt for AI Agents
In backend/app/api/endpoints/memory.py around lines 143 to 145, the except block
currently logs the error with logger.error and raises a new HTTPException
without chaining; change it to use logger.exception(...) to include the full
traceback in the logs and raise HTTPException(...) from e to preserve exception
chaining, so the original exception is attached to the HTTPException for better
debugging.
except Exception as e:
    logger.error(f"Error updating memory {memory_id}: {e}")
    raise HTTPException(status_code=500, detail="Failed to update memory")
🛠️ Refactor suggestion | 🟠 Major
Improve exception handling with traceback logging and exception chaining.
Same issue: use logging.exception and chain exceptions.
Apply this diff:
except Exception as e:
- logger.error(f"Error updating memory {memory_id}: {e}")
- raise HTTPException(status_code=500, detail="Failed to update memory")
+ logger.exception(f"Error updating memory {memory_id}: {e}")
+ raise HTTPException(status_code=500, detail="Failed to update memory") from e

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-except Exception as e:
-    logger.error(f"Error updating memory {memory_id}: {e}")
-    raise HTTPException(status_code=500, detail="Failed to update memory")
+except Exception as e:
+    logger.exception(f"Error updating memory {memory_id}: {e}")
+    raise HTTPException(status_code=500, detail="Failed to update memory") from e
🧰 Tools
🪛 Ruff (0.14.7)
196-196: Do not catch blind exception: Exception
(BLE001)
197-197: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
198-198: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
🤖 Prompt for AI Agents
In backend/app/api/endpoints/memory.py around lines 196 to 198, the except block
currently logs the error with logger.error and raises an HTTPException without
chaining; replace it to log the full traceback using logger.exception(...) so
the stack trace is captured, and re-raise the HTTPException with "from e" to
preserve exception chaining (raise HTTPException(status_code=500, detail="Failed
to update memory") from e) so the original exception is linked.
@router.delete("/{memory_id}")
async def delete_memory(
    memory_id: str,
    current_user: User = Depends(security.get_current_user),
):
🛠️ Refactor suggestion | 🟠 Major
Add missing return type hint.
The function is missing a return type annotation, which violates the coding guidelines requirement that all functions must include type hints.
Apply this diff:
@router.delete("/{memory_id}")
async def delete_memory(
memory_id: str,
current_user: User = Depends(security.get_current_user),
-):
+) -> dict[str, bool | str]:

As per coding guidelines, Python functions must include type hints for all functions.
🧰 Tools
🪛 Ruff (0.14.7)
204-204: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
🤖 Prompt for AI Agents
backend/app/api/endpoints/memory.py lines 201-205: the delete_memory endpoint
signature is missing a return type annotation; update the function signature to
include an explicit return type (e.g., -> None if it returns nothing, or ->
dict/Response/JSONResponse as appropriate for the actual return value) so it
complies with the project's type-hinting guidelines.
except Exception as e:
    logger.error(f"Error deleting memory {memory_id}: {e}")
    raise HTTPException(status_code=500, detail="Failed to delete memory")
🛠️ Refactor suggestion | 🟠 Major
Improve exception handling with traceback logging and exception chaining.
Same issue: use logging.exception and chain exceptions.
Apply this diff:
except Exception as e:
- logger.error(f"Error deleting memory {memory_id}: {e}")
- raise HTTPException(status_code=500, detail="Failed to delete memory")
+ logger.exception(f"Error deleting memory {memory_id}: {e}")
+ raise HTTPException(status_code=500, detail="Failed to delete memory") from e📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-except Exception as e:
-    logger.error(f"Error deleting memory {memory_id}: {e}")
-    raise HTTPException(status_code=500, detail="Failed to delete memory")
+except Exception as e:
+    logger.exception(f"Error deleting memory {memory_id}: {e}")
+    raise HTTPException(status_code=500, detail="Failed to delete memory") from e
🧰 Tools
🪛 Ruff (0.14.7)
238-238: Do not catch blind exception: Exception
(BLE001)
239-239: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
240-240: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
🤖 Prompt for AI Agents
backend/app/api/endpoints/memory.py around lines 238 to 240, the except block
currently logs the error with logger.error and re-raises a generic
HTTPException; replace logger.error with logger.exception to record the full
traceback and then raise the HTTPException using exception chaining (raise
HTTPException(status_code=500, detail="Failed to delete memory") from e) so the
original exception is preserved.
from app.services.chat.memory_service import memory_service

logger = logging.getLogger(__name__)
logger = logging.getLogger(__name__)
Remove duplicate logger definition.
Line 28 duplicates the logger definition from line 27.
from app.services.chat.memory_service import memory_service
logger = logging.getLogger(__name__)
-logger = logging.getLogger(__name__)

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
 from app.services.chat.memory_service import memory_service
 logger = logging.getLogger(__name__)
-logger = logging.getLogger(__name__)
🤖 Prompt for AI Agents
In backend/app/services/chat/chat_service.py around lines 25 to 28, there is a
duplicated logger definition (logger = logging.getLogger(__name__) appears
twice); remove the redundant second definition so there is only a single logger
initialization, ensuring imports and other lines remain unchanged.
asyncio.create_task(
    memory_service.add_memory(
        user_id=user_id,
        messages=conv_messages,
        metadata={"task_id": task_id, "subtask_id": subtask_id},
    )
)
except Exception as e:
    logger.warning(f"Failed to save memory for user {user_id}: {e}")
Store the task reference to prevent potential garbage collection.
The static analysis correctly flags that asyncio.create_task without storing the reference can lead to the task being garbage collected before completion (RUF006). While this is fire-and-forget by design, you should keep a reference to ensure the task completes.
# Fire and forget - don't block the response
- asyncio.create_task(
+ task = asyncio.create_task(
memory_service.add_memory(
user_id=user_id,
messages=conv_messages,
metadata={"task_id": task_id, "subtask_id": subtask_id},
)
)
+ # Store reference to prevent GC (fire-and-forget still)
+ _ = task

Alternatively, consider using a background task set pattern:
# At module level
_background_tasks: set = set()
# In the code
task = asyncio.create_task(memory_service.add_memory(...))
_background_tasks.add(task)
task.add_done_callback(_background_tasks.discard)

🧰 Tools
🪛 Ruff (0.14.7)
189-195: Store a reference to the return value of asyncio.create_task
(RUF006)
196-196: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
In backend/app/services/chat/chat_service.py around lines 189 to 197, the
fire-and-forget asyncio.create_task call is not storing the returned Task which
risks it being garbage-collected before completion; modify the code to capture
the Task in a long-lived collection (e.g., a module-level or service-level set),
add the created task to that set, and attach a done callback to remove it when
complete (or alternatively store the Task on the memory_service instance) so the
background task reference is retained until it finishes.
Summary
Changes
Backend
- backend/app/services/chat/memory_service.py - Service for mem0 API integration
- backend/app/api/endpoints/memory.py - REST API endpoints for memory management
- backend/app/core/config.py - Add MEM0_BASE_URL, MEM0_API_KEY, MEM0_ENABLED config
- backend/app/services/chat/chat_service.py - Retrieve memories before chat, save after completion
- backend/app/api/api.py - Register memory router

Frontend
- frontend/src/apis/memory.ts - API client for memory management
- frontend/src/features/tasks/components/MemoryPanel.tsx - Sidebar panel for managing memories
- frontend/src/features/tasks/components/ChatArea.tsx - Add memory button and panel integration

Architecture
The mem0 long-term memory complements the existing Redis short-term session memory: Redis continues to hold per-session chat history, while mem0 stores user-level memories that persist across sessions.
Configuration
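A sketch of what the new settings block might look like, assuming the project's Settings class is a pydantic `BaseSettings` (defaults for `MEM0_ENABLED` and `MEM0_BASE_URL` follow the review comment on backend/app/core/config.py; the `MEM0_API_KEY` default and the import are assumptions):

from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    # mem0 long-term memory integration (abridged: other project settings omitted)
    MEM0_BASE_URL: str = ""    # empty by default, which disables memory features
    MEM0_API_KEY: str = ""     # assumed default
    MEM0_ENABLED: bool = True  # feature flag; only effective once MEM0_BASE_URL is set


settings = Settings()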
Test plan
Summary by CodeRabbit
Release Notes