Add AI chat history, conversation search, and per-user personalization #16
Co-authored-by: QuickMash <106212829+QuickMash@users.noreply.github.com>
Pull request overview
Adds conversation memory and user-scoped conversation search to Komli so AI responses can incorporate prior turns and users can find past chats.
Changes:
- Passes DB conversation history + user name into `processing.ai.send()` to build a context-aware Ollama prompt.
- Adds a `/api/search` endpoint and SQLite query helper to search titles/message content for the authenticated user.
- Adds a sidebar search UI with debounced requests and clickable results.
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| `processing/ai.py` | Expands `send()` to include prior turns and personalize the system prompt. |
| `app.py` | Fetches conversation history before AI calls; adds the `/api/search` API endpoint. |
| `login/server.py` | Implements `search_conversations()` with LIKE escaping and result shaping. |
| `templates/index.html` | Adds sidebar search input, styles, and debounced client-side search rendering. |
```python
messages = [system_message]

# Include prior conversation turns so the AI has memory of the chat
if history:
    for msg in history:
        role = 'user' if msg.get('message_type') == 'user' else 'assistant'
        messages.append({'role': role, 'content': msg['content']})

messages.append({'role': 'user', 'content': user_input})
```
send() now appends the entire conversation history to the Ollama prompt. Since get_conversation_messages() returns all messages in a conversation, prompts can grow without bound and eventually exceed the model context window or cause very slow requests. Consider truncating/summarizing history (e.g., last N turns or last N characters/tokens) before calling Ollama, ideally using a configurable limit.
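One possible shape for the suggested truncation, as a minimal sketch: the `HISTORY_LIMIT` constant and `trim_history` helper are illustrative names, not part of the code under review, and the real limit should come from configuration.

```python
# Hypothetical cap on how many prior messages are forwarded to Ollama.
HISTORY_LIMIT = 20

def trim_history(history: list, limit: int = HISTORY_LIMIT) -> list:
    """Keep only the most recent `limit` messages so the prompt stays bounded."""
    if not history:
        return []
    return history[-limit:]
```

A token- or character-based budget would be more precise than a message count, but even this simple slice prevents unbounded prompt growth.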
```python
if user_email:
    try:
        if conversation_id:
            conversation_record = server.get_conversation(int(conversation_id), user_email)
            if conversation_record:
                target_conversation_id = int(conversation_id)
            else:
                target_conversation_id = server.get_or_create_active_conversation(user_email)
        else:
            target_conversation_id = server.get_or_create_active_conversation(user_email)

        # Fetch existing messages so the AI sees the full conversation history
        if target_conversation_id:
            history = server.get_conversation_messages(target_conversation_id)
    except Exception:
        pass
```
The history-fetch/conversation-resolution block swallows all exceptions and then proceeds, which can silently disable both history and DB logging for logged-in users (e.g., non-integer conversation_id, DB errors). Since this code is critical to the new “AI has memory” behavior, consider handling expected errors (like ValueError) with a fallback to the active conversation and logging unexpected exceptions so failures are visible.
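A sketch of the suggested error handling, under the assumption that `server` exposes the helpers shown in the diff; the `resolve_conversation` function name and the `logger` parameter are illustrative, not part of the code under review.

```python
def resolve_conversation(server, user_email, conversation_id, logger):
    """Resolve the target conversation id, falling back to the active one on bad input."""
    try:
        cid = int(conversation_id) if conversation_id else None
    except (TypeError, ValueError):
        cid = None  # non-integer id: fall back instead of silently disabling history

    if cid is not None and server.get_conversation(cid, user_email):
        return cid
    try:
        return server.get_or_create_active_conversation(user_email)
    except Exception:
        # Unexpected DB failure: make it visible rather than swallowing it
        logger.exception("Failed to resolve active conversation for %s", user_email)
        return None
```

Expected errors (a bad `conversation_id`) degrade gracefully to the active conversation, while genuine DB failures are logged instead of silently disabling both history and logging.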
```javascript
resultsSection.style.display = 'block';
resultsList.innerHTML = '<div class="text-center py-2"><i class="fas fa-spinner fa-spin"></i></div>';

fetch('/api/search?q=' + encodeURIComponent(query))
```
The search UI calls fetch('/api/search?...') using an absolute root path. The backend routes are prefixed by web_dir (configurable) and the app is explicitly designed to run under a non-root base path; this call (and any new calls) will break when web_dir isn’t /. Consider using a base path value rendered from the template (or switching to a relative URL) so the request is correctly scoped under web_dir.
Suggested change:
```diff
-fetch('/api/search?q=' + encodeURIComponent(query))
+fetch('api/search?q=' + encodeURIComponent(query))
```
```diff
 def send(user_input: str, history: list = None, user_name: str = None) -> str:
     name_part = f" You are talking to {user_name}." if user_name else ""
     system_message = {
         'role': 'system',
-        'content': f'You are {name}. {sys_prompt} Version: {version}. You are only allowed to speak with markdown formatting. Begin normal messages with ` and end them with `'
+        'content': f'You are {name}. {sys_prompt}{name_part} Version: {version}. You are only allowed to speak with markdown formatting. Begin normal messages with ` and end them with `'
     }
```
user_name comes from user-controlled profile data and is interpolated directly into the system prompt. This enables prompt-injection at the highest priority (system) level (e.g., a display name containing instructions/newlines/backticks), which can undermine safety constraints and may increase the risk of the model emitting unsafe HTML that is later marked safe via Markup(). Consider sanitizing/normalizing the name (length limit, strip newlines/control chars, quote/escape) and/or moving the personalization into a lower-priority message instead of the system prompt.
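The sanitization suggested above could look like the following sketch; the helper name and the 40-character cap are assumptions, not part of the code under review.

```python
import re

MAX_NAME_LEN = 40  # hypothetical length limit for display names

def sanitize_display_name(user_name: str) -> str:
    """Strip control characters and backticks, collapse whitespace, cap length."""
    if not user_name:
        return ""
    # Replace newlines and other control characters with spaces, drop backticks
    cleaned = re.sub(r"[\x00-\x1f\x7f]", " ", user_name).replace("`", "")
    cleaned = " ".join(cleaned.split())  # collapse whitespace runs
    return cleaned[:MAX_NAME_LEN]
```

Even with sanitization, moving the personalization out of the system prompt (for example into a separate user-role preamble) further limits the blast radius of a malicious display name.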
The AI had no memory of prior messages — every request was sent to Ollama as a single-turn exchange with no conversation context. Additionally, there was no way to search past conversations.
Changes
- `processing/ai.py` — Context-aware `send()`: accepts `history: list` (prior DB messages) and `user_name: str`
- `app.py` — Pre-fetch history before AI call; search endpoint: `respond()` now resolves the active conversation and fetches its messages before calling `ai.send()`, passing them as history; adds a `GET /api/search?q=<query>` endpoint scoped to the authenticated user
- `login/server.py` — `search_conversations()`: escapes `%`, `_`, and `\` before pattern construction to prevent wildcard manipulation
- `templates/index.html` — Sidebar search UI: debounced requests to `/api/search`, results rendered as clickable conversation items
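The wildcard escaping described for `search_conversations()` can be sketched like this; the `escape_like` function name is illustrative and the actual helper in `login/server.py` may differ.

```python
def escape_like(term: str) -> str:
    """Escape SQL LIKE wildcards so user input is matched literally."""
    return (term.replace("\\", "\\\\")
                .replace("%", "\\%")
                .replace("_", "\\_"))

# Hypothetical usage with a parameterized query and an explicit ESCAPE clause:
# cursor.execute(
#     "SELECT id, title FROM conversations "
#     "WHERE user_email = ? AND title LIKE ? ESCAPE '\\'",
#     (user_email, f"%{escape_like(query)}%"),
# )
```

Escaping the backslash first is important: doing it after the other replacements would double-escape the backslashes those replacements introduce.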