feat: issue 2097 momentum features#2108
Conversation
Stijnus
left a comment
There was a problem hiding this comment.
Code Review — PR #2108
Thanks for the contribution! The individual ideas here are solid, but there are several issues that should be addressed before merging.
1. Split into separate PRs
This PR bundles 5 unrelated features into a single commit (project memory, plan mode, document uploads, file lock/unlock, auto prompt optimization). Each is independently useful and should be reviewed/merged separately. This makes it easier to review, revert if needed, and bisect issues.
2. Document attachments are incomplete (text/PDF content never reaches the LLM)
In uploadUtils.ts, all files (including .txt, .pdf, .json) are read via readFileAsDataUrl(), but for non-image files the data URL is discarded — imageData is set to '':
uploads.push({ file, imageData: isImageFile(file) ? dataUrl : '' });The actual text content of these documents is never extracted or sent to the LLM. The UI shows a file preview card, but the content is silently dropped. This makes the document upload feature misleading — users will think their PDF/text files are being sent to the model when they aren't.
3. Race condition in project settings persistence
In Chat.client.tsx, two useEffect hooks fire when currentChatId changes:
// Effect 1: Load settings when chatId changes
useEffect(() => {
const settings = getProjectSettings(currentChatId);
setProjectMemory(settings.memory);
setPlanMode(settings.planMode);
}, [currentChatId]);
// Effect 2: Save settings when memory/planMode changes
useEffect(() => {
setProjectSettings(currentChatId, { memory: projectMemory, planMode });
}, [currentChatId, projectMemory, planMode]);When currentChatId changes, both effects run. Effect 2 may execute with stale state (previous chat's values) before Effect 1 loads the new chat's settings — overwriting the new chat's saved settings with wrong data. This needs a guard (e.g. a skipNextSave ref or combining the effects).
4. Prompt injection risk in Project Memory
In stream-text.ts, user-provided memory is directly interpolated into the system prompt without any sanitization:
systemPrompt = `${systemPrompt}
PROJECT MEMORY (USER-PROVIDED):
${trimmedMemory}
---
`;While the user is technically "attacking their own session," this is still poor practice. The memory content should be clearly fenced or placed in user-role messages instead to prevent accidental system prompt override.
5. No localStorage cleanup for project settings
projectSettings.ts stores entries as bolt_project_settings:{chatId}. These are never cleaned up when a chat is deleted, leading to unbounded localStorage accumulation over time.
6. Remove docs/issue-2097-tracker.txt
This tracking file doesn't belong in the repository — it should be tracked in the GitHub issue itself.
What's done well
uploadUtils.tsextraction is well-structured with proper MIME type and extension validationProjectMemoryDialogcomponent is clean and follows existing UI patterns- Auto prompt optimization for small models is a nice idea with a reasonable threshold (16K tokens)
- File lock/unlock reuses existing
workbenchStoreinfrastructure properly PanelHeaderButtontitle prop is a clean, non-breaking addition
Suggested path forward
- Split into separate PRs (one per feature)
- Fix document upload to actually extract and send text content to the LLM
- Fix the project settings race condition
- Remove the tracker file
- Add localStorage cleanup when chats are deleted
Happy to re-review once these are addressed!
Stijnus
left a comment
There was a problem hiding this comment.
Code Review — PR #2108
Thanks for the contribution! The individual ideas here are solid, but there are several issues that should be addressed before merging.
1. Split into separate PRs
This PR bundles 5 unrelated features into a single commit (project memory, plan mode, document uploads, file lock/unlock, auto prompt optimization). Each is independently useful and should be reviewed/merged separately. This makes it easier to review, revert if needed, and bisect issues.
2. Document attachments are incomplete (text/PDF content never reaches the LLM)
In uploadUtils.ts, all files (including .txt, .pdf, .json) are read via readFileAsDataUrl(), but for non-image files the data URL is discarded — imageData is set to '':
uploads.push({ file, imageData: isImageFile(file) ? dataUrl : '' });The actual text content of these documents is never extracted or sent to the LLM. The UI shows a file preview card, but the content is silently dropped. This makes the document upload feature misleading — users will think their PDF/text files are being sent to the model when they aren't.
3. Race condition in project settings persistence
In Chat.client.tsx, two useEffect hooks fire when currentChatId changes:
// Effect 1: Load settings when chatId changes
useEffect(() => {
const settings = getProjectSettings(currentChatId);
setProjectMemory(settings.memory);
setPlanMode(settings.planMode);
}, [currentChatId]);
// Effect 2: Save settings when memory/planMode changes
useEffect(() => {
setProjectSettings(currentChatId, { memory: projectMemory, planMode });
}, [currentChatId, projectMemory, planMode]);When currentChatId changes, both effects run. Effect 2 may execute with stale state (previous chat values) before Effect 1 loads the new chat settings — potentially overwriting the new chat's saved settings with wrong data. This needs a guard (e.g. a skipNextSave ref or combining the effects).
4. Prompt injection risk in Project Memory
In stream-text.ts, user-provided memory is directly interpolated into the system prompt without any sanitization:
systemPrompt = `${systemPrompt}
PROJECT MEMORY (USER-PROVIDED):
${trimmedMemory}
---
`;While the user is technically "attacking their own session," this is still poor practice. The memory content should be clearly fenced or placed in user-role messages instead to prevent accidental system prompt override.
5. No localStorage cleanup for project settings
projectSettings.ts stores entries as bolt_project_settings:{chatId}. These are never cleaned up when a chat is deleted, leading to unbounded localStorage accumulation over time.
6. Remove docs/issue-2097-tracker.txt
This tracking file doesn't belong in the repository — it should be tracked in the GitHub issue itself.
What's done well
uploadUtils.tsextraction is well-structured with proper MIME type and extension validationProjectMemoryDialogcomponent is clean and follows existing UI patterns- Auto prompt optimization for small models is a nice idea with a reasonable threshold (16K tokens)
- File lock/unlock reuses existing
workbenchStoreinfrastructure properly PanelHeaderButtontitle prop is a clean, non-breaking addition
Suggested path forward
- Split into separate PRs (one per feature)
- Fix document upload to actually extract and send text content to the LLM
- Fix the project settings race condition
- Remove the tracker file
- Add localStorage cleanup when chats are deleted
Happy to re-review once these are addressed!
|
Thanks for the review — I’ve split out the document upload fix into its own PR: #2114 It extracts text/PDF content and ensures it’s sent to the LLM (ATTACHMENTS block), keeps attachment text aligned with uploads/screenshots, and includes PDF parsing via pdfjs-dist. Tests: typecheck, lint, and a Playwright e2e smoke verifying the chat payload contains the attachment text. I’ll follow up shortly with the separate PRs for project memory and plan mode (with the race-condition + localStorage cleanup fixes). |
|
Follow‑up: the remaining split PRs are up as well.
Each includes the race‑condition guard + localStorage cleanup, and both have typecheck/lint + Playwright e2e smoke tests noted in the PR bodies. |
|
More splits are up:
Both include typecheck/lint + Playwright e2e smoke tests documented in the PR bodies. |
Summary
Testing
Refs #2097