
feat: enable message queuing in ask mode #366

Merged

rossmanko merged 3 commits into main from feat/queue-message-ask-mode on Apr 13, 2026

Conversation

@rossmanko (Contributor) commented Apr 13, 2026

Summary

  • Remove agent-only restrictions on message queuing so users can queue messages and use stop-and-send while streaming in ask mode
  • Remove the effect that cleared the queue when switching from agent to ask mode
  • Allow the QueuedMessagesPanel to render in all chat modes
  • Simplify daily usage warning to show "remaining today" instead of reset time details
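The routing described in the first two bullets can be sketched as a small decision function. This is an illustrative model only: `queueBehavior` and the `"queue"` / `"stop-and-send"` values come from the PR text, while `routeSubmission` and the `Action` names are hypothetical, not the actual hook API.

```typescript
// Hypothetical sketch of how a submission is routed while a response is
// streaming, per the summary above. No chat-mode check appears here
// because the agent-only restriction was removed.
type QueueBehavior = "queue" | "stop-and-send";
type Status = "ready" | "streaming";
type Action = "enqueue" | "stop-then-send" | "send";

function routeSubmission(status: Status, behavior: QueueBehavior): Action {
  if (status !== "streaming") return "send"; // nothing in flight: send normally
  // While streaming, the user's queue-behavior setting decides,
  // in every chat mode (ask included).
  return behavior === "queue" ? "enqueue" : "stop-then-send";
}

console.log(routeSubmission("streaming", "queue")); // "enqueue"
console.log(routeSubmission("ready", "queue")); // "send"
```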

Test plan

  • Start a chat in ask mode, send a message, and while streaming type another message — verify it gets queued
  • Verify the queued messages panel appears in ask mode
  • Switch queue behavior to "stop-and-send" and verify it stops the current stream and sends immediately in ask mode
  • Verify agent mode queuing still works as before
  • Verify queue is cleared when navigating to a different chat
  • Verify daily usage warning shows "X remaining today" without reset time info

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Message queuing now works consistently across all chat modes.
    • Queued messages panel displays whenever messages are pending.
  • Bug Fixes

    • Message submission and queue processing during streaming behave consistently regardless of the active chat mode.
    • Queued messages are no longer cleared when switching chat modes.
  • Style

    • Rate limit warning now says “remaining today.”
  • Chores

    • Transcript save failures are logged less prominently.

Remove agent-only restrictions so users can queue messages and use
stop-and-send while a response is streaming in ask mode too.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

vercel bot commented Apr 13, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Actions | Updated (UTC) |
| --- | --- | --- | --- |
| hackerai | Ready | Preview, Comment | Apr 13, 2026 7:39pm |


@coderabbitai

coderabbitai bot commented Apr 13, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: fa11f0d6-e240-4387-8d61-98cdc2da527d

📥 Commits

Reviewing files that changed from the base of the PR and between 551b7f3 and c6bbc0c.

📒 Files selected for processing (1)
  • lib/chat/summarization/index.ts
✅ Files skipped from review due to trivial changes (1)
  • lib/chat/summarization/index.ts

📝 Walkthrough

Walkthrough

Decouples agent-only gating from the queue and streaming-submission logic, so that submissions and queued-message processing and display work across all chat modes; minor wording and logging tweaks are also included.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Submission / Queue Logic**<br>`app/hooks/useChatHandlers.ts`, `app/components/chat.tsx`, `app/components/ChatInput/ChatInput.tsx` | Removed `chatMode === "agent"` gating from streaming submit and queue-processing paths; submissions allowed when `status === "streaming"` regardless of `chatMode`; `messageQueueRef` clearing on switching to "ask" removed. |
| **Queued Messages UI**<br>`app/components/ChatInput/ChatInput.tsx` | `QueuedMessagesPanel` now renders whenever `messageQueue.length > 0` (removed prior `chatMode === "agent"` restriction); `isStreaming` prop remains `status === "streaming"`. |
| **Rate Limit Copy**<br>`app/components/RateLimitWarning.tsx` | Updated sliding-window message to say "remaining today" (no behavior change). |
| **Transcript Save Logging**<br>`lib/chat/summarization/index.ts` | Changed transcript-sandbox save failure logging from `console.error` to `console.warn` (control flow unchanged). |
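The two gating changes above can be modeled as plain predicates. The real components are React/TSX, so this is only a hedged sketch of the conditions, not the actual `ChatInput` implementation; the function names are invented for illustration.

```typescript
// Model of the two conditions after this PR. ChatMode is still accepted
// as a parameter to show that it no longer influences either decision.
type ChatMode = "agent" | "ask";
type Status = "ready" | "streaming";

// QueuedMessagesPanel: previously required agent mode; now renders
// whenever the queue is non-empty.
const shouldRenderQueuePanel = (queueLength: number): boolean =>
  queueLength > 0;

// Submission during streaming: previously agent-only; now allowed in any
// mode while status === "streaming".
const canSubmitWhileStreaming = (_mode: ChatMode, status: Status): boolean =>
  status === "streaming";

console.log(shouldRenderQueuePanel(2)); // true
console.log(canSubmitWhileStreaming("ask", "streaming")); // true
```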

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant ChatInput
    participant ChatController
    participant QueueManager
    participant StreamProcessor

    User->>ChatInput: submit message
    ChatInput->>ChatController: handleSubmit(status, chatMode)
    ChatController->>QueueManager: enqueue or route (queue / stop-and-send)
    alt status == "streaming"
        ChatController->>StreamProcessor: allow submit during streaming
        StreamProcessor->>QueueManager: process queued items
    else status == "ready"
        QueueManager->>StreamProcessor: normal send flow
    end
    QueueManager->>ChatInput: update messageQueue
    ChatInput->>User: render QueuedMessagesPanel (when queue length > 0)
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


Poem

🐰 I hopped through queues in every mode,
Streams kept flowing down the road,
Panels now show what I had stacked,
No agent gate — the path's untracked,
A happy rabbit skips and glows.

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title clearly and concisely summarizes the main change: enabling message queuing in ask mode, which is the core objective of this pull request. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage; check skipped. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
app/hooks/useChatHandlers.ts (2)

200-228: ⚠️ Potential issue | 🟠 Major

Validate first, then await stopActiveStream() for stop-and-send.

This branch now runs in ask mode too, but it stops the active response before the validation block below. An invalid replacement prompt can cancel the in-flight answer and then be rejected. It also bypasses stopActiveStream(), so cancel/save are fire-and-forget and local Codex chats fall into the temp-stream cancel path instead of the local-provider no-op path.

Suggested change
+      let shouldStopAndSend = false;
+
       if (status === "streaming") {
         const validFiles = uploadedFiles.filter(
           (file) => file.uploaded && file.url && file.fileId,
         );
 
         if (queueBehavior === "queue") {
           // Queue the message - will auto-send after current response completes
           queueMessage(
             input,
             validFiles.map((f) => ({
               file: f.file,
               fileId: f.fileId! as Id<"files">,
               url: f.url!,
             })),
           );
           clearInput();
           clearUploadedFiles();
           return;
         } else if (queueBehavior === "stop-and-send") {
-          // Immediately stop current stream and send right away
-          stop();
-
-          // Cancel the stream in database and save current message state
-          if (
-            !temporaryChatsEnabledRef.current &&
-            !isCodexLocal(selectedModel)
-          ) {
-            cancelStreamMutation({ chatId }).catch((error) => {
-              console.error("Failed to cancel stream:", error);
-            });
-
-            const lastMessage = messages[messages.length - 1];
-            if (lastMessage && lastMessage.role === "assistant") {
-              saveAssistantMessage({
-                id: lastMessage.id,
-                chatId,
-                role: lastMessage.role,
-                parts: lastMessage.parts,
-              }).catch((error) => {
-                console.error("Failed to save message on stop:", error);
-              });
-            }
-          } else {
-            // Temporary chats: signal cancel via temp stream coordination
-            cancelTempStreamMutation({ chatId }).catch(() => {});
-          }
-          // Continue to send the new message immediately below (don't return)
+          shouldStopAndSend = true;
         }
       }
       // Check token limit before sending based on user plan
       const tokenCount = countInputTokens(input, uploadedFiles);
       const maxTokens = getMaxTokensForSubscription(subscription, {
         mode: chatMode,
       });
+
+      if (shouldStopAndSend) {
+        await stopActiveStream();
+      }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/hooks/useChatHandlers.ts` around lines 200 - 228, The "stop-and-send"
branch currently calls stop() and performs fire-and-forget cancel/save before
validating the new prompt; change it to validate first, then await
stopActiveStream() to gracefully stop the in-flight response; replace the direct
stop() call with await stopActiveStream() and ensure
cancelStreamMutation/saveAssistantMessage/cancelTempStreamMutation are awaited
(or handled inside stopActiveStream) so local Codex chats use the local-provider
no-op path via isCodexLocal(selectedModel) and temporaryChatsEnabledRef.current
is respected rather than falling into the temp-stream cancel path.
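The ordering fix this finding asks for (validate first, then stop, then send) can be sketched with plain callbacks. This is a hedged model, not the real hook: `submitStopAndSend`, `stop`, and `send` are illustrative names, and the only behavior taken from the review is that an invalid replacement prompt must not cancel an in-flight answer.

```typescript
// Sketch of the suggested ordering for the stop-and-send branch.
type Status = "ready" | "streaming";

function submitStopAndSend(
  status: Status,
  input: string,
  stop: () => void,
  send: (text: string) => void,
): boolean {
  // 1. Validate first: an empty prompt must not touch the active stream.
  if (input.trim().length === 0) return false;
  // 2. Only then stop the in-flight response.
  if (status === "streaming") stop();
  // 3. Finally send the replacement message.
  send(input);
  return true;
}

// An invalid prompt leaves the stream untouched:
let stopped = false;
submitStopAndSend("streaming", "   ", () => { stopped = true; }, () => {});
console.log(stopped); // false
```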

182-200: ⚠️ Potential issue | 🟠 Major

Queued ask-mode messages now skip token enforcement.

With Line 182 no longer gated to agent mode, the queueBehavior === "queue" path returns before the ask-mode file limit and maxTokens checks on Lines 237-260. Those queued items are later submitted by handleSendNow without revalidation, and the auto-dequeue flow in app/components/chat.tsx also runs regardless of mode, so oversized ask-mode requests can be enqueued and then sent anyway.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/hooks/useChatHandlers.ts` around lines 182 - 200, The current early
return in the status === "streaming" block lets the queueBehavior === "queue"
path enqueue messages without running ask-mode validations (file count and
maxTokens) before queueMessage is called; update the logic in useChatHandlers so
that either (A) the queueBehavior === "queue" branch only executes for agent
mode, or (B) run the ask-mode validation checks (the same file limit and
maxTokens logic used later) before calling queueMessage and returning; adjust
the queueMessage call site and/or handleSendNow to ensure queued ask-mode items
are validated (use the existing symbols queueMessage, handleSendNow,
queueBehavior, and uploadedFiles) so oversized ask-mode requests cannot be
enqueued.
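Option (B) from the finding above, validating before enqueueing, can be sketched as follows. The limit and the token estimate here are placeholders, not the real `countInputTokens` / `getMaxTokensForSubscription` logic, and `tryEnqueue` is an invented name.

```typescript
// Sketch: run the token check before queueMessage so oversized ask-mode
// messages are rejected up front instead of being sent later unvalidated.
interface QueuedItem {
  text: string;
}

const MAX_INPUT_TOKENS = 8000; // placeholder per-plan limit

// Rough stand-in for countInputTokens: ~4 characters per token.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

function tryEnqueue(queue: QueuedItem[], text: string): boolean {
  if (estimateTokens(text) > MAX_INPUT_TOKENS) {
    return false; // rejected before enqueue, mirroring the later send-path checks
  }
  queue.push({ text });
  return true;
}

const queue: QueuedItem[] = [];
console.log(tryEnqueue(queue, "short prompt")); // true
console.log(tryEnqueue(queue, "x".repeat(40000))); // false
```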

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: bae2346f-cd6e-40a0-b5d2-7d18d878fc8d

📥 Commits

Reviewing files that changed from the base of the PR and between 5c659bf and e99152d.

📒 Files selected for processing (3)
  • app/components/ChatInput/ChatInput.tsx
  • app/components/chat.tsx
  • app/hooks/useChatHandlers.ts
💤 Files with no reviewable changes (1)
  • app/components/chat.tsx

…f reset time

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This is an unrecoverable condition (stale WebSocket on long local
streams) that doesn't affect the user — logging as error is noise.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@rossmanko rossmanko merged commit 61ec845 into main Apr 13, 2026
4 checks passed
