
Chat Interface: LLM Call Test & Branding/UI Revamp #127

Merged
Ayush8923 merged 21 commits into main from feat/chat-llm-call
Apr 30, 2026

Conversation


@Ayush8923 Ayush8923 commented Apr 28, 2026

Issue: #126

Summary:

  • Adds a chat page at /chat (root / redirects to it)
  • Sends messages to the LLM, waits for the response, shows it in the UI
  • Adds chat as a sidebar item right alongside Documents, Configurations, etc.
  • Implements a polling mechanism for LLM responses
  • Branding component refresh
  • Sidebar reordered to match user workflow: Documents → Knowledge Base → Configurations → Guardrails → Chat → Evaluations
  • Sidebar UI revamped to match the logo theme
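The polling mechanism listed above can be sketched roughly as follows; the pollJob name, the status shape, and the defaults are illustrative assumptions, not the actual client code in this PR.

```typescript
// Hypothetical sketch of the LLM response polling loop (names and shapes
// are assumptions for illustration, not the real chatClient module).
type JobStatus = { status: "pending" | "complete" | "failed"; data?: string };

async function pollJob(
  fetchStatus: (jobId: string) => Promise<JobStatus>,
  jobId: string,
  { intervalMs = 1000, maxAttempts = 30 } = {}
): Promise<JobStatus> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await fetchStatus(jobId);
    if (result.status !== "pending") return result; // complete or failed
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`job ${jobId} still pending after ${maxAttempts} attempts`);
}
```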

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced new Chat interface with message history and conversation management
    • Added chat configuration picker for model selection
    • Added chat suggestions for quick interactions
  • Improvements

    • Reorganized navigation with Chat as primary feature
    • Enhanced sidebar with collapsible controls
    • Updated accent color scheme with new blue palette
    • Added new secondary button variant
    • Replaced text-based branding with logo image
  • Updates

    • Navigation default routes now point to Chat
    • Theme tokens system with consistent design colors

@Ayush8923 Ayush8923 linked an issue Apr 28, 2026 that may be closed by this pull request

coderabbitai Bot commented Apr 28, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

This pull request introduces a comprehensive chat feature to the application, including new chat UI components, backend API routes that proxy LLM calls, Zustand-based chat state management with persistence, updated navigation to prioritize the chat route, new icons, and corresponding styling and color updates across the interface.

Changes

  • Chat UI Components: app/components/chat/ChatMessage.tsx, app/components/chat/ChatMessageList.tsx, app/components/chat/ChatInput.tsx, app/components/chat/ChatConfigPicker.tsx, app/components/chat/ChatEmptyState.tsx, app/components/chat/index.ts
    Five new React components for rendering chat messages, input handling, configuration selection, and empty state UIs, plus a barrel export module for convenient imports.
  • Chat Infrastructure & State: app/lib/types/chat.ts, app/lib/chatClient.ts, app/lib/store/chat.ts
    New type definitions for chat/LLM domain models, a client utility module for LLM call creation/polling with abort and timeout handling, and a Zustand store for persisted chat state with rehydration and message truncation logic.
  • Chat Page & Routing: app/(main)/chat/page.tsx, app/page.tsx, app/lib/navConfig.ts, middleware.ts
    New primary chat page with message/config/conversation state management and error handling; root page redirected to /chat; navigation config reordered to prioritize chat; middleware updated to treat chat routes as public access points.
  • API Route Handlers: app/api/llm/call/route.ts, app/api/llm/call/[job_id]/route.ts
    Two new Next.js API routes that forward LLM call creation and polling requests to the backend service and return standardized response payloads.
  • Icons: app/components/icons/sidebar/ChatIcon.tsx, app/components/icons/sidebar/SendIcon.tsx, app/components/icons/index.tsx
    Two new SVG icon components (ChatIcon, SendIcon) exported via the icons index for use in chat and navigation UI.
  • Sidebar & Navigation Components: app/components/Sidebar.tsx, app/components/SettingsSidebar.tsx, app/components/user-menu/Branding.tsx, app/components/PageHeader.tsx, app/components/Button.tsx
    Sidebar refactored for chat-first navigation with AppContext integration, collapse controls, updated colors, and improved cursor states; SettingsSidebar adds hydration-aware rendering; Branding switched to an image logo; PageHeader changed to "open" button behavior; Button type extended with a "secondary" variant.
  • Styling & Theme Updates: app/globals.css, app/lib/colors.ts, app/(main)/document/page.tsx, app/components/document/DocumentListing.tsx, app/components/settings/ProviderSidebar.tsx, app/(main)/settings/credentials/page.tsx, app/(main)/settings/onboarding/page.tsx, app/components/settings/onboarding/OrganizationList.tsx, app/components/settings/onboarding/ProjectList.tsx
    Accent color palette updated with new blue tones, global chat-pulse animation added, design tokens replace hardcoded HSL values throughout, background classes updated from bg-bg-secondary to bg-bg-primary across settings pages, and cursor-pointer classes added to clickable elements.
  • Utilities & Configuration: app/lib/apiClient.ts, app/lib/context/AuthContext.tsx, app/lib/constants.ts, app/lib/types/configs.ts, package.json, .env.example
    API client enhanced with an acceptEmpty option for 204 responses; logout now resets the chat store; new localStorage key for chat state; package.json adds the Zustand dependency; minor cleanup of config type documentation; .env.example formatting normalized.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant ChatPage as Chat Page
    participant Store as useChatStore
    participant Client as ChatClient
    participant API as API Routes<br/>/api/llm/call
    participant Backend as Backend Service

    User->>ChatPage: Send message with config
    ChatPage->>Store: Append user message
    ChatPage->>Store: Append pending assistant message
    ChatPage->>Client: createLLMCall(request)
    Client->>API: POST /api/llm/call
    API->>Backend: Forward POST request
    Backend-->>API: Return job_id
    API-->>Client: Return job_id
    Client->>API: Start polling /api/llm/call/{job_id}
    loop Poll until completion
        API->>Backend: GET job status
        Backend-->>API: Return status & data
        API-->>Client: Return status response
        Client->>Client: Check if complete
    end
    Client->>ChatPage: Success with assistant text
    ChatPage->>Store: Update assistant message to complete
    ChatPage->>Store: Set conversation_id
    ChatPage->>User: Display complete message

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • chore: redid color scheme to vanilla b/w vercel-esque theming #3 — Both PRs modify the color system by updating app/lib/colors.ts and replacing hardcoded HSL color values with design-token-based Tailwind classes throughout the UI.
  • Phase:1- Refactoring  #90 — Both PRs add and modify API route handlers to proxy backend service calls via a shared apiClient utility, affecting the request/response flow architecture.
  • UI: STT Evals #44 — Both PRs modify app/components/Sidebar.tsx to restructure navigation items, routes, and styling in response to changing application priorities.

Suggested labels

enhancement

Suggested reviewers

  • nishika26
  • Prajna1999
  • vprashrex

Poem

🐰 A chat arrives with hop and cheer,
New messages flowing loud and clear,
Blue tokens shine and purple fade,
Zustand store has come to aid!
With polling hearts and message streams,
The rabbit builds the chat-bot dreams! 💬✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 11.90%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)

  • Description Check ✅ Passed: Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title Check ✅ Passed: The title clearly identifies the main change: introducing a chat interface with LLM call functionality and related UI/branding updates. It accurately represents the substantial feature additions in this PR.
  • Linked Issues Check ✅ Passed: Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check ✅ Passed: Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@Ayush8923 Ayush8923 self-assigned this Apr 28, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 9

🧹 Nitpick comments (6)
app/components/user-menu/Branding.tsx (1)

9-12: Right-size next/image intrinsic dimensions to match rendered size.

Lines 9–11 declare a full intrinsic size (801x311) while CSS constrains display to h-10; this over-declares the image and causes priority to preload unnecessarily large variants. Align declared dimensions with rendered size using the aspect-ratio-preserving values below.

Suggested change
-        width={801}
-        height={311}
+        width={104}
+        height={40}
+        sizes="104px"
         className="h-10 w-auto"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/components/user-menu/Branding.tsx` around lines 9 - 12, The next/image in
the Branding component is declaring intrinsic dimensions width={801}
height={311} while CSS forces a small rendered height via className="h-10
w-auto", causing oversized preloads; update the intrinsic width/height to scaled
values that preserve the original aspect ratio and match the rendered size
(e.g., compute new width/height consistent with h-10 and the original 801x311
ratio) so the Image component in Branding.tsx uses appropriately smaller
intrinsic dimensions and avoids loading unnecessarily large priority variants.
app/components/chat/ChatConfigPicker.tsx (1)

108-121: Add ARIA menu semantics for better accessibility.

Consider adding aria-expanded/aria-haspopup on the trigger and role attributes on the menu/menu items to improve screen-reader navigation.

Also applies to: 149-153, 187-193

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/components/chat/ChatConfigPicker.tsx` around lines 108 - 121, The trigger
button that toggles the menu (the element using onClick={() => !disabled &&
setOpen(prev => !prev)} and the label/Icons) should include aria-expanded={open}
and aria-haspopup="menu" (and an aria-controls pointing to the menu's id); the
menu container (the div rendered when open) should have role="menu" and a unique
id, and each selectable entry rendered inside that container should have
role="menuitem" (or role="menuitemradio"/"menuitemcheckbox" as appropriate) plus
tabIndex and keyboard handling; update the components around setOpen/open, the
trigger button and the menu div (and the item render logic around lines
referenced) to add these attributes so screen-readers can correctly detect and
navigate the menu.
app/api/llm/call/route.ts (1)

14-21: Consider using a pure function instead of mutating the body.

The appendSecretToCallback function mutates its input parameter. While this works correctly, a pure function returning the modified body would be more predictable and easier to test.

♻️ Optional refactor to pure function
-function appendSecretToCallback(body: Record<string, unknown>): void {
+function withSecretCallback(body: Record<string, unknown>): Record<string, unknown> {
   const secret = process.env.WEBHOOK_SECRET;
-  if (!secret) return;
+  if (!secret) return body;
   const url = body.callback_url;
-  if (typeof url !== "string" || url.length === 0) return;
+  if (typeof url !== "string" || url.length === 0) return body;
   const sep = url.includes("?") ? "&" : "?";
-  body.callback_url = `${url}${sep}secret=${encodeURIComponent(secret)}`;
+  return { ...body, callback_url: `${url}${sep}secret=${encodeURIComponent(secret)}` };
 }

Then in POST:

     const body = (await request.json()) as Record<string, unknown>;
-    appendSecretToCallback(body);
+    const finalBody = withSecretCallback(body);

     const { status, data } = await apiClient(request, "/api/v1/llm/call", {
       method: "POST",
-      body: JSON.stringify(body),
+      body: JSON.stringify(finalBody),
     });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/llm/call/route.ts` around lines 14 - 21, The helper
appendSecretToCallback currently mutates its input body; change it to a pure
function that returns a new body object (e.g., function
appendSecretToCallback(body: Record<string, unknown>): Record<string, unknown>)
that leaves the original untouched, only adding/overwriting callback_url in the
returned object when WEBHOOK_SECRET exists and url is valid; update callers (the
POST handler) to use the returned value instead of relying on in-place mutation
and preserve the same runtime behavior and types (encodeURIComponent, same
separator logic).
app/globals.css (1)

82-90: Consider adding dark mode accent color overrides.

The dark mode media query updates background, foreground, muted, and border colors but doesn't override the accent colors (--accent, --accent-hover). The new blue accent (#1f4496) may have insufficient contrast on dark backgrounds.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/globals.css` around lines 82 - 90, The dark-mode :root block updates
background/foreground colors but doesn't override the accent variables, so
--accent and --accent-hover stay as the light-theme blue and may lack contrast;
add overrides for --accent and --accent-hover inside the `@media`
(prefers-color-scheme: dark) :root (the CSS variables named --accent and
--accent-hover) and pick darker-theme-friendly values (e.g., a lighter/brighter
blue variant or contrast-optimized hex) to ensure sufficient contrast on the
dark backgrounds and update any related accent variables the same way.
app/components/chat/ChatMessageList.tsx (1)

18-20: Auto-scroll triggers on any message change, not just appends.

The effect depends on the entire messages array reference. This means editing an existing message's content or status (e.g., when a pending message receives its response) will also trigger a scroll. This is likely the intended behavior for a chat UI, but if you want scroll-to-bottom only on new messages, consider tracking messages.length or the last message's id instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/components/chat/ChatMessageList.tsx` around lines 18 - 20, The current
useEffect in ChatMessageList that calls bottomRef.current?.scrollIntoView({
behavior: "smooth", block: "end" }) depends on the whole messages array and will
run on any change; change the dependency to a more specific signal (e.g.,
messages.length or the id of the last message such as
messages[messages.length-1]?.id) so scrolling only happens when a message is
appended, not when an existing message is updated; update the dependency array
and ensure bottomRef and messages are still referenced inside the effect (keep
the same effect body but replace [messages] with [messages.length] or
[messages.at(-1)?.id]).
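A minimal sketch of the suggested dependency change, factored through a pure helper; the message shape here is an assumption, not the actual type from the PR.

```typescript
// Derive a stable scroll trigger from the last message's id so the effect
// fires on appends, not on every in-place message update (sketch only).
type ChatMessageLite = { id: string; content: string };

function lastMessageId(messages: ChatMessageLite[]): string | undefined {
  return messages[messages.length - 1]?.id;
}

// In the component, the effect would then depend on the derived id:
// const lastId = lastMessageId(messages);
// useEffect(() => {
//   bottomRef.current?.scrollIntoView({ behavior: "smooth", block: "end" });
// }, [lastId]);
```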
app/api/llm/webhook/[callback_id]/route.ts (1)

28-37: Consider constant-time comparison for webhook secret.

The direct string equality comparison (provided === expected) is vulnerable to timing attacks, which could allow an attacker to guess the secret character by character by measuring response times. For webhook secrets in production, use a constant-time comparison.

🔒 Suggested fix using crypto.timingSafeEqual
+import { timingSafeEqual } from "crypto";
+
+function safeCompare(a: string, b: string): boolean {
+  if (a.length !== b.length) return false;
+  return timingSafeEqual(Buffer.from(a), Buffer.from(b));
+}
+
 function isAuthorized(request: Request): boolean {
   const expected = process.env.WEBHOOK_SECRET;
   if (!expected) return true;
   const url = new URL(request.url);
   const provided =
     url.searchParams.get("secret") ||
     request.headers.get("x-webhook-secret") ||
     "";
-  return provided === expected;
+  return safeCompare(provided, expected);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/llm/webhook/`[callback_id]/route.ts around lines 28 - 37, Replace the
direct equality check in isAuthorized with a constant-time comparison: keep the
early return when process.env.WEBHOOK_SECRET is missing, otherwise obtain
expected and provided, convert both to Buffers (e.g., Buffer.from(expected) and
Buffer.from(provided || "")), and use crypto.timingSafeEqual after first
checking lengths match (if lengths differ return false) to avoid throwing;
ensure you import/require Node's crypto and reference the isAuthorized function,
expected and provided variables when updating the logic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.env.example:
- Line 3: The GUARDRAILS_TOKEN environment variable line currently has spaces
around the equals sign; change the assignment to use dotenv-compatible KEY=value
formatting by removing the spaces so the line reads GUARDRAILS_TOKEN=<value or
empty> (i.e., no spaces around '=') to fix linting and dotenv parsing issues.

In `@app/api/llm/call/`[job_id]/result/route.ts:
- Around line 19-27: Replace the non-atomic getResult() then clearResult()
pattern with a new atomic popResult(jobId) helper in the job store (e.g., the
module that currently exports getResult/clearResult) that retrieves the
LLMJobRecord, deletes it from store.results, and clears/deletes any associated
timer from store.timers in one operation; then update the route handler to call
popResult(job_id) instead of getResult() followed by clearResult() so a single
call both returns and removes the record atomically (preserve the same return
semantics as getResult).
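An in-memory illustration of the atomic popResult pattern this comment asks for; the record shape and map names are assumptions, not the actual llmJobStore module.

```typescript
// Sketch: retrieve-and-delete in one call so no second reader can observe
// the record between the get and the clear (process-local illustration).
type LLMJobRecordSketch = { status: string; data?: unknown };

const results = new Map<string, LLMJobRecordSketch>();
const timers = new Map<string, ReturnType<typeof setTimeout>>();

function popResult(jobId: string): LLMJobRecordSketch | null {
  const record = results.get(jobId) ?? null;
  results.delete(jobId);
  const timer = timers.get(jobId);
  if (timer !== undefined) clearTimeout(timer);
  timers.delete(jobId);
  return record;
}
```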

In `@app/components/chat/ChatInput.tsx`:
- Around line 68-71: The button's onClick currently calls onSend directly; add
the same defensive guard used in handleKeyDown so the click handler checks
disabled/isPending and that value.trim() is non-empty before invoking onSend.
Update the button's onClick in ChatInput.tsx to mirror the condition (e.g., if
(!disabled && !isPending && value.trim()) onSend()), using the existing
canSend/disabled/isPending/value variables so clicks won't trigger sends when
the component is not ready.
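The guard can be factored into a small predicate shared by the key and click handlers; the variable names are taken from the review comment and have not been verified against the component.

```typescript
// Single source of truth for "is the composer allowed to send right now?"
function canSend(value: string, disabled: boolean, isPending: boolean): boolean {
  return !disabled && !isPending && value.trim().length > 0;
}

// Both handlers would use it the same way (sketch):
// <button onClick={() => canSend(value, disabled, isPending) && onSend()}>
```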

In `@app/components/user-menu/Branding.tsx`:
- Line 8: The logo image in the Branding component uses a terse alt ("Kaapi");
update the alt attribute on the <img> in Branding (or the component that renders
the logo) to a more descriptive label such as "Kaapi — [short descriptive
tagline]" or "Kaapi logo" so screen readers convey intent; modify the alt value
in Branding.tsx where the image is rendered (the alt prop on the logo <img>)
accordingly.

In `@app/lib/chatClient.ts`:
- Around line 76-83: fetchWebhookResult does not accept or pass an AbortSignal
so its fetch call cannot be cancelled; modify fetchWebhookResult to accept a
signal?: AbortSignal parameter and forward it into the fetch init (signal:
signal) and then update callers (e.g., pollLLMCall and other polling helpers
referenced around lines 110-128 and 150-159) to pass their AbortSignal through;
ensure all fetch calls in functions named fetchWebhookResult (and similar
polling fetch helpers) receive and use the provided signal so in-flight requests
are aborted when signal.aborted is triggered.
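A sketch of threading an AbortSignal through the polling fetch helper; the route path and the baseUrl parameter are assumptions for illustration, not the real module's signature.

```typescript
// Accept a signal and forward it into fetch so in-flight polls can be
// cancelled; baseUrl is only here so the sketch is testable in isolation.
async function fetchWebhookResult(
  jobId: string,
  signal?: AbortSignal,
  baseUrl = ""
): Promise<unknown | null> {
  const res = await fetch(`${baseUrl}/api/llm/call/${jobId}/result`, { signal });
  if (res.status === 204) return null; // no result published yet
  return res.json();
}
```

Callers in the polling loop would then pass their controller's signal, so aborting the controller rejects the pending request instead of letting it run to completion.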
- Around line 11-12: This module is client-side but still uses apiFetch and raw
fetch, which bypasses the app's 401/token-refresh flow; replace uses of apiFetch
in createLLMCall and the raw fetch calls in the polling logic (and the
occurrences noted around lines 31-39 and 76-98) with clientFetch so token
refresh and AUTH_EXPIRED_EVENT handling are applied. Import clientFetch from the
same api client module, pass the same endpoint and options you currently send to
apiFetch/fetch, and ensure response-stream handling (for streaming/polling in
createLLMCall) remains identical after switching to clientFetch (i.e., use
clientFetch(...).then(res => res.body / res.json() as before). Keep function
names createLLMCall and the polling loop intact; only replace the network call
helpers.
- Around line 225-267: The configToBlob function currently drops
SavedConfig.promptContent; update configToBlob to copy config.promptContent into
blob.completion.prompt_template.template (creating prompt_template if missing)
so the resulting ConfigBlob preserves the saved prompt template; specifically,
in configToBlob (function name) when building the blob variable, if
config.promptContent is set/non-empty, set blob.completion.prompt_template = {
template: config.promptContent } (or merge into an existing prompt_template) so
chat requests retain the saved prompt template.
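A sketch of that fix with minimal stand-in types; the real SavedConfig and ConfigBlob shapes in the PR carry more fields than shown here.

```typescript
// Preserve the saved prompt template when flattening a config into a blob,
// instead of silently dropping it (stand-in types, sketch only).
type SavedConfigSketch = { promptContent?: string };
type ConfigBlobSketch = {
  completion: { prompt_template?: { template: string } };
};

function configToBlob(config: SavedConfigSketch): ConfigBlobSketch {
  const blob: ConfigBlobSketch = { completion: {} };
  if (config.promptContent) {
    // Chat requests built from this blob keep the saved template.
    blob.completion.prompt_template = { template: config.promptContent };
  }
  return blob;
}
```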

In `@app/lib/llmJobStore.ts`:
- Around line 39-45: The current implementation uses process-local storage via
globalRef/__llmJobStore with Store.results and Store.timers which breaks in
multi-instance/serverless setups; replace this in-memory Store with a
pluggable/shared implementation (e.g., Redis/Upstash) and update publish() and
getResult() to use that backing store: define an abstract JobStore interface
matching the existing methods/shape (results map, expiration/timers behavior),
provide a Redis-backed JobStore implementation, wire initialization where
globalRef.__llmJobStore is set to the Redis-backed instance (or a factory)
instead of an in-memory Map, and update any code referencing store, results,
timers, publish(), and getResult() to call the new shared store methods so
callbacks and polls work across instances.

In `@app/page.tsx`:
- Around line 172-174: The finally block is clearing isPending for stale/aborted
requests; fix by making the AbortController instance local (const controller =
new AbortController()), assign it to abortRef.current when starting the request,
and in the finally handlers only clear isPending (and any UI state) if
abortRef.current === controller so only the most recent request can disable the
composer/config picker; apply the same guard for the other similar block that
uses abortRef and isPending.
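The "latest request wins" guard described above can be sketched like this; state is module-local here purely for illustration, whereas the actual page keeps it in refs and React state.

```typescript
// Only the most recent request may clear the shared pending flag; a stale
// request that was superseded leaves the flag for its successor (sketch).
let activeController: AbortController | null = null;
let isPending = false;

async function runExclusive(task: (signal: AbortSignal) => Promise<void>) {
  const controller = new AbortController();
  activeController?.abort(); // cancel any in-flight request
  activeController = controller;
  isPending = true;
  try {
    await task(controller.signal);
  } finally {
    // Guard: skip cleanup if a newer request has taken over.
    if (activeController === controller) {
      activeController = null;
      isPending = false;
    }
  }
}
```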


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f05d161c-5c1b-45b1-858e-65012dff0129

📥 Commits

Reviewing files that changed from the base of the PR and between 2884ac6 and 9d681b4.

⛔ Files ignored due to path filters (1)
  • public/kaapi-logo.png is excluded by !**/*.png
📒 Files selected for processing (28)
  • .env.example
  • app/(main)/document/page.tsx
  • app/api/llm/call/[job_id]/result/route.ts
  • app/api/llm/call/[job_id]/route.ts
  • app/api/llm/call/route.ts
  • app/api/llm/webhook/[callback_id]/route.ts
  • app/components/Button.tsx
  • app/components/Sidebar.tsx
  • app/components/auth/TokenVerifyPage.tsx
  • app/components/chat/ChatConfigPicker.tsx
  • app/components/chat/ChatEmptyState.tsx
  • app/components/chat/ChatInput.tsx
  • app/components/chat/ChatMessage.tsx
  • app/components/chat/ChatMessageList.tsx
  • app/components/chat/index.ts
  • app/components/icons/index.tsx
  • app/components/icons/sidebar/ChatIcon.tsx
  • app/components/icons/sidebar/SendIcon.tsx
  • app/components/settings/SettingsSidebar.tsx
  • app/components/user-menu/Branding.tsx
  • app/globals.css
  • app/lib/chatClient.ts
  • app/lib/colors.ts
  • app/lib/llmJobStore.ts
  • app/lib/navConfig.ts
  • app/lib/types/chat.ts
  • app/page.tsx
  • middleware.ts


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (4)
app/lib/chatClient.ts (3)

240-246: ⚠️ Potential issue | 🟠 Major

promptContent is not preserved in ConfigBlob.

The configToBlob function drops config.promptContent, meaning chat requests built from saved configs may ignore the saved prompt template. This was flagged in a previous review.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/lib/chatClient.ts` around lines 240 - 246, The ConfigBlob created in
configToBlob currently omits config.promptContent so saved prompt templates are
lost; update the blob construction in configToBlob (and the
ConfigBlob/completion type if needed) to include completion.promptContent =
config.promptContent when present, preserving the original prompt template for
later chat requests (ensure you update any related types/interfaces like
ConfigBlob and the call sites that consume completion.promptContent).

75-82: ⚠️ Potential issue | 🟠 Major

Abort signal not passed to fetch.

The fetchWebhookResult function accepts no signal parameter, so in-flight polling requests cannot be cancelled when the user aborts. This was flagged in a previous review.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/lib/chatClient.ts` around lines 75 - 82, The fetchWebhookResult function
currently cannot be cancelled; add an optional parameter (e.g., signal?:
AbortSignal) to fetchWebhookResult(jobId: string, apiKey: string, signal?:
AbortSignal): Promise<LLMCallStatusResponse | null> and pass that signal into
the fetch options (include signal alongside headers and credentials) so
in-flight requests can be aborted; also update any callers (polling helpers) to
forward their AbortSignal when invoking fetchWebhookResult so user aborts
actually cancel the request.

30-38: ⚠️ Potential issue | 🟠 Major

Should use clientFetch for browser-side API calls.

This module is client-side but uses apiFetch, bypassing the app's standard 401 refresh and auth-expired handling. This was flagged in a previous review.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/lib/chatClient.ts` around lines 30 - 38, The createLLMCall function in
app/lib/chatClient.ts is using apiFetch (server-style) instead of the
browser-safe clientFetch, so replace the apiFetch call in createLLMCall with
clientFetch("/api/llm/call", { method: "POST", body: JSON.stringify(body) }) and
let the app's clientFetch handle auth/401 refresh; if callers currently pass an
apiKey and you must preserve that behavior, forward it as a header (e.g.,
"x-api-key") in the init object; update the function signature accordingly
(remove apiKey if unused) and ensure the returned type remains
Promise<LLMCallCreateResponse>.
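A sketch of the suggested change. `clientFetch` below is a stand-in stub with a fetch-compatible signature, since the app's real wrapper (and its 401-refresh and auth-expired handling) isn't shown here; the response type is likewise illustrative.

```typescript
// Hypothetical stand-in for the app's clientFetch wrapper; the real one layers
// 401-refresh and auth-expired handling on top of fetch.
const clientFetch = (path: string, init?: RequestInit): Promise<Response> =>
  fetch(path, { credentials: "include", ...init });

interface LLMCallCreateResponse {
  job_id: string; // illustrative shape, not the app's confirmed type
}

async function createLLMCall(body: unknown): Promise<LLMCallCreateResponse> {
  // Route the browser-side call through the app's wrapper instead of apiFetch.
  const res = await clientFetch("/api/llm/call", {
    method: "POST",
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`LLM call failed: ${res.status}`);
  return (await res.json()) as LLMCallCreateResponse;
}
```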
app/page.tsx (1)

230-235: ⚠️ Potential issue | 🟠 Major

isPending is still cleared by stale requests.

The finally block correctly guards abortRef.current = null with the controller check, but setIsPending(false) on line 234 executes unconditionally. If request A is aborted because request B starts, A's finally block still clears isPending, re-enabling UI controls while B is in-flight.

🐛 Suggested fix
       } finally {
         if (abortRef.current === controller) {
           abortRef.current = null;
+          setIsPending(false);
         }
-        setIsPending(false);
       }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/page.tsx` around lines 230 - 235, The finally block unconditionally calls
setIsPending(false) causing stale request A to clear the pending state for a
newer request B; update the finally to only clear isPending when the finishing
controller is still the active one (same guard used for abortRef.current): move
or wrap setIsPending(false) inside the existing if (abortRef.current ===
controller) block (or check the same condition before calling setIsPending) so
only the current request can reset the pending state; reference abortRef,
controller, and setIsPending in the change.
🧹 Nitpick comments (7)
app/components/Sidebar.tsx (2)

115-123: Consider standardizing icon sizing in the map.

The ChatIcon addition follows the existing pattern where most icons don't have explicit sizing. Note that GearIcon has className="w-5 h-5" while others rely on default sizing. This inconsistency was pre-existing but could be standardized for uniform rendering.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/components/Sidebar.tsx` around lines 115 - 123, The icon map (iconMap)
has inconsistent sizing—GearIcon is the only entry with an explicit className
while others (e.g., ChatIcon, ClipboardIcon, DocumentFileIcon, BookOpenIcon,
ShieldCheckIcon, SlidersIcon) rely on defaults; update the map to standardize
sizes by either removing the explicit className from GearIcon or applying a
shared size (e.g., a single className like "w-5 h-5") to all icons, or wrap each
icon with a small Icon component/utility that enforces the uniform size, so
rendering is consistent across the icons.

33-39: Consider reading sidebarCollapsed from context instead of props.

The component receives collapsed as a prop (line 34) but writes state changes via setSidebarCollapsed from useApp() (line 39). Meanwhile, PageHeader reads and writes directly from the context. This asymmetry works but could be simplified by using the context consistently for both read and write operations.

♻️ Suggested refactor
 export default function Sidebar({
-  collapsed,
   activeRoute = "/",
 }: SidebarProps) {
   const router = useRouter();
   const { currentUser, googleProfile, isAuthenticated, logout } = useAuth();
-  const { setSidebarCollapsed } = useApp();
+  const { sidebarCollapsed: collapsed, setSidebarCollapsed } = useApp();

This would also require updating SidebarProps to remove collapsed if it's no longer needed as a prop.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/components/Sidebar.tsx` around lines 33 - 39, The Sidebar component
currently accepts a collapsed prop but uses setSidebarCollapsed from useApp();
change Sidebar to read the collapsed state from context instead of via the
collapsed prop: remove the collapsed prop from SidebarProps and any callers that
pass it, use useApp() inside Sidebar to get the current sidebarCollapsed value
(alongside setSidebarCollapsed), and update any references that relied on the
prop to use that context value; ensure PageHeader and Sidebar both use the same
context-based state (useApp, setSidebarCollapsed) to keep behavior consistent.
app/api/llm/webhook/[callback_id]/route.ts (1)

17-26: Consider constant-time comparison for webhook secret.

The secret comparison on line 25 uses === which is vulnerable to timing attacks. While the risk is low for webhook secrets (attacker would need many requests and precise timing), using constant-time comparison is a security best practice.

🔒 Suggested fix using crypto.timingSafeEqual
+import { timingSafeEqual } from "crypto";
+
+function safeCompare(a: string, b: string): boolean {
+  if (a.length !== b.length) return false;
+  return timingSafeEqual(Buffer.from(a), Buffer.from(b));
+}
+
 function isAuthorized(request: Request): boolean {
   const expected = process.env.WEBHOOK_SECRET;
   if (!expected) return true;
   const url = new URL(request.url);
   const provided =
     url.searchParams.get("secret") ||
     request.headers.get("x-webhook-secret") ||
     "";
-  return provided === expected;
+  return safeCompare(provided, expected);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `app/api/llm/webhook/[callback_id]/route.ts` around lines 17 - 26, The
isAuthorized function currently does a plain === comparison of the webhook
secret; replace that with a constant-time comparison using
crypto.timingSafeEqual: import Node's crypto, convert both provided and expected
to Buffers (e.g., Buffer.from(...)) and if their lengths differ return false,
otherwise call crypto.timingSafeEqual(bufProvided, bufExpected) and return that
boolean; keep the existing behavior when process.env.WEBHOOK_SECRET is unset
(still return true) and ensure you handle empty/null provided values safely
before buffering.
app/api/llm/call/route.ts (1)

26-31: Consider extracting upstream error details on non-2xx responses.

When the upstream /api/v1/llm/call returns an error status, the current code passes through data directly. Per coding guidelines, error messages should be extracted using body.error || body.message || body.detail. If data contains structured error info, consider normalizing it for consistency.

♻️ Suggested improvement
     const { status, data } = await apiClient(request, "/api/v1/llm/call", {
       method: "POST",
       body: JSON.stringify(body),
     });

+    if (!data?.success && status >= 400) {
+      const errorMsg = data?.error || data?.message || data?.detail || "Request failed";
+      return NextResponse.json(
+        { success: false, error: errorMsg, data: null },
+        { status },
+      );
+    }
+
     return NextResponse.json(data, { status });

As per coding guidelines: Extract error messages from API responses using the pattern body.error || body.message || body.detail when adding new API routes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/llm/call/route.ts` around lines 26 - 31, The route currently proxies
the upstream response directly (const { status, data } = await apiClient(...);
return NextResponse.json(data, { status });) which can expose inconsistent error
structures; update the handler to detect non-2xx statuses and normalize the
error payload by extracting the message from data.error || data.message ||
data.detail (falling back to a generic message), then return a consistent JSON
error object (e.g., { error: message, details: data }) with the original status
using NextResponse.json; keep usage of apiClient, status, data and
NextResponse.json to locate and modify the code.
app/lib/types/chat.ts (1)

11-19: Consider making status required on ChatMessage.

The status field is marked optional (status?: ChatMessageStatus), but the UI code in page.tsx always sets it ("pending", "complete", "error"). Making it required would provide better type safety and document the invariant.

💡 Suggested change
 export interface ChatMessage {
   id: string;
   role: ChatRole;
   content: string;
   createdAt: number;
-  status?: ChatMessageStatus;
+  status: ChatMessageStatus;
   jobId?: string;
   error?: string;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/lib/types/chat.ts` around lines 11 - 19, The ChatMessage interface
currently makes status optional but the UI (e.g., page.tsx) always sets it;
update the ChatMessage type by removing the optional modifier from status (make
status: ChatMessageStatus) and then update all message creation sites (places
that construct ChatMessage objects, such as in page.tsx/new message builders or
any tests) to explicitly supply a valid ChatMessageStatus value
("pending"|"complete"|"error" or the enum members) so the code typechecks and
the invariant is documented.
app/lib/chatClient.ts (1)

232-238: Only first tool's max_num_results is used.

When multiple tools exist, max_num_results is taken only from tools[0] (line 236), ignoring values from other tools. This may be intentional, but could lead to unexpected behavior if tools have different limits.

💡 Consider documenting or handling multiple tools
   const tools: Tool[] = config.tools ?? [];
   if (tools.length > 0) {
     const knowledge_base_ids = tools.flatMap((t) => t.knowledge_base_ids);
     if (knowledge_base_ids.length > 0) {
       params.knowledge_base_ids = knowledge_base_ids;
-      params.max_num_results = tools[0].max_num_results;
+      // Use the maximum max_num_results across all tools
+      params.max_num_results = Math.max(...tools.map((t) => t.max_num_results ?? 0));
     }
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/lib/chatClient.ts` around lines 232 - 238, The current code only uses
tools[0].max_num_results when setting params.max_num_results which ignores other
tools' limits; update the logic in the block that processes tools (the variables
tools, knowledge_base_ids, params) to compute a combined value (e.g., take the
maximum of all tools' max_num_results via tools.map(t => t.max_num_results) and
Math.max, or pick another aggregation like the minimum if you prefer
conservative limits) and assign that aggregated value to params.max_num_results
instead of tools[0].max_num_results; ensure you still collect knowledge_base_ids
with tools.flatMap as before.
app/components/chat/ChatEmptyState.tsx (1)

46-59: Consider dark mode compatibility for hover states.

The hover:bg-neutral-50 class uses a hardcoded light color that may not adapt well in dark mode contexts. Consider using a theme-aware token like hover:bg-bg-secondary or similar if dark mode support is planned.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/components/chat/ChatEmptyState.tsx` around lines 46 - 59, The hover state
on the suggestion buttons uses a hardcoded light color (class
hover:bg-neutral-50) which breaks in dark mode; update the button classes
rendered in the SUGGESTIONS map (the element using onSuggestion,
isAuthenticated, hasConfig) to use a theme-aware token such as
hover:bg-bg-secondary (or add a dark variant like hover:dark:bg-bg-secondary /
hover:dark:bg-neutral-800) instead of hover:bg-neutral-50 so the hover
background adapts to dark/light themes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/lib/chatClient.ts`:
- Around line 143-152: The promise-based polling wait in chatClient.ts leaks
abort listeners because the onAbort handler is only removed when abort fires;
fix it by capturing the onAbort function in a variable and, when the timer
resolves (before calling resolve), remove the abort listener via
signal.removeEventListener("abort", onAbort) and clear the timeout; ensure you
guard signal exists and only add/remove the listener around
signal.addEventListener and signal.removeEventListener so the listener is
cleaned up whether the timer fires or abort occurs.
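The cleanup described above can be sketched as a cancellable delay helper; the name `wait` and the `AbortError` rejection are illustrative, not the app's existing code.

```typescript
// Sketch of a cancellable delay that cleans up its abort listener on both
// paths (timer fires, or abort occurs), so repeated polls don't leak listeners.
function wait(ms: number, signal?: AbortSignal): Promise<void> {
  return new Promise((resolve, reject) => {
    if (signal?.aborted) return reject(new DOMException("Aborted", "AbortError"));
    const timer = setTimeout(() => {
      signal?.removeEventListener("abort", onAbort); // remove listener when timer wins
      resolve();
    }, ms);
    const onAbort = () => {
      clearTimeout(timer); // clear timer when abort wins
      reject(new DOMException("Aborted", "AbortError"));
    };
    signal?.addEventListener("abort", onAbort, { once: true });
  });
}
```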


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 0718bf9b-4790-477c-817e-163c86605459

📥 Commits

Reviewing files that changed from the base of the PR and between 9d681b4 and 5c2adfc.

📒 Files selected for processing (15)
  • app/api/llm/call/[job_id]/result/route.ts
  • app/api/llm/call/route.ts
  • app/api/llm/webhook/[callback_id]/route.ts
  • app/components/PageHeader.tsx
  • app/components/Sidebar.tsx
  • app/components/chat/ChatEmptyState.tsx
  • app/components/chat/ChatMessage.tsx
  • app/components/document/DocumentListing.tsx
  • app/components/user-menu/Branding.tsx
  • app/globals.css
  • app/lib/chatClient.ts
  • app/lib/llmJobStore.ts
  • app/lib/types/chat.ts
  • app/page.tsx
  • middleware.ts
✅ Files skipped from review due to trivial changes (1)
  • app/components/document/DocumentListing.tsx
🚧 Files skipped from review as they are similar to previous changes (5)
  • middleware.ts
  • app/api/llm/call/[job_id]/result/route.ts
  • app/components/user-menu/Branding.tsx
  • app/components/chat/ChatMessage.tsx
  • app/globals.css

Comment thread app/lib/chatClient.ts

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (4)
app/api/llm/webhook/[callback_id]/route.ts (1)

17-21: Consider using constant-time comparison for secret validation.

The === comparison for secrets is susceptible to timing attacks. While this is a low-risk scenario (webhook from trusted backend), using a constant-time comparison would be more robust.

🔧 Suggested improvement
+import { timingSafeEqual } from "crypto";
+
+function safeCompare(a: string, b: string): boolean {
+  if (a.length !== b.length) return false;
+  return timingSafeEqual(Buffer.from(a), Buffer.from(b));
+}
+
 function isAuthorized(request: Request): boolean {
   const expected = process.env.WEBHOOK_SECRET;
   if (!expected) return true;
   const url = new URL(request.url);
   const provided =
     url.searchParams.get("secret") ||
     request.headers.get("x-webhook-secret") ||
     "";
-  return provided === expected;
+  return safeCompare(provided, expected);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `app/api/llm/webhook/[callback_id]/route.ts` around lines 17 - 21, The secret
comparison using `provided === expected` is vulnerable to timing attacks;
replace it with a constant-time comparison by using Node's
crypto.timingSafeEqual: convert `provided` and `expected` to Buffers (e.g.,
Buffer.from(..., 'utf8')), first check lengths and return false if different,
then call `crypto.timingSafeEqual(bufProvided, bufExpected)` to return the
boolean; update the code around the `provided` and `expected` variables in
route.ts (where the current equality is performed) to perform these Buffer and
timing-safe checks.
app/page.tsx (1)

46-51: Remove duplicate genId function.

This function duplicates generateCallbackId imported from chatClient.ts (line 22). Use generateCallbackId for message IDs to reduce code duplication.

♻️ Suggested fix
-function genId() {
-  if (typeof crypto !== "undefined" && "randomUUID" in crypto) {
-    return crypto.randomUUID();
-  }
-  return `${Date.now()}-${Math.random().toString(36).slice(2, 10)}`;
-}

Then replace usages of genId() with generateCallbackId():

       const userMessage: ChatMessage = {
-        id: genId(),
+        id: generateCallbackId(),
         role: "user",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/page.tsx` around lines 46 - 51, The project defines a duplicate ID
generator `genId()` in page.tsx while an equivalent `generateCallbackId` is
already exported from chatClient.ts; remove the local `genId` function and
replace all calls to `genId()` with `generateCallbackId()` (ensure
`generateCallbackId` is imported where needed), updating any references in
message creation logic so the single canonical generator (`generateCallbackId`)
is used consistently.
app/lib/chatClient.ts (1)

226-232: max_num_results uses only the first tool's value.

When multiple tools exist, max_num_results is taken from tools[0] only. If different tools have different max_num_results values, the others are silently ignored.

💡 Possible alternatives
  1. Use the maximum value across all tools:
-      params.max_num_results = tools[0].max_num_results;
+      params.max_num_results = Math.max(...tools.map(t => t.max_num_results ?? 0));
  2. Or document this behavior if intentional (first tool takes precedence).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/lib/chatClient.ts` around lines 226 - 232, The current block sets
params.max_num_results from only tools[0], ignoring other tools; update the
logic in the function that contains variables tools and params so
max_num_results is derived from all tools (e.g., compute the maximum of
tools.map(t => t.max_num_results) while guarding against undefined/null) and
assign that computed value to params.max_num_results (or, if intended, add a
clear comment/documentation that the first tool takes precedence); ensure you
still collect knowledge_base_ids via tools.flatMap as before.
app/components/Sidebar.tsx (1)

30-30: Avoid duplicating public-route policy in the sidebar.

Line 30 hardcodes public routes locally while middleware.ts maintains a broader public-route list. This can drift and create UI gating inconsistencies over time. Consider moving public-route definitions to a shared config consumed by both places.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/components/Sidebar.tsx` at line 30, PUBLIC_ROUTES in Sidebar.tsx
duplicates the public-route policy defined in middleware.ts; extract the
canonical set into a shared module (e.g., export a PUBLIC_ROUTES or
isPublicRoute helper) and import it into both Sidebar.tsx and middleware.ts so
both use the same source of truth. Update Sidebar.tsx to remove the local const
PUBLIC_ROUTES and import the shared symbol (e.g., PUBLIC_ROUTES or
isPublicRoute) and adjust any usage accordingly; likewise replace the local
definition in middleware.ts with an import from the same shared module so UI and
middleware remain consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/components/Sidebar.tsx`:
- Around line 152-153: The collapsed sidebar is only visually hidden; update the
Sidebar component JSX (the element with className using `${collapsed ? "w-0
overflow-hidden" : "w-60"}`) to remove it from the accessibility tree and
interaction when collapsed by adding accessibility and interaction guards: set
aria-hidden={collapsed} and hidden={collapsed} on that container, add
tabIndex={collapsed ? -1 : 0} (or remove focusability) and apply a CSS utility
like pointer-events-none when collapsed (e.g., include `${collapsed ?
"pointer-events-none" : ""}`) so keyboard, mouse, and screen-readers cannot
interact with collapsed nav.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 3c3639f9-5ae9-4214-a8c3-276c5fa00a2d

📥 Commits

Reviewing files that changed from the base of the PR and between 5c2adfc and c2ff835.

⛔ Files ignored due to path filters (5)
  • public/file.svg is excluded by !**/*.svg
  • public/globe.svg is excluded by !**/*.svg
  • public/next.svg is excluded by !**/*.svg
  • public/vercel.svg is excluded by !**/*.svg
  • public/window.svg is excluded by !**/*.svg
📒 Files selected for processing (14)
  • .env.example
  • app/(main)/settings/credentials/page.tsx
  • app/(main)/settings/onboarding/page.tsx
  • app/api/llm/webhook/[callback_id]/route.ts
  • app/components/Sidebar.tsx
  • app/components/chat/ChatConfigPicker.tsx
  • app/components/chat/ChatInput.tsx
  • app/components/settings/ProviderSidebar.tsx
  • app/components/settings/SettingsSidebar.tsx
  • app/components/settings/onboarding/OrganizationList.tsx
  • app/components/settings/onboarding/ProjectList.tsx
  • app/lib/chatClient.ts
  • app/page.tsx
  • middleware.ts
✅ Files skipped from review due to trivial changes (4)
  • app/(main)/settings/onboarding/page.tsx
  • app/components/settings/onboarding/ProjectList.tsx
  • app/(main)/settings/credentials/page.tsx
  • app/components/settings/onboarding/OrganizationList.tsx
🚧 Files skipped from review as they are similar to previous changes (4)
  • app/components/chat/ChatInput.tsx
  • middleware.ts
  • app/components/settings/SettingsSidebar.tsx
  • app/components/chat/ChatConfigPicker.tsx

Comment thread app/components/Sidebar.tsx

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
app/components/Sidebar.tsx (1)

30-30: Duplicate PUBLIC_ROUTES may drift from middleware definition.

This local PUBLIC_ROUTES set only contains "/" and "/chat", but middleware.ts defines additional public routes ("/invite", "/verify", "/coming-soon/*"). Consider importing or sharing this constant from a single source to avoid inconsistencies.

♻️ Extract to a shared constant

Create a shared constant in a lib file:

// app/lib/routes.ts
export const PUBLIC_ROUTES = new Set<string>([
  "/",
  "/chat",
  "/invite",
  "/verify",
  "/coming-soon/guardrails",
  "/coming-soon/model-testing",
  "/coming-soon/redteaming",
  "/coming-soon/text-to-speech",
]);

Then import in both middleware.ts and Sidebar.tsx.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/components/Sidebar.tsx` at line 30, The local PUBLIC_ROUTES Set in
Sidebar.tsx is missing entries defined in middleware.ts and can drift; extract a
single exported constant (e.g., export const PUBLIC_ROUTES = new
Set<string>([...] ) ) into a shared module (suggested name: app/lib/routes.ts)
containing "/", "/chat", "/invite", "/verify" and the specific
"/coming-soon/..." entries, then replace the local const in Sidebar.tsx and the
definition in middleware.ts with imports from that shared PUBLIC_ROUTES to
ensure consistency.
app/components/settings/SettingsSidebar.tsx (1)

55-64: Consider accessibility attributes for the collapse control.

While the collapse button has an aria-label, the collapsed sidebar state should also prevent focus/interaction with hidden content. This same issue exists in Sidebar.tsx and was flagged in a past review. For consistency, both sidebars should apply the same accessibility treatment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/components/settings/SettingsSidebar.tsx` around lines 55 - 64, The
collapse control needs full accessibility treatment: update the button in
SettingsSidebar (the onClick that calls setSidebarCollapsed and the
ChevronLeftIcon) to include aria-expanded reflecting the collapsed state and
aria-controls pointing at the sidebar panel ID, and ensure the sidebar content
container toggles focusability/interaction when collapsed (e.g., apply the same
inert/aria-hidden and remove-from-tab-order behavior used in Sidebar.tsx so
hidden content cannot be focused). Mirror the same attribute names and
focus-management approach from Sidebar.tsx to keep behaviour consistent with the
existing setSidebarCollapsed state handling.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `app/(main)/chat/page.tsx`:
- Line 62: Add a short clarifying comment where useConfigs is called to explain
the deliberate pageSize: 0 choice; update the line containing useConfigs({
pageSize: 0 }) (referencing useConfigs, configs, and loadSingleVersion) to
include a one-line comment such as "Don't load all configs; fetch selected
config on-demand" so future maintainers understand that configs will be empty
and loadSingleVersion is invoked on demand.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 324bee64-71d2-4e0f-a5c1-35d21adfe1ff

📥 Commits

Reviewing files that changed from the base of the PR and between c2ff835 and cb83ed0.

📒 Files selected for processing (7)
  • app/(main)/chat/page.tsx
  • app/components/Sidebar.tsx
  • app/components/auth/TokenVerifyPage.tsx
  • app/components/settings/SettingsSidebar.tsx
  • app/lib/navConfig.ts
  • app/page.tsx
  • middleware.ts
✅ Files skipped from review due to trivial changes (1)
  • app/lib/navConfig.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • app/components/auth/TokenVerifyPage.tsx
  • app/page.tsx

Comment thread app/(main)/chat/page.tsx Outdated
@Ayush8923 Ayush8923 changed the title Chat Interface: LLM Call Test Chat Interface: LLM Call Test & Branding/UI Revamp Apr 29, 2026
@Ayush8923 Ayush8923 linked an issue Apr 30, 2026 that may be closed by this pull request
Comment thread app/(main)/chat/page.tsx Outdated
pageSize: 0,
});

const [messages, setMessages] = useState<ChatMessage[]>([]);
Collaborator

Use a proper store for chat state management (zustand + local storage), and keep it in a separate file: /lib/chat/store.ts.

That way chat history is at least persisted; otherwise it will be lost on refresh.

Collaborator Author

Thanks for pointing this out.
I have implemented this. Please check again.


@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
app/(main)/chat/page.tsx (1)

169-169: 💤 Low value

Rename to avoid variable shadowing.

The variable `text` shadows the function parameter `text` from line 93. While the code works correctly (since `trimmed` is used for the user input), renaming improves clarity.

Suggested rename
-       const text = extractAssistantText(result.llm_response?.response);
+       const assistantText = extractAssistantText(result.llm_response?.response);
        const newConversationId =
          result.llm_response?.response?.conversation_id ?? conversationId;
        if (newConversationId && newConversationId !== conversationId) {
          setConversationId(newConversationId);
        }

        updateMessageInStore(assistantMessage.id, {
          content:
-           text ||
+           assistantText ||
            "(The assistant returned an empty response — try again or pick a different configuration.)",
          status: "complete",
        });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `app/(main)/chat/page.tsx` at line 169, the local variable named text returned
from extractAssistantText(result.llm_response?.response) shadows the function
parameter text; rename the local variable (for example to assistantText or
aiText) wherever it’s declared and subsequently used to avoid shadowing and
improve clarity—update the declaration and all references to the new name (the
call to extractAssistantText and any subsequent uses of that value) in the same
scope.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 83ec8647-2804-4c9c-aa90-9978b4ca063e

📥 Commits

Reviewing files that changed from the base of the PR and between cb83ed0 and 817ac6a.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (14)
  • .env.example
  • app/(main)/chat/page.tsx
  • app/api/llm/call/route.ts
  • app/components/chat/ChatConfigPicker.tsx
  • app/components/chat/ChatEmptyState.tsx
  • app/components/chat/ChatInput.tsx
  • app/lib/apiClient.ts
  • app/lib/chatClient.ts
  • app/lib/constants.ts
  • app/lib/context/AuthContext.tsx
  • app/lib/store/chat.ts
  • app/lib/types/chat.ts
  • app/lib/types/configs.ts
  • package.json
💤 Files with no reviewable changes (1)
  • app/lib/types/configs.ts
✅ Files skipped from review due to trivial changes (4)
  • app/lib/constants.ts
  • package.json
  • app/lib/chatClient.ts
  • app/lib/types/chat.ts

@Ayush8923 Ayush8923 merged commit 15da08a into main Apr 30, 2026
2 checks passed
@Ayush8923 Ayush8923 deleted the feat/chat-llm-call branch April 30, 2026 10:24

Development

Successfully merging this pull request may close these issues.

UI Update: Align with new branding
Chat Interface: Enhance user interaction

2 participants