
feat: AI sandbox admin UI with integrations management #693

Open
2witstudios wants to merge 5 commits into master from rest-of-api-sandbox

Conversation


@2witstudios (Owner) commented Feb 16, 2026

Summary

  • Adds full integrations management UI: user settings page, drive settings panel, agent integrations panel, audit log viewer
  • New components: ConnectIntegrationDialog, DisconnectConfirmDialog, IntegrationStatusBadge, JsonSchemaBuilder, OpenAPIImportDialog, ToolBuilder
  • SWR hooks for providers, connections, grants, and audit logs
  • OpenAPI 3.x spec import with tool selection

Code review fixes (68 → production quality):

  • CRITICAL: Audit route total now uses a proper count() query instead of logs.length, fixing broken pagination
  • HIGH: ConnectionStatus type exported and used everywhere (was string)
  • HIGH: Error state UI added to IntegrationAuditLog and AgentIntegrationsPanel (was infinite loading on SWR failure)
  • HIGH: aria-label added to all JsonSchemaBuilder inputs and delete button
  • HIGH: 5MB file size validation + safe type guard on OpenAPI file upload
  • HIGH: Removed unused Loader2 import, replaced raw <button> with shadcn Button
  • MEDIUM: useMemo added for computed Sets/filters in settings page, DriveIntegrations, audit log, and agent panel
  • SECURITY: UUID regex validation on connectionId query param

Test plan

  • Verify tsc --noEmit passes (confirmed locally)
  • Visit /settings/integrations — check loading, empty, and connected states
  • Visit drive settings — verify DriveIntegrations and IntegrationAuditLog render
  • Open agent settings panel — verify AgentIntegrationsPanel with error/empty states
  • Test OpenAPI import with file > 5MB — should show toast error
  • Verify audit log pagination shows correct "X of total" and Next disables at end
  • Screen reader check: JsonSchemaBuilder inputs have accessible labels

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Full integration management UI: connect/configure/disconnect providers, status badges, connect/disconnect dialogs, per-agent integrations panel.
    • Drive-scoped integration audit logs: filterable, paginated view with timestamps, status, duration.
    • OpenAPI import flow (URL/file) with selectable tools and progress UI, plus a Custom Tool Builder and JSON schema editor.
    • Client-side hooks powering integrations and audit log data fetching.
  • Bug Fixes

    • Audit queries and APIs correctly scoped to a drive for accurate results.

Add integration management UI across settings, drive settings, and agent
panels. Includes provider connections, audit logging, OpenAPI import,
tool grants, and JSON schema builder.

Code review fixes applied: proper count query for audit pagination,
ConnectionStatus type safety, error state UI on all SWR consumers,
useMemo for computed values, aria-labels for accessibility, file size
validation on OpenAPI upload, and shadcn Button consistency.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

coderabbitai bot commented Feb 16, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds a drive-scoped integrations audit API and an OpenAPI import API, updates audit repository to be drive-scoped, and implements extensive frontend integration features: types, hooks, connect/disconnect flows, import/tool-builder UIs, audit log UI, agent/drive integration panels, and related dialogs/components.

Changes

  • API: Audit & Import
    Files: apps/web/src/app/api/drives/[driveId]/integrations/audit/route.ts, apps/web/src/app/api/integrations/providers/import-openapi/route.ts
    New GET /api/drives/[driveId]/integrations/audit (auth, OWNER/ADMIN check, query params limit/offset/connectionId/success, parallel count+rows, error handling) and new POST /api/integrations/providers/import-openapi (session+CSRF, admin-only, zod validation, calls importOpenAPISpec, structured error responses).

  • Backend repo change
    Files: packages/lib/src/integrations/repositories/audit-repository.ts, packages/lib/src/integrations/repositories/audit-repository.test.ts
    getAuditLogsByConnection now requires driveId and filters by driveId+connectionId; countAuditLogsByErrorType refactored to DB GROUP BY aggregation; tests updated.

  • Frontend types & hooks
    Files: apps/web/src/components/integrations/types.ts, apps/web/src/hooks/useIntegrations.ts
    Adds SafeProvider/SafeConnection/SafeGrant/AuditLogEntry types and SWR hooks: useProviders, useUserConnections, useDriveConnections, useAgentGrants, useIntegrationAuditLogs (supports limit/offset/connectionId/success).

  • Settings & pages
    Files: apps/web/src/app/settings/integrations/page.tsx, apps/web/src/app/settings/page.tsx, apps/web/src/app/dashboard/[driveId]/settings/page.tsx
    New Integrations settings page, reorganized settings menu, and drive settings page now renders DriveIntegrations and IntegrationAuditLog components.

  • Drive & agent UI
    Files: apps/web/src/components/settings/DriveIntegrations.tsx, apps/web/src/components/ai/page-agents/AgentIntegrationsPanel.tsx, apps/web/src/components/layout/.../AiChatView.tsx
    Drive-scoped integrations management card, agent integrations panel with enable/disable and grant config, and panel wired into AiChatView.

  • Audit log UI
    Files: apps/web/src/components/settings/IntegrationAuditLog.tsx, apps/web/src/app/api/drives/[driveId]/integrations/audit/route.ts
    Client audit log component with filters, pagination, connection name mapping, status badges; backed by new server audit route.

  • Connect/disconnect flows & badges
    Files: apps/web/src/components/integrations/ConnectIntegrationDialog.tsx, apps/web/src/components/integrations/DisconnectConfirmDialog.tsx, apps/web/src/components/integrations/IntegrationStatusBadge.tsx
    Connect dialog supporting OAuth and API-key flows (user vs drive scope), disconnect confirmation dialog, and a status badge component.

  • Tool creation & OpenAPI import UI
    Files: apps/web/src/components/integrations/ToolBuilder.tsx, apps/web/src/components/integrations/OpenAPIImportDialog.tsx, apps/web/src/app/api/integrations/providers/[providerId]/route.ts
    ToolBuilder modal and OpenAPI import dialog (URL/file import, select operations); provider PUT accepts optional addTools to merge new tools into provider config.

  • Schema builder & helpers
    Files: apps/web/src/components/integrations/JsonSchemaBuilder.tsx, apps/web/src/components/integrations/types.ts
    JsonSchemaBuilder UI for parameter editing and parametersToJsonSchema helper; exported SchemaParameter and props types used by ToolBuilder.

  • Supporting UI & dialogs
    Files: apps/web/src/components/integrations/*, apps/web/src/components/settings/*
    Several new components and dialogs (ProviderPickerDialog, Connect/Disconnect dialogs, OpenAPI import and tool builder dialogs), per-item loading states, skeletons, toasts, and other integration UI primitives.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant UI as Connect Dialog
    participant API as App Server
    participant DB as Database
    participant OAuth as External OAuth Provider

    User->>UI: Submit connect (providerId,name,credentials,returnUrl)
    UI->>API: POST /api/[user|drives]/integrations
    API->>DB: Create/update connection record
    alt provider uses OAuth
        API-->>UI: { redirectUrl }
        UI->>User: Redirect to OAuth provider
        User->>OAuth: Grant permissions
        OAuth-->>API: Callback with code/tokens
        API->>DB: Persist tokens/update connection
        API-->>UI: Success response
    else API-key or direct
        API-->>UI: Success response with connection
    end
    UI->>User: Show toast, close dialog, refresh connections
sequenceDiagram
    participant User
    participant AuditUI as Audit Log Component
    participant API as /api/drives/[id]/integrations/audit
    participant DB as Database

    User->>AuditUI: Open audit tab / apply filters
    AuditUI->>API: GET ?limit=&offset=&connectionId=&success=
    API->>DB: Query total count and rows (parallel) scoped by driveId + filters
    DB-->>API: total, logs[]
    API-->>AuditUI: { logs, total }
    AuditUI->>User: Render table and pagination controls

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Poem

🐇 I nibbled specs and hopped through logs tonight,

OAuth leaps and JSON schemas stitched just right,
Tools and badges lined up in a row,
Audit trails whisper what the integrations know,
A rabbit cheers — connections take flight!

🚥 Pre-merge checks: ✅ 2 passed | ❌ 2 failed (1 warning, 1 inconclusive)

❌ Failed checks

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 22.73%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Title Check (❓ Inconclusive): The title "feat: AI sandbox admin UI with integrations management" is only partially related to the changeset and does not clearly capture the primary change. Integrations management is a key part, but "AI sandbox admin UI" is vague for a changeset that spans integrations management, audit logging, provider import, and multiple UI components across different pages. Resolution: consider a more specific title such as "feat: Add integrations management UI with audit logging and OpenAPI import".

✅ Passed checks

  • Description Check: Check skipped because CodeRabbit's high-level summary is enabled.
  • Merge Conflict Detection: No merge conflicts detected when merging into master.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
  • 📝 Generate docstrings

🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch rest-of-api-sandbox

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 161f80380f

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines 47 to 48
if (connectionId) {
whereClause = eq(integrationAuditLog.connectionId, connectionId);


P1: Scope connection audit queries by drive

When connectionId is provided, this branch builds a predicate that only checks integrationAuditLog.connectionId and drops driveId, so a drive-scoped endpoint can return counts for logs outside the requested drive. Since connectionId is only UUID-validated, an owner/admin of one drive can query activity from other drives if they know a connection id; include the drive predicate in this branch (and its matching fetch path) to keep audit data tenant-scoped.
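
For illustration, a minimal sketch of a drive-scoped predicate, assuming drizzle-orm's and/eq helpers and the integrationAuditLog table shown in the diff; the schema import path and function name below are illustrative, not the repository's actual code:

import { and, eq } from 'drizzle-orm';
import { integrationAuditLog } from '@pagespace/db/schema'; // hypothetical import path

// Keep the tenant scope in every branch, even when a connection filter is supplied.
function buildAuditWhere(driveId: string, connectionId?: string) {
  const driveScope = eq(integrationAuditLog.driveId, driveId);
  return connectionId
    ? and(driveScope, eq(integrationAuditLog.connectionId, connectionId))
    : driveScope;
}

Both the count query and the rows query would then share this predicate, so totals and pages stay within the requested drive.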


Comment on lines 61 to 63
connectionId
? getAuditLogsByConnection(db, connectionId, { limit, offset })
: successParam !== null


P2: Combine success filter with connection filter

IntegrationAuditLog sends both connectionId and success when both UI filters are selected, but this conditional always takes the connectionId path and ignores success, so users still see mixed success/failure rows and totals after choosing a status filter. The API should apply both predicates together when both query params are present to match the filtering controls.
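
A sketch of composing the optional filters together instead of picking one branch, under the same drizzle-orm assumptions as above; the success column name is inferred from the query parameter and may differ in the actual schema:

import { and, eq } from 'drizzle-orm';
import { integrationAuditLog } from '@pagespace/db/schema'; // hypothetical import path

// Collect every supplied predicate and apply them all at once.
function buildAuditFilters(driveId: string, connectionId?: string, success?: boolean) {
  const predicates = [eq(integrationAuditLog.driveId, driveId)];
  if (connectionId) predicates.push(eq(integrationAuditLog.connectionId, connectionId));
  if (success !== undefined) predicates.push(eq(integrationAuditLog.success, success));
  return and(...predicates);
}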


Comment on lines +49 to +50
const isOAuth = provider?.providerType === 'builtin';
const requiresApiKey = !isOAuth;


P1: Infer OAuth mode from auth method metadata

Using providerType === 'builtin' as the OAuth check conflates provider class with auth scheme. Built-in providers can require credentials, and non-builtin providers can be OAuth2; with this logic the dialog either hides required credential fields or blocks OAuth providers behind an API-key requirement, causing valid connect attempts to fail against /api/user|drives/.../integrations. Determine OAuth vs credential flow from provider auth-method metadata instead of providerType.
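
One possible shape for that check, assuming the provider metadata exposes an auth-method field; the field name and union values below are hypothetical and would need to match whatever the providers API actually returns:

// Hypothetical metadata shape; align with the real SafeProvider fields.
type AuthMethod = 'oauth2' | 'api_key';

interface ProviderAuthInfo {
  authMethod: AuthMethod;
}

// Decide the connect flow from the declared auth scheme, not from providerType.
const isOAuthFlow = (provider?: ProviderAuthInfo): boolean =>
  provider?.authMethod === 'oauth2';

const requiresApiKey = (provider?: ProviderAuthInfo): boolean =>
  provider !== undefined && !isOAuthFlow(provider);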



@coderabbitai bot left a comment


Actionable comments posted: 11

🤖 Fix all issues with AI agents
Verify each finding against the current code and only fix it if needed.


In `@apps/web/src/app/api/drives/[driveId]/integrations/audit/route.ts`:
- Around line 36-37: The current parseInt usage for limit and offset can yield
NaN for non-numeric inputs; update the parsing to validate results from
parseInt(searchParams.get(...), 10) (or Number()) and fall back to safe defaults
before clamping: compute a raw value (e.g., limitRaw/offsetRaw) from
searchParams.get, check Number.isNaN(limitRaw) and Number.isNaN(offsetRaw), set
limit to default 50 when NaN and offset to default 0 when NaN, then apply
Math.min(limit, 200) for limit and ensure offset is at least 0; reference the
existing variables limit, offset, parseInt, and searchParams.get to locate and
change the logic.
- Around line 46-56: When filtering by connectionId the code omits scoping to
driveId causing cross-drive leaks: update the route handler logic (the
whereClause construction around integrationAuditLog and connectionId) to always
include eq(integrationAuditLog.driveId, driveId) when a connectionId is
provided, and modify the call that currently invokes
getAuditLogsByConnection(...) to pass the driveId as an additional argument;
then change the getAuditLogsByConnection function signature in
packages/lib/src/integrations/repositories/audit-repository.ts to accept driveId
and include eq(integrationAuditLog.driveId, driveId) in its where clause (in
addition to eq(integrationAuditLog.connectionId, connectionId)) so both the
count and data queries are drive-scoped.
- Line 13: The current UUID_RE constant is incorrect for
integrationConnections.id (which uses createId() and produces CUID2 IDs) so
replace the UUID_RE check with a CUID2-compatible validation: update the UUID_RE
constant to a regex that matches CUID2 (lowercase alphanumeric starting with a
letter and the expected CUID2 length/format) and change the connectionId
validation in the route handler that references UUID_RE to use this new CUID2
regex so valid integrationConnections.id values (from createId()) are accepted.

In `@apps/web/src/app/api/integrations/providers/import-openapi/route.ts`:
- Around line 20-27: The POST handler currently calls
authenticateRequestWithOptions and then calls verifyAdminAuth again, causing
double authentication; update the flow to reuse the initial auth result from
authenticateRequestWithOptions (check with isAuthError(auth) as now) and then
perform admin verification using that auth payload instead of
reauthenticating—either call a new/overloaded verifyAdminAuth(auth) or replace
verifyAdminAuth with a simple role/permission check against auth.user or
auth.claims (referencing functions POST, authenticateRequestWithOptions,
isAuthError, and verifyAdminAuth) so the already-authenticated user is used for
admin verification and the redundant request-level auth is removed.
- Around line 48-51: The catch block in route.ts currently returns 400 for all
exceptions; update it to return 400 only for known/validation import errors and
500 for unexpected server errors: detect typed errors thrown by
importOpenAPISpec (e.g., a custom ImportError or a validation error class) or
check for an explicit status property on the error (error.status === 400) and
use that status in NextResponse.json, otherwise respond with status 500; keep
the existing loggers.api.error call but include the error details and ensure the
response body contains a safe message (use error.message for known errors, a
generic "Internal server error" for unexpected ones).

In `@apps/web/src/app/settings/integrations/page.tsx`:
- Around line 30-87: The page currently ignores SWR errors from useProviders and
useUserConnections so fetch failures render as empty state; destructure the
error values (e.g., providersError from useProviders and connectionsError from
useUserConnections) and add an early-render error UI (same pattern as
IntegrationAuditLog) that shows a dismissible/error banner with the error
message and prevents the normal "no integrations" content from rendering; update
IntegrationsSettingsPage to check providersError || connectionsError and display
the error banner (and optionally a retry action that calls
mutate/mutateConnections) before rendering provider/connection lists or the
loading state.
- Around line 41-51: The code currently injects user-controlled
searchParams.get('error') into toast.error inside the useEffect (with
mutateConnections and router.replace), which can display misleading content;
change this to avoid echoing raw URL text by mapping known error keys to
predefined user-safe messages or by showing a generic message like "Connection
failed" (optionally appending a short, sanitized status code or truncated
alphanumeric token only after whitelist validation), and update the toast.error
call to use that mapped/sanitized string instead of searchParams.get('error').

In `@apps/web/src/components/ai/page-agents/AgentIntegrationsPanel.tsx`:
- Around line 170-188: The numeric Input uses defaultValue
(grant.rateLimitOverride?.requestsPerMinute) so it won't reflect SWR-updated
grant data; change it to remount when the grant's rate limit changes by adding a
key tied to that value (e.g.,
key={`${grant.id}-${grant.rateLimitOverride?.requestsPerMinute ?? ''}`}) or
convert the Input into a controlled component using value derived from the grant
and onChange that calls handleUpdateGrant; update references to grant.id,
defaultValue, and handleUpdateGrant accordingly and keep the disabled behavior
(updatingGrant === grant.id) intact.

In `@apps/web/src/components/integrations/ConnectIntegrationDialog.tsx`:
- Around line 56-81: The hardcoded returnUrl in ConnectIntegrationDialog causes
drive-scoped OAuth to redirect to the user integrations page; update the body
construction in the submit flow (where returnUrl is set) to compute returnUrl
based on scope and driveId — e.g., when scope === 'drive' and driveId is present
set returnUrl to the drive settings path (like `/dashboard/${driveId}/settings`)
otherwise fall back to '/settings/integrations'; ensure the computed returnUrl
is included in the body sent by post and that no change breaks the user-scoped
flow (where visibility is added).

In `@apps/web/src/components/integrations/JsonSchemaBuilder.tsx`:
- Around line 131-139: The code uses param.name.trim() to check emptiness but
then uses the original param.name as the property key and in required, causing
space-padded keys; change the loop in JsonSchemaBuilder to compute a trimmedName
= param.name.trim() and use trimmedName when assigning properties[trimmedName]
and when pushing into required, while keeping description/type/required logic
the same.
- Around line 58-59: The list is using key={index} in parameters.map which
breaks reconciliation; update the SchemaParameter type to include a stable id
(e.g., id: string), generate that id when creating parameters in addParameter
(use crypto.randomUUID() or a counter), ensure existing parameter creation sites
(addParameter, any initial state) populate the id, and replace key={index} with
key={param.id} in the parameters.map render so React can track items correctly.
- Line 3: Remove the unused React hook import by deleting the "useState" import
from the top of JsonSchemaBuilder.tsx (the import statement currently reading
"import { useState } from 'react';"); since the component receives all
parameters via props and does not use useState anywhere in the JsonSchemaBuilder
component or its helper functions, simply remove the unused symbol to eliminate
the lint warning and unused dependency.

In `@apps/web/src/components/integrations/OpenAPIImportDialog.tsx`:
- Around line 125-142: The handleImport function currently re-sends the full
specText to the import API causing redundant transfer; change the flow so
parseSpec returns a lightweight reference (e.g., parseId) and store that
reference (instead of the full spec) in component state, then update
handleImport to call post('/api/integrations/providers/import-openapi', {
parseId, selectedOperations: Array.from(selectedOps) }) rather than sending
specText; update parseSpec and the parse endpoint/server contract to return and
persist the parseId/parsed result so import can reference it, and adjust
onImported/resetForm logic to use the parseId flow.
- Line 79: The MAX_FILE_SIZE constant is currently declared inside the
OpenAPIImportDialog component and is re-created on every render; move the
declaration for MAX_FILE_SIZE (currently 5 * 1024 * 1024) to module scope at the
top of the file so it is created once, and remove the in-component declaration;
update any references inside the OpenAPIImportDialog component to use the
module-scoped MAX_FILE_SIZE.
- Around line 63-77: The handleFetchUrl function currently fetches an arbitrary
user-supplied specUrl from the client; to mitigate SSRF-like risks validate the
URL before fetching (ensure allowed schemes like https, reject file/ftp/http for
localhost/169.254.169.254/etc or private IP ranges) or move the fetch to a
server-side endpoint that performs the same validation and requests the URL on
behalf of the client; update handleFetchUrl to either call that server API
(e.g., submit specUrl to an import endpoint that returns spec text) or implement
client-side URL parsing/whitelisting using the specUrl state and only call
parseSpec after passing validation.

In `@apps/web/src/components/integrations/ToolBuilder.tsx`:
- Around line 64-65: The frontend is sending an addTools payload (constructed in
ToolBuilder.tsx as the tool object using toolName->slug) but the backend
updateProviderSchema does not accept addTools and the tool id must use CUID2;
either implement addTools in the API schema or remove the UI flow. To fix:
update the backend validation (updateProviderSchema) to accept an addTools
array/object and persist it in the PUT /api/integrations/providers handler, or
remove the addTools UI; if keeping it, change the frontend ToolBuilder.tsx
creation of the tool id to use createId() from `@paralleldrive/cuid2` (import
createId and call it to produce id) and store the human-readable name in a
separate name property so the id matches the CUID2 regex, then ensure the
frontend sends addTools in the request body and the backend handler reads and
validates that field.
- Around line 56-60: The validation currently in handleSave only checks for
empty strings; enhance it to sanitize pathTemplate before sending by enforcing
it starts with '/' and contains no whitespace (reject or trim input with a toast
error), optionally normalize consecutive slashes and ensure it uses a safe
character set (e.g., alphanumerics, -, _, /, :, and {} if templates are
allowed). Update handleSave to perform these checks on the pathTemplate variable
and return early with a clear toast.error when validation fails so only
sanitized pathTemplate values are sent to the backend.
- Around line 94-95: There is a redundant double call to resetForm(): one in the
save-success path (the handler that calls onOpenChange(false) after saving) and
another in the Dialog's onOpenChange callback; remove one to keep a single reset
path—preferably delete the resetForm() call from the save-success handler and
let the Dialog's onOpenChange (the inline handler on the Dialog component that
calls resetForm() when closed) be the single place that resets the form; ensure
the save-success code still calls onOpenChange(false) to close the Dialog but
does not call resetForm() itself.

In `@apps/web/src/components/integrations/types.ts`:
- Around line 8-19: The SafeProvider interface's providerType is currently typed
as string; narrow it to a discriminated union (e.g., 'builtin' | 'apiKey' |
'oauth' — extend as needed) so checks like ConnectIntegrationDialog's
provider?.providerType === 'builtin' become type-safe; update the SafeProvider
declaration and any code constructing SafeProvider objects (or API response
typings) to use the new union and fix any resulting type errors where
providerType values are produced or compared.

In `@apps/web/src/components/settings/DriveIntegrations.tsx`:
- Around line 27-57: Destructure the error values from useDriveConnections and
useProviders (e.g., const { connections, isLoading: loadingConnections, error:
connectionsError, mutate: mutateConnections } = useDriveConnections(driveId) and
const { providers, isLoading: loadingProviders, error: providersError, mutate:
mutateProviders } = useProviders()), compute a combined error state, and render
the same error UI pattern used in IntegrationAuditLog/AgentIntegrationsPanel
inside the DriveIntegrations CardContent (after the loading check) so users see
an ErrorBanner/message when connectionsError or providersError is truthy; wire
the banner’s retry action to call mutateConnections and mutateProviders as
appropriate and avoid treating an error as an empty list.

In `@apps/web/src/components/settings/IntegrationAuditLog.tsx`:
- Around line 58-66: Hoist the pure helper function formatTimestamp out of the
IntegrationAuditLog component so it isn’t re-created on every render; move the
function declaration (the const formatTimestamp = (ts: string) => { ... }) to
module scope above the component and keep its implementation unchanged, then
update any internal references to formatTimestamp inside IntegrationAuditLog to
use the now-top-level function.

In `@apps/web/src/hooks/useIntegrations.ts`:
- Around line 56-71: The function useIntegrationAuditLogs quietly omits offset=0
(and could omit limit=0) due to truthy checks; change the conditions to
explicitly check for undefined (e.g., params.offset !== undefined and
params.limit !== undefined) when setting searchParams so zero values are
included in the query string; keep the existing success check (params.success
!== undefined) as-is and preserve useSWR usage and returned shape.
- Around line 7-11: The fetcher function currently throws a generic
Error('Failed to fetch') on non-ok responses, losing server status and body;
update the fetcher (function name: fetcher) to read the response body (try
res.json(), fall back to res.text()), capture res.status and res.statusText, and
throw an Error whose message includes the URL, status (code and text) and the
server error payload so callers can surface meaningful messages and debug
details.
🧹 Nitpick comments (12)
apps/web/src/components/integrations/OpenAPIImportDialog.tsx (3)

125-142: Full spec is re-sent on import — redundant network transfer.

handleImport sends the entire specText to the API again (line 130), despite it already being parsed in parseSpec. If the spec is large (up to 5MB), this doubles the bandwidth. Consider having the parse endpoint return a reference/ID that handleImport can use instead of re-transmitting the full spec.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/components/integrations/OpenAPIImportDialog.tsx` around lines
125 - 142, The handleImport function currently re-sends the full specText to the
import API causing redundant transfer; change the flow so parseSpec returns a
lightweight reference (e.g., parseId) and store that reference (instead of the
full spec) in component state, then update handleImport to call
post('/api/integrations/providers/import-openapi', { parseId,
selectedOperations: Array.from(selectedOps) }) rather than sending specText;
update parseSpec and the parse endpoint/server contract to return and persist
the parseId/parsed result so import can reference it, and adjust
onImported/resetForm logic to use the parseId flow.
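
A client-side sketch of the proposed contract; parseId and the request body shape are assumptions that only hold if the parse endpoint is changed to return such a reference:

// Hypothetical import flow once parseSpec stores a server-side reference.
async function importSelectedOperations(
  post: (url: string, body: unknown) => Promise<unknown>, // existing post helper, signature assumed
  parseId: string,
  selectedOps: Set<string>,
) {
  return post('/api/integrations/providers/import-openapi', {
    parseId,
    selectedOperations: Array.from(selectedOps),
  });
}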

79-79: Move MAX_FILE_SIZE to module scope.

This constant is re-created on every render. Since it's a static value, declare it at the module level alongside the component.

♻️ Proposed fix
+const MAX_FILE_SIZE = 5 * 1024 * 1024; // 5MB
+
 export function OpenAPIImportDialog({ open, onOpenChange, onImported }: OpenAPIImportDialogProps) {
   ...
-  const MAX_FILE_SIZE = 5 * 1024 * 1024; // 5MB
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/components/integrations/OpenAPIImportDialog.tsx` at line 79, The
MAX_FILE_SIZE constant is currently declared inside the OpenAPIImportDialog
component and is re-created on every render; move the declaration for
MAX_FILE_SIZE (currently 5 * 1024 * 1024) to module scope at the top of the file
so it is created once, and remove the in-component declaration; update any
references inside the OpenAPIImportDialog component to use the module-scoped
MAX_FILE_SIZE.

63-77: Client-side fetch to arbitrary URL — consider SSRF implications.

handleFetchUrl fetches directly from a user-supplied URL in the browser. While this is client-side (not a server-side SSRF), it could be used to probe internal network resources accessible from the user's browser (e.g., http://localhost:*, http://169.254.169.254/). Consider validating the URL scheme and hostname before fetching, or moving the fetch to the server-side API route where you can apply stricter controls.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/components/integrations/OpenAPIImportDialog.tsx` around lines 63
- 77, The handleFetchUrl function currently fetches an arbitrary user-supplied
specUrl from the client; to mitigate SSRF-like risks validate the URL before
fetching (ensure allowed schemes like https, reject file/ftp/http for
localhost/169.254.169.254/etc or private IP ranges) or move the fetch to a
server-side endpoint that performs the same validation and requests the URL on
behalf of the client; update handleFetchUrl to either call that server API
(e.g., submit specUrl to an import endpoint that returns spec text) or implement
client-side URL parsing/whitelisting using the specUrl state and only call
parseSpec after passing validation.
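
A rough client-side pre-check, as one option; the blocked hostnames and ranges below are illustrative rather than an exhaustive private-address list, and a server-side fetch with the same validation would still be the stronger control:

function isSafeSpecUrl(raw: string): boolean {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return false;
  }
  if (url.protocol !== 'https:') return false;
  const host = url.hostname.toLowerCase();
  if (['localhost', '127.0.0.1', '[::1]', '169.254.169.254'].includes(host)) return false;
  // Reject obvious RFC 1918 ranges; not a complete blocklist.
  if (/^(10\.|192\.168\.|172\.(1[6-9]|2\d|3[01])\.)/.test(host)) return false;
  return true;
}
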
apps/web/src/components/ai/page-agents/AgentIntegrationsPanel.tsx (1)

170-188: Rate limit input won't update after SWR revalidation due to defaultValue.

Using defaultValue (line 177) means the input only reflects the initial value on mount. After mutateGrants() updates the grant data, the input will still show the old value until the component remounts. Consider using a key prop on the Input tied to the grant's rate limit to force re-mount on data changes, or switch to a controlled pattern.

♻️ Quick fix using key prop
                        <Input
                          id={`ratelimit-${grant.id}`}
+                         key={`ratelimit-${grant.id}-${grant.rateLimitOverride?.requestsPerMinute ?? ''}`}
                          type="number"
                          min={1}
                          max={1000}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/components/ai/page-agents/AgentIntegrationsPanel.tsx` around
lines 170 - 188, The numeric Input uses defaultValue
(grant.rateLimitOverride?.requestsPerMinute) so it won't reflect SWR-updated
grant data; change it to remount when the grant's rate limit changes by adding a
key tied to that value (e.g.,
key={`${grant.id}-${grant.rateLimitOverride?.requestsPerMinute ?? ''}`}) or
convert the Input into a controlled component using value derived from the grant
and onChange that calls handleUpdateGrant; update references to grant.id,
defaultValue, and handleUpdateGrant accordingly and keep the disabled behavior
(updatingGrant === grant.id) intact.
apps/web/src/app/api/integrations/providers/import-openapi/route.ts (2)

20-27: Double authentication: auth result is unused after the error check.

authenticateRequestWithOptions authenticates and returns a result on line 21, but only the error is checked. Then verifyAdminAuth presumably re-authenticates internally. Consider passing the already-authenticated user to the admin check to avoid redundant work, or use the auth result for the admin role verification.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/app/api/integrations/providers/import-openapi/route.ts` around
lines 20 - 27, The POST handler currently calls authenticateRequestWithOptions
and then calls verifyAdminAuth again, causing double authentication; update the
flow to reuse the initial auth result from authenticateRequestWithOptions (check
with isAuthError(auth) as now) and then perform admin verification using that
auth payload instead of reauthenticating—either call a new/overloaded
verifyAdminAuth(auth) or replace verifyAdminAuth with a simple role/permission
check against auth.user or auth.claims (referencing functions POST,
authenticateRequestWithOptions, isAuthError, and verifyAdminAuth) so the
already-authenticated user is used for admin verification and the redundant
request-level auth is removed.

48-51: All errors return 400 — unexpected server errors should return 500.

The catch block returns a 400 for every error, including unexpected failures (e.g., network issues, internal bugs). Import validation errors are expected (400), but other errors should be 500.

♻️ Proposed fix
   } catch (error) {
     loggers.api.error('Error importing OpenAPI spec:', error as Error);
     const message = error instanceof Error ? error.message : 'Failed to import OpenAPI spec';
-    return NextResponse.json({ error: message }, { status: 400 });
+    const status = error instanceof SyntaxError || (error instanceof Error && error.message.includes('Invalid')) ? 400 : 500;
+    return NextResponse.json({ error: message }, { status });
   }

Alternatively, have importOpenAPISpec throw typed errors that can be distinguished.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/app/api/integrations/providers/import-openapi/route.ts` around
lines 48 - 51, The catch block in route.ts currently returns 400 for all
exceptions; update it to return 400 only for known/validation import errors and
500 for unexpected server errors: detect typed errors thrown by
importOpenAPISpec (e.g., a custom ImportError or a validation error class) or
check for an explicit status property on the error (error.status === 400) and
use that status in NextResponse.json, otherwise respond with status 500; keep
the existing loggers.api.error call but include the error details and ensure the
response body contains a safe message (use error.message for known errors, a
generic "Internal server error" for unexpected ones).
apps/web/src/hooks/useIntegrations.ts (1)

7-11: Generic error message loses server context.

The fetcher throws 'Failed to fetch' for all non-ok responses, discarding the server's status code and error body. This makes debugging harder and prevents consumers from showing meaningful error messages.

Suggested improvement
 const fetcher = async (url: string) => {
   const res = await fetchWithAuth(url);
-  if (!res.ok) throw new Error('Failed to fetch');
+  if (!res.ok) {
+    const text = await res.text().catch(() => res.statusText);
+    throw new Error(text || `Request failed (${res.status})`);
+  }
   return res.json();
 };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/hooks/useIntegrations.ts` around lines 7 - 11, The fetcher
function currently throws a generic Error('Failed to fetch') on non-ok
responses, losing server status and body; update the fetcher (function name:
fetcher) to read the response body (try res.json(), fall back to res.text()),
capture res.status and res.statusText, and throw an Error whose message includes
the URL, status (code and text) and the server error payload so callers can
surface meaningful messages and debug details.
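
An expanded fetcher along the lines of the prompt above; it assumes the existing fetchWithAuth helper, and the import path shown is a guess:

import { fetchWithAuth } from '@/lib/fetchWithAuth'; // hypothetical path to the existing helper

const fetcher = async (url: string) => {
  const res = await fetchWithAuth(url);
  if (!res.ok) {
    // Read the body once as text; it may or may not be JSON.
    const body = await res.text().catch(() => '');
    throw new Error(
      `GET ${url} failed with ${res.status} ${res.statusText}${body ? `: ${body}` : ''}`,
    );
  }
  return res.json();
};
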
apps/web/src/components/settings/IntegrationAuditLog.tsx (1)

58-66: Nit: formatTimestamp can be hoisted outside the component.

This is a pure function with no dependency on component state or props. Moving it outside the component avoids re-creating the function reference on every render, though the performance impact is negligible.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/components/settings/IntegrationAuditLog.tsx` around lines 58 -
66, Hoist the pure helper function formatTimestamp out of the
IntegrationAuditLog component so it isn’t re-created on every render; move the
function declaration (the const formatTimestamp = (ts: string) => { ... }) to
module scope above the component and keep its implementation unchanged, then
update any internal references to formatTimestamp inside IntegrationAuditLog to
use the now-top-level function.
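
For illustration, a module-scope version; the formatting shown is a guess, since the original implementation is not reproduced in this comment:

// Module scope: created once, shared by every render of IntegrationAuditLog.
const formatTimestamp = (ts: string): string =>
  new Date(ts).toLocaleString(undefined, { dateStyle: 'medium', timeStyle: 'short' });
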
apps/web/src/components/integrations/types.ts (1)

8-19: Consider narrowing providerType to a union type.

ConnectIntegrationDialog checks provider?.providerType === 'builtin' (line 49) to determine OAuth vs. API-key flow. Using providerType: string means that check is not type-safe. A union type would catch typos and make the contract explicit.

Suggested change
- providerType: string;
+ providerType: 'builtin' | 'custom';

If there are other provider types, extend the union accordingly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/components/integrations/types.ts` around lines 8 - 19, The
SafeProvider interface's providerType is currently typed as string; narrow it to
a discriminated union (e.g., 'builtin' | 'apiKey' | 'oauth' — extend as needed)
so checks like ConnectIntegrationDialog's provider?.providerType === 'builtin'
become type-safe; update the SafeProvider declaration and any code constructing
SafeProvider objects (or API response typings) to use the new union and fix any
resulting type errors where providerType values are produced or compared.
apps/web/src/components/integrations/JsonSchemaBuilder.tsx (1)

3-3: Unused import: useState is not used in this component.

Parameters are fully controlled via props. Remove the unused import.

Fix
-import { useState } from 'react';
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/components/integrations/JsonSchemaBuilder.tsx` at line 3, Remove
the unused React hook import by deleting the "useState" import from the top of
JsonSchemaBuilder.tsx (the import statement currently reading "import { useState
} from 'react';"); since the component receives all parameters via props and
does not use useState anywhere in the JsonSchemaBuilder component or its helper
functions, simply remove the unused symbol to eliminate the lint warning and
unused dependency.
apps/web/src/components/integrations/ToolBuilder.tsx (2)

56-60: Validation is minimal — consider sanitizing pathTemplate.

Only empty-string checks are performed. The pathTemplate is sent directly to the backend and could contain arbitrary content. If the backend doesn't already validate this, consider a lightweight client-side check (e.g., must start with /, no whitespace).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/components/integrations/ToolBuilder.tsx` around lines 56 - 60,
The validation currently in handleSave only checks for empty strings; enhance it
to sanitize pathTemplate before sending by enforcing it starts with '/' and
contains no whitespace (reject or trim input with a toast error), optionally
normalize consecutive slashes and ensure it uses a safe character set (e.g.,
alphanumerics, -, _, /, :, and {} if templates are allowed). Update handleSave
to perform these checks on the pathTemplate variable and return early with a
clear toast.error when validation fails so only sanitized pathTemplate values
are sent to the backend.
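
A possible client-side check for handleSave; the allowed character set is an assumption and should mirror whatever the backend ultimately enforces:

// Leading slash, no whitespace, conservative character set ({} kept for templates).
const PATH_TEMPLATE_RE = /^\/[A-Za-z0-9_\-./:{}]*$/;

// Returns the normalized template, or null when the caller should toast an error.
function validatePathTemplate(raw: string): string | null {
  const path = raw.trim();
  if (!PATH_TEMPLATE_RE.test(path)) return null;
  return path.replace(/\/{2,}/g, '/'); // collapse accidental double slashes
}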

94-95: Double resetForm() on save path.

When save succeeds, resetForm() is called at line 86, then onOpenChange(false) at line 85 triggers the Dialog's onOpenChange handler (line 95) which calls resetForm() again. This is harmless but redundant — consider removing the reset from either the save handler or the Dialog's close callback to keep one clear reset path.

Suggested simplification

Let the Dialog onOpenChange handler be the single place that resets:

-      toast.success(`Tool "${toolName}" added`);
-      onSaved();
-      onOpenChange(false);
-      resetForm();
+      toast.success(`Tool "${toolName}" added`);
+      onSaved();
+      onOpenChange(false);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/components/integrations/ToolBuilder.tsx` around lines 94 - 95,
There is a redundant double call to resetForm(): one in the save-success path
(the handler that calls onOpenChange(false) after saving) and another in the
Dialog's onOpenChange callback; remove one to keep a single reset
path—preferably delete the resetForm() call from the save-success handler and
let the Dialog's onOpenChange (the inline handler on the Dialog component that
calls resetForm() when closed) be the single place that resets the form; ensure
the save-success code still calls onOpenChange(false) to close the Dialog but
does not call resetForm() itself.

Comment on lines 64 to 65
const tool = {
id: toolName.trim().toLowerCase().replace(/[^a-z0-9_-]/g, '_'),


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for addTools handler to see how backend processes tool IDs
rg -n "addTools" --type=ts --type=tsx -B 5 -A 10

Repository: 2witstudios/PageSpace

Length of output: 92


🏁 Script executed:

#!/bin/bash
# Check the backend/API route for tools to understand ID expectations
fd -e ts -e tsx -path "*/api/*" | xargs rg -l "tool" | head -5

Repository: 2witstudios/PageSpace

Length of output: 367


🏁 Script executed:

#!/bin/bash
# Check other ID generation patterns in the codebase to see CUID2 usage
rg "cuid2\|import.*cuid" --type=ts --type=tsx | head -20

Repository: 2witstudios/PageSpace

Length of output: 92


🏁 Script executed:

#!/bin/bash
# Look at the database schema for tools to see how IDs are defined
fd -e ts -e tsx | xargs rg -l "tools.*table\|schema.*tools" | head -5

Repository: 2witstudios/PageSpace

Length of output: 47


🏁 Script executed:

#!/bin/bash
# Search for addTools handler in the codebase
rg -n "addTools" -A 10 -B 2

Repository: 2witstudios/PageSpace

Length of output: 1152


🏁 Script executed:

#!/bin/bash
# Find the tools database schema and table definition
rg -n "tools" apps/realtime/src | head -20

Repository: 2witstudios/PageSpace

Length of output: 47


🏁 Script executed:

#!/bin/bash
# Search for CUID2 imports and usage patterns
rg "cuid2" | head -20

Repository: 2witstudios/PageSpace

Length of output: 1703


🏁 Script executed:

#!/bin/bash
# Look at the validation.ts file mentioned in the learning to understand ID patterns
cat -n apps/realtime/src/validation.ts | head -50

Repository: 2witstudios/PageSpace

Length of output: 1894


🏁 Script executed:

#!/bin/bash
# Search for tool-related API routes and handlers
find apps -name "*tool*" -type f | head -20

Repository: 2witstudios/PageSpace

Length of output: 1131


🏁 Script executed:

#!/bin/bash
# Find the API route handler for integrations/providers
find apps -path "*/api/*" -name "*provider*" -o -path "*/api/*" -name "*integrations*" | head -20

Repository: 2witstudios/PageSpace

Length of output: 340


🏁 Script executed:

#!/bin/bash
# Search for the PUT handler that processes addTools
rg -n "addTools" -B 5 -A 15 apps/web/src/lib apps/realtime apps/api 2>/dev/null | head -50

Repository: 2witstudios/PageSpace

Length of output: 47


🏁 Script executed:

#!/bin/bash
# Look for how tool IDs are validated or stored in the database
rg -n "tool.*id\|toolId" --type-add 'web:*.{ts,tsx}' -t web apps/web/src/lib | head -30

Repository: 2witstudios/PageSpace

Length of output: 47


🏁 Script executed:

#!/bin/bash
# Check the database schema for tools table if it exists
find apps -name "*.ts" | xargs rg -l "tools.*table\|export.*tools" | head -5

Repository: 2witstudios/PageSpace

Length of output: 47


🏁 Script executed:

#!/bin/bash
# Read the provider API route handler
find apps/web/src/app/api/integrations/providers -name "*.ts" -o -name "route.ts"

Repository: 2witstudios/PageSpace

Length of output: 251


🏁 Script executed:

#!/bin/bash
# List all files in the providers directory
ls -la apps/web/src/app/api/integrations/providers/

Repository: 2witstudios/PageSpace

Length of output: 359


🏁 Script executed:

#!/bin/bash
# Find and read the dynamic provider route handler
cat apps/web/src/app/api/integrations/providers/\[providerId\]/route.ts 2>/dev/null || find apps/web/src/app/api/integrations/providers -name "route.ts" -exec cat {} \;

Repository: 2witstudios/PageSpace

Length of output: 4990


🏁 Script executed:

#!/bin/bash
# Search for where addTools is actually handled in the codebase
rg -n "addTools" -B 3 -A 10

Repository: 2witstudios/PageSpace

Length of output: 1217


🏁 Script executed:

#!/bin/bash
# Look for tool schema definitions or validation
rg -n "tool.*schema\|Tool.*schema" --type-add 'all:*.ts' -t all | head -30

Repository: 2witstudios/PageSpace

Length of output: 124


🏁 Script executed:

#!/bin/bash
# Check integrations lib for how tools are stored/updated
find apps -path "*/integrations*" -name "*.ts" | head -20

Repository: 2witstudios/PageSpace

Length of output: 1371


🏁 Script executed:

#!/bin/bash
# Find the updateProvider function implementation
rg -n "export.*updateProvider\|function updateProvider" --type ts

Repository: 2witstudios/PageSpace

Length of output: 47


🏁 Script executed:

#!/bin/bash
# Search for where tool objects are created or validated
rg -n "type Tool\|interface Tool" --type ts

Repository: 2witstudios/PageSpace

Length of output: 47


🏁 Script executed:

#!/bin/bash
# Look at the integrations library structure
find packages/lib -path "*/integrations*" -type f -name "*.ts" | head -20

Repository: 2witstudios/PageSpace

Length of output: 1198


🏁 Script executed:

#!/bin/bash
# Check if there's any handler for addTools in mutations
rg -n "addTools\|add.*[Tt]ool" packages/lib --type ts | head -20

Repository: 2witstudios/PageSpace

Length of output: 47


🏁 Script executed:

#!/bin/bash
# Read the integrations index to see what functions are exported
cat -n packages/lib/src/integrations/index.ts

Repository: 2witstudios/PageSpace

Length of output: 5253


🏁 Script executed:

#!/bin/bash
# Find all provider-related files
find packages/lib/src/integrations -name "*provider*" -type f

Repository: 2witstudios/PageSpace

Length of output: 201


🏁 Script executed:

#!/bin/bash
# Search more broadly for updateProvider
rg -n "updateProvider" -A 5 -B 2

Repository: 2witstudios/PageSpace

Length of output: 12305


🏁 Script executed:

#!/bin/bash
# Check the route handler again for how it actually processes the request body
cat -n apps/web/src/app/api/integrations/providers/\[providerId\]/route.ts | tail -40

Repository: 2witstudios/PageSpace

Length of output: 1763


🏁 Script executed:

#!/bin/bash
# Read ToolBuilder.tsx to see the full tool object structure
cat -n apps/web/src/components/integrations/ToolBuilder.tsx | sed -n '60,90p'

Repository: 2witstudios/PageSpace

Length of output: 1086


🏁 Script executed:

#!/bin/bash
# Look for any actual tool implementation or database operations
rg -n "interface.*Tool\|type.*Tool" packages/lib/src/integrations

Repository: 2witstudios/PageSpace

Length of output: 47


The addTools feature is non-functional—the backend does not process the tool object being sent.

The API route at PUT /api/integrations/providers/{providerId} does not include addTools in its validation schema. The updateProviderSchema only accepts name, description, iconUrl, documentationUrl, config, and enabled. Consequently, any addTools payload sent from ToolBuilder.tsx line 80 is either rejected or silently discarded.

Additionally, the tool ID at line 65 uses a name-derived slug instead of @paralleldrive/cuid2 (which the codebase enforces for all IDs per convention). The slug format allows underscores and hyphens but does not match the CUID2 regex pattern. If this feature is meant to be implemented, use createId() from @paralleldrive/cuid2 and store the human-readable name separately.

Verify whether this feature is work-in-progress or should be removed.
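
If the flow is kept rather than removed, a minimal sketch of the pieces described above might look like the following (TypeScript; the tool fields beyond id, the addTools wiring, and the merge helper are illustrative assumptions, not the repo's actual schema):

import { z } from 'zod';
import { createId, isCuid } from '@paralleldrive/cuid2';

// Hypothetical tool shape: only `id` is discussed above; the other fields are placeholders.
export const toolSchema = z.object({
  id: z.string().refine(isCuid, 'Tool ID must be a valid CUID2'),
  name: z.string().min(1),
  description: z.string().optional(),
  parameters: z.record(z.unknown()).optional(),
});

// updateProviderSchema would gain an optional addTools array next to its existing
// fields (name, description, iconUrl, documentationUrl, config, enabled).
export const addToolsSchema = z.array(toolSchema).optional();

// In the PUT handler, merge accepted tools into config.tools, deduplicating by id.
export function mergeTools<T extends { id: string }>(existing: T[], added: T[]): T[] {
  const byId = new Map<string, T>();
  for (const tool of existing) byId.set(tool.id, tool);
  for (const tool of added) byId.set(tool.id, tool);
  return [...byId.values()];
}

// On the client, ToolBuilder would generate the id with createId() and keep the
// human-readable name as a separate property instead of deriving a slug from it.
const newTool = { id: createId(), name: 'My custom tool' };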

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/components/integrations/ToolBuilder.tsx` around lines 64 - 65,
The frontend is sending an addTools payload (constructed in ToolBuilder.tsx as
the tool object using toolName->slug) but the backend updateProviderSchema does
not accept addTools and the tool id must use CUID2; either implement addTools in
the API schema or remove the UI flow. To fix: update the backend validation
(updateProviderSchema) to accept an addTools array/object and persist it in the
PUT /api/integrations/providers handler, or remove the addTools UI; if keeping
it, change the frontend ToolBuilder.tsx creation of the tool id to use
createId() from `@paralleldrive/cuid2` (import createId and call it to produce id)
and store the human-readable name in a separate name property so the id matches
the CUID2 regex, then ensure the frontend sends addTools in the request body and
the backend handler reads and validates that field.

…s, and cleanups

Fix cross-drive audit log data leak by always scoping queries to driveId
and replacing broken UUID regex with CUID2 validator. Fix non-functional
ToolBuilder addTools by adding Zod schema and merge logic for config.tools.
Fix hardcoded OAuth returnUrl, reflected content injection, offset=0 bug,
and add missing SWR error handling across integration components.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
Verify each finding against the current code and only fix it if needed.


In `@apps/web/src/app/api/integrations/providers/`[providerId]/route.ts:
- Around line 106-116: The current handler (in route.ts) replaces a
caller-provided updateData.config when addTools is present, causing data loss
and also allows duplicate tools; change the logic that builds updateData.config
(using addTools, existingConfig, existingTools and updateData.config) to merge
in updateData.config on top of existingConfig and then merge tools by combining
existingTools, (updateData.config?.tools || []), and addTools, deduping by tool
id (unique id key) so duplicates are removed while preserving caller overrides;
ensure you reference provider.config, addTools, updateData, existingConfig, and
existingTools when implementing the merge.
- Around line 16-29: The toolSchema's id field currently uses z.string().min(1)
but must enforce the repo CUID2 format; update the id validator in toolSchema
(the id property inside toolSchema) to validate against the CUID2 pattern (e.g.
regex ^[a-z][a-z0-9]{1,31}$) or use the `@paralleldrive/cuid2` validator to ensure
generated IDs conform to CUID2 before persisting; keep the same error semantics
but replace z.string().min(1) with a zod check that rejects non-CUID2 values so
downstream logic always receives a valid CUID2 id.

In `@apps/web/src/app/settings/integrations/page.tsx`:
- Around line 89-94: getProviderDetailHref currently hardcodes only the
'google-calendar' slug; replace this with a scalable lookup by creating a
mapping (e.g., PROVIDER_DETAIL_ROUTES) keyed by provider slug and use it inside
getProviderDetailHref(SafeConnection) to return
PROVIDER_DETAIL_ROUTES[connection.provider?.slug] || null, or if provider
metadata includes a route/slug-derived path (e.g.,
connection.provider.detailPath) prefer deriving the route from that metadata;
update getProviderDetailHref to consult the map/metadata so adding providers
only requires updating the config or provider object rather than changing the
function.

In `@apps/web/src/components/ai/page-agents/AgentIntegrationsPanel.tsx`:
- Around line 180-188: The onBlur handler in AgentIntegrationsPanel.tsx
currently uses a truthy check (val ? ...) which treats 0 as falsy and clears
rateLimitOverride; change it to an explicit null/NaN check: compute val as null
when the input is empty and parseInt otherwise, then use val !== null &&
!Number.isNaN(val) ? { requestsPerMinute: val } : null when calling
handleUpdateGrant; update the onBlur block (the handler that defines val,
current, and calls handleUpdateGrant) so 0 is preserved and invalid parses are
treated as null.

In `@apps/web/src/components/settings/DriveIntegrations.tsx`:
- Around line 74-81: The "Connect" Button (onClick -> setShowProviderPicker) is
unintentionally disabled during initial load because availableProviders derives
from providers which defaults to [], so change the UI to account for isLoading:
either disable the Button when isLoading is true as well (e.g.,
disabled={isLoading || availableProviders.length === 0}) or conditionally
hide/replace the Button until providers finish loading (render Button only when
isLoading is false), ensuring you reference the Button component,
setShowProviderPicker, availableProviders and isLoading in the change.

In `@apps/web/src/hooks/useIntegrations.ts`:
- Around line 7-14: The fetcher function builds an Error that includes the raw
response body (see fetcher, fetchWithAuth, res and body), which can leak
verbose/internal server details; instead truncate and sanitize the body before
putting it into the Error message (e.g. strip newlines, limit to a safe length
like 200 chars and append "…", or omit the body entirely in production). Update
the error construction to use the sanitized/truncatedBody variable so consumers
of fetcher(error.message) never receive unbounded or multiline server content.

In `@packages/lib/src/integrations/repositories/audit-repository.ts`:
- Around line 196-215: countAuditLogsByErrorType currently pulls all matching
rows via database.query.integrationAuditLog.findMany and does in-memory grouping
with a Map; replace that with a database-side aggregation (GROUP BY) to avoid
O(n) memory/CPU. Update the implementation that uses whereClause to run a
grouped query (e.g. via Drizzle's sql template or a group-by/count query) that
SELECTs error_type and COUNT(*) grouped by error_type, then map the DB rows to
the same return shape ({ errorType, count }). Remove the in-memory Map logic and
ensure you still handle nullable/empty errorType consistently.
🧹 Nitpick comments (5)
apps/web/src/app/api/integrations/providers/[providerId]/route.ts (1)

16-29: Tool id field lacks CUID2 format validation.

Per repo conventions, IDs should follow the CUID2 format (^[a-z][a-z0-9]{1,31}$). If tool IDs are generated client-side, consider validating with a regex or the @paralleldrive/cuid2 validator to prevent malformed IDs from being persisted.

Suggested fix
-  id: z.string().min(1),
+  id: z.string().regex(/^[a-z][a-z0-9]{1,31}$/, 'Tool ID must be a valid CUID2'),

Based on learnings: "Enforce using paralleldrive/cuid2 for ID generation across the TypeScript codebase (not UUIDs). IDs should follow the CUID2 format."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/app/api/integrations/providers/`[providerId]/route.ts around
lines 16 - 29, The toolSchema's id field currently uses z.string().min(1) but
must enforce the repo CUID2 format; update the id validator in toolSchema (the
id property inside toolSchema) to validate against the CUID2 pattern (e.g. regex
^[a-z][a-z0-9]{1,31}$) or use the `@paralleldrive/cuid2` validator to ensure
generated IDs conform to CUID2 before persisting; keep the same error semantics
but replace z.string().min(1) with a zod check that rejects non-CUID2 values so
downstream logic always receives a valid CUID2 id.
apps/web/src/hooks/useIntegrations.ts (1)

7-14: Fetcher error message includes raw response body — consider trimming for safety.

The error message on line 11 appends the response body text: `Failed to fetch ${url}: ${res.status}${body ? ` - ${body}` : ''}`. If the server returns verbose error bodies (stack traces, internal details), this ends up in the Error object. While the reviewed UI components only show generic messages for SWR errors, any future consumer calling error.message could inadvertently expose server internals.

Consider truncating the body or stripping it in production:

♻️ Suggested safeguard
 const fetcher = async (url: string) => {
   const res = await fetchWithAuth(url);
   if (!res.ok) {
     const body = await res.text().catch(() => '');
-    throw new Error(`Failed to fetch ${url}: ${res.status}${body ? ` - ${body}` : ''}`);
+    const detail = body ? ` - ${body.slice(0, 200)}` : '';
+    throw new Error(`Failed to fetch ${url}: ${res.status}${detail}`);
   }
   return res.json();
 };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/hooks/useIntegrations.ts` around lines 7 - 14, The fetcher
function builds an Error that includes the raw response body (see fetcher,
fetchWithAuth, res and body), which can leak verbose/internal server details;
instead truncate and sanitize the body before putting it into the Error message
(e.g. strip newlines, limit to a safe length like 200 chars and append "…", or
omit the body entirely in production). Update the error construction to use the
sanitized/truncatedBody variable so consumers of fetcher(error.message) never
receive unbounded or multiline server content.
apps/web/src/app/settings/integrations/page.tsx (1)

89-94: getProviderDetailHref is hardcoded to a single provider slug.

This works for now but won't scale as more providers are added. Consider moving this to a config map or deriving from provider metadata.
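
One hedged sketch of the map-based lookup (the extra types and route paths here are illustrative; only google-calendar exists today):

// Minimal stand-in for the app's SafeConnection type, just for this sketch.
type SafeConnection = { provider?: { slug?: string } };

// Route map keyed by provider slug; the path values are illustrative.
const PROVIDER_DETAIL_ROUTES: Record<string, string> = {
  'google-calendar': '/settings/integrations/google-calendar',
};

function getProviderDetailHref(connection: SafeConnection): string | null {
  const slug = connection.provider?.slug;
  return slug ? PROVIDER_DETAIL_ROUTES[slug] ?? null : null;
}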

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/app/settings/integrations/page.tsx` around lines 89 - 94,
getProviderDetailHref currently hardcodes only the 'google-calendar' slug;
replace this with a scalable lookup by creating a mapping (e.g.,
PROVIDER_DETAIL_ROUTES) keyed by provider slug and use it inside
getProviderDetailHref(SafeConnection) to return
PROVIDER_DETAIL_ROUTES[connection.provider?.slug] || null, or if provider
metadata includes a route/slug-derived path (e.g.,
connection.provider.detailPath) prefer deriving the route from that metadata;
update getProviderDetailHref to consult the map/metadata so adding providers
only requires updating the config or provider object rather than changing the
function.
packages/lib/src/integrations/repositories/audit-repository.ts (1)

196-215: In-memory grouping in countAuditLogsByErrorType is inefficient at scale.

This function fetches all matching rows into memory and groups with a Map. For large audit tables this becomes an O(n) memory and CPU concern. Consider using Drizzle's sql template or a GROUP BY query to push aggregation to the database.

♻️ Sketch using SQL aggregation
-  const logs = await database.query.integrationAuditLog.findMany({
-    where: whereClause,
-    columns: {
-      errorType: true,
-    },
-  });
-
-  // Group by errorType
-  const counts = new Map<string, number>();
-  for (const log of logs) {
-    if (log.errorType) {
-      counts.set(log.errorType, (counts.get(log.errorType) ?? 0) + 1);
-    }
-  }
-
-  return Array.from(counts.entries()).map(([errorType, count]) => ({
-    errorType,
-    count,
-  }));
+  // Push aggregation to the database
+  const results = await database
+    .select({
+      errorType: integrationAuditLog.errorType,
+      count: sql<number>`count(*)`.as('count'),
+    })
+    .from(integrationAuditLog)
+    .where(whereClause)
+    .groupBy(integrationAuditLog.errorType);
+
+  return results
+    .filter((r) => r.errorType != null)
+    .map((r) => ({ errorType: r.errorType!, count: Number(r.count) }));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/lib/src/integrations/repositories/audit-repository.ts` around lines
196 - 215, countAuditLogsByErrorType currently pulls all matching rows via
database.query.integrationAuditLog.findMany and does in-memory grouping with a
Map; replace that with a database-side aggregation (GROUP BY) to avoid O(n)
memory/CPU. Update the implementation that uses whereClause to run a grouped
query (e.g. via Drizzle's sql template or a group-by/count query) that SELECTs
error_type and COUNT(*) grouped by error_type, then map the DB rows to the same
return shape ({ errorType, count }). Remove the in-memory Map logic and ensure
you still handle nullable/empty errorType consistently.
apps/web/src/components/settings/DriveIntegrations.tsx (1)

74-81: "Connect" button is disabled during initial load.

While isLoading is true, providers defaults to [], so availableProviders is empty and the button is disabled. This is technically correct but could confuse users who see a visible but greyed-out button before data arrives. Consider also disabling based on isLoading explicitly, or hiding the button entirely until loaded.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/components/settings/DriveIntegrations.tsx` around lines 74 - 81,
The "Connect" Button (onClick -> setShowProviderPicker) is unintentionally
disabled during initial load because availableProviders derives from providers
which defaults to [], so change the UI to account for isLoading: either disable
the Button when isLoading is true as well (e.g., disabled={isLoading ||
availableProviders.length === 0}) or conditionally hide/replace the Button until
providers finish loading (render Button only when isLoading is false), ensuring
you reference the Button component, setShowProviderPicker, availableProviders
and isLoading in the change.

…tization, SQL grouping

- Fix addTools merge to preserve caller config and deduplicate tools by ID
- Fix rate limit onBlur treating 0 as falsy (use explicit null/NaN checks)
- Make Connect button disabled condition explicitly account for loading state
- Truncate and sanitize fetcher error body to prevent info leakage
- Replace in-memory audit log grouping with database-side GROUP BY

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/lib/src/integrations/repositories/audit-repository.ts (1)

62-84: ⚠️ Potential issue | 🔴 Critical

Test mock function signature does not match the updated function — tests are not validating the real implementation.

The production code in route.ts correctly passes driveId to getAuditLogsByConnection. However, the test file defines a local mock function (lines 69-75) with the old signature (db, connectionId, options) that is missing the new driveId parameter. The test at line 299 calls this stale mock, meaning tests are not actually validating the real function behavior. Update the mock function signature to match the actual function: (db, driveId, connectionId, options), and update all test calls to pass the driveId parameter.
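
A sketch of the corrected mock, assuming the test keeps a local mock and a Drizzle-style db stub whose findMany accepts a plain-object where clause (as the assertions elsewhere in this review suggest); the types below are placeholders:

// Placeholder shape for the test file's mock database.
type MockDb = {
  query: {
    integrationAuditLog: {
      findMany: (args: {
        where: Record<string, unknown>;
        limit?: number;
        offset?: number;
      }) => Promise<unknown[]>;
    };
  };
};

// Mock updated to mirror the real signature: (db, driveId, connectionId, options).
const getAuditLogsByConnection = (
  db: MockDb,
  driveId: string,
  connectionId: string,
  options?: { limit?: number; offset?: number },
) =>
  db.query.integrationAuditLog.findMany({
    where: { driveId, connectionId },
    ...options,
  });

// Every test invocation then passes the drive as the second argument:
// await getAuditLogsByConnection(mockDb, 'drive-1', 'conn-1');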

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/lib/src/integrations/repositories/audit-repository.ts` around lines
62 - 84, Tests use a stale mock of getAuditLogsByConnection with the old
signature, so they never validate the real implementation; update the test mock
to match the real function signature getAuditLogsByConnection(db, driveId,
connectionId, options) and change every test invocation (including the call at
the failing test) to pass the driveId argument as the second parameter so the
mocked behavior aligns with how route.ts actually calls
getAuditLogsByConnection.
🤖 Fix all issues with AI agents
Verify each finding against the current code and only fix it if needed.


In `@apps/web/src/components/ai/page-agents/AgentIntegrationsPanel.tsx`:
- Around line 145-149: The Switch component used for toggling integrations (the
<Switch> with checked={isEnabled}, disabled={!isActive || toggling ===
connection.id}, onCheckedChange={(checked) => handleToggle(connection,
checked)}) is missing an accessible label; add an explicit accessible name by
either supplying an aria-label/aria-labelledby prop to that Switch (e.g.,
`aria-label={`Enable ${connection.name} integration`}`) or by rendering a
<Label> tied to the Switch via id/htmlFor using connection.id, so screen readers
get a clear description of the toggle.
- Around line 65-66: The catch block inside the AgentIntegrationsPanel component
is using a catch parameter named `error` which shadows the component-level
`error` variable; rename the catch parameter to `err` (or similar) in the async
update logic so the condition and toast call become `err instanceof Error ?
err.message : 'Failed to update'`, and update any references in that catch block
(e.g., the toast.error call) to use the new name to avoid shadowing and linter
confusion.
- Around line 170-189: The numeric Input's onBlur currently parses whatever the
user typed and sends it to handleUpdateGrant without enforcing the min/max
bounds; update the onBlur handler on the Input (key pattern
ratelimit-${grant.id} / id ratelimit-${grant.id}) to normalize the value: parse
the trimmed value to an integer (or null for empty), ignore NaN, then clamp
non-null values to the allowed range (1..1000) before calling handleUpdateGrant
with rateLimitOverride: { requestsPerMinute: clampedValue } (or null); if you
prefer rejecting out-of-range input instead of clamping, validate and don't call
handleUpdateGrant and surface a validation state to the user.

In `@apps/web/src/components/settings/DriveIntegrations.tsx`:
- Around line 90-94: The error banner in DriveIntegrations lacks a retry
affordance; modify the JSX inside the error branch to add a small retry button
(e.g., a Button or plain button element) that calls the provided
mutateConnections and, if available, mutateProviders to revalidate/fetch data
when clicked; locate the error branch in the DriveIntegrations component (the
block rendering "Failed to load integrations") and wire the button's onClick to
async handlers that await mutateConnections() and conditionally await
mutateProviders() to refresh state and hide the error.

In `@packages/lib/src/integrations/repositories/audit-repository.ts`:
- Around line 188-211: The non-null assertion on the and(...) call used to build
whereClause can be removed by assembling the filters into an array and passing
them to and(...) only when present: build an array of conditions starting with
eq(integrationAuditLog.driveId, driveId), push gte(...) and lte(...) if
startDate and endDate are provided, then call and(...conditions) to set
whereClause; update the .where(...) call to use and(...conditions) combined with
isNotNull(integrationAuditLog.errorType) so you no longer need the trailing "!"
on the and(...) result and the rows.map logic (row.errorType and
Number(row.count)) remains unchanged.
- Around line 62-84: Tests use a stale mock of getAuditLogsByConnection with the
old signature, so they never validate the real implementation; update the test
mock to match the real function signature getAuditLogsByConnection(db, driveId,
connectionId, options) and change every test invocation (including the call at
the failing test) to pass the driveId argument as the second parameter so the
mocked behavior aligns with how route.ts actually calls
getAuditLogsByConnection.
🧹 Nitpick comments (4)
apps/web/src/components/settings/DriveIntegrations.tsx (1)

90-94: Consider adding a retry button to the error state.

The error banner is a good addition, but there's no way for the user to retry without a full page refresh. Since mutateConnections and mutateProviders (if destructured) are available, a small retry button would complete the pattern.

Suggested enhancement
           ) : error ? (
             <div className="flex items-center gap-2 p-4 text-sm text-destructive bg-destructive/10 rounded-lg">
               <AlertCircle className="h-4 w-4" />
               <span>Failed to load integrations</span>
+              <Button
+                variant="ghost"
+                size="sm"
+                className="ml-auto text-destructive"
+                onClick={() => mutateConnections()}
+              >
+                Retry
+              </Button>
             </div>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/components/settings/DriveIntegrations.tsx` around lines 90 - 94,
The error banner in DriveIntegrations lacks a retry affordance; modify the JSX
inside the error branch to add a small retry button (e.g., a Button or plain
button element) that calls the provided mutateConnections and, if available,
mutateProviders to revalidate/fetch data when clicked; locate the error branch
in the DriveIntegrations component (the block rendering "Failed to load
integrations") and wire the button's onClick to async handlers that await
mutateConnections() and conditionally await mutateProviders() to refresh state
and hide the error.
apps/web/src/components/ai/page-agents/AgentIntegrationsPanel.tsx (2)

65-66: error in catch shadows the component-level error variable (line 31).

The catch parameter error shadows the error declared at line 31. While it doesn't cause a runtime bug here, it can confuse readers and linters. Rename it to err or similar.

♻️ Proposed fix
-    } catch (error) {
-      toast.error(error instanceof Error ? error.message : 'Failed to update');
+    } catch (err) {
+      toast.error(err instanceof Error ? err.message : 'Failed to update');
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/components/ai/page-agents/AgentIntegrationsPanel.tsx` around
lines 65 - 66, The catch block inside the AgentIntegrationsPanel component is
using a catch parameter named `error` which shadows the component-level `error`
variable; rename the catch parameter to `err` (or similar) in the async update
logic so the condition and toast call become `err instanceof Error ? err.message
: 'Failed to update'`, and update any references in that catch block (e.g., the
toast.error call) to use the new name to avoid shadowing and linter confusion.

170-189: Rate limit input doesn't enforce min/max bounds programmatically.

The HTML min={1} / max={1000} attributes are advisory — they don't prevent keyboard entry of out-of-range values. The onBlur handler sends whatever parseInt produces (e.g., 0, -3, 5000) straight to the server. Consider clamping or rejecting out-of-range values client-side.

♻️ Proposed fix
 onBlur={(e) => {
   const raw = e.target.value.trim();
   const val = raw === '' ? null : parseInt(raw, 10);
   const current = grant.rateLimitOverride?.requestsPerMinute ?? null;
-  if (val !== current && (val === null || !Number.isNaN(val))) {
+  if (val !== current && (val === null || (!Number.isNaN(val) && val >= 1 && val <= 1000))) {
     handleUpdateGrant(grant, {
       rateLimitOverride: val != null ? { requestsPerMinute: val } : null,
     });
   }
 }}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/components/ai/page-agents/AgentIntegrationsPanel.tsx` around
lines 170 - 189, The numeric Input's onBlur currently parses whatever the user
typed and sends it to handleUpdateGrant without enforcing the min/max bounds;
update the onBlur handler on the Input (key pattern ratelimit-${grant.id} / id
ratelimit-${grant.id}) to normalize the value: parse the trimmed value to an
integer (or null for empty), ignore NaN, then clamp non-null values to the
allowed range (1..1000) before calling handleUpdateGrant with rateLimitOverride:
{ requestsPerMinute: clampedValue } (or null); if you prefer rejecting
out-of-range input instead of clamping, validate and don't call
handleUpdateGrant and surface a validation state to the user.
packages/lib/src/integrations/repositories/audit-repository.ts (1)

188-211: Solid refactor moving aggregation to the database.

The GROUP BY approach is much better than in-memory grouping. Two minor notes:

  1. Line 196: The non-null assertion ! on and() is safe here since all arguments are defined, but could be avoided with a cleaner pattern.
  2. Line 209: row.errorType! is safe due to the isNotNull filter — no issue.
♻️ Optional: eliminate the non-null assertion by building conditions as an array
-  let whereClause = eq(integrationAuditLog.driveId, driveId);
-
-  if (startDate && endDate) {
-    whereClause = and(
-      whereClause,
-      gte(integrationAuditLog.createdAt, startDate),
-      lte(integrationAuditLog.createdAt, endDate)
-    )!;
-  }
-
-  const rows = await database
-    .select({
-      errorType: integrationAuditLog.errorType,
-      count: count(),
-    })
-    .from(integrationAuditLog)
-    .where(and(whereClause, isNotNull(integrationAuditLog.errorType)))
-    .groupBy(integrationAuditLog.errorType);
+  const conditions = [
+    eq(integrationAuditLog.driveId, driveId),
+    isNotNull(integrationAuditLog.errorType),
+  ];
+
+  if (startDate && endDate) {
+    conditions.push(
+      gte(integrationAuditLog.createdAt, startDate),
+      lte(integrationAuditLog.createdAt, endDate)
+    );
+  }
+
+  const rows = await database
+    .select({
+      errorType: integrationAuditLog.errorType,
+      count: count(),
+    })
+    .from(integrationAuditLog)
+    .where(and(...conditions))
+    .groupBy(integrationAuditLog.errorType);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/lib/src/integrations/repositories/audit-repository.ts` around lines
188 - 211, The non-null assertion on the and(...) call used to build whereClause
can be removed by assembling the filters into an array and passing them to
and(...) only when present: build an array of conditions starting with
eq(integrationAuditLog.driveId, driveId), push gte(...) and lte(...) if
startDate and endDate are provided, then call and(...conditions) to set
whereClause; update the .where(...) call to use and(...conditions) combined with
isNotNull(integrationAuditLog.errorType) so you no longer need the trailing "!"
on the and(...) result and the rows.map logic (row.errorType and
Number(row.count)) remains unchanged.

… UX, non-null assertion

- Rename catch parameter to avoid shadowing component-level error variable
- Add aria-label to integration toggle Switch for accessibility
- Clamp rate limit input to [1, 1000] on blur and sync displayed value
- Add Retry button to DriveIntegrations error banner
- Replace non-null assertion with conditions array in countAuditLogsByErrorType
- Fix stale mock signature in audit-repository tests to match real function

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/lib/src/integrations/repositories/audit-repository.test.ts (1)

279-305: ⚠️ Potential issue | 🟡 Minor

Test should assert that driveId was passed to the query.

The test at line 300 passes 'drive-1' as driveId but the assertions (lines 302-303) only verify result length and connectionId. There's no assertion that findMany was called with the driveId in the where clause — which is the whole point of this change.

💚 Proposed fix: add assertion for driveId filtering
     const result = await getAuditLogsByConnection(mockDb, 'drive-1', 'conn-1');

     expect(result).toHaveLength(1);
     expect(result[0].connectionId).toBe('conn-1');
+    expect(mockDb.query.integrationAuditLog.findMany).toHaveBeenCalledWith(
+      expect.objectContaining({
+        where: expect.objectContaining({ driveId: 'drive-1', connectionId: 'conn-1' }),
+      })
+    );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/lib/src/integrations/repositories/audit-repository.test.ts` around
lines 279 - 305, The test for getAuditLogsByConnection doesn't assert that the
driveId was forwarded to the DB query; update the test to assert that
mockDb.query.integrationAuditLog.findMany was called with a where clause
containing driveId: 'drive-1' (and connectionId: 'conn-1') so the call from
getAuditLogsByConnection is validated; use
expect(mockDb.query.integrationAuditLog.findMany).toHaveBeenCalledWith(...) or
expect.objectContaining/expect.any to match the call containing where: {
driveId: 'drive-1', connectionId: 'conn-1' }.
🧹 Nitpick comments (2)
packages/lib/src/integrations/repositories/audit-repository.test.ts (1)

47-106: Tests exercise inline mock reimplementations, not the actual repository functions.

All test helpers (logAuditEntry, getAuditLogsByDrive, getAuditLogsByConnection, etc.) are re-implemented locally rather than imported from the source module. This means these tests verify mock behavior, not real query logic. Notably, countAuditLogsByErrorType — which was significantly refactored to use SQL aggregation — has no test coverage at all.

Consider importing the actual functions and mocking only the db dependency so that real query-building logic is exercised. This would catch regressions like incorrect WHERE clauses or aggregation bugs.
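
A sketch of that direction, assuming Vitest and that the repository reads from db.query.integrationAuditLog.findMany (both assumptions; adapt to the actual test setup):

import { expect, it, vi } from 'vitest';
// Import the real implementation instead of redefining it inside the test file.
import { getAuditLogsByConnection } from './audit-repository';

it('scopes connection audit logs to the owning drive', async () => {
  const findMany = vi.fn().mockResolvedValue([]);
  // Minimal db stub covering only what the repository touches.
  const mockDb = { query: { integrationAuditLog: { findMany } } };

  await getAuditLogsByConnection(
    mockDb as unknown as Parameters<typeof getAuditLogsByConnection>[0],
    'drive-1',
    'conn-1',
  );

  // The real query-building logic ran, so regressions in the where clause
  // (for example a missing driveId condition) surface here.
  expect(findMany).toHaveBeenCalledTimes(1);
});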

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/lib/src/integrations/repositories/audit-repository.test.ts` around
lines 47 - 106, Tests currently reimplement repository helpers (logAuditEntry,
getAuditLogsByDrive, getAuditLogsByConnection, getAuditLogsByDateRange,
getAuditLogsBySuccess) locally instead of exercising the real repository
functions; replace these inline implementations by importing the real functions
from the repository module under test and mock only the MockDb passed into them
so the actual query-building logic runs, and add explicit test(s) for
countAuditLogsByErrorType that supply a mocked db.query.integrationAuditLog (or
db.aggregate equivalent) to assert the correct SQL-like aggregation/WHERE
clauses and returned counts; reference and update tests that call logAuditEntry,
getAuditLogsByDrive, getAuditLogsByConnection, getAuditLogsByDateRange,
getAuditLogsBySuccess, and countAuditLogsByErrorType to use the imported
functions and mocked db.
packages/lib/src/integrations/repositories/audit-repository.ts (1)

189-213: Good refactor from in-memory counting to SQL aggregation; minor note on the non-null assertion.

The move to count() + groupBy is a solid improvement for correctness and performance. Two observations:

  1. Line 211 — row.errorType!: The isNotNull WHERE clause guarantees this at the DB level, so this is safe at runtime. However, the commit message mentions replacing a non-null assertion with a safe conditional. Consider a defensive fallback to match that intent:
♻️ Optional: defensive fallback instead of non-null assertion
-  return rows.map((row) => ({
-    errorType: row.errorType!,
-    count: Number(row.count),
-  }));
+  return rows
+    .filter((row) => row.errorType !== null)
+    .map((row) => ({
+      errorType: row.errorType as string,
+      count: Number(row.count),
+    }));
  2. Number(row.count) on line 212 is correct: PostgreSQL returns bigint counts as strings via Drizzle, so the explicit coercion is needed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/lib/src/integrations/repositories/audit-repository.ts` around lines
189 - 213, The non-null assertion on row.errorType in the rows.map (produced
from the integrationAuditLog query) should be replaced with a defensive fallback
to avoid using `!`; update the mapping that currently returns { errorType:
row.errorType!, count: Number(row.count) } to safely handle a possible null by
using a fallback (e.g., row.errorType ?? 'UNKNOWN_ERROR' or similar
domain-appropriate sentinel) so consumers always get a string and no assertion
is required.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/web/src/components/settings/DriveIntegrations.tsx`:
- Around line 90-97: Retry button only re-fetches connections, not providers.
Destructure the providers' revalidation function (mutate) from useProviders (the
same hook that returns providersError) and update the Retry handler to call both
mutate() and mutateConnections(); ensure the retry button triggers both
revalidations so a providersError is cleared when providers succeed and
connections are refetched.
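
A sketch of that wiring (import paths, the component name, and useProviders' exact signature are assumptions; only useProviders and mutateConnections are named in the comment):

// Sketch only: assumes shadcn Button at '@/components/ui/button' and the
// providers hook exported from '@/hooks/useIntegrations'.
import { Button } from '@/components/ui/button';
import { useProviders } from '@/hooks/useIntegrations';

export function IntegrationsRetryButton({
  mutateConnections,
}: {
  mutateConnections: () => Promise<unknown>;
}) {
  const { mutate: mutateProviders } = useProviders();

  return (
    <Button
      variant="ghost"
      size="sm"
      onClick={() => {
        // Revalidate both SWR caches so providersError clears once providers load again.
        void mutateConnections();
        void mutateProviders();
      }}
    >
      Retry
    </Button>
  );
}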


… defensive fallback

- Retry button now revalidates both connections and providers so
  providersError is cleared on success
- Add assertion that getAuditLogsByConnection forwards driveId to the
  DB query
- Replace row.errorType! with ?? 'UNKNOWN_ERROR' defensive fallback

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@coderabbitai coderabbitai bot left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/lib/src/integrations/repositories/audit-repository.ts (1)

63-84: ⚠️ Potential issue | 🟡 Minor

Drive-scoping looks correct.

Adding driveId to the AND clause properly ensures connection audit logs are scoped to the owning drive—good security improvement.

One refinement: the route handler validates connectionId with only z.string().min(1), but this codebase standardizes on CUID2 format validation (see id-validators.ts). Other ID fields like userId, pageId, and driveId use dedicated validators. Consider adding a parseConnectionId validator or using a CUID2 schema in the route validation for consistency with the rest of the system.
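
A minimal sketch of such a validator (the parseConnectionId name follows the comment's suggestion; where it lives and how errors surface are assumptions):

import { z } from 'zod';
import { isCuid } from '@paralleldrive/cuid2';

// CUID2-validated id schema, in the spirit of the existing id-validators.ts helpers.
const connectionIdSchema = z.string().refine(isCuid, 'Expected a CUID2 connection id');

// Hypothetical helper; a route handler would translate the thrown ZodError into a 400.
export function parseConnectionId(value: unknown): string {
  return connectionIdSchema.parse(value);
}

// Usage in the route, before calling getAuditLogsByConnection:
// const connectionId = parseConnectionId(searchParams.get('connectionId'));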

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/lib/src/integrations/repositories/audit-repository.ts` around lines
63 - 84, The route that calls getAuditLogsByConnection currently validates
connectionId with z.string().min(1); update that route's validation to use the
project's CUID2 validator (or add and export a parseConnectionId helper in
id-validators.ts) and replace the z.string().min(1) check with the CUID2
schema/parseConnectionId so connectionId follows the same CUID2 format as
driveId/userId/pageId; ensure you import the validator from id-validators.ts and
use it in the handler before calling getAuditLogsByConnection.
