
fix(chat): fix deactivated LLM config errors and config selection bug #317

Merged

zaira-bibi merged 1 commit into main from zaira/llm-config-select-fix on May 12, 2026

Conversation

@zaira-bibi (Contributor)

What

This PR fixes four related bugs in the chat plugin's LLM config flow.

1. Misleading backend error for deactivated configs
preprocess_config filtered by is_active=True, so saving with a deactivated config raised "not found or does not belong to this user" instead of a meaningful message.

2. Raw JSON error display
The frontend surfaced the full API response body (Failed to update config: {"detail":"…"}) instead of the human-readable detail string.

3. Single-config selection broken
When chat had no saved LLM config and exactly one config existed, the browser visually auto-selected it (the placeholder <option> was disabled, causing a React/DOM value mismatch), but onChange never fired. The config was never actually selected and models never loaded. The workaround was creating a second config, selecting it, then switching back.

4. Deactivated config — Save and model override still enabled
Selecting a deactivated config showed a warning banner but left both the Save button and model override dropdown active.

Changes

| File | Change |
| --- | --- |
| app/llm/plugin_adapter.py | Add _fetch_llm_config_any (no is_active filter); preprocess_config now checks existence, then activeness, with distinct errors (see the sketch below this table) |
| tests/llm/test_plugin_adapter.py | Add test_preprocess_deactivated_llm_config_id_raises |
| frontend/lib/plugins/context.tsx | Parse the JSON error response and extract the detail field |
| frontend/components/ui/Select.tsx | Remove disabled from the placeholder <option> |
| frontend/plugins/chat/components/ChatConfigModal.tsx | Disable the Save button and model override dropdown when the selected config is inactive |
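
Rough sketch of the new two-step backend check, using stand-in types. The real LLMConfig model, session plumbing, and signatures in app/llm/plugin_adapter.py differ; LLMConfigDeactivatedError is an assumed name inferred from the error messages described here.

```python
# Sketch only: LLMConfig is a stand-in dataclass, not the real ORM model.
from dataclasses import dataclass
from typing import Optional


@dataclass
class LLMConfig:
    id: int
    user_id: int
    is_active: bool
    is_deleted: bool
    name: str = ""


class LLMConfigNotFoundError(ValueError):
    pass


class LLMConfigDeactivatedError(ValueError):  # assumed name
    pass


def _fetch_llm_config_any(
    configs: list[LLMConfig], user_id: int, config_id: int
) -> Optional[LLMConfig]:
    """Like _fetch_llm_config, but without the is_active filter.

    Still scopes by user_id and excludes soft-deleted rows, so the
    authorization boundary is unchanged.
    """
    return next(
        (
            c for c in configs
            if c.id == config_id and c.user_id == user_id and not c.is_deleted
        ),
        None,
    )


def preprocess_config(
    configs: list[LLMConfig], user_id: int, config_id: int
) -> LLMConfig:
    cfg = _fetch_llm_config_any(configs, user_id, config_id)
    if cfg is None:
        # Step 1: existence (wrong id, wrong user, or soft-deleted).
        raise LLMConfigNotFoundError(
            "LLM config not found or does not belong to this user"
        )
    if not cfg.is_active:
        # Step 2: activeness, with its own distinct, actionable message.
        raise LLMConfigDeactivatedError(
            "LLM config is deactivated; reactivate it or select another"
        )
    return cfg
```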

Test plan

  • Open chat settings with no config saved and exactly one LLM config — config should be selectable and models should load on first click
  • Select a deactivated config — Save button and model override are both disabled; warning banner is visible
  • Save with a valid active config — succeeds normally
  • Deactivate a config after saving it, reopen settings — postprocess still shows the config name (read path unchanged), but attempting to re-save raises the "deactivated" error (backend guard)
  • Trigger a validation error — error message shows plain text, not raw JSON

zaira-bibi self-assigned this on May 12, 2026
claude Bot commented May 12, 2026

Claude finished @zaira-bibi's task in 2m 25s.


Review: PR #317 — fix(chat): fix deactivated LLM config errors and config selection bug

  • Gather context — read changed files and diff
  • Check for previously addressed review comments (none found)
  • Review backend changes (plugin_adapter.py)
  • Review test coverage (test_plugin_adapter.py)
  • Review frontend changes (context.tsx, Select.tsx, ChatConfigModal.tsx)
  • Check for security, correctness, and anti-patterns
  • Post review summary and inline comments

Overview

Clean, well-scoped PR that fixes four related bugs in the chat config flow. The changes are minimal and targeted. The backend adds two-step validation (exists → active); the frontend improves error display, fixes the single-config selection bug, and disables controls for inactive configs.

Findings by Priority

Medium

postprocess_config still uses active-only filter — contradicts PR description

The PR description states "postprocess still shows the config name (read path unchanged)", but postprocess_config at app/llm/plugin_adapter.py:95 uses _fetch_llm_config which filters by is_active=True. This means when a user deactivates a previously-saved config, postprocess returns null for llm_config_name/llm_provider/llm_model.

The frontend's LLMConfigSelect independently fetches with includeInactive: true, so the dropdown still works. But if any other consumer reads the postprocessed config (e.g., chat message handling, or the config display outside the modal), it will see null fields for a deactivated config.

Recommendation: Either use _fetch_llm_config_any in postprocess_config too (so the read path truly preserves the config name), or update the PR description to match the actual behavior. This is worth verifying against the test plan item: "Deactivate a config after saving it, reopen settings — postprocess still shows the config name."

Low (Suggestions)

  1. Select.tsx placeholder — consider hidden instead of removing disabled (inline comment posted). Removing disabled lets users re-select the empty placeholder. Using <option value="" hidden> would fix the single-config bug without allowing placeholder re-selection. Not blocking since LLMConfigSelect guards against empty values.

  2. context.tsx error parsing — good defensive check (inline comment posted). The typeof json.detail === "string" guard correctly handles FastAPI's validation errors (where detail is an array). Well done.

What Looks Good

  • Two-step validation in preprocess_config — clean separation of "not found" vs "deactivated" errors with distinct user-facing messages
  • Test coverage — new test for the deactivated case with proper match="deactivated" assertion
  • Frontend error parsing — properly handles JSON and text responses with a sensible fallback
  • ChatConfigModal disable logic — selectedIsInactive flag cleanly gates both Save button and model override
  • No React anti-patterns — dependency arrays are correct, no state mutations, proper cleanup patterns

Test Coverage Assessment

The new test_preprocess_deactivated_llm_config_id_raises test covers the backend change well. One gap: there's no test for preprocess_config with a deactivated config that has is_deleted=True (should hit the "not found" path, not the "deactivated" path). This is an edge case that the _fetch_llm_config_any query handles correctly (it filters is_deleted), but a test would confirm it.

Security

No concerns. The changes maintain existing authorization checks (user_id filtering), and the new _fetch_llm_config_any method still scopes by user_id and is_deleted.

Overall Assessment

Approve with one non-blocking suggestion. The code is clean and the fixes are correct. The main point to verify is whether the postprocess_config active-only filter matches the intended UX for deactivated configs.


zaira-bibi force-pushed the zaira/llm-config-select-fix branch from 446d634 to 00d9f9f on May 12, 2026 10:57
claude Bot commented May 12, 2026

Claude finished @zaira-bibi's task in 2m 19s.


Re-review: PR #317 — fix(chat): fix deactivated LLM config errors and config selection bug

  • Gather context — read updated diff and changed files
  • Check previously addressed review comments
  • Review backend changes (plugin_adapter.py, routes.py)
  • Review test coverage (test_plugin_adapter.py)
  • Review frontend changes (context.tsx, Select.tsx, ChatConfigModal.tsx)
  • Check for security, correctness, and anti-patterns
  • Post review summary and inline comments

Overview

Re-reviewed after force push. The changes remain clean and well-scoped. The PR fixes four related bugs in the chat config flow with minimal, targeted modifications. One new observation on routes.py since the previous review.

Previous Review Comments

| Thread | Status |
| --- | --- |
| Select.tsx — consider hidden instead of removing disabled | Open (non-blocking suggestion; current approach works with LLMConfigSelect's empty-value guard) |
| context.tsx — positive acknowledgment of the typeof json.detail === "string" guard | Resolved (no action needed) |

Findings by Priority

Medium

postprocess_config still uses active-only filter — flagged in previous review, still applicable

postprocess_config at app/llm/plugin_adapter.py:95 calls _fetch_llm_config which filters by is_active=True. When a user deactivates a previously-saved config and reopens the modal, postprocess returns null for llm_config_name/llm_provider/llm_model.

The frontend works around this because LLMConfigSelect independently fetches with includeInactive: true (frontend/components/LLMConfigSelect.tsx:48). But the PR description's test plan says "postprocess still shows the config name (read path unchanged)" — this is inaccurate with the current code.

Recommendation: Either use _fetch_llm_config_any in postprocess_config too, or update the test plan to reflect actual behavior. Not blocking since the frontend handles it, but the inconsistency between the PR description and the code could cause confusion.

Low

routes.py error message UX improvement — good change, minor nit

The error message at app/core_plugins/chat/routes.py:316 now reads: "No AI Key found for the current user. Please select an AI key by clicking the configuration icon on the Chat plugin in My Plugins." — this is a helpful UX improvement. One minor note: the message references "My Plugins" which couples the backend to a specific frontend route name. If that page is renamed, this message becomes stale. Consider keeping the message generic: "No AI Key found. Please configure an AI key in chat settings." — but this is non-blocking.

What Looks Good

  • Two-step validation in preprocess_config (plugin_adapter.py:73-77) — clean separation of "not found" vs "deactivated" errors with distinct user-facing messages
  • Security: _fetch_llm_config_any still scopes by user_id and filters is_deleted — no authorization bypass
  • Test coverage: test_preprocess_deactivated_llm_config_id_raises correctly tests the new deactivated path with match="deactivated"
  • Frontend error parsing (context.tsx:84-93) — properly handles JSON and text responses with sensible fallback chain
  • ChatConfigModal disable logic (ChatConfigModal.tsx:108-109) — selectedIsInactive flag cleanly gates both Save button and model override, with adaptive helper text
  • No React anti-patterns — dependency arrays correct, no state mutations, proper cleanup in effects, ref pattern for callback stability in LLMConfigSelect

Test Coverage

The new test covers the backend change well. One minor gap (noted in previous review): no test for a deactivated + deleted config (is_active=False, is_deleted=True) — should hit the "not found" path since _fetch_llm_config_any filters is_deleted. The existing query handles this correctly, but a test would confirm.

Overall Assessment

Approve with non-blocking suggestions. The code is clean, well-tested, and the fixes are correct. The postprocess_config active-only filter is the main point to verify against intended UX, but doesn't block merge since the frontend handles it independently.


zaira-bibi force-pushed the zaira/llm-config-select-fix branch from 00d9f9f to 124ea70 on May 12, 2026 11:05
claude Bot commented May 12, 2026

Claude finished @zaira-bibi's task in 1m 42s.


Re-review: PR #317 — fix(chat): fix deactivated LLM config errors and config selection bug

  • Gather context — read updated diff and all changed files
  • Check and resolve previously addressed review comments
  • Review backend changes (plugin_adapter.py, routes.py)
  • Review test coverage (test_plugin_adapter.py)
  • Review frontend changes (context.tsx, Select.tsx, ChatConfigModal.tsx)
  • Check for security, correctness, and anti-patterns
  • Post review summary and inline comments

Overview

Re-reviewed after force push. The PR is clean, well-scoped, and all four bugs are correctly addressed. The Select.tsx suggestion from the first review was adopted (using hidden instead of removing disabled).

Previous Review Comments

| Thread | Status |
| --- | --- |
| Select.tsx — use hidden instead of removing disabled | Resolved — adopted in latest push |
| context.tsx — typeof json.detail === "string" guard acknowledgment | Resolved (no action needed) |

Findings by Priority

Medium

postprocess_config still uses active-only filter — flagged in both previous reviews, still applicable

postprocess_config at app/llm/plugin_adapter.py:95 calls _fetch_llm_config which filters by is_active=True. When a user deactivates a previously-saved config and reopens the modal, postprocess returns null for llm_config_name/llm_provider/llm_model.

The frontend works around this because LLMConfigSelect independently fetches with includeInactive: true (frontend/components/LLMConfigSelect.tsx:48). But the PR description's test plan says "postprocess still shows the config name (read path unchanged)" — this is inaccurate with the current code.

Recommendation: Either use _fetch_llm_config_any in postprocess_config too, or update the test plan to reflect actual behavior. Not blocking since the frontend handles it, but the inconsistency between the PR description and the code could cause confusion for future readers.

Low

routes.py:316 — error message couples to frontend route name

The message "…by clicking the configuration icon on the Chat plugin in My Plugins" references the frontend page name "My Plugins". If that page is renamed, this becomes stale. Consider a more generic wording like "Please configure an AI key in your chat plugin settings." — but this is non-blocking.

What Looks Good

  • Two-step validation in preprocess_config (plugin_adapter.py:73-77) — clean separation of "not found" vs "deactivated" with distinct user-facing messages
  • _fetch_llm_config_any properly scopes by user_id and is_deleted — no authorization bypass
  • Select.tsx fix — the hidden attribute on the placeholder is the correct approach: it fixes the React controlled-value mismatch (the root cause of bug #3) while preventing re-selection of the empty placeholder
  • Error parsing (context.tsx:84-93) — handles JSON, text, and fallback with proper type guards
  • ChatConfigModal disable logic — selectedIsInactive flag cleanly gates both Save button and model override dropdown, with adaptive helper text
  • No React anti-patterns — dependency arrays are correct, ref pattern for callback stability in LLMConfigSelect, proper cleanup in effects, no state mutations

Test Coverage

The new test_preprocess_deactivated_llm_config_id_raises covers the backend change well with proper match="deactivated" assertion. One minor gap (noted in previous reviews): no test for a deactivated + soft-deleted config (is_active=False, is_deleted=True) — should hit the "not found" path since _fetch_llm_config_any filters is_deleted. The query handles this correctly, but a test would confirm.

Security

No concerns. The new _fetch_llm_config_any method maintains all existing authorization checks (user_id filtering, is_deleted filtering). The only relaxation is on is_active, which is intentional for the two-step validation.

Overall Assessment

Approve with non-blocking suggestions. The code is clean, well-tested, and all four bugs are correctly fixed. The postprocess_config active-only filter is the main point to verify against intended UX, but doesn't block merge since the frontend handles it independently.


- preprocess_config now distinguishes "not found" vs "deactivated" by
  querying without the is_active filter and checking the flag in Python
- model override dropdown is disabled when selected config is deactivated
- Save button disabled when selected config is deactivated
- API error response JSON is parsed to extract detail before surfacing
  to the user, removing the raw {"detail":"..."} noise
- Select placeholder option is no longer marked disabled, fixing the
  browser/React value mismatch that prevented onChange from firing when
  only one config existed and nothing was yet selected
zaira-bibi force-pushed the zaira/llm-config-select-fix branch from 124ea70 to 3ff17e5 on May 12, 2026 11:13
claude Bot commented May 12, 2026

Claude finished @zaira-bibi's task in 1m 30s.


Re-review: PR #317 — fix(chat): fix deactivated LLM config errors and config selection bug

  • Gather context — read updated diff and all changed files
  • Check and resolve previously addressed review comments
  • Review backend changes (plugin_adapter.py, routes.py)
  • Review test coverage (test_plugin_adapter.py)
  • Review frontend changes (context.tsx, Select.tsx, ChatConfigModal.tsx)
  • Check for security, correctness, and anti-patterns
  • Post review summary and inline comments

Overview

Re-reviewed after force push. The PR is clean, well-scoped, and all four bugs are correctly addressed. The Select.tsx suggestion from the first review was adopted (hidden instead of removing disabled).

Previous Review Comments

| Thread | Status |
| --- | --- |
| Select.tsx — use hidden instead of removing disabled | Resolved — adopted in latest push |
| context.tsx — typeof json.detail === "string" guard acknowledgment | Resolved (no action needed) |

Findings by Priority

Low

routes.py:316 — Double period typo (inline comment posted)

The error message ends with ".." instead of ".": "…your chat plugin settings..". Trivial fix.

Medium (Carried forward — non-blocking)

postprocess_config behavior for deactivated configs — PR description vs. code

postprocess_config at app/llm/plugin_adapter.py:79 now uses _fetch_llm_config_any (no is_active filter). This means it does resolve deactivated configs to their name/provider/model — which is actually the right behavior and matches the test plan item "postprocess still shows the config name."

This was flagged in previous reviews because the earlier code still used the active-only filter in postprocess_config. The latest push fixed this by switching postprocess_config to also use _fetch_llm_config_any. Good change — this is now resolved.
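
Continuing the stand-in types from the sketch in the PR description, the resolved read path would look roughly like this (the llm_config_name key is taken from this review; the shape of the saved dict is assumed):

```python
def postprocess_config(
    configs: list[LLMConfig], user_id: int, saved: dict
) -> dict:
    # Read path: resolve without the is_active filter, so a previously
    # saved but now-deactivated config still displays its name.
    cfg = _fetch_llm_config_any(configs, user_id, saved.get("llm_config_id"))
    if cfg is None:
        # Truly gone: never existed, wrong user, or soft-deleted.
        return {**saved, "llm_config_name": None}
    return {**saved, "llm_config_name": cfg.name}
```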

What Looks Good

  • Two-step validation in preprocess_config (plugin_adapter.py:57-61) — clean separation of "not found" vs "deactivated" errors with distinct user-facing messages
  • Security: _fetch_llm_config_any still scopes by user_id and filters is_deleted — no authorization bypass
  • Select.tsx fix — the hidden attribute on the placeholder is the correct approach: it fixes the React controlled-value mismatch (the root cause of bug #3) while preventing re-selection of the empty placeholder
  • Error parsing (context.tsx:84-93) — handles JSON, text, and fallback with proper type guards. The typeof json.detail === "string" check correctly handles FastAPI's RequestValidationError where detail is an array
  • ChatConfigModal disable logic — selectedIsInactive flag cleanly gates both Save button and model override dropdown, with adaptive helper text
  • LLMConfigSelect — ref pattern for onConfigSelect callback stability is well done; includeInactive: true fetch and deactivation warning banner work correctly
  • Test coverage — test_preprocess_deactivated_llm_config_id_raises covers the new backend path with a proper match="deactivated" assertion
  • No React anti-patterns — dependency arrays are correct, proper cleanup in effects, no state mutations

Test Coverage

Good. One minor gap (noted in previous reviews): no test for a deactivated + soft-deleted config (is_active=False, is_deleted=True) — should hit the "not found" path since _fetch_llm_config_any filters is_deleted. The query handles this correctly, but a test would confirm.
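
A sketch of that missing test, reusing the stand-in helpers from the sketch in the PR description (the real fixtures and session plumbing are omitted):

```python
import pytest


def test_preprocess_deactivated_and_deleted_config_raises_not_found():
    # Both flags set: _fetch_llm_config_any filters is_deleted before the
    # activeness check runs, so this must surface as "not found",
    # never as "deactivated".
    configs = [LLMConfig(id=1, user_id=7, is_active=False, is_deleted=True)]
    with pytest.raises(LLMConfigNotFoundError, match="not found"):
        preprocess_config(configs, user_id=7, config_id=1)
```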

Security

No concerns. The new _fetch_llm_config_any method maintains all existing authorization checks (user_id filtering, is_deleted filtering). The only relaxation is on is_active, which is intentional for the two-step validation.

Overall Assessment

Approve with one trivial nit (double period in routes.py:316). The code is clean, well-tested, and all four bugs are correctly fixed. The postprocess_config issue raised in previous reviews is now resolved — it correctly uses _fetch_llm_config_any to show deactivated config names on the read path.


    except LLMConfigNotFoundError as exc:
        logger.warning("LLMConfig %s not found for user %s: %s", request.llm_config_id, current_user.id, exc)
-   detail = "No AI Key found for the current user."
+   detail = "No AI Key found for the current user. Please configure an AI key in your chat plugin settings.."

Low — Double period at end of sentence

Typo: the message ends with ".." instead of ".".

Suggested change
-   detail = "No AI Key found for the current user. Please configure an AI key in your chat plugin settings.."
+   detail = "No AI Key found for the current user. Please configure an AI key in your chat plugin settings."

zaira-bibi merged commit 0e2bd04 into main on May 12, 2026
6 checks passed
zaira-bibi deleted the zaira/llm-config-select-fix branch on May 12, 2026 11:15
