
fix: improve local model provider robustness and UX #2112

Merged
Stijnus merged 1 commit into stackblitz-labs:main from Stijnus:fix/local-provider-improvements on Feb 7, 2026

Conversation

Stijnus (Collaborator) commented Feb 5, 2026

Summary

  • Eliminates duplicated Docker URL rewriting code (was copy-pasted 4 times across Ollama/LMStudio)
  • Adds error handling and 5-second timeouts to all local provider model fetches
  • Fixes Ollama provider mutating internal SDK state by using createOllama() properly
  • Fixes LLMManager singleton ignoring env updates on subsequent Cloudflare Worker requests
  • Adds connection status indicators (green/red dots) for local providers in the model selector
  • Shows helpful "is X running?" messages when a local provider has no models

Issues Fixed

  1. Duplicated code: Docker URL rewriting was identical in 4 places — extracted to BaseProvider.resolveDockerUrl() (sketched after this list)
  2. No error handling: Ollama/LMStudio getDynamicModels had zero try/catch — a crashed server would throw unhandled TypeError: fetch failed
  3. No timeouts: One slow/hung local provider blocked all providers via Promise.all — added 5s AbortSignal.timeout
  4. Fragile SDK usage: Ollama mutated ollamaInstance.config.baseURL via type cast — replaced with createOllama({ baseURL })
  5. Stale env in singleton: LLMManager.getInstance(env) ignored env after first call — now updates on each call
  6. Cache key too broad: Serialized entire serverEnv into cache key — narrowed to only provider-relevant keys
  7. as any casts: LMStudio/OpenAILike used serverEnv as any — replaced with shared convertEnvToRecord()
  8. No connection feedback: Users had no idea if local providers were running — added status dots and helpful empty-state messages
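
For reference, the shared helpers behind items 1, 3, and 7 reduce to roughly the sketch below; the signatures are assumptions based on this description rather than the exact code in base-provider.ts:

// Rewrite localhost/127.0.0.1 so an app running inside Docker can reach a
// provider listening on the host machine.
function resolveDockerUrl(baseUrl: string, isRunningInDocker: boolean): string {
  if (!isRunningInDocker) {
    return baseUrl;
  }

  return baseUrl
    .replace(/localhost/g, 'host.docker.internal')
    .replace(/127\.0\.0\.1/g, 'host.docker.internal');
}

// Convert a Cloudflare-style env binding object into a plain string record,
// replacing the scattered `serverEnv as any` casts.
function convertEnvToRecord(env?: Record<string, unknown>): Record<string, string> {
  const record: Record<string, string> = {};

  for (const [key, value] of Object.entries(env ?? {})) {
    if (typeof value === 'string') {
      record[key] = value;
    }
  }

  return record;
}

// Signal passed to model-listing fetches so one hung local provider cannot
// stall the Promise.all that builds the combined model list.
function createTimeoutSignal(timeoutMs = 5_000): AbortSignal {
  return AbortSignal.timeout(timeoutMs);
}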

Files Changed

  • app/lib/modules/llm/base-provider.ts — Added resolveDockerUrl(), convertEnvToRecord(), createTimeoutSignal(), narrowed cache key
  • app/lib/modules/llm/providers/ollama.ts — Rewrote with shared helpers, proper createOllama(), error handling (see the sketch after this list)
  • app/lib/modules/llm/providers/lmstudio.ts — Rewrote with shared helpers, error handling
  • app/lib/modules/llm/providers/openai-like.ts — Added timeout, replaced as any, used logger
  • app/lib/modules/llm/manager.ts — Fixed singleton env update, fixed typo
  • app/components/chat/ModelSelector.tsx — Added local provider status dots and helpful empty states
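
Concretely, the hardened model fetch in ollama.ts (and its LMStudio counterpart) follows roughly the shape below. Identifiers and the ModelInfo fields are illustrative; createOllama({ baseURL }) and Ollama's /api/tags endpoint come from the change itself, and the ollama-ai-provider package name is assumed:

import { createOllama } from 'ollama-ai-provider';

interface ModelInfo {
  name: string;
  label: string;
  provider: string;
  maxTokenAllowed: number;
}

async function getOllamaModels(baseUrl: string): Promise<ModelInfo[]> {
  try {
    // 5s timeout so an unreachable server fails fast instead of hanging the
    // Promise.all that aggregates every provider's model list.
    const response = await fetch(`${baseUrl}/api/tags`, {
      signal: AbortSignal.timeout(5_000),
    });

    if (!response.ok) {
      throw new Error(`Ollama responded with status ${response.status}`);
    }

    const data = (await response.json()) as { models?: { name: string }[] };

    return (data.models ?? []).map((model) => ({
      name: model.name,
      label: model.name,
      provider: 'Ollama',
      maxTokenAllowed: 8000,
    }));
  } catch (error) {
    // A stopped server now yields an empty list instead of an unhandled
    // "TypeError: fetch failed" escaping the provider.
    console.error('Failed to fetch Ollama models:', error);
    return [];
  }
}

// Model instances come from createOllama({ baseURL }) rather than from
// mutating the default instance's internal config.
export function getOllamaModel(baseUrl: string, modelName: string) {
  const ollama = createOllama({ baseURL: `${baseUrl}/api` });
  return ollama(modelName);
}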

Test plan

  • Select Ollama provider with Ollama running — green dot, models listed
  • Select Ollama provider with Ollama stopped — red dot, "is Ollama running?" message
  • Select LMStudio provider with LM Studio stopped — red dot, helpful message
  • Verify model loading doesn't hang when one local provider is unreachable (5s timeout)
  • Verify cloud providers (OpenAI, Anthropic, etc.) are unaffected

🤖 Generated with Claude Code

- Extract shared Docker URL rewriting and env conversion into BaseProvider
  to eliminate 4x duplicated code across Ollama and LMStudio
- Add error handling and 5s timeouts to all model-listing fetches so one
  unreachable provider doesn't block the entire model list
- Fix Ollama provider to use createOllama() instead of mutating provider internals
- Fix LLMManager singleton ignoring env updates on subsequent requests
- Narrow cache key to only include provider-relevant env vars instead of
  the entire server environment
- Remove 'as any' casts in LMStudio and OpenAILike by using the shared
  convertEnvToRecord helper
- Replace console.log/error with structured logger in OpenAILike
- Fix typo: filteredStaticModesl -> filteredStaticModels in manager
- Add connection status indicator (green/red dot) for local providers
  in the ModelSelector dropdown
- Show helpful "is X running?" message when local provider has no models

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings February 5, 2026 22:09

Copilot AI left a comment


Pull request overview

This PR improves the robustness and user experience of local model providers (Ollama, LMStudio, OpenAILike) by addressing code duplication, adding error handling and timeouts, fixing SDK usage patterns, and enhancing the UI with connection status indicators.

Changes:

  • Extracted duplicated Docker URL rewriting logic to shared BaseProvider.resolveDockerUrl() method
  • Added 5-second timeouts and comprehensive error handling to local provider model fetches
  • Fixed Ollama provider to use createOllama() properly instead of mutating internal SDK state
  • Fixed LLMManager singleton to update env bindings on subsequent calls (important for Cloudflare Workers; sketched below)
  • Added connection status indicators (green/red dots) and helpful error messages in the model selector UI
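
As a mental model of the singleton fix, something like the stripped-down sketch below captures the behavior change; the real LLMManager carries provider registries and more, and the member names here are placeholders:

class LLMManager {
  private static _instance: LLMManager | undefined;
  private _env: Record<string, string>;

  private constructor(env: Record<string, string>) {
    this._env = env;
  }

  static getInstance(env: Record<string, string> = {}): LLMManager {
    if (!LLMManager._instance) {
      LLMManager._instance = new LLMManager(env);
    } else {
      // Previously the env passed on later calls was ignored, so a long-lived
      // worker isolate kept serving the bindings from its very first request.
      LLMManager._instance._env = { ...LLMManager._instance._env, ...env };
    }

    return LLMManager._instance;
  }

  get env(): Record<string, string> {
    return this._env;
  }
}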

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 5 comments.

Summary per file:

  • app/lib/modules/llm/base-provider.ts — Added shared helper methods (resolveDockerUrl, convertEnvToRecord, createTimeoutSignal) and narrowed the cache key to include only provider-relevant env keys
  • app/lib/modules/llm/providers/ollama.ts — Refactored to use shared helpers, replaced SDK mutation with proper createOllama() usage, added error handling and a timeout for model fetching
  • app/lib/modules/llm/providers/lmstudio.ts — Refactored to use shared helpers, added error handling and a timeout for model fetching, removed the as any cast
  • app/lib/modules/llm/providers/openai-like.ts — Added a timeout to the model fetch, replaced as any casts with proper type conversion, switched from console.log to logger utilities
  • app/lib/modules/llm/manager.ts — Fixed singleton to update env on subsequent calls, fixed typo in variable name (filteredStaticModesl → filteredStaticModels)
  • app/components/chat/ModelSelector.tsx — Added connection status indicators for local providers and helpful empty-state messages when providers are unreachable (sketched below)
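
The ModelSelector change boils down to rendering a status dot plus an explicit empty state. A hypothetical, stripped-down component is sketched below; the real ModelSelector.tsx is larger and uses the project's own styling and state:

import React from 'react';

interface LocalProviderStatusProps {
  providerName: string; // e.g. 'Ollama' or 'LMStudio'
  modelNames: string[]; // models fetched for this provider; empty if unreachable
}

export function LocalProviderStatus({ providerName, modelNames }: LocalProviderStatusProps) {
  const isConnected = modelNames.length > 0;

  return (
    <div>
      {/* Green dot when models were fetched, red dot otherwise */}
      <span
        style={{
          display: 'inline-block',
          width: 8,
          height: 8,
          borderRadius: '50%',
          backgroundColor: isConnected ? '#22c55e' : '#ef4444',
          marginRight: 6,
        }}
      />
      {isConnected ? (
        <span>{modelNames.length} models available</span>
      ) : (
        /* Helpful empty state instead of a silently empty dropdown */
        <span>No models found. Is {providerName} running?</span>
      )}
    </div>
  );
}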


return baseUrl;
}

return baseUrl.replace('localhost', 'host.docker.internal').replace('127.0.0.1', 'host.docker.internal');
Copilot AI Feb 5, 2026

The chained .replace() calls only replace the first occurrence of each string. If a URL contains multiple instances of 'localhost' or '127.0.0.1' (unlikely but possible in some edge cases like 'http://localhost:1234/localhost'), only the first occurrence will be replaced. Consider using .replaceAll() or a regex with the global flag (e.g., .replace(/localhost/g, 'host.docker.internal')) to ensure all occurrences are replaced.

Suggested change:
- return baseUrl.replace('localhost', 'host.docker.internal').replace('127.0.0.1', 'host.docker.internal');
+ return baseUrl
+   .replace(/localhost/g, 'host.docker.internal')
+   .replace(/127\.0\.0\.1/g, 'host.docker.internal');

settings?: IProviderSetting,
serverEnv: Record<string, string> = {},
): Promise<ModelInfo[]> {
const baseUrl = this._resolveBaseUrl(apiKeys, settings, serverEnv);
Copilot AI Feb 5, 2026

The call to _resolveBaseUrl is outside the try-catch block. If _resolveBaseUrl throws an error (e.g., "No baseUrl found for Ollama provider"), it will not be caught and will propagate as an unhandled exception. Consider moving this line inside the try block to ensure all errors are properly handled and an empty array is returned.

Comment on lines +48 to +50
const baseUrl = this._resolveBaseUrl(apiKeys, settings, serverEnv);

try {
Copilot AI Feb 5, 2026

The call to _resolveBaseUrl is outside the try-catch block. If _resolveBaseUrl throws an error (e.g., "No baseUrl found for LMStudio provider"), it will not be caught and will propagate as an unhandled exception. Consider moving this line inside the try block to ensure all errors are properly handled and an empty array is returned.

Suggested change:
- const baseUrl = this._resolveBaseUrl(apiKeys, settings, serverEnv);
-
- try {
+ let baseUrl: string;
+ try {
+   baseUrl = this._resolveBaseUrl(apiKeys, settings, serverEnv);
baseUrl = isDocker ? baseUrl.replace('localhost', 'host.docker.internal') : baseUrl;
baseUrl = isDocker ? baseUrl.replace('127.0.0.1', 'host.docker.internal') : baseUrl;
}
const baseUrl = this._resolveBaseUrl(apiKeys, providerSettings?.[this.name], envRecord);
Copilot AI Feb 5, 2026

The call to _resolveBaseUrl can throw an error (e.g., "No baseUrl found for LMStudio provider"), but there's no error handling in getModelInstance. If this throws, it will propagate as an unhandled exception. Consider adding try-catch error handling or documenting that this method can throw.

Suggested change:
- const baseUrl = this._resolveBaseUrl(apiKeys, providerSettings?.[this.name], envRecord);
+ let baseUrl: string;
+ try {
+   baseUrl = this._resolveBaseUrl(apiKeys, providerSettings?.[this.name], envRecord);
+ } catch (error) {
+   logger.error('Error resolving LMStudio base URL in getModelInstance:', error);
+   if (error instanceof Error) {
+     throw error;
+   }
+   throw new Error('Error resolving LMStudio base URL in getModelInstance');
+ }

Comment on lines +122 to 123
const baseUrl = this._resolveBaseUrl(apiKeys, providerSettings?.[this.name], envRecord);

Copilot AI Feb 5, 2026

The call to _resolveBaseUrl can throw an error (e.g., "No baseUrl found for Ollama provider"), but there's no error handling in getModelInstance. If this throws, it will propagate as an unhandled exception. Consider adding try-catch error handling or documenting that this method can throw.

Suggested change:
- const baseUrl = this._resolveBaseUrl(apiKeys, providerSettings?.[this.name], envRecord);
+ let baseUrl: string;
+ try {
+   baseUrl = this._resolveBaseUrl(apiKeys, providerSettings?.[this.name], envRecord);
+ } catch (error) {
+   logger.error('Failed to resolve Ollama base URL', error);
+   throw error;
+ }

@Stijnus Stijnus merged commit b7ef224 into stackblitz-labs:main Feb 7, 2026
22 checks passed
