fix: improve local model provider robustness and UX #2112
Stijnus merged 1 commit into stackblitz-labs:main
Conversation
- Extract shared Docker URL rewriting and env conversion into BaseProvider to eliminate 4x duplicated code across Ollama and LMStudio
- Add error handling and 5s timeouts to all model-listing fetches so one unreachable provider doesn't block the entire model list
- Fix Ollama using createOllama() instead of mutating provider internals
- Fix LLMManager singleton ignoring env updates on subsequent requests
- Narrow cache key to only include provider-relevant env vars instead of the entire server environment
- Fix 'as any' casts in LMStudio and OpenAILike by using shared convertEnvToRecord helper
- Replace console.log/error with structured logger in OpenAILike
- Fix typo: filteredStaticModesl -> filteredStaticModels in manager
- Add connection status indicator (green/red dot) for local providers in the ModelSelector dropdown
- Show helpful "is X running?" message when local provider has no models

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Pull request overview
This PR improves the robustness and user experience of local model providers (Ollama, LMStudio, OpenAILike) by addressing code duplication, adding error handling and timeouts, fixing SDK usage patterns, and enhancing the UI with connection status indicators.
Changes:
- Extracted duplicated Docker URL rewriting logic to shared `BaseProvider.resolveDockerUrl()` method
- Added 5-second timeouts and comprehensive error handling to local provider model fetches
- Fixed Ollama provider to use `createOllama()` properly instead of mutating internal SDK state
- Fixed `LLMManager` singleton to update env bindings on subsequent calls (important for Cloudflare Workers)
- Added connection status indicators (green/red dots) and helpful error messages in the model selector UI
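The timeout-plus-fallback pattern described above can be sketched as follows. This is a minimal sketch, not the PR's actual code: `createTimeoutSignal` matches the helper name the PR adds, but its body, the `ModelInfo` shape, and the `/api/tags` endpoint (Ollama's model-listing route) are assumptions for illustration.

```typescript
// Hypothetical helper matching the PR's createTimeoutSignal name:
// wraps AbortSignal.timeout so every provider shares the same 5s budget.
function createTimeoutSignal(ms: number = 5000): AbortSignal {
  return AbortSignal.timeout(ms);
}

interface ModelInfo {
  name: string;
  label: string;
  provider: string;
}

// A failing fetch resolves to an empty list instead of rejecting, so a
// Promise.all over all providers cannot be poisoned by one dead server.
async function fetchProviderModels(baseUrl: string, provider: string): Promise<ModelInfo[]> {
  try {
    const response = await fetch(`${baseUrl}/api/tags`, { signal: createTimeoutSignal() });

    if (!response.ok) {
      return [];
    }

    const data = (await response.json()) as { models?: { name: string }[] };

    return (data.models ?? []).map((m) => ({ name: m.name, label: m.name, provider }));
  } catch {
    return []; // unreachable host, DNS failure, or 5s timeout
  }
}
```

The key point is that the timeout lives on the `fetch` call's `AbortSignal`, so a hung local server aborts after five seconds instead of stalling the whole model list.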
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| app/lib/modules/llm/base-provider.ts | Added shared helper methods (resolveDockerUrl, convertEnvToRecord, createTimeoutSignal) and narrowed cache key to include only provider-relevant env keys |
| app/lib/modules/llm/providers/ollama.ts | Refactored to use shared helpers, replaced SDK mutation with proper createOllama() usage, added error handling and timeout for model fetching |
| app/lib/modules/llm/providers/lmstudio.ts | Refactored to use shared helpers, added error handling and timeout for model fetching, removed as any cast |
| app/lib/modules/llm/providers/openai-like.ts | Added timeout to model fetch, replaced as any casts with proper type conversion, switched from console.log to logger utilities |
| app/lib/modules/llm/manager.ts | Fixed singleton to update env on subsequent calls, fixed typo in variable name (filteredStaticModesl → filteredStaticModels) |
| app/components/chat/ModelSelector.tsx | Added connection status indicators for local providers and helpful empty-state messages when providers are unreachable |
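The base-provider row above mentions narrowing the cache key to provider-relevant env keys. A plausible sketch of that idea (the function name and key format here are illustrative assumptions, not the PR's implementation):

```typescript
// Hypothetical sketch: only the env vars a provider actually reads
// (e.g. its base-URL variable) feed the cache key, so unrelated changes
// to the server environment no longer invalidate or bloat the cache.
function getProviderCacheKey(
  providerName: string,
  relevantKeys: string[],
  serverEnv: Record<string, string>,
): string {
  const relevant = relevantKeys
    .filter((key) => key in serverEnv)
    .sort() // stable ordering keeps the key deterministic
    .map((key) => `${key}=${serverEnv[key]}`);

  return `${providerName}:${relevant.join(';')}`;
}
```

With this shape, two requests whose environments differ only in unrelated variables produce the same key and hit the same cache entry.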
```ts
  return baseUrl;
}

return baseUrl.replace('localhost', 'host.docker.internal').replace('127.0.0.1', 'host.docker.internal');
```
The chained .replace() calls only replace the first occurrence of each string. If a URL contains multiple instances of 'localhost' or '127.0.0.1' (unlikely but possible in some edge cases like 'http://localhost:1234/localhost'), only the first occurrence will be replaced. Consider using .replaceAll() or a regex with the global flag (e.g., .replace(/localhost/g, 'host.docker.internal')) to ensure all occurrences are replaced.
Suggested change:

```ts
return baseUrl
  .replace(/localhost/g, 'host.docker.internal')
  .replace(/127\.0\.0\.1/g, 'host.docker.internal');
```
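The single-occurrence pitfall the reviewer describes is easy to demonstrate; nothing here beyond standard `String.prototype.replace` behavior:

```typescript
// String.replace with a *string* pattern rewrites only the first match;
// a global regex rewrites every occurrence.
const url = 'http://localhost:1234/localhost';

// First occurrence only: the path segment still says 'localhost'.
const firstOnly = url.replace('localhost', 'host.docker.internal');

// Global regexes rewrite both the host and the path segment.
const all = url
  .replace(/localhost/g, 'host.docker.internal')
  .replace(/127\.0\.0\.1/g, 'host.docker.internal');
```

`String.prototype.replaceAll` with a string pattern would behave the same as the global regex and may read more clearly.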
```ts
  settings?: IProviderSetting,
  serverEnv: Record<string, string> = {},
): Promise<ModelInfo[]> {
  const baseUrl = this._resolveBaseUrl(apiKeys, settings, serverEnv);
```
The call to _resolveBaseUrl is outside the try-catch block. If _resolveBaseUrl throws an error (e.g., "No baseUrl found for Ollama provider"), it will not be caught and will propagate as an unhandled exception. Consider moving this line inside the try block to ensure all errors are properly handled and an empty array is returned.
```ts
const baseUrl = this._resolveBaseUrl(apiKeys, settings, serverEnv);

try {
```
The call to _resolveBaseUrl is outside the try-catch block. If _resolveBaseUrl throws an error (e.g., "No baseUrl found for LMStudio provider"), it will not be caught and will propagate as an unhandled exception. Consider moving this line inside the try block to ensure all errors are properly handled and an empty array is returned.
Suggested change:

```ts
let baseUrl: string;

try {
  baseUrl = this._resolveBaseUrl(apiKeys, settings, serverEnv);
```
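The full shape the reviewer is recommending looks roughly like this. It is a sketch with a stand-in stub: `resolveBaseUrlOrThrow` and `getDynamicModelsSafe` are hypothetical names, not the provider's real methods, and the fetch body is elided.

```typescript
// Stand-in for the provider's _resolveBaseUrl, which throws when no
// base URL is configured for the provider.
function resolveBaseUrlOrThrow(configured?: string): string {
  if (!configured) {
    throw new Error('No baseUrl found for LMStudio provider');
  }

  return configured;
}

// With resolution *inside* the try, a missing-configuration error
// degrades to an empty model list, the same way an unreachable
// server does, instead of propagating as an unhandled exception.
function getDynamicModelsSafe(configured?: string): string[] {
  try {
    const baseUrl = resolveBaseUrlOrThrow(configured);

    // ...fetch and parse models from `${baseUrl}/v1/models` here...
    return [`models-from-${baseUrl}`];
  } catch {
    return []; // unresolved URL is treated like an unreachable provider
  }
}
```

The design choice here is that for a *listing* path, returning an empty array is friendlier than throwing: the rest of the model list still renders, and the UI can show a "is LMStudio running?" hint instead of an error page.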
```ts
baseUrl = isDocker ? baseUrl.replace('localhost', 'host.docker.internal') : baseUrl;
baseUrl = isDocker ? baseUrl.replace('127.0.0.1', 'host.docker.internal') : baseUrl;
}
const baseUrl = this._resolveBaseUrl(apiKeys, providerSettings?.[this.name], envRecord);
```
The call to _resolveBaseUrl can throw an error (e.g., "No baseUrl found for LMStudio provider"), but there's no error handling in getModelInstance. If this throws, it will propagate as an unhandled exception. Consider adding try-catch error handling or documenting that this method can throw.
Suggested change:

```ts
let baseUrl: string;

try {
  baseUrl = this._resolveBaseUrl(apiKeys, providerSettings?.[this.name], envRecord);
} catch (error) {
  logger.error('Error resolving LMStudio base URL in getModelInstance:', error);

  if (error instanceof Error) {
    throw error;
  }

  throw new Error('Error resolving LMStudio base URL in getModelInstance');
}
```
```ts
const baseUrl = this._resolveBaseUrl(apiKeys, providerSettings?.[this.name], envRecord);
```
The call to _resolveBaseUrl can throw an error (e.g., "No baseUrl found for Ollama provider"), but there's no error handling in getModelInstance. If this throws, it will propagate as an unhandled exception. Consider adding try-catch error handling or documenting that this method can throw.
Suggested change:

```ts
let baseUrl: string;

try {
  baseUrl = this._resolveBaseUrl(apiKeys, providerSettings?.[this.name], envRecord);
} catch (error) {
  logger.error('Failed to resolve Ollama base URL', error);
  throw error;
}
```
Summary
- Fixed Ollama to use createOllama() properly

Issues Fixed
- Duplicated Docker URL rewriting — extracted into shared BaseProvider.resolveDockerUrl()
- getDynamicModels had zero try/catch — a crashed server would throw an unhandled TypeError: fetch failed through Promise.all — added 5s AbortSignal.timeout
- Ollama mutated ollamaInstance.config.baseURL via type cast — replaced with createOllama({ baseURL })
- LLMManager.getInstance(env) ignored env after first call — now updates on each call
- Entire serverEnv went into the cache key — narrowed to only provider-relevant keys
- as any casts: LMStudio/OpenAILike used serverEnv as any — replaced with shared convertEnvToRecord()

Files Changed
- app/lib/modules/llm/base-provider.ts — Added resolveDockerUrl(), convertEnvToRecord(), createTimeoutSignal(), narrowed cache key
- app/lib/modules/llm/providers/ollama.ts — Rewrote with shared helpers, proper createOllama(), error handling
- app/lib/modules/llm/providers/lmstudio.ts — Rewrote with shared helpers, error handling
- app/lib/modules/llm/providers/openai-like.ts — Added timeout, replaced as any, used logger
- app/lib/modules/llm/manager.ts — Fixed singleton env update, fixed typo
- app/components/chat/ModelSelector.tsx — Added local provider status dots and helpful empty states

Test plan
🤖 Generated with Claude Code