Model access is through IModelProvider (src/SharpClaw.Code.Providers/Abstractions/IModelProvider.cs):
- `ProviderName` — stable id used by `IModelProviderResolver`
- `GetAuthStatusAsync`
- `StartStreamAsync` — returns `ProviderStreamHandle` with `IAsyncEnumerable<ProviderEvent>`
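A minimal sketch of the interface shape those members imply; the parameter lists and the `ProviderAuthStatus` type are assumptions beyond the member names above, and `ProviderRequest` / `ProviderEvent` come from the Protocol/Providers assemblies:

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Sketch only: member names come from the docs; everything else is assumed.
// ProviderRequest, ProviderEvent, and ProviderAuthStatus are project types.
public interface IModelProvider
{
    // Stable id used by IModelProviderResolver to select this provider.
    string ProviderName { get; }

    // Reports whether the provider is authenticated/usable.
    Task<ProviderAuthStatus> GetAuthStatusAsync(CancellationToken cancellationToken);

    // Starts a streaming completion; the handle exposes the event stream.
    Task<ProviderStreamHandle> StartStreamAsync(ProviderRequest request, CancellationToken cancellationToken);
}

// Assumed shape: the handle wraps the async event stream.
public sealed class ProviderStreamHandle
{
    public required IAsyncEnumerable<ProviderEvent> Events { get; init; }
}
```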
Registered implementations (see ProvidersServiceCollectionExtensions):
- `AnthropicProvider` — HTTP client from `AnthropicProviderOptions`
- `OpenAiCompatibleProvider` — HTTP client from `OpenAiCompatibleProviderOptions`
Both are registered as IModelProvider singletons; ModelProviderResolver builds a case-insensitive dictionary by ProviderName.
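A sketch of how that lookup could be built; the constructor and method shown here are assumptions, not the actual `ModelProviderResolver` surface:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch: collects all registered IModelProvider singletons (injected as
// IEnumerable<IModelProvider>) into a case-insensitive name -> provider map.
public sealed class ModelProviderResolverSketch
{
    private readonly Dictionary<string, IModelProvider> _providers;

    public ModelProviderResolverSketch(IEnumerable<IModelProvider> providers)
    {
        _providers = providers.ToDictionary(
            p => p.ProviderName,
            StringComparer.OrdinalIgnoreCase);
    }

    // Assumed method name; returns false for unknown provider names.
    public bool TryResolve(string providerName, out IModelProvider? provider)
        => _providers.TryGetValue(providerName, out provider);
}
```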
The provider layer also exposes IProviderCatalogService, which powers the CLI models command and ACP models/list. It centralizes:
- provider auth status
- alias/default resolution
- discovered local runtime profiles
- model discovery for local runtimes
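The exact service surface isn't documented here; purely for illustration, a catalog entry might carry fields like these (all names below are hypothetical):

```csharp
using System.Collections.Generic;

// Hypothetical shape for illustration only; the real IProviderCatalogService
// contract may differ.
public sealed record ProviderCatalogEntry(
    string ProviderName,                          // e.g. "anthropic"
    bool IsAuthenticated,                         // provider auth status
    string? DefaultModel,                         // alias/default resolution
    IReadOnlyList<string> DiscoveredModels,       // local runtime discovery
    IReadOnlyList<string> LocalRuntimeProfiles);  // discovered profiles
```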
ProviderRequestPreflight (IProviderRequestPreflight) normalizes ProviderRequest:
- Applies `ProviderCatalogOptions.ModelAliases` (e.g. `"default"` → provider + model id).
- Supports qualified model forms (the implementation parses `provider/model`).
- Also supports local runtime forms such as `ollama/qwen2.5-coder`, which route through the OpenAI-compatible provider with profile metadata attached.
- Fills in the default provider name from `ProviderCatalogOptions.DefaultProvider` when missing.
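A sketch of that normalization order, assuming `ModelAliases` is a string-to-string map (the helper name and alias value below are placeholders):

```csharp
// Sketch of preflight normalization: alias expansion, qualified-form
// parsing, then default-provider fallback. Assumes ModelAliases is an
// IDictionary<string, string>; the example alias value is a placeholder.
public static (string Provider, string Model) NormalizeModel(
    string model, ProviderCatalogOptions options)
{
    // 1. Expand aliases, e.g. "default" -> "openai-compatible/some-model".
    if (options.ModelAliases.TryGetValue(model, out var aliased))
        model = aliased;

    // 2. Qualified "provider/model" form, including local runtime forms
    //    such as "ollama/qwen2.5-coder".
    var slash = model.IndexOf('/');
    if (slash > 0)
        return (model[..slash], model[(slash + 1)..]);

    // 3. Fall back to the configured default provider.
    return (options.DefaultProvider, model);
}
```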
Default catalog (ProviderCatalogOptions) uses DefaultProvider = "openai-compatible" if not configured.
When using AddSharpClawRuntime(IConfiguration) (CLI host):
| Section | Options type |
|---|---|
| `SharpClaw:Providers:Catalog` | `ProviderCatalogOptions` |
| `SharpClaw:Providers:Anthropic` | `AnthropicProviderOptions` (`ProviderName` defaults to `"anthropic"`; `BaseUrl` and API key binding as in the options class) |
| `SharpClaw:Providers:OpenAiCompatible` | `OpenAiCompatibleProviderOptions` (`ProviderName` defaults to `"openai-compatible"`; supports auth mode, default embedding model, and named `LocalRuntimes`) |
There is no checked-in appsettings.json in the repo; add one next to the CLI project or rely on environment variables / user secrets per standard .NET configuration.
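For instance, a minimal sketch of supplying those sections without a checked-in settings file (keys mirror the table; values and the exact API-key property name are placeholders):

```csharp
using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

// Sketch: the same section paths as the table, bound without appsettings.json.
// Values are placeholders, not real endpoints or models.
var configuration = new ConfigurationBuilder()
    .AddInMemoryCollection(new Dictionary<string, string?>
    {
        ["SharpClaw:Providers:Catalog:DefaultProvider"] = "openai-compatible",
        ["SharpClaw:Providers:Catalog:ModelAliases:default"] = "openai-compatible/some-model",
    })
    // Standard .NET env-var binding, e.g. SharpClaw__Providers__Anthropic__BaseUrl
    // (the exact API-key key name depends on the options class).
    .AddEnvironmentVariables()
    .Build();

// services.AddSharpClawRuntime(configuration);
```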
OpenAiCompatibleProviderOptions.LocalRuntimes supports named profiles for local or self-hosted runtimes such as Ollama and llama.cpp.
Each profile carries:
- runtime kind (`Generic`, `Ollama`, `LlamaCpp`)
- base URL
- default chat model
- optional default embedding model
- auth mode (`ApiKey`, `Optional`, `None`)
- capability hints for tool calling and embeddings
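A sketch of what such a profile could look like as an options class, with property names invented to mirror the list above:

```csharp
// Hypothetical property names mirroring the bullet list; the real
// OpenAiCompatibleProviderOptions.LocalRuntimes entries may differ.
public enum LocalRuntimeKind { Generic, Ollama, LlamaCpp }
public enum LocalRuntimeAuthMode { ApiKey, Optional, None }

public sealed class LocalRuntimeProfileSketch
{
    public LocalRuntimeKind Kind { get; set; } = LocalRuntimeKind.Generic;
    public string BaseUrl { get; set; } = "";
    public string? DefaultChatModel { get; set; }
    public string? DefaultEmbeddingModel { get; set; }
    public LocalRuntimeAuthMode AuthMode { get; set; } = LocalRuntimeAuthMode.Optional;
    public bool SupportsToolCalling { get; set; }  // capability hint
    public bool SupportsEmbeddings { get; set; }   // capability hint
}
```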
At runtime the catalog service probes these profiles and surfaces health plus discovered models. Local runtimes do not assume API-key auth by default.
IAuthFlowService / AuthFlowService answer whether a provider name is authenticated (used by ProviderBackedAgentKernel). If not authenticated, the kernel may return a placeholder completion (see kernel logs) rather than calling the remote API.
For the OpenAI-compatible provider, auth status now respects provider auth mode plus any configured auth-optional local runtimes.
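As a sketch of how a caller could gate on that answer (the method and placeholder text below are assumptions, not the real `IAuthFlowService` surface):

```csharp
using System.Threading;
using System.Threading.Tasks;

// Sketch: gate remote calls on auth status, roughly as
// ProviderBackedAgentKernel is described to do above.
// IsAuthenticatedAsync is an assumed member name.
static async Task<string> CompleteOrPlaceholderAsync(
    IAuthFlowService authFlow, string providerName, CancellationToken ct)
{
    if (!await authFlow.IsAuthenticatedAsync(providerName, ct))
        return "(placeholder completion: provider not authenticated)";

    // ...otherwise start the real provider stream here (stubbed)...
    return await Task.FromResult("(remote completion)");
}
```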
Hard failures use ProviderExecutionException with ProviderFailureKind: MissingProvider, AuthenticationUnavailable, StreamFailed.
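A sketch of handling those failures by kind (the `FailureKind` property name is an assumption):

```csharp
try
{
    // ...start a provider stream...
}
catch (ProviderExecutionException ex) when (ex.FailureKind == ProviderFailureKind.MissingProvider)
{
    // Unknown ProviderName: list the registered providers.
}
catch (ProviderExecutionException ex) when (ex.FailureKind == ProviderFailureKind.AuthenticationUnavailable)
{
    // Prompt the user to authenticate the provider.
}
catch (ProviderExecutionException ex) when (ex.FailureKind == ProviderFailureKind.StreamFailed)
{
    // Stream-level failure: surface the error or retry.
}
```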
To add a new provider:

- Implement `IModelProvider` (stream events using `ProviderEvent` from Protocol).
- Register the implementation in `ProvidersServiceCollectionExtensions` (or a test `IServiceCollection`) as `AddSingleton<IModelProvider, YourProvider>`.
- Ensure `ModelProviderResolver` can resolve your `ProviderName` (it must be unique among registered providers).
- Extend `ProviderCatalogOptions` (defaults / aliases) via `IConfiguration` or `Configure<ProviderCatalogOptions>` / `PostConfigure` (see the sketch after this list).
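Putting the registration steps together (`YourProvider` and the alias target are placeholders):

```csharp
using Microsoft.Extensions.DependencyInjection;

// Sketch: wire a custom provider alongside the built-ins. YourProvider is a
// placeholder implementing IModelProvider with a unique ProviderName.
services.AddSingleton<IModelProvider, YourProvider>();

// Optionally point catalog defaults/aliases at the new provider.
services.Configure<ProviderCatalogOptions>(options =>
{
    options.DefaultProvider = "your-provider";
    options.ModelAliases["default"] = "your-provider/your-model";
});
```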
Test pattern: SharpClaw.Code.MockProvider registers `DeterministicMockModelProvider` with `PostConfigure<ProviderCatalogOptions>` so the default maps to the provider name `mock` (see `MockProviderServiceCollectionExtensions`).
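Roughly, that wiring could look like this (the extension body is inferred from the description above, not copied from the source):

```csharp
using Microsoft.Extensions.DependencyInjection;

// Sketch of the mock test wiring described above; the extension method body
// is an assumption based on that description.
public static class MockProviderServiceCollectionExtensionsSketch
{
    public static IServiceCollection AddSharpClawMockProvider(this IServiceCollection services)
    {
        services.AddSingleton<IModelProvider, DeterministicMockModelProvider>();

        // After normal configuration runs, make the default resolve to the
        // mock provider so tests never hit a remote API.
        services.PostConfigure<ProviderCatalogOptions>(options =>
            options.DefaultProvider = "mock");

        return services;
    }
}
```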