
feat: Add Ollama local AI provider support #2

Merged
dll-as merged 3 commits into master from feat/ollama-support on Feb 7, 2026

Conversation


dll-as (Owner) commented on Feb 7, 2026

Pull Request Description

This PR adds Ollama as a new AI provider option, enabling users to generate commit messages using locally hosted models without requiring API keys.

Key Changes:

  • New Provider: Added ollama as a fully supported AI provider
  • Local AI Support: Enables offline, private commit message generation
  • API Key Flexibility: Ollama provider doesn't require API keys
  • Unified Interface: Enhanced provider architecture to handle both cloud (OpenAI, Grok, DeepSeek) and local (Ollama) AI providers seamlessly
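The PR text doesn't include the underlying types, so as an illustration only, the unified handling of cloud and local providers could be sketched roughly like this (all names are hypothetical, not taken from the repository):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProviderSpec:
    """Illustrative description of one AI backend."""
    name: str
    base_url: str
    requires_api_key: bool

# Cloud providers authenticate with a key; the local Ollama backend does not.
PROVIDERS = {
    "openai": ProviderSpec("openai", "https://api.openai.com/v1/chat/completions", True),
    "ollama": ProviderSpec("ollama", "http://localhost:11434/api/generate", False),
}

def auth_headers(spec: ProviderSpec, api_key: Optional[str]) -> dict:
    """Cloud backends get a Bearer token; Ollama sends no Authorization header."""
    if not spec.requires_api_key:
        return {}
    if not api_key:
        raise ValueError(f"provider {spec.name!r} requires an API key")
    return {"Authorization": f"Bearer {api_key}"}
```

With a table like this, the rest of the generator can stay provider-agnostic: it looks up the spec, builds the headers, and posts to `base_url` regardless of which backend is selected.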

Technical Details:

  • Added separate request/response structures for OpenAI and Ollama APIs
  • Updated configuration logic to skip API key validation for Ollama
  • Set default Ollama URL to http://localhost:11434/api/generate
  • Modified CLI flags to prioritize config file over hardcoded defaults
  • Updated documentation with comprehensive Ollama usage examples
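The OpenAI chat API and the Ollama `/api/generate` API shape their payloads differently, which is presumably why separate request/response structures were needed. A rough Python sketch of the two shapes (helper names are hypothetical):

```python
def openai_request(model: str, prompt: str) -> dict:
    # OpenAI-style chat completion body: the prompt travels as a message list.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def ollama_request(model: str, prompt: str) -> dict:
    # Ollama /api/generate body: a plain prompt; stream=False returns one JSON object.
    return {"model": model, "prompt": prompt, "stream": False}

def extract_text(provider: str, response: dict) -> str:
    # The generated text is nested differently in each API's response.
    if provider == "ollama":
        return response["response"]
    return response["choices"][0]["message"]["content"]
```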

Benefits:

  1. Privacy: All AI processing stays local
  2. Cost: No API costs for Ollama usage
  3. Offline: Works without internet connection
  4. Flexibility: Supports any Ollama-compatible model

Testing:

  • ✅ Tested with qwen3:8b-local model
  • ✅ Works with both /api/generate and /api/chat endpoints
  • ✅ Validated CLI flags and config file precedence
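The flag/config precedence validated above boils down to a three-level fallback; a minimal sketch (not the project's actual code):

```python
def resolve_setting(flag_value, config_value, default):
    """Precedence: explicit CLI flag > config file entry > built-in default."""
    if flag_value is not None:
        return flag_value
    if config_value is not None:
        return config_value
    return default

# e.g. no flag given, config file sets the provider to ollama:
provider = resolve_setting(None, "ollama", "openai")
```

Removing hardcoded flag defaults is what makes this work: if the flag always carried a default, the config file value could never win.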

This feature addresses requests for local AI options while maintaining backward compatibility with existing cloud providers.

Add Ollama as local AI provider option with API key validation bypass.

Implement separate request/response structures for OpenAI and Ollama.

Set default Ollama URL to localhost:11434/api/generate.

Update provider logic to handle different authorization requirements.

Remove hardcoded default values from CLI flags to prioritize the config file.

Set Ollama as default provider with llama3.2 model when configured.

Update config loading logic to skip API key validation for Ollama.
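That validation skip amounts to an early return for the local provider; sketched here hypothetically:

```python
def validate_api_key(provider: str, api_key) -> None:
    """Illustrative only: Ollama runs locally without authentication,
    so no key is required; cloud providers must supply one."""
    if provider == "ollama":
        return
    if not api_key:
        raise ValueError(f"provider {provider!r} requires an API key")
```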

Bump version to 0.5.0 and refine provider-specific URL handling.

Add comprehensive Ollama integration details to the README and CHANGELOG.

Document local AI benefits, no API key requirement, and usage examples.

Update CLI flags table with Ollama-specific defaults and provider options.

Refine environment variables and configuration examples for mixed AI setups.
dll-as self-assigned this on Feb 7, 2026
dll-as added the enhancement label on Feb 7, 2026
dll-as merged commit 05a4842 into master on Feb 7, 2026
dll-as deleted the feat/ollama-support branch on February 7, 2026 at 20:14