feat: Add Ollama local AI provider support #2
Merged
Conversation
Add Ollama as local AI provider option with API key validation bypass. Implement separate request/response structures for OpenAI and Ollama. Set default Ollama URL to localhost:11434/api/generate. Update provider logic to handle different authorization requirements.
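The separate request structures and authorization handling described above could look like the following. This is a minimal sketch, assuming Python; the helper name `build_request` and constant `DEFAULT_OLLAMA_URL` are illustrative, not taken from the PR, while the JSON field names follow the public OpenAI and Ollama HTTP APIs.

```python
# Hypothetical sketch of provider-specific request construction.
# Ollama needs no Authorization header; OpenAI requires a bearer token.

DEFAULT_OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(provider: str, model: str, prompt: str, api_key: str = ""):
    """Return (url, headers, json_body) for the given provider."""
    if provider == "ollama":
        # Local provider: no Authorization header, flat prompt payload.
        return (
            DEFAULT_OLLAMA_URL,
            {"Content-Type": "application/json"},
            {"model": model, "prompt": prompt, "stream": False},
        )
    # OpenAI-style chat completion request with bearer-token auth.
    return (
        "https://api.openai.com/v1/chat/completions",
        {"Content-Type": "application/json",
         "Authorization": f"Bearer {api_key}"},
        {"model": model, "messages": [{"role": "user", "content": prompt}]},
    )
```

The split keeps each provider's payload shape explicit instead of forcing one structure to serve both APIs.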
Remove hardcoded default values from CLI flags to prioritize config file. Set Ollama as default provider with llama3.2 model when configured. Update config loading logic to skip API key validation for Ollama. Bump version to 0.5.0 and refine provider-specific URL handling.
Add comprehensive Ollama integration details to README and CHANGELOG. Document local AI benefits, no API key requirement, and usage examples. Update CLI flags table with Ollama-specific defaults and provider options. Refine environment variables and configuration examples for mixed AI setups.
Pull Request Description
This PR adds Ollama as a new AI provider option, enabling users to generate commit messages using locally hosted models without requiring API keys.
Key Changes:
- Added `ollama` as a fully supported AI provider

Technical Details:
- Default Ollama endpoint: `http://localhost:11434/api/generate`

Benefits:
- Commit messages are generated with locally hosted models; no API key required

Testing:
- Tested with the `qwen3:8b` local model
- Verified both the `/api/generate` and `/api/chat` endpoints

This feature addresses requests for local AI options while maintaining backward compatibility with existing cloud providers.
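The two endpoints tested above return differently shaped JSON, which the provider code has to account for. A small sketch, assuming Python: the field names match Ollama's documented replies (`response` for `/api/generate`, `message.content` for `/api/chat`), while the helper name `extract_text` is illustrative.

```python
# Hedged sketch: pulling the generated text out of the two Ollama reply shapes.

def extract_text(endpoint: str, reply: dict) -> str:
    """Return the generated text from an Ollama JSON reply."""
    if endpoint == "/api/chat":
        # /api/chat wraps the text in a message object.
        return reply["message"]["content"]
    # /api/generate returns the text directly under "response".
    return reply["response"]
```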