fix(llm): pass reasoning_key to OpenAILegacy provider #1154
Open
rongou wants to merge 2 commits into MoonshotAI:main
Conversation
The `OpenAILegacy` class already supports a `reasoning_key` parameter for reading reasoning/thinking content from OpenAI-compatible API responses, but `create_llm()` never passed it. This caused reasoning-only responses to be silently dropped, resulting in `APIEmptyResponseError`.
Contributor
Pull request overview
This PR adds a configurable reasoning_key to the CLI’s LLM provider config and wires it into the openai_legacy provider creation path so OpenAI-compatible servers that emit reasoning in a dedicated field don’t lead to empty-response failures.
Changes:
- Add `reasoning_key` to `LLMProvider` configuration.
- Pass `reasoning_key` through to `OpenAILegacy` in `create_llm()`.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| src/kimi_cli/llm.py | Plumbs `provider.reasoning_key` into the `OpenAILegacy` constructor. |
| src/kimi_cli/config.py | Extends provider config schema with optional `reasoning_key`. |
Address code review feedback:
- Normalize blank/whitespace `reasoning_key` to `None` at config load time to prevent a downstream `AssertionError` in `OpenAILegacy`
- Clarify docstring that `reasoning_key` applies to the `openai_legacy` provider only
- Add tests for `reasoning_key` passthrough and blank normalization
Fixes #1155
Summary
When using kimi-cli with an OpenAI-compatible server (e.g. sglang, vllm) that puts reasoning/thinking content in a dedicated response field, the `openai_legacy` provider silently drops it because `reasoning_key` is never passed to the `OpenAILegacy` constructor. If the model produces a reasoning-only response (no text, no tool calls), this results in a crash with `APIEmptyResponseError`.

The underlying `OpenAILegacy` class already supports a `reasoning_key` parameter; it just wasn't wired up. This PR adds a `reasoning_key` config field to `LLMProvider` and passes it through in `create_llm()` so users can configure it for their server.

Test plan
- Run against a server that emits a dedicated reasoning field with `reasoning_key = "reasoning_content"` and verify reasoning content is no longer dropped
- Verify configs without `reasoning_key` continue to work unchanged
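For reference, a provider config using the new field might look like the fragment below. The section name and the other keys are assumptions for illustration; only the `reasoning_key` field itself comes from this PR:

```toml
# Hypothetical provider config sketch; section/key layout is assumed.
[llm.provider]
type = "openai_legacy"
base_url = "http://localhost:8000/v1"
model = "my-model"
reasoning_key = "reasoning_content"  # the field added by this PR
```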