
fix(llm): pass reasoning_key to OpenAILegacy provider#1154

Open
rongou wants to merge 2 commits into MoonshotAI:main from rongou:fix-reasoning-key

Conversation


@rongou rongou commented Feb 14, 2026

Fixes #1155

Summary

When using kimi-cli with an OpenAI-compatible server (e.g. sglang, vllm) that puts reasoning/thinking content in a dedicated response field, the openai_legacy provider silently drops it because reasoning_key is never passed to the OpenAILegacy constructor. If the model produces a reasoning-only response (no text, no tool calls), this results in a crash with APIEmptyResponseError.

The underlying OpenAILegacy class already supports a reasoning_key parameter — it just wasn't wired up. This PR adds a reasoning_key config field to LLMProvider and passes it through in create_llm(), so users can configure it like:

[providers.vllm]
type = "openai_legacy"
base_url = "http://localhost:8000/v1"
api_key = "dummy"
reasoning_key = "reasoning_content"

Test plan

  • Configure a provider with reasoning_key = "reasoning_content" and verify reasoning content is no longer dropped
  • Verify existing providers without reasoning_key continue to work unchanged

The OpenAILegacy class already supports a reasoning_key parameter for
reading reasoning/thinking content from OpenAI-compatible API responses,
but create_llm() never passed it. This caused reasoning-only responses
to be silently dropped, resulting in APIEmptyResponseError.
Copilot AI review requested due to automatic review settings February 14, 2026 09:49
Contributor

@devin-ai-integration devin-ai-integration bot left a comment


✅ Devin Review: No Issues Found

Devin Review analyzed this PR and found no bugs or issues to report.


Contributor

Copilot AI left a comment


Pull request overview

This PR adds a configurable reasoning_key to the CLI’s LLM provider config and wires it into the openai_legacy provider creation path so OpenAI-compatible servers that emit reasoning in a dedicated field don’t lead to empty-response failures.

Changes:

  • Add reasoning_key to LLMProvider configuration.
  • Pass reasoning_key through to OpenAILegacy in create_llm().

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

File                    Description
src/kimi_cli/llm.py     Plumbs provider.reasoning_key into the OpenAILegacy constructor.
src/kimi_cli/config.py  Extends provider config schema with optional reasoning_key.


Address code review feedback:
- Normalize blank/whitespace reasoning_key to None at config load time
  to prevent a downstream AssertionError in OpenAILegacy
- Clarify docstring that reasoning_key applies to the openai_legacy
  provider only
- Add tests for reasoning_key passthrough and blank normalization


Development

Successfully merging this pull request may close these issues.

openai_legacy provider drops reasoning content, causing APIEmptyResponseError

2 participants