Extract `reasoning_content` from streaming deltas and non-streaming responses in OpenAICompletionsProvider, producing ContentThinking objects. Add `preserve_thinking` parameter (default False) to control whether reasoning content is sent back in multi-turn conversations. Set preserve_thinking=True for OpenRouter (which recommends including reasoning traces). DeepSeek's default (False) prevents 400 errors when reasoning_content is included in input messages. Equivalent of tidyverse/ellmer#972.
Note on `preserve_thinking`: `deepseek-chat` is deprecated (2026-07-24) and maps to `v4-flash` anyway. V4 thinking models require `reasoning_content` back for tool-call turns, so `preserve_thinking=True` is the correct default.
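Concretely, "requires `reasoning_content` back for tool-call turns" means the assistant's tool-call message must be replayed with its reasoning attached. A sketch with illustrative OpenAI-style message dicts (not chatlas internals); the `strip_thinking` helper is hypothetical and shows what `preserve_thinking=False` would drop:

```python
# Conversation history being replayed to the model. With
# preserve_thinking=True the assistant message keeps the
# reasoning_content the model originally produced.
history = [
    {"role": "user", "content": "What is 2 + 2?"},
    {
        "role": "assistant",
        "content": "",
        "reasoning_content": "Simple arithmetic; call the calculator tool.",
        "tool_calls": [{
            "id": "call_1",
            "type": "function",
            "function": {"name": "calculator", "arguments": '{"expr": "2+2"}'},
        }],
    },
    {"role": "tool", "tool_call_id": "call_1", "content": "4"},
]

def strip_thinking(messages: list[dict]) -> list[dict]:
    """What preserve_thinking=False amounts to: drop reasoning on replay."""
    return [
        {k: v for k, v in m.items() if k != "reasoning_content"}
        for m in messages
    ]

assert "reasoning_content" in history[1]
assert "reasoning_content" not in strip_thinking(history)[1]
```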
@copilot resolve the merge conflicts in this pull request
Co-authored-by: cpsievert <1365941+cpsievert@users.noreply.github.com>
Resolved by merging the latest changes.
This was referenced May 7, 2026
Summary
OpenAI-compatible providers using the Completions API (`ChatOpenAICompletions`, `ChatDeepSeek`, `ChatOpenRouter`, etc.) now extract `reasoning_content` from model responses and produce `ContentThinking` objects, matching the behavior already present in the Responses API provider (`ChatOpenAI`).

A new
`preserve_thinking` parameter controls whether reasoning content is sent back to the API in multi-turn conversations. This is necessary because providers disagree on whether reasoning traces belong in conversation history: `deepseek-reasoner` rejects `reasoning_content` in input messages, while OpenRouter recommends sending reasoning traces back.

Changes
- `OpenAICompletionsProvider`: Extract `reasoning_content` from both streaming deltas and non-streaming responses. Handle `ContentThinking` in turn serialization: drop by default, preserve when `preserve_thinking=True`.
- `ChatOpenAICompletions`: Expose the `preserve_thinking` parameter for users of custom OpenAI-compatible endpoints.
- `ChatOpenRouter`: Set `preserve_thinking=True` (OpenRouter recommends including reasoning traces).
- `ChatDeepSeek`: Set `preserve_thinking=True` (required for V4 thinking models with tool calls; harmlessly ignored for non-thinking responses). Also update the default model from the deprecated `deepseek-chat` to `deepseek-v4-flash`.

Motivation
This is the Python equivalent of tidyverse/ellmer#972. The ellmer PR defaults to `preserve_thinking=False` for DeepSeek based on the old `deepseek-reasoner` docs, but DeepSeek's current V4 models (which replace `deepseek-reasoner` and `deepseek-chat` as of 2026-07-24) actually require `reasoning_content` back when tool calls are present. We default to `True` for DeepSeek since it's a no-op for non-thinking responses and required for the tool-call case.

Relationship to #288
This PR overlaps significantly with #288, which also adds `reasoning_content` support. The key difference is that #288 preserves thinking unconditionally, while this PR adds the `preserve_thinking` toggle so each provider wrapper can choose the correct behavior. This PR also updates the DeepSeek default model and re-records VCR cassettes.

One thing #288 includes that this PR does not: reordering tool result messages to precede user content in
`_turns_as_inputs`. That may be worth investigating separately if DeepSeek requires that ordering.

Test plan
- `pyright` passes with 0 errors across all modified files
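On the open question above: if DeepSeek does turn out to need tool results ahead of user content, the reordering could amount to a stable partition of each turn's messages. A hypothetical sketch (message shapes and function name are illustrative, not #288's actual code):

```python
def tool_results_first(messages: list[dict]) -> list[dict]:
    """Stable partition: move tool-result messages ahead of the other
    messages in a turn, so tool outputs directly follow the assistant
    message that issued the tool calls."""
    tools = [m for m in messages if m.get("role") == "tool"]
    rest = [m for m in messages if m.get("role") != "tool"]
    return tools + rest

turn = [
    {"role": "user", "content": "Also convert that to EUR."},
    {"role": "tool", "tool_call_id": "call_1", "content": "42 USD"},
]
print([m["role"] for m in tool_results_first(turn)])  # ['tool', 'user']
```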