Description:
When using `ChatDeepSeek` with a model that outputs a `reasoning_content` field (like `deepseek-v4-flash`), calling a tool causes a 400 error from the API because the `reasoning_content` is not passed back in the next request.
Steps to reproduce:
```python
import requests
from chatlas import ChatDeepSeek

chat = ChatDeepSeek(model="deepseek-v4-flash")

def get_current_weather(lat: float, lng: float):
    """
    Get the current weather given a latitude and longitude.

    Parameters
    ----------
    lat: The latitude of the location.
    lng: The longitude of the location.
    """
    lat_lng = f"latitude={lat}&longitude={lng}"
    url = (
        f"https://api.open-meteo.com/v1/forecast?{lat_lng}"
        "&current=temperature_2m,wind_speed_10m"
        "&hourly=temperature_2m,relative_humidity_2m,wind_speed_10m"
    )
    response = requests.get(url)
    return response.json()["current"]

chat.register_tool(get_current_weather)
chat.chat("How's the weather in San Francisco?")
```
Traceback:
```
BadRequestError: Error code: 400 - {'error': {'message': 'The `reasoning_content` in the thinking mode must be passed back to the API.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_request_error'}}
```
Expected behaviour:
When the assistant message contains a `reasoning_content` field (as part of the model's thinking mode), chatlas should include it in the subsequent API call after a tool result is received. Without it, the DeepSeek API rejects the request.
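For illustration, the follow-up request that the API expects might look roughly like the message list below. Field names follow the OpenAI-compatible chat format; the exact values and the payload chatlas builds internally are assumptions here, not confirmed behaviour.

```python
# Sketch of the follow-up request body that DeepSeek's thinking mode expects.
# The assistant message must carry the reasoning_content it produced earlier;
# dropping that field is what triggers the 400 above. All values illustrative.
followup_messages = [
    {"role": "user", "content": "How's the weather in San Francisco?"},
    {
        "role": "assistant",
        "content": "",
        # Must be echoed back verbatim from the previous response:
        "reasoning_content": "The user wants the weather in San Francisco, "
                             "so call get_current_weather(37.77, -122.42).",
        "tool_calls": [{
            "id": "call_0",
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "arguments": '{"lat": 37.77, "lng": -122.42}',
            },
        }],
    },
    # The tool result refers back to the tool call by id:
    {"role": "tool", "tool_call_id": "call_0", "content": '{"temperature_2m": 14.2}'},
]
```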
Environment:
- chatlas version: 0.15.2
- Python version: 3.13.3
- Model: deepseek-v4-flash
Additional context:
This issue seems to affect any DeepSeek model with thinking mode enabled that requires `reasoning_content` to be echoed back exactly as received. Many new models (including `deepseek-v4-pro` and `deepseek-v4-flash`) behave this way. It would be great if chatlas could automatically preserve and return the `reasoning_content` during multi-turn interactions, especially tool use.
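As a rough sketch of the kind of fix I have in mind: before replaying a conversation, copy any `reasoning_content` from the raw provider responses back onto the corresponding assistant messages. The helper name and message-dict shapes below are hypothetical, not part of chatlas's API.

```python
# Hypothetical workaround sketch (not chatlas API): given the message list
# about to be sent and the raw provider responses already received, make sure
# each assistant message still carries the reasoning_content it came with.
def copy_reasoning_content(messages, raw_responses):
    """Return a copy of `messages` with reasoning_content restored on
    assistant messages, pairing them with raw responses in order."""
    patched = []
    raw_iter = iter(raw_responses)
    for msg in messages:
        msg = dict(msg)  # don't mutate the caller's dicts
        if msg.get("role") == "assistant":
            raw = next(raw_iter, {})
            if "reasoning_content" in raw and "reasoning_content" not in msg:
                msg["reasoning_content"] = raw["reasoning_content"]
        patched.append(msg)
    return patched
```

Something along these lines, applied wherever chatlas rebuilds the message history after a tool call, would keep thinking-mode models happy without the user having to intervene.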