[BUG] OpenAI compatible provider setting should not enforce a temperature of 0 #12042
Open
Labels
bug (Something isn't working)
Description
Problem (one or two sentences)
When the Custom temperature setting for a model is left unset in the Roo Code UI and an OpenAI compatible custom provider is used (with a custom base URL), a temperature of 0 is automatically passed to the provider. This should not happen: no temperature should be passed at all (unless the model is one of the recognized defaults…)
Context (who is affected and when)
This issue affects users of custom OpenAI compatible providers who do not set a custom temperature when setting up a model configuration. It may also affect other use cases that I did not identify.
Reproduction steps
- Add a custom OpenAI compatible model from a provider that is not directly supported, such as nano-gpt.com
- Do not set a custom temperature for the model
- Use a model that is not directly supported, or one that isn't identified as having a default temperature. On NanoGPT, that's the case of `qwen3.5-flash`, for example
- Roo Code sends a temperature of 0 to the provider, which it should not. Not passing the temperature allows the provider to use its backend default (if one is set)
Expected result
No temperature should be forced into the chat completion call
Actual result
A temperature of 0 is forced
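The expected behavior can be sketched as follows. This is a minimal illustrative helper, not Roo Code's actual implementation; all names are hypothetical. The point is that the `temperature` key should be omitted from the request body entirely when the user has not configured one, rather than defaulting to 0:

```typescript
// Hypothetical request-building helper (names are illustrative, not Roo Code's API).
interface ChatCompletionParams {
  model: string;
  messages: { role: string; content: string }[];
  temperature?: number;
}

function buildChatParams(
  model: string,
  messages: { role: string; content: string }[],
  customTemperature: number | undefined,
): ChatCompletionParams {
  const params: ChatCompletionParams = { model, messages };
  // Only include `temperature` when the user explicitly set one.
  // Omitting the key (instead of sending 0) lets the provider apply
  // its own backend default.
  if (customTemperature !== undefined) {
    params.temperature = customTemperature;
  }
  return params;
}
```

With no custom temperature configured, the serialized request contains no `temperature` field, so a provider like NanoGPT can fall back to whatever default it defines for the model.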
Variations tried (optional)
No response
App Version
v3.51.1
API Provider (optional)
OpenAI Compatible
Model Used (optional)
Qwen3.5 Flash, but not really relevant
Roo Code Task Links (optional)
No response
Relevant logs or errors (optional)