[BUG] OpenAI compatible provider setting should not enforce a temperature of 0 #12042

@azalty

Description

Problem (one or two sentences)

When a model's "Custom temperature" setting is left unset in the Roo Code UI and an OpenAI-compatible custom provider is used (with a custom base URL), a temperature of 0 is automatically set and passed to the provider. This should not happen: no temperature should be passed (unless the model is one of the recognized defaults…)

Context (who is affected and when)

This issue affects users of custom OpenAI-compatible providers who do not set a custom temperature when creating a model configuration. It may also affect other use cases that I have not identified.

Reproduction steps

  1. Add a custom OpenAI-compatible model from a provider that is not directly supported, such as nano-gpt.com.
  2. Do not set a custom temperature for the model.
  3. Use a model that is not directly supported, or one that isn't recognized as having a default temperature. On NanoGPT, qwen3.5-flash is one such model.
  4. Roo Code sends a temperature of 0 to the provider, which it should not. Omitting the temperature lets the provider apply its backend default (if one is set).
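The expected behavior from step 4 could be sketched as follows. This is a minimal TypeScript illustration with hypothetical names (`buildChatRequest` is not Roo Code's actual function), showing how the `temperature` key can be omitted from the request body entirely rather than defaulted to 0:

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatCompletionRequest {
  model: string;
  messages: ChatMessage[];
  temperature?: number; // absent from the JSON body when undefined
}

// Hypothetical builder: only include `temperature` when the user
// explicitly configured one.
function buildChatRequest(
  model: string,
  messages: ChatMessage[],
  customTemperature?: number
): ChatCompletionRequest {
  return {
    model,
    messages,
    // Conditional spread: when no custom temperature is set, the key is
    // never added, so JSON.stringify omits it and the provider falls
    // back to its own backend default.
    ...(customTemperature !== undefined
      ? { temperature: customTemperature }
      : {}),
  };
}
```

Note that the check must be `!== undefined` rather than a truthiness test, so that an explicit user-configured temperature of 0 is still sent.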

Expected result

No temperature should be forced into the chat completion call

Actual result

A temperature of 0 is forced

Variations tried (optional)

No response

App Version

v3.51.1

API Provider (optional)

OpenAI Compatible

Model Used (optional)

Qwen3.5 Flash, but not really relevant

Roo Code Task Links (optional)

No response

Relevant logs or errors (optional)

Labels

bug — Something isn't working