langchain-aws reasoning effort #788

@sriramsriram3

Description

Checked other resources

  • This is a feature request, not a bug report or usage question.
  • I added a clear and descriptive title that summarizes the feature request.
  • I used the GitHub search to find a similar feature request and didn't find it.
  • I checked the LangChain documentation and API reference to see if this feature already exists.
  • This is not related to the langchain-community package.

Package (Required)

  • langchain
  • langchain-openai
  • langchain-anthropic
  • langchain-classic
  • langchain-core
  • langchain-cli
  • langchain-model-profiles
  • langchain-tests
  • langchain-text-splitters
  • langchain-chroma
  • langchain-deepseek
  • langchain-exa
  • langchain-fireworks
  • langchain-groq
  • langchain-huggingface
  • langchain-mistralai
  • langchain-nomic
  • langchain-ollama
  • langchain-perplexity
  • langchain-prompty
  • langchain-qdrant
  • langchain-xai
  • Other / not sure / general

Feature Description

It would be better if langchain-aws accepted a reasoning effort setting, because currently all reasoning-capable models respond with reasoning enabled by default.

Use Case

I'm using a GPT model from AWS Bedrock. If langchain-aws supported the reasoning_effort parameter (low, medium, high), it would reduce latency by a few seconds and be much more convenient than calling the model through raw boto3.
Currently we are unable to do this, which is why we moved to boto3; if langchain-aws supported it, we could move back to langchain-aws.

Proposed Solution

llm = ChatBedrockConverse(
    model_id="gpt--",
    reasoning_effort="low",  # NEW PARAM
)
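To illustrate the request, here is a minimal sketch of how such a `reasoning_effort` kwarg could be mapped onto a Bedrock Converse API call. This is not the langchain-aws implementation: the helper `build_converse_request` is hypothetical, the model id is a placeholder, and the `"reasoning_effort"` key inside `additionalModelRequestFields` is an assumption based on the OpenAI-style parameter name (the exact field a given Bedrock model expects may differ).

```python
def build_converse_request(model_id, messages, reasoning_effort=None):
    """Build the kwargs dict for a bedrock-runtime converse() call.

    Hypothetical helper: sketches where a reasoning_effort value
    would live in the Converse request shape.
    """
    request = {
        "modelId": model_id,
        "messages": messages,
    }
    if reasoning_effort is not None:
        if reasoning_effort not in ("low", "medium", "high"):
            raise ValueError(f"invalid reasoning_effort: {reasoning_effort!r}")
        # Model-specific fields not covered by inferenceConfig are passed
        # through additionalModelRequestFields in the Converse API.
        # The key name "reasoning_effort" is an assumption here.
        request["additionalModelRequestFields"] = {
            "reasoning_effort": reasoning_effort,
        }
    return request


req = build_converse_request(
    "gpt-model-id",  # hypothetical placeholder model id
    [{"role": "user", "content": [{"text": "Hello"}]}],
    reasoning_effort="low",
)
print(req["additionalModelRequestFields"])
```

As an interim workaround, if ChatBedrockConverse's existing `additional_model_request_fields` parameter passes these fields through unchanged, it may already let callers set the effort level without a dedicated kwarg; a first-class `reasoning_effort` parameter would validate the value and document the behavior.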

Alternatives Considered

No response

Additional Context

No response
