make Twins.chat() async to prevent event loop blocking #268

Open
verseon0980 wants to merge 1 commit into OpenGradient:main from verseon0980:fix/twins-chat-async

Conversation

@verseon0980

Description

Twins.chat() is a synchronous function that issues a blocking
httpx.post() call with a 60-second timeout.

LLM.chat() in the same SDK is correctly defined as async. Twins.chat()
is not, which creates a serious problem when both are used together in
an async application such as FastAPI, a LangChain agent, or any other
asyncio-based service.

When Twins.chat() is called from inside an async context, the blocking
httpx.post() call freezes the entire event loop for up to 60 seconds.
During that time no other async task in the application can run. Other
requests, background jobs, and inference calls all stall completely.

For a chat API serving many concurrent users, this amounts to a
complete availability failure.
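
The failure mode can be reproduced with the standard library alone. In
this sketch, time.sleep(1) stands in for the blocking httpx.post() call;
a concurrent heartbeat task that should tick every 100 ms cannot run
until the blocking call returns:

```python
import asyncio
import time

async def blocking_chat():
    # Simulates the current Twins.chat(): a synchronous, blocking call
    # (stand-in for httpx.post() with its long timeout).
    time.sleep(1)  # blocks the entire event loop, not just this task
    return "reply"

async def heartbeat(ticks):
    # A concurrent task that should tick every 100 ms.
    for _ in range(5):
        await asyncio.sleep(0.1)
        ticks.append(time.monotonic())

async def main():
    ticks = []
    start = time.monotonic()
    await asyncio.gather(blocking_chat(), heartbeat(ticks))
    # The first tick cannot fire until the blocking sleep finishes,
    # so it lands after roughly 1.1 s instead of 0.1 s.
    return ticks[0] - start

delay = asyncio.run(main())
print(f"first heartbeat after {delay:.2f}s")
```

With a real 60-second timeout, every other coroutine in the process is
frozen for up to a minute per call.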

Fix

Two changes to src/opengradient/client/twins.py:

  1. Changed def chat() to async def chat() so it can be awaited and
    composed correctly inside async applications.

  2. Replaced the blocking httpx.post() call with httpx.AsyncClient
    used as an async context manager, so the request awaits on I/O and
    the event loop is never blocked.

The request logic and return value are unchanged. Only the execution
model changed from blocking to non-blocking, so existing callers must
now await chat().

Files Changed

  • src/opengradient/client/twins.py: converted chat() to async and
    replaced httpx.post() with httpx.AsyncClient

Signed-off-by: verseon0980 <klokrc74@gmail.com>
