
Fix the invalid tool call request can block the response #328

Merged
locnguyen1986 merged 2 commits into release from main on Dec 19, 2025

Conversation

@locnguyen1986 (Collaborator)

Fix the invalid tool call request can block the response

Copilot AI review requested due to automatic review settings December 19, 2025 08:09
Copilot AI (Contributor) left a comment


Pull request overview

This PR addresses an issue where invalid tool call requests (specifically empty message parts) could block LLM provider responses. The fix implements message sanitization to filter out invalid content parts before sending requests to providers, and adds comprehensive debug logging to track request flow and error conditions.

  • Implemented SanitizeMessages function to filter empty text and image parts from message content
  • Added debug and error logging throughout the request/response flow for better observability
  • Applied message sanitization in both streaming and non-streaming request paths
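
The filtering described above could look roughly like this. This is a minimal sketch with simplified stand-in types (`Part`, `sanitizeParts` are illustrative names); the real `SanitizeMessages` in `chat_completion_client.go` operates on go-openai message types and also handles the `MultiContent` fallback:

```go
package main

import "fmt"

// Simplified stand-ins for the openai.ChatMessagePart types used in the PR;
// the real code uses github.com/sashabaranov/go-openai.
type PartType string

const (
	PartTypeText     PartType = "text"
	PartTypeImageURL PartType = "image_url"
)

type ImageURL struct{ URL string }

type Part struct {
	Type     PartType
	Text     string
	ImageURL *ImageURL
}

// sanitizeParts drops parts that would serialize into invalid content,
// such as {"type": "text"} with the text field omitted by omitempty.
func sanitizeParts(parts []Part) []Part {
	out := make([]Part, 0, len(parts))
	for _, p := range parts {
		switch p.Type {
		case PartTypeText:
			if p.Text == "" {
				continue // empty text parts fail validation on some providers
			}
		case PartTypeImageURL:
			if p.ImageURL == nil || p.ImageURL.URL == "" {
				continue // empty image part is the sentinel this PR filters out
			}
		}
		out = append(out, p)
	}
	return out
}

func main() {
	parts := []Part{
		{Type: PartTypeText, Text: "hello"},
		{Type: PartTypeText, Text: ""},          // filtered
		{Type: PartTypeImageURL, ImageURL: nil}, // filtered sentinel
	}
	fmt.Println(len(sanitizeParts(parts))) // prints 1
}
```

The same pass runs in both the streaming and non-streaming request paths, per the overview above.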

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 7 comments.

Summary per file:

- services/llm-api/internal/utils/httpclients/chat/chat_completion_client.go: Added SanitizeMessages function and calls to sanitize messages; added extensive debug/error logging for request tracking
- services/llm-api/internal/interfaces/httpserver/routes/v1/chat/completion_route.go: Added debug and error logging to track request delegation and error handling
- services/llm-api/internal/interfaces/httpserver/requests/chat/chat.go: Modified empty part handling to return empty image parts for filtering; added empty text part filtering
- services/llm-api/internal/interfaces/httpserver/handlers/chathandler/chat_handler.go: Added debug logging for deep research preferences, LLM calls, and completion results
- services/llm-api/internal/infrastructure/inference/inference_provider.go: Added debug logging for client creation
- services/llm-api/internal/domain/prompt/deep_research_module.go: Added debug logging for condition evaluation in ShouldApply method


Comment on lines +65 to +71

```diff
+// Return empty part - will be filtered out by caller
+// Note: We can't return nil, so we return an empty image part which will be filtered
+// because empty text parts with omitempty cause {"type": "text"} without text field
+// which fails validation on some LLM providers
 return openai.ChatMessagePart{
-	Type: openai.ChatMessagePartTypeText,
-	Text: "",
+	Type: openai.ChatMessagePartTypeImageURL, // Will be filtered out by caller
 }
```

Copilot AI Dec 19, 2025


Returning an empty ImageURL part as a workaround is confusing and fragile. The comment acknowledges this is intentional for later filtering, but this creates tight coupling between this function and its callers. Consider refactoring to return an Optional/Maybe type, or modify the signature to return a pointer that can be nil, allowing explicit representation of "no valid part".
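
A minimal sketch of the nil-returning alternative this comment suggests. `ChatMessagePart` here is a simplified stand-in for the go-openai type, and `toTextPart` is a hypothetical name, not the actual function in `chat.go`:

```go
package main

import (
	"fmt"
	"strings"
)

// Simplified stand-in for openai.ChatMessagePart; the real code uses
// github.com/sashabaranov/go-openai.
type ChatMessagePart struct {
	Type string
	Text string
}

// toTextPart returns nil when there is no valid content, so callers can
// skip the part explicitly instead of filtering a sentinel value later.
func toTextPart(text string) *ChatMessagePart {
	if strings.TrimSpace(text) == "" {
		return nil
	}
	return &ChatMessagePart{Type: "text", Text: text}
}

func main() {
	var parts []ChatMessagePart
	for _, raw := range []string{"hello", "", "  "} {
		// nil means "no valid part" — no sentinel values to filter downstream
		if p := toTextPart(raw); p != nil {
			parts = append(parts, *p)
		}
	}
	fmt.Println(len(parts)) // prints 1
}
```

Returning a pointer makes the "no valid part" case explicit in the type signature, removing the coupling between the converter and the caller's filtering logic.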

Comment on lines +178 to +187
```go
// Try to get response body for more error details
respBodyStr := ""
if resp.RawResponse != nil && resp.RawResponse.Body != nil {
	bodyBytes, _ := io.ReadAll(resp.RawResponse.Body)
	respBodyStr = string(bodyBytes)
}
// Also try String() method
if respBodyStr == "" {
	respBodyStr = resp.String()
}
```

Copilot AI Dec 19, 2025


Reading the response body with io.ReadAll() consumes the stream, making it unavailable for downstream error handling in errorFromResponse(). The body should either be restored after reading or you should rely on resp.String() alone, which the resty library handles properly without consuming the underlying stream.

Suggested change:

```diff
-// Try to get response body for more error details
-respBodyStr := ""
-if resp.RawResponse != nil && resp.RawResponse.Body != nil {
-	bodyBytes, _ := io.ReadAll(resp.RawResponse.Body)
-	respBodyStr = string(bodyBytes)
-}
-// Also try String() method
-if respBodyStr == "" {
-	respBodyStr = resp.String()
-}
+// Use resp.String() for more error details without consuming the body stream
+respBodyStr := resp.String()
```

Comment on lines +574 to +582
```go
// Try to read response body for error details
respBodyStr := ""
if resp.RawResponse != nil && resp.RawResponse.Body != nil {
	bodyBytes, _ := io.ReadAll(resp.RawResponse.Body)
	respBodyStr = string(bodyBytes)
}
if respBodyStr == "" {
	respBodyStr = resp.String()
}
```

Copilot AI Dec 19, 2025


Reading the response body with io.ReadAll() consumes the stream, making it unavailable for downstream error handling in errorFromResponse(). The body should either be restored after reading or you should rely on resp.String() alone, which the resty library handles properly without consuming the underlying stream.

Suggested change:

```diff
-// Try to read response body for error details
-respBodyStr := ""
-if resp.RawResponse != nil && resp.RawResponse.Body != nil {
-	bodyBytes, _ := io.ReadAll(resp.RawResponse.Body)
-	respBodyStr = string(bodyBytes)
-}
-if respBodyStr == "" {
-	respBodyStr = resp.String()
-}
+// Use Resty's buffered response body for error details without consuming the raw stream
+respBodyStr := resp.String()
```

Comment on lines +913 to +915
```go
// If all parts were filtered out, convert to a simple text message
if len(sanitizedParts) == 0 && msg.Content != "" {
	sanitizedMsg.MultiContent = nil
```

Copilot AI Dec 19, 2025


When all MultiContent parts are filtered out and msg.Content is empty, this will result in a message with an empty MultiContent array (len=0). This edge case should either skip the entire message or ensure msg.Content has a default value to prevent sending malformed messages to the provider.

Suggested change:

```diff
-// If all parts were filtered out, convert to a simple text message
-if len(sanitizedParts) == 0 && msg.Content != "" {
-	sanitizedMsg.MultiContent = nil
+// If all parts were filtered out, either fall back to simple text or skip the message
+if len(sanitizedParts) == 0 {
+	// Remove MultiContent entirely; if there's no usable text content, skip this message
+	sanitizedMsg.MultiContent = nil
+	if strings.TrimSpace(sanitizedMsg.Content) == "" {
+		log.Debug().
+			Str("role", string(msg.Role)).
+			Msg("[DEBUG] SanitizeMessages: skipping message with no valid content after sanitization")
+		continue
+	}
```

Comment on lines +148 to +153
```go
log.Debug().
	Str("provider", c.name).
	Str("base_url", c.baseURL).
	Str("model", request.Model).
	Int("message_count", len(request.Messages)).
	Msg("[DEBUG] CreateChatCompletion: sending request to provider")
```

Copilot AI Dec 19, 2025


The log messages include the "[DEBUG]" prefix in the message text itself. This is redundant since the log level is already Debug. Consider removing the prefix from the message to avoid duplication, as the log level already indicates this is a debug message.

Comment on lines +189 to +196
```go
log.Error().
	Str("provider", c.name).
	Str("base_url", c.baseURL).
	Str("model", request.Model).
	Int("status_code", resp.StatusCode()).
	Str("response_body", respBodyStr).
	Dur("duration", duration).
	Msg("[DEBUG] CreateChatCompletion: provider returned error response")
```

Copilot AI Dec 19, 2025


Logging the full response_body can expose sensitive information such as API keys, user data, or internal error details that should not be logged. Consider sanitizing or limiting the response body content before logging, or use a structured approach to log only specific non-sensitive fields from error responses.
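
One lightweight mitigation is to cap the body before it reaches the logger. A sketch, assuming a hypothetical `truncateForLog` helper that is not part of this PR; pattern-based redaction of known secret fields could complement the size cap:

```go
package main

import "fmt"

// truncateForLog caps a response body before it reaches the logs, bounding
// line size and limiting how much of a potentially sensitive payload is
// exposed. (Hypothetical helper, not part of this PR.)
func truncateForLog(body string, max int) string {
	if len(body) <= max {
		return body
	}
	return body[:max] + "...(truncated)"
}

func main() {
	fmt.Println(truncateForLog("secret-api-key-and-more", 6)) // prints "secret...(truncated)"
}
```

The error log above would then pass `truncateForLog(respBodyStr, 512)` (or similar) instead of the raw body.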

Comment on lines +584 to +590
```go
log.Error().
	Str("provider", c.name).
	Str("base_url", c.baseURL).
	Str("model", request.Model).
	Int("status_code", resp.StatusCode()).
	Str("response_body", respBodyStr).
	Msg("[DEBUG] doStreamingRequest: provider returned error response")
```

Copilot AI Dec 19, 2025


Logging the full response_body can expose sensitive information such as API keys, user data, or internal error details that should not be logged. Consider sanitizing or limiting the response body content before logging, or use a structured approach to log only specific non-sensitive fields from error responses.

@locnguyen1986 merged commit aa791d5 into release on Dec 19, 2025
13 of 14 checks passed
github-project-automation bot moved this to QA in Jan on Dec 19, 2025

Labels: none yet
Project status: QA
3 participants