Greptile Overview

Greptile Summary

This PR fixes token counting for Gemini models by including thinking tokens in the output token count for accurate billing.

Main Changes:
Impact:

Confidence Score: 5/5
Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant User as User/Form Input
    participant AgentHandler as AgentBlockHandler
    participant Provider as Gemini Provider
    participant GeminiAPI as Gemini API
    participant Utils as Google Utils
    participant Billing as Cost Calculation
    User->>AgentHandler: Submit agent block with temperature & maxTokens (as strings)
    AgentHandler->>AgentHandler: Check temperature != null && temperature !== ''
    AgentHandler->>AgentHandler: Check maxTokens != null && maxTokens !== ''
    AgentHandler->>AgentHandler: Convert strings to numbers using Number()
    AgentHandler->>Provider: Send request with numeric temperature & maxTokens
    Provider->>GeminiAPI: Execute model with thinkingConfig
    GeminiAPI-->>Provider: Response with usageMetadata (promptTokenCount, candidatesTokenCount, thoughtsTokenCount)
    Provider->>Utils: Call convertUsageMetadata(usageMetadata)
    Utils->>Utils: Extract thoughtsTokenCount from usageMetadata
    Utils->>Utils: Add thoughtsTokenCount to candidatesTokenCount
    Utils->>Utils: Calculate totalTokenCount
    Utils-->>Provider: Return GeminiUsage with updated counts
    Provider->>Billing: calculateCost(promptTokenCount, candidatesTokenCount)
    Billing-->>Provider: Cost for input + output tokens (including thinking)
    Provider-->>AgentHandler: Return response with correct token counts and cost
```
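The parameter-coercion steps in the diagram (agent blocks deliver temperature and maxTokens as strings) can be sketched as below. The input shape and helper names (`AgentBlockInput`, `coerceParam`, `buildProviderParams`) are assumptions for illustration, not the project's actual code:

```typescript
// Hypothetical raw block input: form fields arrive as strings.
interface AgentBlockInput {
  temperature?: string | number | null;
  maxTokens?: string | number | null;
}

interface ProviderParams {
  temperature?: number;
  maxTokens?: number;
}

// Coerce a form value to a number only when it is actually present.
// `value != null` filters out both null and undefined; the empty-string
// check prevents Number('') from silently becoming 0.
function coerceParam(value: string | number | null | undefined): number | undefined {
  if (value != null && value !== '') {
    const n = Number(value);
    return Number.isNaN(n) ? undefined : n;
  }
  return undefined;
}

function buildProviderParams(input: AgentBlockInput): ProviderParams {
  return {
    temperature: coerceParam(input.temperature),
    maxTokens: coerceParam(input.maxTokens),
  };
}
```

Leaving an absent or empty field as `undefined`, rather than defaulting it to 0, lets the provider fall back to the model's own defaults.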
@cursor review
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.
* fix(gemini): token count
* fix to include tool call tokens
Summary
Token count should include thinking tokens.
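The fix can be sketched as follows. The function and type names mirror the diagram (`convertUsageMetadata`, `GeminiUsage`), but the exact shapes are assumptions for illustration, not the project's actual code:

```typescript
// Hypothetical shape modeled on the Gemini API's usageMetadata fields;
// the API reports thinking tokens separately from candidate tokens.
interface GeminiUsageMetadata {
  promptTokenCount?: number;
  candidatesTokenCount?: number;
  thoughtsTokenCount?: number;
}

interface GeminiUsage {
  promptTokenCount: number;
  candidatesTokenCount: number;
  totalTokenCount: number;
}

// Fold thinking tokens into the output (candidates) count so downstream
// cost calculation bills them at the output-token rate.
function convertUsageMetadata(meta: GeminiUsageMetadata): GeminiUsage {
  const prompt = meta.promptTokenCount ?? 0;
  const candidates = (meta.candidatesTokenCount ?? 0) + (meta.thoughtsTokenCount ?? 0);
  return {
    promptTokenCount: prompt,
    candidatesTokenCount: candidates,
    totalTokenCount: prompt + candidates,
  };
}
```

With this, a response that used 100 prompt tokens, 40 candidate tokens, and 60 thinking tokens is billed for 100 output tokens instead of 40.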
Type of Change
Testing
Tested manually
Checklist