Bug description
When using the `withTracing` wrapper with the Vercel AI SDK version 6 and any Anthropic model with cache control enabled, the input token costs that appear in PostHog are incorrect.
The code in the PostHog repo has a special case for calculating input token costs when the provider is Anthropic, because the raw `input_tokens` reported by Anthropic do not include cached tokens.
However, in version 6 of the Vercel AI SDK, the total input token count returned by the SDK already includes cached tokens, and this is the value that `@posthog/ai` forwards.
As a result, the input token costs shown in PostHog are much higher than the actual costs.
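To make the mismatch concrete, here is a rough arithmetic sketch of one plausible reading of the double counting; the token counts, per-token rates, and variable names are illustrative assumptions, not values taken from PostHog's cost tables.

```typescript
// Illustrative numbers only; rates and field names are assumptions.
const cacheReadTokens = 10_000; // prompt tokens served from Anthropic's cache
const freshInputTokens = 500;   // prompt tokens billed at the normal input rate

// Anthropic's raw usage: input_tokens EXCLUDES cached tokens.
const anthropicInputTokens = freshInputTokens;
// Vercel AI SDK v6 usage: inputTokens INCLUDES cached tokens.
const sdkV6InputTokens = freshInputTokens + cacheReadTokens;

const inputRate = 3 / 1_000_000;       // e.g. $3.00 per million input tokens
const cacheReadRate = 0.3 / 1_000_000; // e.g. $0.30 per million cache-read tokens

// Cost the Anthropic special case is meant to produce:
const expectedCost = anthropicInputTokens * inputRate + cacheReadTokens * cacheReadRate;
// Cost produced when the SDK v6 total is forwarded unchanged:
// cached tokens are charged once at the full input rate and again at the read rate.
const reportedCost = sdkV6InputTokens * inputRate + cacheReadTokens * cacheReadRate;

console.log({ expectedCost, reportedCost }); // reportedCost is inflated by cacheReadTokens * inputRate
```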
How to reproduce
- Use the Vercel AI SDK v6 with any Anthropic model, wrapped with `withTracing` from `@posthog/ai` (see the reproduction sketch below).
- Ensure that prompt caching is enabled and that the prompt is long enough to produce cache reads and writes.
- Check the reported total input cost (USD) in PostHog and observe that it does not match the usage costs Anthropic reports in its platform.
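For reference, a minimal reproduction sketch along these lines is shown below. The model id, cache-control option shape, and PostHog configuration are assumptions and may need adjusting for your AI SDK and `@posthog/ai` versions.

```typescript
import { PostHog } from "posthog-node";
import { withTracing } from "@posthog/ai";
import { anthropic } from "@ai-sdk/anthropic";
import { generateText } from "ai";

const phClient = new PostHog("<posthog-project-api-key>", {
  host: "https://us.i.posthog.com",
});

// Wrap the Anthropic model so generations are captured by LLM observability.
const model = withTracing(anthropic("<anthropic-model-id>"), phClient, {
  posthogDistinctId: "cache-cost-repro",
});

// Needs to exceed Anthropic's minimum cacheable prompt length.
const longSharedContext = "<long system prompt that will be cached>";

const { usage } = await generateText({
  model,
  messages: [
    {
      role: "system",
      content: longSharedContext,
      // Marks the system prompt for Anthropic prompt caching; the exact option
      // shape may differ between AI SDK versions.
      providerOptions: { anthropic: { cacheControl: { type: "ephemeral" } } },
    },
    { role: "user", content: "Summarise the context above." },
  ],
});

// In SDK v6 this inputTokens figure already includes cached tokens, which is
// what @posthog/ai forwards and what inflates the cost shown in PostHog.
console.log(usage);

await phClient.shutdown();
```

Run this a couple of times so that later calls produce cache reads, then compare the total input cost PostHog shows for those generations with the usage Anthropic reports for the same requests.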
Related sub-libraries
- All of them
- posthog-js (web)
- posthog-js-lite (web lite)
- posthog-node
- posthog-react-native
- @posthog/react
- @posthog/ai
- @posthog/nextjs-config
- @posthog/nuxt
- @posthog/rollup-plugin
- @posthog/webpack-plugin
Additional context
Thank you for your bug report – we love squashing them!