
Anthropic input token costs with caching are incorrect when using Vercel AI SDK v6 #2829

@tom-auger

Description

Bug description

When using the withTracing wrapper with the Vercel AI SDK version 6 and any Anthropic model with cache control, the input token costs that appear in PostHog are incorrect.

From the code in the PostHog repo, there is a special case for calculating input token costs when the provider is Anthropic, because the raw input_tokens reported by Anthropic do not include cached tokens.
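For illustration, here is a rough sketch of the kind of Anthropic-specific adjustment described above. This is not the actual @posthog/ai implementation; the field names are assumptions, and the rate multipliers reflect Anthropic's published cache pricing (cache reads at roughly 10% of the base input rate, cache writes at roughly 125%).

```ts
// Illustrative sketch only – not the actual @posthog/ai source.
// Field names are assumptions; rates mirror Anthropic's published cache pricing.
interface AnthropicUsage {
  inputTokens: number;          // raw count from Anthropic, excludes cached tokens
  cacheReadTokens: number;      // tokens served from the prompt cache
  cacheCreationTokens: number;  // tokens written to the prompt cache
}

function estimateAnthropicInputCost(usage: AnthropicUsage, baseRatePerToken: number): number {
  const cacheReadRate = baseRatePerToken * 0.1;    // cache reads billed at ~10% of base
  const cacheWriteRate = baseRatePerToken * 1.25;  // cache writes billed at ~125% of base
  return (
    usage.inputTokens * baseRatePerToken +
    usage.cacheReadTokens * cacheReadRate +
    usage.cacheCreationTokens * cacheWriteRate
  );
}
```

An adjustment like this is only correct when inputTokens excludes cached tokens, which is exactly the assumption that breaks with AI SDK v6.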

However, in version 6 of the Vercel AI SDK, the total input token count returned by the SDK already includes cached tokens, and this is the value that @posthog/ai forwards on.

Consequently, the input token costs shown in PostHog are much higher than they actually are.
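A worked example with illustrative numbers shows the scale of the error (the $3 per million input tokens pricing and the token counts are assumptions, not taken from a real trace):

```ts
// Illustrative numbers only.
const baseRate = 3 / 1_000_000;  // USD per input token (assumed $3 / MTok pricing)
const rawInput = 200;            // uncached input tokens billed at the base rate
const cacheRead = 10_000;        // tokens served from the prompt cache (billed at ~10% of base)

// What Anthropic actually charges for input:
const actualCost = rawInput * baseRate + cacheRead * baseRate * 0.1;                   // ≈ $0.0036

// What gets computed if the SDK v6 total (200 + 10,000) is treated as the raw,
// cache-exclusive count and the cached tokens are then added again on top:
const inflatedCost = (rawInput + cacheRead) * baseRate + cacheRead * baseRate * 0.1;   // ≈ $0.0336
```

In this example the reported input cost is roughly nine times the real one, and the gap grows with the cache hit rate.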

How to reproduce

  1. Use the Vercel AI SDK v6 with any Anthropic model, wrapped with withTracing from @posthog/ai. Ensure that prompt caching is enabled and that the prompt is long enough to produce cache reads and writes (see the sketch after this list).
  2. Check the reported total input cost (USD) in PostHog and observe that it does not match the usage costs Anthropic reports in its console.
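A minimal reproduction sketch, assuming the AI SDK v5+ providerOptions syntax for Anthropic cache control; the API key, distinct ID, model name, and prompt contents are placeholders:

```ts
import { generateText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import { withTracing } from '@posthog/ai';
import { PostHog } from 'posthog-node';

const phClient = new PostHog('<posthog-project-api-key>', { host: 'https://us.i.posthog.com' });

// Wrap the Anthropic model so each generation is captured as an $ai_generation event.
const model = withTracing(anthropic('claude-3-5-sonnet-latest'), phClient, {
  posthogDistinctId: 'user_123',
});

const longSystemPrompt = '...'; // placeholder – must exceed the model's prompt-caching minimum

const { text } = await generateText({
  model,
  messages: [
    {
      role: 'system',
      content: longSystemPrompt,
      // Mark the system prompt as cacheable so repeat calls produce cache reads.
      providerOptions: { anthropic: { cacheControl: { type: 'ephemeral' } } },
    },
    { role: 'user', content: 'Summarise the cached document.' },
  ],
});

await phClient.shutdown();
```

Run it at least twice so the second call produces cache reads, then compare the input cost PostHog reports for that generation against the usage shown in the Anthropic console.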

Related sub-libraries

  • All of them
  • posthog-js (web)
  • posthog-js-lite (web lite)
  • posthog-node
  • posthog-react-native
  • @posthog/react
  • @posthog/ai
  • @posthog/nextjs-config
  • @posthog/nuxt
  • @posthog/rollup-plugin
  • @posthog/webpack-plugin

Additional context

Thank you for your bug report – we love squashing them!


Labels

  • bug (Something isn't working)
  • llmo (This issue is related to LLM Observability)
  • team/llm-analytics (LLM Analytics)
