Evaluation: Show cost #746
Status: Merged

Commits (11, all by AkhileshNegi):

- `631f3f4` first stab at costing
- `b6750f0` minor fixes
- `63eb942` cleanup
- `35d7c9f` Merge branch 'main' into enhancement/evaluation-cost
- `b75a043` first stab of using config table
- `be8f60e` cleanup
- `85e249a` few fixes from suggestions
- `1af4148` cleanups
- `77451b2` coderabbit suggestions
- `b95e11e` Merge branch 'main' into enhancement/evaluation-cost
- `5cbea94` update to main
`backend/app/alembic/versions/054_add_cost_to_evaluation_run.py` (33 additions, 0 deletions):

```python
"""add cost tracking to evaluation_run

Revision ID: 054
Revises: 053
Create Date: 2026-04-09 12:00:00.000000

"""

import sqlalchemy as sa
from alembic import op
from sqlalchemy.dialects import postgresql

# revision identifiers, used by Alembic.
revision = "054"
down_revision = "053"
branch_labels = None
depends_on = None


def upgrade():
    op.add_column(
        "evaluation_run",
        sa.Column(
            "cost",
            postgresql.JSONB(astext_type=sa.Text()),
            nullable=True,
            comment="Cost tracking (response/embedding tokens and USD)",
        ),
    )


def downgrade():
    op.drop_column("evaluation_run", "cost")
```
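For illustration only (the model name and figures below are invented, not taken from the PR), a populated `evaluation_run.cost` value must round-trip through JSON, since JSONB storage accepts only JSON-serializable payloads:

```python
import json

# Hypothetical cost payload; model name and all numbers are made up.
cost = {
    "response": {
        "model": "gpt-4o-mini",
        "input_tokens": 1200,
        "output_tokens": 340,
        "total_tokens": 1540,
        "cost_usd": 0.000192,
    },
    "total_cost_usd": 0.000192,
}

# JSONB round-trip: serialize, then parse back.
encoded = json.dumps(cost)
decoded = json.loads(encoded)
print(decoded["total_cost_usd"])  # 0.000192
```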
New module (177 additions; the filename is not shown in this capture):

```python
"""
Cost tracking for evaluation runs.

Token usage is aggregated per stage (response generation, embedding) and
priced against `global.model_config` using OpenAI Batch rates. Failures
here must never block evaluation completion — `attach_cost` swallows
exceptions and logs a warning.

Persisted shape on `eval_run.cost`:

    {
        "response": {model, input_tokens, output_tokens, total_tokens, cost_usd},
        "embedding": {model, input_tokens, output_tokens, total_tokens, cost_usd},
        "total_cost_usd": float,
    }

Either stage entry is optional. Embedding entries use output_tokens=0.
"""

import logging
from collections.abc import Callable, Iterable
from typing import Any

from sqlmodel import Session

from app.crud.model_config import estimate_model_cost
from app.models import EvaluationRun

logger = logging.getLogger(__name__)

# USD rounding precision for persisted cost values.
COST_USD_DECIMALS = 6


def _cost_usd(estimate: dict[str, Any] | None) -> float:
    """Sum the per-direction costs from an estimate and round to our USD precision."""
    if not estimate:
        return 0.0
    total = float(estimate.get("input_cost", 0.0)) + float(
        estimate.get("output_cost", 0.0)
    )
    return round(total, COST_USD_DECIMALS)


def _sum_tokens(
    items: Iterable[dict[str, Any]],
    usage_extractor: Callable[[dict[str, Any]], dict[str, Any] | None],
    input_key: str,
) -> dict[str, int]:
    """Sum (input, output, total) tokens across items using a per-item usage extractor.

    The OpenAI Embeddings API reports input tokens as ``prompt_tokens`` and has
    no output tokens; chat/responses APIs use ``input_tokens`` and ``output_tokens``.
    Missing keys default to 0, so the embedding case naturally produces
    output_tokens=0.
    """
    totals = {"input_tokens": 0, "output_tokens": 0, "total_tokens": 0}
    for item in items:
        usage = usage_extractor(item)
        if not usage:
            continue
        totals["input_tokens"] += usage.get(input_key, 0)
        totals["output_tokens"] += usage.get("output_tokens", 0)
        totals["total_tokens"] += usage.get("total_tokens", 0)
    return totals
```
Review comment (coderabbitai) on lines +45 to +65:

> Harden token aggregation against malformed usage fields. At lines 62–64, the totals are incremented directly with `usage.get(...)`, so a non-numeric usage value would raise during aggregation. Proposed fix:

```diff
+def _to_int_token(value: Any) -> int:
+    try:
+        return int(value)
+    except (TypeError, ValueError):
+        return 0
+
 def _sum_tokens(
     items: Iterable[dict[str, Any]],
     usage_extractor: Callable[[dict[str, Any]], dict[str, Any] | None],
     input_key: str,
 ) -> dict[str, int]:
@@
         usage = usage_extractor(item)
         if not usage:
             continue
-        totals["input_tokens"] += usage.get(input_key, 0)
-        totals["output_tokens"] += usage.get("output_tokens", 0)
-        totals["total_tokens"] += usage.get("total_tokens", 0)
+        totals["input_tokens"] += _to_int_token(usage.get(input_key, 0))
+        totals["output_tokens"] += _to_int_token(usage.get("output_tokens", 0))
+        totals["total_tokens"] += _to_int_token(usage.get("total_tokens", 0))
```
```python
def _build_cost_entry(
    session: Session,
    model: str,
    totals: dict[str, int],
) -> dict[str, Any]:
    """Price aggregated token usage against the model's batch pricing row."""
    estimate = estimate_model_cost(
        session=session,
        provider="openai",
        model_name=model,
        input_tokens=totals["input_tokens"],
        output_tokens=totals["output_tokens"],
        usage_type="batch",
    )
    return {
        "model": model,
        "input_tokens": totals["input_tokens"],
        "output_tokens": totals["output_tokens"],
        "total_tokens": totals["total_tokens"],
        "cost_usd": _cost_usd(estimate),
    }


def _build_response_cost_entry(
    session: Session, model: str, results: list[dict[str, Any]]
) -> dict[str, Any]:
    """Build a response-stage cost entry from parsed evaluation results."""
    totals = _sum_tokens(
        items=results,
        usage_extractor=lambda r: r.get("usage"),
        input_key="input_tokens",
    )
    return _build_cost_entry(session=session, model=model, totals=totals)


def _build_embedding_cost_entry(
    session: Session, model: str, raw_results: list[dict[str, Any]]
) -> dict[str, Any]:
    """Build an embedding-stage cost entry from raw embedding batch output."""
    totals = _sum_tokens(
        items=raw_results,
        usage_extractor=lambda r: r.get("response", {}).get("body", {}).get("usage"),
        input_key="prompt_tokens",
    )
    return _build_cost_entry(session=session, model=model, totals=totals)


def _build_cost_dict(
    response_entry: dict[str, Any] | None,
    embedding_entry: dict[str, Any] | None,
) -> dict[str, Any]:
    """Combine per-stage entries into the `eval_run.cost` payload with a grand total."""
    cost: dict[str, Any] = {}
    total = 0.0

    if response_entry:
        cost["response"] = response_entry
        total += response_entry.get("cost_usd", 0.0)

    if embedding_entry:
        cost["embedding"] = embedding_entry
        total += embedding_entry.get("cost_usd", 0.0)

    cost["total_cost_usd"] = round(total, COST_USD_DECIMALS)
    return cost


def attach_cost(
    session: Session,
    eval_run: EvaluationRun,
    log_prefix: str,
    *,
    response_model: str | None = None,
    response_results: list[dict[str, Any]] | None = None,
    embedding_model: str | None = None,
    embedding_raw_results: list[dict[str, Any]] | None = None,
) -> None:
    """Compute cost for the given stage(s) and attach to `eval_run.cost`, never raising.

    Caller is responsible for persisting `eval_run` afterwards. Either stage's
    previously-computed entry on `eval_run.cost` is preserved when that stage's
    inputs are not supplied, so partial updates never clobber prior data.
    """
    try:
        existing_cost = eval_run.cost or {}

        if response_model is not None and response_results is not None:
            response_entry = _build_response_cost_entry(
                session=session, model=response_model, results=response_results
            )
        else:
            response_entry = existing_cost.get("response")

        if embedding_model is not None and embedding_raw_results is not None:
            embedding_entry = _build_embedding_cost_entry(
                session=session,
                model=embedding_model,
                raw_results=embedding_raw_results,
            )
        else:
            embedding_entry = existing_cost.get("embedding")

        eval_run.cost = _build_cost_dict(
            response_entry=response_entry,
            embedding_entry=embedding_entry,
        )
    except Exception as cost_err:
        logger.warning(
            f"[attach_cost] {log_prefix} Failed to compute cost | {cost_err}"
        )
```