
Conversation

@seyeong-han
Contributor

This PR adds a --benchmark flag to the Whisper runner to display performance metrics such as model load time, inference time, and tokens per second. It also enables EXECUTORCH_ENABLE_LOGGING in the llm-release CMake preset to ensure these benchmark logs are visible in release builds.

Key Changes:

  • Added --benchmark flag to main.cpp to log performance stats (a rough sketch of the timing logic follows after this list).
  • Updated README.md with instructions on using the benchmark flag and enabling logging.
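
As a rough illustration of what the flag reports, here is a minimal sketch of the timing logic; load_model(), run_inference(), and the simplified flag parsing are placeholders rather than the runner's actual API, and the real main.cpp also surfaces the stats.h report visible in the sample log below.

```cpp
// Illustrative sketch only: load_model() and run_inference() are placeholders
// and the flag parsing is simplified; the real runner's API may differ.
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <cstring>

static double now_ms() {
  using namespace std::chrono;
  return duration<double, std::milli>(
             steady_clock::now().time_since_epoch())
      .count();
}

int main(int argc, char** argv) {
  bool benchmark = false;
  for (int i = 1; i < argc; ++i) {
    if (std::strcmp(argv[i], "--benchmark") == 0) {
      benchmark = true;
    }
  }

  const double load_start = now_ms();
  // load_model();  // placeholder: load the exported Whisper program
  const double load_end = now_ms();

  const double infer_start = now_ms();
  int64_t generated_tokens = 0;
  // generated_tokens = run_inference();  // placeholder: encoder + decode loop
  const double infer_end = now_ms();

  if (benchmark) {
    const double load_ms = load_end - load_start;
    const double infer_ms = infer_end - infer_start;
    const double tok_per_s =
        infer_ms > 0 ? generated_tokens / (infer_ms / 1000.0) : 0.0;
    std::printf(
        "=== Performance Summary === Model Load: %.2f ms | Inference: %.2f ms "
        "| Tokens: %lld | Speed: %.2f tok/s\n",
        load_ms,
        infer_ms,
        static_cast<long long>(generated_tokens),
        tok_per_s);
  }
  return 0;
}
```

Example run with --benchmark: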
I 00:00:02.773705 executorch:llm_runner_helper.cpp:54] Loaded json tokenizer
I 00:00:02.773769 executorch:main.cpp:114] Model load time: 2114.00 ms (2.11 seconds)
I 00:00:02.773772 executorch:main.cpp:124] Using decoder_start_token_id=50258
I 00:00:02.773774 executorch:runner.cpp:140] Preprocessed features shape: [1, 80, 3000]
I 00:00:02.773775 executorch:runner.cpp:149] RSS after loading model: 0.000000 MiB (0 if unsupported)
I 00:00:02.773779 executorch:runner.cpp:186] Converting audio features from Float to BFloat16. Before converting, first value = -0.577638
I 00:00:02.773819 executorch:runner.cpp:195] Conversion complete, first value = -0.581041
I 00:00:03.218393 executorch:runner.cpp:221] Encoder output shape: [1, 1500, 768]
I 00:00:03.218405 executorch:runner.cpp:225] Encoder first value: -1.154269
<|en|><|transcribe|><|notimestamps|> This week, I traveled to Chicago to deliver my final farewell address to the nation, following in the tradition of Presidents before me. It was an opportunity to say thank you. Whether we've seen eye to eye or rarely agreed at all, my conversations with you, the American people, in living rooms and schools,<|endoftext|>
PyTorchObserver {"prompt_tokens":0,"generated_tokens":68,"model_load_start_ms":1765389619730,"model_load_end_ms":1765389621844,"inference_start_ms":1765389621844,"inference_end_ms":1765389623358,"prompt_eval_end_ms":1765389622289,"first_token_ms":1765389622328,"aggregate_sampling_time_ms":0,"SCALING_FACTOR_UNITS_PER_SECOND":1000}
I 00:00:04.287897 executorch:stats.h:143]       Prompt Tokens: 0    Generated Tokens: 68
I 00:00:04.287899 executorch:stats.h:149]       Model Load Time:                2.114000 (seconds)
I 00:00:04.287901 executorch:stats.h:159]       Total inference time:           1.514000 (seconds)               Rate:  44.914135 (tokens/second)
I 00:00:04.287903 executorch:stats.h:167]               Prompt evaluation:      0.445000 (seconds)               Rate:  0.000000 (tokens/second)
I 00:00:04.287905 executorch:stats.h:178]               Generated 68 tokens:    1.069000 (seconds)               Rate:  63.610851 (tokens/second)
I 00:00:04.287906 executorch:stats.h:186]       Time to first generated token:  0.484000 (seconds)
I 00:00:04.287907 executorch:stats.h:193]       Sampling time over 68 tokens:   0.000000 (seconds)
I 00:00:04.287933 executorch:main.cpp:150] Inference time: 1514.00 ms (1.51 seconds)
I 00:00:04.287936 executorch:main.cpp:151] Generated tokens: 69
I 00:00:04.287937 executorch:main.cpp:152] Tokens per second: 45.57
I 00:00:04.287938 executorch:main.cpp:159] === Performance Summary === Model Load: 2114.00 ms | Inference: 1514.00 ms | Tokens: 69 | Speed: 45.57 tok/s
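
For context on the numbers above: the stats.h report divides the 68 generated tokens by the full inference window (68 / 1.514 s ≈ 44.91 tok/s) and by the generation-only window (68 / 1.069 s ≈ 63.61 tok/s), while the main.cpp summary divides 69 tokens by the same 1.514 s window (69 / 1.514 s ≈ 45.57 tok/s). The one-token difference between the two counters presumably comes from which special tokens are counted; that is an inference from this log, not something stated elsewhere in the PR.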

@pytorch-bot

pytorch-bot bot commented Dec 10, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16182

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures, 1 Unrelated Failure

As of commit 9df111b with merge base 0d61efc:

NEW FAILURES - The following jobs have failed:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the CLA Signed label on Dec 10, 2025
@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.
