⚡️ Speed up function encode_to_base64 by 17%
#65
📄 17% (0.17x) speedup for `encode_to_base64` in `gradio/external_utils.py`

⏱️ Runtime: 109 microseconds → 92.7 microseconds (best of 97 runs)

📝 Explanation and details
The optimization achieves a 17% speedup by reordering the execution logic to avoid expensive operations when they're not needed.
Key optimization: Fast-path for JSON responses
The original code always performed `base64.b64encode(r.content).decode("utf-8")` first, regardless of the response type. The optimized version checks the `content-type` header upfront and handles `application/json` responses directly, without any base64 encoding, since JSON responses already contain the pre-encoded blob. The sketch below illustrates the reordered control flow.
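A minimal sketch of the reordered logic, not the exact diff: it assumes the function receives an httpx/requests `Response` and that JSON responses are a list whose first element holds `content-type` and `blob` keys (an assumption about the payload shape); error handling from the real function is omitted.

```python
import base64


def encode_to_base64(r) -> str:
    content_type = r.headers.get("content-type")
    if content_type == "application/json":
        # Fast path: the JSON body already carries a pre-encoded blob,
        # so base64-encoding r.content is skipped entirely.
        data = r.json()[0]  # assumed shape: [{"content-type": ..., "blob": ...}]
        return f"data:{data['content-type']};base64,{data['blob']}"
    # Slow path: pay for the base64 encoding only when it is needed.
    base64_repr = base64.b64encode(r.content).decode("utf-8")
    if ";base64," in base64_repr:
        # The payload already includes a data prefix.
        return base64_repr
    return f"data:{content_type};base64,{base64_repr}"
```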
Specific changes:

- For `application/json` responses (which appear frequently in the test results), the function now extracts the blob directly from the JSON without encoding `r.content`.
- Eliminated the `new_base64` variable and redundant logic branches.

Performance impact by test type:
Hot path relevance:
Based on the function references, `encode_to_base64` is called from `custom_post_binary` in the Hugging Face model integration pipeline. Since many HF inference endpoints return JSON responses with pre-encoded blobs, this optimization will significantly benefit common ML model inference workloads where the function processes JSON responses in a loop or in batch-processing scenarios. The 17% overall speedup represents the weighted average across different response types, with JSON responses (likely common in HF API usage) seeing the most substantial gains.
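As a hypothetical illustration of that call site (the parameter names `url`, `headers`, and `data`, and the use of `httpx`, are assumptions, not the actual gradio code):

```python
import httpx


def custom_post_binary(url: str, headers: dict, data: bytes) -> str:
    # Post raw binary data to a Hugging Face inference endpoint.
    r = httpx.post(url, headers=headers, content=data)
    # HF endpoints frequently answer with application/json carrying a
    # pre-encoded blob, so this call now takes the fast path above.
    return encode_to_base64(r)
```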
✅ Correctness verification report:
🌀 Generated Regression Tests and Runtime
To edit these changes, run `git checkout codeflash/optimize-encode_to_base64-mhws3q0o` and push.