
Conversation

@madisoncarter1234
Contributor

Addresses #282

Summary

  • Parallelizes ECDSA sender recovery using rayon
  • Adds sender_recovery_duration metric
  • All tests pass

What's included

✅ Parallel sender recovery (matches upstream reth approach)
✅ Timing metric for observability
✅ Proper error handling preserved

What's not included

⚠️ Benchmarks (infra was removed from main - can be added in follow-up)

@cb-heimdall
Collaborator

cb-heimdall commented Dec 31, 2025

🟡 Heimdall Review Status

| Requirement | Status |
| --- | --- |
| Reviews | 🟡 0/1 |

Denominator calculation:
- 1 if user is bot: 0
- 1 if user is external: 0
- 2 if repo is sensitive: 0
- From .codeflow.yml: 1

Additional review requirements:
- Max: 0
  - From CODEOWNERS: 0
  - Global minimum: 0
- Max: 1
  - 1 if commit is unverified: 1
- Sum: 2

@madisoncarter1234 force-pushed the fix/parallel-sender-recovery branch from a2763d7 to e839384 on January 1, 2026 at 16:57
  Addresses base#282 - reduces sender recovery overhead during flashblock processing.

  ## Changes
  - Add rayon for parallel iteration over transactions
  - Batch ECDSA sender recovery upfront using par_iter()
  - Add sender_recovery_duration metric for observability
  - Preserve cache lookup behavior (check prev_pending_blocks first)
  - Maintain original error handling semantics

  ## Technical Approach
  Follows upstream reth pattern (paradigmxyz/reth#20169) - recover all
  senders in parallel before the sequential transaction processing loop.

  ## Testing
  All 10 existing flashblocks tests pass.

  ## Note
  Benchmarks not included as benchmark infrastructure was removed from main.
  The new sender_recovery_duration metric can be used for production measurement.
@madisoncarter1234 force-pushed the fix/parallel-sender-recovery branch from e839384 to 41f678f on January 2, 2026 at 23:56
Comment on lines 420 to 423

```rust
// Sender already recovered in parallel phase
let sender = *sender_by_hash
    .get(&tx_hash)
    .expect("sender must exist from parallel recovery");
```
Contributor

To avoid the `.expect` statement here, we could just iterate over the txs in parallel as you've done earlier, but instead of constructing a map, produce a list of `(Transaction, Sender)` tuples so the sender is available directly in this for loop.

@refcell refcell left a comment

This looks great, but it looks like Haardik wanted to first get data and test performance before implementing parallel sender recovery in #282. Maybe we could do that in this PR or a follow-on?

@madisoncarter1234
Contributor Author

> This looks great, but it looks like Haardik wanted to first get data and test performance before implementing parallel sender recovery in #282. Maybe we could do that in this PR or a follow-on?

Switched from the HashMap lookup to `Vec<(&OpTxEnvelope, Address)>` so the sender is directly available in the loop; no `.expect()` needed.

Re: benchmarking, I can take a stab at it in this PR!

@refcell
Contributor

refcell commented Jan 6, 2026

> This looks great, but it looks like Haardik wanted to first get data and test performance before implementing parallel sender recovery in #282. Maybe we could do that in this PR or a follow-on?
>
> Switched from HashMap lookup to Vec<(&OpTxEnvelope, Address)> so the sender is directly available in the loop, no .expect() needed.
>
> Re: benchmarking, i can take a stab at it in this PR!

Awesome work thank you @madisoncarter1234 !!! And yes if you could add benchmarks here that would be very helpful.
