Conversation
…dices

When a transaction mixes batch (v2) and legacy/concurrent (v1) input accounts, the nullifier queue index assignment used the raw position in `input_compressed_accounts` as the write index into `nullifier_queue_indices`. This caused an out-of-bounds panic when a non-batch account appeared between batch accounts (e.g. `[batchA, legacy, batchB, batchA]`). The fix walks `input_compressed_accounts` in order and uses a compact `batch_idx` counter that only advances for accounts with a matching sequence-number entry. Non-batch accounts have no sequence-number entry and are skipped without consuming a slot.
Fetches a configurable number of blocks starting at a given slot, parses every transaction using `event_from_light_transaction`, and prints a structured summary of all Light Protocol events found.

Usage:

    cargo xtask fetch-block-events --start-slot <slot> --network mainnet
    cargo xtask fetch-block-events --start-slot <slot> --network devnet --num-blocks 5
📝 Walkthrough

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes. The changes span multiple file types and require careful attention to the control flow refactor in …
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (inconclusive)
…ier OOB panic

Transaction 3ybts1eFSC7QN6aU4ao6NJCgn7xTbtBVyzeLDZJf9eVN93vHZWupX4TXqHHgV18xf17eit7Uw5T135uabnpToKK4 at slot 407265372 panicked with "index out of bounds: len is 3 but index is 3" in `create_nullifier_queue_indices` when inputs mix batch and legacy trees.

Adds two tests:
- src/regression_test.rs: real mainnet instruction bytes decoded via bs58
- tests/parse_test.rs: synthetic test verifying exact `nullifier_queue_indices` `[6, 3, 7]`

Also adds light-event to sdk-libs/justfile so it runs in CI.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
sdk-libs/event/src/parse.rs (1)
726-754: ⚠️ Potential issue | 🔴 Critical

Skip `read_only` inputs before consuming batch sequence numbers.
`batch_input_accounts.len()` is derived from `insert_into_queues_instruction.nullifiers`, but this helper still walks every `input_compressed_account`. If a transaction mixes a read-only batch input with a writable input from the same batched tree, the read-only entry will still match `input_sequence_numbers`, advance `batch_idx`, and can reproduce the same out-of-bounds write or misassign later queue indices. This is reachable today through the `InvokeCpiWithReadOnly` path.

🐛 Proposed fix
```diff
-    let input_merkle_tree_pubkeys = associated_instructions
-        .executing_system_instruction
-        .input_compressed_accounts
-        .iter()
-        .map(|x| {
-            associated_instructions
-                .executing_system_instruction
-                .accounts[x.merkle_context.merkle_tree_pubkey_index as usize]
-        })
-        .collect::<Vec<_>>();
+    let input_merkle_tree_pubkeys = associated_instructions
+        .executing_system_instruction
+        .input_compressed_accounts
+        .iter()
+        .filter(|input| !input.read_only)
+        .map(|input| {
+            associated_instructions
+                .executing_system_instruction
+                .accounts[input.merkle_context.merkle_tree_pubkey_index as usize]
+        })
+        .collect::<Vec<_>>();
@@
     let mut batch_idx = 0usize;
     for merkle_tree_pubkey in input_merkle_tree_pubkeys.iter() {
         if let Some(seq) = internal_input_sequence_numbers
             .iter_mut()
             .find(|s| s.tree_pubkey == *merkle_tree_pubkey)
         {
+            debug_assert!(batch_idx < nullifier_queue_indices.len());
             nullifier_queue_indices[batch_idx] = seq.seq.into();
             seq.seq += 1;
             batch_idx += 1;
         }
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sdk-libs/event/src/parse.rs` around lines 726 - 754, The loop that assigns sequence numbers walks associated_instructions.executing_system_instruction.input_compressed_accounts but doesn't skip read-only entries, so read-only batch inputs can consume entries from associated_instructions.insert_into_queues_instruction.input_sequence_numbers and corrupt nullifier_queue_indices; update the loop to ignore any input_compressed_account where the account is read_only (or otherwise not a writable batch input) before trying to match and consume a sequence number, and only increment batch_idx and write into nullifier_queue_indices when a non-read-only (writable/batched) account matched and consumed a sequence (refer to input_compressed_accounts, input_sequence_numbers, nullifier_queue_indices, and batch_idx to locate and change the logic).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 7346b81a-ec4d-4566-855b-ec6ed02dad68
⛔ Files ignored due to path filters (5)
- Cargo.lock is excluded by !**/*.lock and included by none
- Cargo.toml is excluded by none and included by none
- xtask/Cargo.toml is excluded by none and included by none
- xtask/src/fetch_block_events.rs is excluded by none and included by none
- xtask/src/main.rs is excluded by none and included by none
📒 Files selected for processing (6)
- sdk-libs/event/Cargo.toml
- sdk-libs/event/src/lib.rs
- sdk-libs/event/src/parse.rs
- sdk-libs/event/src/regression_test.rs
- sdk-libs/event/tests/parse_test.rs
- sdk-libs/justfile
```rust
let result = event_from_light_transaction(&program_ids, &instructions, accounts);
assert!(
    result.is_ok(),
    "event_from_light_transaction failed: {:?}",
    result.err()
);
```
Assert that this fixture actually produces a parsed event.
This still passes on Ok(None), so a future pattern-matching regression could skip create_nullifier_queue_indices entirely and the test would stay green. Please assert Ok(Some(events)) so this fixture keeps exercising the failing path.
✅ Suggested tightening
```diff
-    let result = event_from_light_transaction(&program_ids, &instructions, accounts);
-    assert!(
-        result.is_ok(),
-        "event_from_light_transaction failed: {:?}",
-        result.err()
-    );
+    let events = event_from_light_transaction(&program_ids, &instructions, accounts)
+        .expect("event_from_light_transaction should parse this fixture")
+        .expect("fixture should match a Light event");
+    assert!(!events.is_empty(), "fixture should produce at least one parsed event");
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sdk-libs/event/src/regression_test.rs` around lines 103 - 108, The test
currently only asserts result.is_ok() for the call to
event_from_light_transaction(&program_ids, &instructions, accounts) which still
allows Ok(None); change the assertion to require a parsed event by asserting the
result equals Ok(Some(_)) (or use assert!(matches!(result, Ok(Some(_))))) so the
fixture actually exercises the path that triggers create_nullifier_queue_indices
and fails if no events were parsed.
Summary by CodeRabbit

- Bug Fixes
- Tests
- Chores