refactor: reduce clone pressure in compressible trackers #2344

sergeytimoshin wants to merge 15 commits into `main`
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
📝 Walkthrough

Refactors several compressor structs to use …

Changes
Sequence Diagram(s)

sequenceDiagram
participant EpochManager
participant BackgroundRuntime
participant NewTreeWorker
participant Compressor
EpochManager->>BackgroundRuntime: spawn_blocking(worker task with Arc<Self>)
BackgroundRuntime->>NewTreeWorker: start per-tree worker
NewTreeWorker->>Compressor: dispatch compress_batch / dispatch compression tasks
Compressor-->>NewTreeWorker: return CompressionOutcome (ok/err/panic)
NewTreeWorker-->>EpochManager: register completion / notify
EpochManager->>NewTreeWorker: request_shutdown (on global shutdown)
NewTreeWorker->>BackgroundRuntime: join worker (with timeout)
BackgroundRuntime-->>EpochManager: report join result (joined/detached/timeout)
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
Suggested labels
Suggested reviewers
Poem
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Force-pushed: 579d45a → 6d927c2 → ae11676, then 3498eb9 → 0e56512 → 9e3e448 → 986107b → d7783b8 → 64ada13
Actionable comments posted: 20
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
sdk-libs/client/src/local_test_validator.rs (1)
71-97: 🧹 Nitpick | 🔵 Trivial

Command string built via concatenation may misbehave with special characters.
The command is assembled by directly interpolating `program_id`, `program_path`, `upgrade_authority`, and `validator_args` into a shell string. If any value contains spaces, quotes, or shell metacharacters (e.g., `; rm -rf /`), the shell will interpret them. While this is test infrastructure and the inputs are typically controlled, consider either:

- Escaping/quoting values (e.g., using the `shell-escape` crate), or
- Passing arguments directly to `Command` without going through `sh -c`.

Example for option 2 (partial):
```rust
// Instead of sh -c with a concatenated string:
Command::new(&format!("{}/cli/test_bin/run", project_root.trim()))
    .arg("test-validator")
    .args(["--sbf-program", &sbf_program.0, &sbf_program.1])
    // ... etc
```

This avoids shell interpretation entirely.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sdk-libs/client/src/local_test_validator.rs` around lines 71 - 97, The loop builds a shell command string by concatenating unescaped values (sbf_programs, upgradeable_programs' program_id/program_path/upgrade_authority, validator_args), which risks shell interpretation of special characters; change to constructing the process invocation with std::process::Command (e.g., the run binary created via format!("{}/cli/test_bin/run", project_root.trim()) and the "test-validator" subcommand) and add each flag/argument with .arg or .args instead of string concatenation, mapping each sbf_program to .args(&["--sbf-program", &sbf_program.0, &sbf_program.1]), each upgradeable_program to .args(&["--upgradeable-program", &upgradeable_program.program_id, &upgradeable_program.program_path, &upgradeable_program.upgrade_authority"]), conditionally add "--skip-prover" and "--use-surfpool" with .arg, and append config.validator_args via .args to avoid passing through a shell; alternatively, if you must build a shell string, apply proper escaping using a crate like shell-escape for the same fields.

forester/src/epoch_manager.rs (2)
4775-4788: ⚠️ Potential issue | 🔴 Critical

Shutdown currently skips the `NewTreeWorker` cleanup path.

When the shutdown branch wins this `tokio::select!`, `epoch_manager.run()` is dropped rather than allowed to reach its epilogue. That means `shutdown_new_tree_workers()` never runs, so any already spawned `NewTreeWorker` threads keep their own Tokio runtimes alive and can continue submitting work after the service reports shutdown. Please plumb the signal into `EpochManager::run()` or expose an explicit async `shutdown()` and await it here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@forester/src/epoch_manager.rs` around lines 4775 - 4788, The shutdown branch currently wins the tokio::select! and drops epoch_manager.run(), preventing NewTreeWorker cleanup (shutdown_new_tree_workers()) from running; modify the code so the shutdown signal is propagated into EpochManager rather than canceling it — either add a cancellation-aware entry point (e.g., change EpochManager::run() to accept a shutdown signal/Receiver) or implement an explicit async EpochManager::shutdown() that you call and await here when shutdown wins; ensure you reference EpochManager::run(), EpochManager::shutdown() (if added), and the shutdown_new_tree_workers()/NewTreeWorker cleanup path so the manager completes its epilogue before returning.
558-574: ⚠️ Potential issue | 🟠 Major

Abort discovery before draining tracked new-tree workers.

`tree_discovery_handle` is still live while `shutdown_new_tree_workers()` is taking and cancelling the current worker list. A tree discovered in that window can register a fresh `NewTreeWorker` after the vector has been drained, so it escapes shutdown and keeps processing past the manager exit path.

Suggested fix
```diff
-        monitor_handle.abort();
-        self.shutdown_new_tree_workers(NEW_TREE_WORKER_SHUTDOWN_TIMEOUT)
-            .await;
+        monitor_handle.abort();
+        drop(_guard);
+        self.shutdown_new_tree_workers(NEW_TREE_WORKER_SHUTDOWN_TIMEOUT)
+            .await;
```

Also applies to: 678-681
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@forester/src/epoch_manager.rs` around lines 558 - 574, The scopeguard closure currently aborts current_previous_handle, tree_discovery_handle, and balance_check_handle together which allows tree_discovery_handle to remain live while shutdown_new_tree_workers() drains and cancels workers; change the shutdown sequence so tree_discovery_handle is aborted before any call that drains or cancels NewTreeWorker instances (i.e., ensure tree_discovery_handle.abort() runs prior to invoking shutdown_new_tree_workers() or draining the worker vector), update the closure or surrounding teardown logic to call tree_discovery_handle.abort() first and then perform shutdown_new_tree_workers(), and apply the same ordering fix at the other teardown site referenced around lines 678-681 so discovery cannot register new workers after the drain starts.

forester/src/processor/v2/strategy/address.rs (1)
170-205: ⚠️ Potential issue | 🔴 Critical

The snapshot indexing is misaligned with the data arrays.
In `fetch_queue_data`, `data_start_index` is set to `tree_next_insertion_index()` (line 170). However, `build_proof_job` calls `get_batch_snapshot(start, batch_end)` where `start` is a relative offset (line 273). The `get_batch_snapshot` implementation treats this offset as a direct index into the data arrays, using `data.addresses.get(start..end)` (line 502 in helpers.rs).

Since the data arrays are populated from `StreamingAddressQueue::start_index()` (the queue window's absolute start), but `data_start_index` is now `tree_next_insertion_index()` (a different value), the snapshot lookup will access the wrong address slice. This breaks the overlap guard (`reconcile_alignment`) and can cause replay of already-inserted addresses.

Either:
- Rebase `get_batch_snapshot` calls to use `start + (streaming_queue.start_index() - streaming_queue.tree_next_insertion_index())`, or
- Keep `data_start_index = streaming_queue.start_index()` and handle tree alignment separately.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@forester/src/processor/v2/strategy/address.rs` around lines 170 - 205, The data_start_index returned by fetch_queue_data is set to streaming_queue.tree_next_insertion_index(), which misaligns with how get_batch_snapshot interprets the start offset (it indexes directly into the data arrays populated from StreamingAddressQueue::start_index()); update fetch_queue_data to set data_start_index = streaming_queue.start_index() (preserving tree_next_insertion_index use for staging tree creation), or alternatively, keep data_start_index as-is but adjust build_proof_job calls to rebase snapshot offsets before calling get_batch_snapshot (compute adjusted_start = start + (streaming_queue.start_index() - streaming_queue.tree_next_insertion_index()) and pass adjusted_start), and ensure reconcile_alignment still compares indices using the same base (use StreamingAddressQueue::start_index() when checking overlap).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@forester/src/epoch_manager.rs`:
- Around line 914-994: This spawns an unbounded OS thread + dedicated Tokio
runtime per new tree (inside the block creating cancel_tx/completion_tx and
thread_handle) which can oversubscribe the host; instead, route new-tree work
through the existing tree-processing concurrency limiter used by normal trees
(the same semaphore/pool guarded by max_parallel_tree_workers()/the struct's
worker semaphore) and reuse the existing worker runtime rather than building a
new Tokio runtime per tree. Change the code that spawns a thread and builds a
runtime for each new tree (the closure calling process_queue and using
cancel_rx/completion_rx) to acquire the common semaphore/pool before scheduling
work, submit the processing task onto the shared runtime/task queue (or spawn
onto existing worker threads), and only keep the cancellation/completion
channels and register_new_tree_worker/NewTreeWorker for lifecycle tracking
without creating a dedicated runtime per tree.
In `@forester/tests/legacy/batched_state_async_indexer_test.rs`:
- Line 90: Remove the duplicate prover startup: the call to spawn_prover() after
init(Some(LightValidatorConfig { enable_prover: true, ... })) should be deleted
because init already starts the prover when enable_prover is true; locate the
test in batched_state_async_indexer_test.rs and remove the explicit
spawn_prover() invocation to avoid binding the same port twice and the
subsequent unwrap() panic.
In `@prover/client/src/proof_types/batch_address_append/proof_inputs.rs`:
- Around line 219-230: The code truncates new_element_values to batch_len but
not the related low_element_* slices; to make intent explicit and avoid indexing
surprises, create truncated slice bindings for each low_element slice (e.g.,
low_element_values, low_element_next_values, low_element_indices,
low_element_next_indices, and any other per-element input slices) using the same
..batch_len slice pattern before the loop/processing that uses them, and then
use those truncated bindings everywhere instead of indexing the original
vectors; keep the existing
staged_changelog/staged_indexed_changelog/staged_sparse_merkle_tree logic
unchanged.
In `@prover/client/src/prover.rs`:
- Around line 85-86: The current health-check boolean combines TCP read success
and a curl fallback which can mask non-200 HTTP responses; change the logic in
the health check block (the bytes_read check in prover.rs) so that when
bytes_read > 0 you return the TCP result directly by calling
has_http_ok_status(&response[..bytes_read]) and only call
health_check_once_with_curl(timeout) when the TCP path failed (e.g., bytes_read
== 0 or earlier connect/write/read errors); update the conditional controlling
those calls (the bytes_read branch) to implement this explicit fall-through
behavior.
- Around line 175-202: The local binding spawn_result captures the result of the
async block but the block returns () so the variable is unused; remove the
unnecessary spawn_result binding and simply await the async block inline (or use
let _ = ...await) where the Command::new(&prover_path) / wait_for_prover_health
/ monitor_prover_child logic runs, then keep IS_LOADING.store(false,
Ordering::Release) afterwards—this cleans up the dead variable while preserving
the startup, health-check, and child-monitor behavior.
In `@prover/client/tests/batch_address_append.rs`:
- Around line 109-124: The call to
get_batch_address_append_circuit_inputs(...).unwrap() in the warmup phase
returns circuit inputs that are currently discarded; update the call site (the
invocation of get_batch_address_append_circuit_inputs with warmup_* args) to
make the intent explicit by either assigning the result to a clearly named
placeholder (e.g. let _warmup_inputs = ...) or by adding a concise comment
before the call explaining "result intentionally discarded — only side effects
on sparse_merkle_tree, changelog, and indexed_changelog are needed for warmup",
keeping the unwrap as-is; reference the symbols
get_batch_address_append_circuit_inputs, sparse_merkle_tree, changelog, and
indexed_changelog when making the change.
- Line 128: The variable remaining_values is misleading because it binds the
full slice &new_element_values[..]; rename remaining_values to batch_values (or
remove the alias and use new_element_values directly) and update every usage of
remaining_values in the test to the new name so the intent matches the data
(references: remaining_values, new_element_values in the batch_address_append
test).
In `@sdk-libs/client/src/indexer/photon_indexer.rs`:
- Around line 1145-1162: The second, redundant validation can be removed: inside
the block that computes expected_siblings and expected_total (using
STATE_MERKLE_TREE_HEIGHT and STATE_MERKLE_TREE_CANOPY_DEPTH) keep only the first
check that compares x.proof.len() to expected_total and the subsequent
computation of proof_len if needed, but drop the impossible second check that
compares proof_len to expected_siblings (the IndexerError::InvalidParameters
branch referencing proof_len vs expected_siblings) since it can never fail when
the first check passes.
- Around line 1693-1695: The assignment of tree_next_insertion_index currently
uses aq.start_index (a pagination offset) which is wrong; replace that value
with the actual next leaf index from the tree state—e.g., call
address_tree_bundle.right_most_index() (or derive from queue_indices/queue state
if Photon API is missing the field) when setting tree_next_insertion_index in
the struct initializer where start_index, tree_next_insertion_index, root_seq
are set; ensure tree_next_insertion_index represents the sparse tree insertion
point used to initialize staging trees rather than the pagination offset.
In `@sdk-libs/client/src/indexer/types/queue.rs`:
- Around line 198-203: The lookup-based proof reconstruction uses the unstable
is_multiple_of API when computing sibling_pos in reconstruct_proof_with_lookup;
replace the unstable call with a stable parity check (e.g., test pos % 2 == 0 or
pos & 1 == 0) when deciding sibling_pos, and ensure the same replacement is
applied to the sibling index computation inside reconstruct_proof_with_lookup so
behavior remains identical but uses stable Rust methods.
- Around line 274-286: Replace the unstable is_multiple_of call in the test
helper's loop with a stable check using modulus: instead of
pos.is_multiple_of(2) use pos % 2 == 0 (or equivalent) when computing
sibling_pos inside the loop that constructs node_index (the block that inserts
into nodes via nodes.entry(node_index).or_insert_with(||
hash_from_node(node_index))) and ensure the same change is applied to any other
occurrences in build_queue_data so pos is updated correctly before pos /= 2.
- Around line 175-177: Add a doc comment to the proof_count method explaining
that it returns the number of complete proof entries by taking the minimum of
addresses.len() and low_element_indices.len(); document that if those two
vectors diverge the method reports only the count of fully-paired elements
(i.e., the number of valid address→low_element_index pairs) so callers
understand the invariant and why min(...) is used; mention the related fields
addresses and low_element_indices in the comment for clarity.
In `@sdk-libs/client/src/interface/load_accounts.rs`:
- Around line 164-165: The loop that iterates over
cold_atas.chunks(MAX_ATAS_PER_IX).zip(ata_proofs) causes MissingAtaContext.index
to be the local chunk index instead of the global ATA index; fix build_ata_load
error reporting by computing and preserving the global index: iterate with
enumerate over chunks (e.g., for (chunk_idx, (chunk, proof)) in
cold_atas.chunks(MAX_ATAS_PER_IX).zip(ata_proofs).enumerate()), compute
chunk_start = chunk_idx * MAX_ATAS_PER_IX, and when constructing any
MissingAtaContext (inside build_ata_load or just before calling it) set index =
chunk_start + local_index (where local_index is the position within chunk) so
the reported MissingAtaContext.index refers to the original cold_atas index.
- Around line 135-139: The three concurrent pipelines (fetch_proof_batches,
fetch_proofs_batched, fetch_individual_proofs) each buffer up to
PROOF_FETCH_CONCURRENCY, creating up to 3x the intended concurrent
get_validity_proof calls; change the implementation to use a single shared
concurrency limiter instead of independent buffers. Add a shared limiter (e.g.,
a tokio::sync::Semaphore or an Arc<Limiter>) and pass it into
fetch_proof_batches, fetch_proofs_batched, and fetch_individual_proofs; have
each proof-fetching operation acquire a permit before calling get_validity_proof
and release it after, so the global concurrency across all pipelines is bounded
by PROOF_FETCH_CONCURRENCY (or another configured global limit) rather than
per-pipeline. Ensure constructors/signatures for those functions accept the
shared limiter and use it wherever get_validity_proof is awaited.
In `@sdk-libs/client/src/local_test_validator.rs`:
- Around line 119-128: The spawned validator process (created with
Command::new("sh")... and assigned to the dropped variable _child when
use_surfpool is false) is intentionally detached and lacks any cleanup path;
change the implementation to retain and expose the Child handle so callers/tests
can terminate the validator: either store the Child in a shared structure (e.g.,
a static Mutex/OnceCell) or return it from the function that starts the
validator (so callers receive the Child and can call kill/await), and update any
callers/tests to call .kill()/.wait() or drop appropriately during teardown;
ensure the symbol names to change are the creation site where _child is assigned
and the surrounding function that gates on use_surfpool.
- Around line 60-62: The code silently no-ops when get_project_root() returns
None; update the containing function (e.g., the function that calls
get_project_root() in local_test_validator.rs) to return a Result and propagate
an error instead of doing nothing, or at minimum log a clear warning and return
an Err variant so the caller knows the validator failed to start; specifically,
replace the current early-return/no-op on None from get_project_root() with a
branch that calls processLogger.error/warn (or uses anyhow::bail) and returns
Err(...) from the function that constructs and spawns the command (referencing
get_project_root and the function that formats/executes the "cli/test_bin/run
test-validator" command) so failures are observable by callers/tests.
In `@sdk-libs/client/src/utils.rs`:
- Around line 36-46: The code builds light_path from the git rev-parse output
without checking Command::output() success, so on git failure you may produce an
invalid path or panic; update the block around the
std::process::Command::new("git") call to check output.status.success() (or
handle Err from .output()) and return None (or an appropriate error) if the
command failed, only parsing stdout into light_protocol_toplevel and
constructing light_path when the status is success; reference the existing
Command invocation, the light_protocol_toplevel variable and the light_path
PathBuf construction to find and fix the logic.
In `@sdk-libs/program-test/src/indexer/test_indexer.rs`:
- Around line 2593-2607: The retry loop currently uses the `?` operator when
reading the response body and when decoding via `build_compressed_proof`, which
causes early exit on transient failures; change the
`response_result.text().await.map_err(...) ?` and
`build_compressed_proof(&body)?` sites to explicit error handling: capture the
errors into `last_error` (e.g., set last_error =
Some(IndexerError::CustomError(...)) or wrap the build error similarly) and
continue the loop instead of returning, and only return
`Ok(ValidityProofWithContext { accounts: account_proof_inputs, addresses:
address_proof_inputs, proof: ... })` when both reading and decoding succeed;
ensure you reference `response_result`, `build_compressed_proof`,
`ValidityProofWithContext`, `account_proof_inputs`, `address_proof_inputs`, and
`last_error` when implementing the changes.
In `@sdk-libs/sdk-types/src/interface/program/decompression/processor.rs`:
- Line 129: The public field token_seeds uses the confusing type
Vec<Vec<Vec<u8>>>; define a small alias or newtype (e.g., type SeedsPerSigner =
Vec<Vec<u8>>; or pub struct SignerSeedGroups(pub Vec<SeedsPerSigner>);) to
represent “one seed group per signer PDA” and replace token_seeds:
Vec<Vec<Vec<u8>>> with token_seeds: SeedsPerSignerGroups (or the chosen name)
and update the other occurrence referenced (the second use at the later
location) to use the new alias/newtype so the public API is readable and less
error-prone.
In `@sdk-libs/sdk-types/src/interface/program/decompression/token.rs`:
- Around line 151-153: The code currently only pushes the vault PDA seed group
into ctx.token_seeds (ctx.token_seeds.push(seeds.iter().map(|seed|
seed.to_vec()).collect() in the decompression token flow), but when
token::owner_seeds ≠ token::seeds derive_owner() produces a different owner PDA
and processor.rs later marks the owner account (account_metas[data.owner as
usize + transfer2_packed_start].is_signer = true) without providing signer seeds
for it; fix by including the owner PDA seeds as a separate signer seed group
when owner_seeds differs from seeds (i.e., push token::owner_seeds mapped to
Vec<u8> into ctx.token_seeds in addition to the vault seeds, making sure the
order of seed groups matches how account_metas are constructed for the Transfer2
CPI), or alternatively change the signer-marking logic to only mark the owner as
signer when the owner PDA equals the vault PDA.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: d0c1415d-1a73-4dc1-9635-3733788e8dcd
⛔ Files ignored due to path filters (9)
- `Cargo.lock` is excluded by `!**/*.lock` and included by none
- `forester-utils/src/address_staging_tree.rs` is excluded by none and included by none
- `program-tests/utils/src/actions/legacy/instructions/transfer2.rs` is excluded by none and included by none
- `program-tests/utils/src/e2e_test_env.rs` is excluded by none and included by none
- `program-tests/utils/src/mock_batched_forester.rs` is excluded by none and included by none
- `program-tests/utils/src/test_batch_forester.rs` is excluded by none and included by none
- `sdk-tests/csdk-anchor-full-derived-test/tests/basic_test.rs` is excluded by none and included by none
- `sparse-merkle-tree/src/indexed_changelog.rs` is excluded by none and included by none
- `sparse-merkle-tree/tests/indexed_changelog.rs` is excluded by none and included by none
📒 Files selected for processing (48)
- `forester/src/compressible/ctoken/compressor.rs`
- `forester/src/compressible/ctoken/state.rs`
- `forester/src/compressible/ctoken/types.rs`
- `forester/src/compressible/mint/compressor.rs`
- `forester/src/compressible/mint/state.rs`
- `forester/src/compressible/mint/types.rs`
- `forester/src/compressible/pda/compressor.rs`
- `forester/src/epoch_manager.rs`
- `forester/src/forester_status.rs`
- `forester/src/metrics.rs`
- `forester/src/priority_fee.rs`
- `forester/src/processor/v2/helpers.rs`
- `forester/src/processor/v2/processor.rs`
- `forester/src/processor/v2/proof_cache.rs`
- `forester/src/processor/v2/proof_worker.rs`
- `forester/src/processor/v2/strategy/address.rs`
- `forester/tests/e2e_test.rs`
- `forester/tests/legacy/batched_state_async_indexer_test.rs`
- `forester/tests/legacy/test_utils.rs`
- `forester/tests/test_batch_append_spent.rs`
- `forester/tests/test_compressible_ctoken.rs`
- `forester/tests/test_compressible_mint.rs`
- `forester/tests/test_indexer_interface.rs`
- `forester/tests/test_utils.rs`
- `prover/client/src/constants.rs`
- `prover/client/src/errors.rs`
- `prover/client/src/helpers.rs`
- `prover/client/src/proof.rs`
- `prover/client/src/proof_client.rs`
- `prover/client/src/proof_types/batch_address_append/proof_inputs.rs`
- `prover/client/src/proof_types/batch_append/proof_inputs.rs`
- `prover/client/src/proof_types/batch_update/proof_inputs.rs`
- `prover/client/src/prover.rs`
- `prover/client/tests/batch_address_append.rs`
- `sdk-libs/client/src/indexer/photon_indexer.rs`
- `sdk-libs/client/src/indexer/types/proof.rs`
- `sdk-libs/client/src/indexer/types/queue.rs`
- `sdk-libs/client/src/interface/initialize_config.rs`
- `sdk-libs/client/src/interface/instructions.rs`
- `sdk-libs/client/src/interface/load_accounts.rs`
- `sdk-libs/client/src/interface/pack.rs`
- `sdk-libs/client/src/local_test_validator.rs`
- `sdk-libs/client/src/utils.rs`
- `sdk-libs/program-test/src/indexer/test_indexer.rs`
- `sdk-libs/sdk-types/src/interface/account/token_seeds.rs`
- `sdk-libs/sdk-types/src/interface/program/decompression/pda.rs`
- `sdk-libs/sdk-types/src/interface/program/decompression/processor.rs`
- `sdk-libs/sdk-types/src/interface/program/decompression/token.rs`
```rust
let new_element_values = &new_element_values[..batch_len];
let mut staged_changelog = changelog.clone();
let mut staged_indexed_changelog = indexed_changelog.clone();
let mut staged_sparse_merkle_tree = sparse_merkle_tree.clone();
let initial_changelog_len = staged_changelog.len();
let mut new_root = [0u8; 32];
let mut low_element_circuit_merkle_proofs = Vec::with_capacity(batch_len);
let mut new_element_circuit_merkle_proofs = Vec::with_capacity(batch_len);
let mut patched_low_element_next_values = Vec::with_capacity(batch_len);
let mut patched_low_element_next_indices = Vec::with_capacity(batch_len);
let mut patched_low_element_values = Vec::with_capacity(batch_len);
let mut patched_low_element_indices = Vec::with_capacity(batch_len);
```
🧹 Nitpick | 🔵 Trivial
Consider truncating all slices consistently for clarity.
You truncate new_element_values at line 219 but access the low_element_* slices by index. While safe (you validated lengths), truncating all of them would make the code self-documenting:
```rust
let low_element_values = &low_element_values[..batch_len];
let low_element_next_values = &low_element_next_values[..batch_len];
// ... etc
```

This makes it immediately clear to readers that all slices are working with exactly `batch_len` elements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@prover/client/src/proof_types/batch_address_append/proof_inputs.rs` around
lines 219 - 230, The code truncates new_element_values to batch_len but not the
related low_element_* slices; to make intent explicit and avoid indexing
surprises, create truncated slice bindings for each low_element slice (e.g.,
low_element_values, low_element_next_values, low_element_indices,
low_element_next_indices, and any other per-element input slices) using the same
..batch_len slice pattern before the loop/processing that uses them, and then
use those truncated bindings everywhere instead of indexing the original
vectors; keep the existing
staged_changelog/staged_indexed_changelog/staged_sparse_merkle_tree logic
unchanged.
```rust
bytes_read > 0
    && (has_http_ok_status(&response[..bytes_read]) || health_check_once_with_curl(timeout))
```
🧹 Nitpick | 🔵 Trivial
Health check logic may produce unexpected behavior.
The condition `bytes_read > 0 && (has_http_ok_status(...) || health_check_once_with_curl(...))` means:
- If the TCP read succeeds but returns a non-200 status, we still call curl as a fallback
- If curl succeeds, we return `true` even though the TCP path got a different status

This could mask issues where the server is returning errors. Consider:
🔧 Suggested simplification

```diff
-            bytes_read > 0
-                && (has_http_ok_status(&response[..bytes_read]) || health_check_once_with_curl(timeout))
+            bytes_read > 0 && has_http_ok_status(&response[..bytes_read])
```

If the TCP health check succeeds with a valid 200 response, use that result directly. Only fall back to curl when TCP operations fail (connect/write/read errors), which you already handle above.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@prover/client/src/prover.rs` around lines 85 - 86, The current health-check
boolean combines TCP read success and a curl fallback which can mask non-200
HTTP responses; change the logic in the health check block (the bytes_read check
in prover.rs) so that when bytes_read > 0 you return the TCP result directly by
calling has_http_ok_status(&response[..bytes_read]) and only call
health_check_once_with_curl(timeout) when the TCP path failed (e.g., bytes_read
== 0 or earlier connect/write/read errors); update the conditional controlling
those calls (the bytes_read branch) to implement this explicit fall-through
behavior.
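The fall-through behavior the comment asks for can be modeled as a pure decision function. Both `is_healthy` and this minimal `has_http_ok_status` are hypothetical stand-ins for illustration, not the prover client's actual API:

```rust
// Minimal stand-in for the status check named in the review.
fn has_http_ok_status(response: &[u8]) -> bool {
    response.starts_with(b"HTTP/1.1 200") || response.starts_with(b"HTTP/1.0 200")
}

/// `tcp_read` is `Some(bytes)` when connect/write/read all succeeded, `None` on
/// any transport error. `curl_fallback` models `health_check_once_with_curl`.
fn is_healthy(tcp_read: Option<&[u8]>, curl_fallback: impl Fn() -> bool) -> bool {
    match tcp_read {
        // TCP path produced a response: use its status verbatim, never mask it.
        Some(response) if !response.is_empty() => has_http_ok_status(response),
        // TCP path failed outright: only now is the curl fallback consulted.
        _ => curl_fallback(),
    }
}

fn main() {
    // A 500 over TCP is reported as unhealthy even if curl would say OK.
    assert!(!is_healthy(Some(b"HTTP/1.1 500 Internal Server Error"), || true));
    // A clean 200 needs no fallback.
    assert!(is_healthy(Some(b"HTTP/1.1 200 OK"), || false));
    // Transport failure defers to curl.
    assert!(is_healthy(None, || true));
    println!("ok");
}
```

The design point: the fallback is reachable only from the transport-error branch, so a non-200 status can never be overridden.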
```diff
         let spawn_result = async {
             let mut child = Command::new(&prover_path)
                 .arg("start-prover")
                 .spawn()
-                .expect("Failed to start prover process");
-            let _ = command.wait_with_output();
+                .unwrap_or_else(|error| panic!("Failed to start prover process: {error}"));

-            let health_result = health_check(120, 1).await;
-            if health_result {
-                info!("Prover started successfully");
-            } else {
-                panic!("Failed to start prover, health check failed.");
+            match wait_for_prover_health(
+                STARTUP_HEALTH_CHECK_RETRIES,
+                Duration::from_secs(1),
+                &mut child,
+            )
+            .await
+            {
+                Ok(()) => {
+                    monitor_prover_child(child);
+                    info!("Prover started successfully");
+                }
+                Err(error) => {
+                    let _ = child.kill();
+                    let _ = child.wait();
+                    panic!("Failed to start prover: {error}");
+                }
             }
         }
         .await;

         IS_LOADING.store(false, Ordering::Release);
         spawn_result
```
Unused spawn_result variable.
The async block at lines 175-199 returns () (unit type), so spawn_result at line 202 is a no-op expression that does nothing. This appears to be leftover from a refactor.
🧹 Suggested cleanup

```diff
-        let spawn_result = async {
+        async {
             let mut child = Command::new(&prover_path)
                 .arg("start-prover")
                 .spawn()
                 .unwrap_or_else(|error| panic!("Failed to start prover process: {error}"));

             match wait_for_prover_health(
                 STARTUP_HEALTH_CHECK_RETRIES,
                 Duration::from_secs(1),
                 &mut child,
             )
             .await
             {
                 Ok(()) => {
                     monitor_prover_child(child);
                     info!("Prover started successfully");
                 }
                 Err(error) => {
                     let _ = child.kill();
                     let _ = child.wait();
                     panic!("Failed to start prover: {error}");
                 }
             }
         }
         .await;

         IS_LOADING.store(false, Ordering::Release);
-        spawn_result
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@prover/client/src/prover.rs` around lines 175 - 202, The local binding
spawn_result captures the result of the async block but the block returns () so
the variable is unused; remove the unnecessary spawn_result binding and simply
await the async block inline (or use let _ = ...await) where the
Command::new(&prover_path) / wait_for_prover_health / monitor_prover_child logic
runs, then keep IS_LOADING.store(false, Ordering::Release) afterwards—this
cleans up the dead variable while preserving the startup, health-check, and
child-monitor behavior.
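The retry shape behind `wait_for_prover_health` can be sketched generically. The helper below is an illustrative stand-in under stated assumptions (an injected probe closure, no real sleeping), not the prover's actual implementation:

```rust
// Hypothetical sketch of a bounded health-check retry loop: poll a probe up to
// `retries` times and return an error message instead of panicking mid-loop.
fn wait_for_health<F: FnMut() -> bool>(retries: u32, mut probe: F) -> Result<(), String> {
    for attempt in 1..=retries {
        if probe() {
            return Ok(());
        }
        // A real implementation would sleep between attempts
        // (e.g. Duration::from_secs(1)); omitted to keep the sketch testable.
        let _ = attempt;
    }
    Err(format!("health check failed after {retries} attempts"))
}

fn main() {
    // Succeeds on the third probe.
    let mut calls = 0;
    let result = wait_for_health(5, || {
        calls += 1;
        calls == 3
    });
    assert_eq!(result, Ok(()));
    assert_eq!(calls, 3);

    // Exhausts all retries and reports failure instead of panicking.
    let result = wait_for_health(2, || false);
    assert!(result.is_err());
    println!("ok");
}
```

Returning `Result` (rather than panicking inside the loop) is what lets the caller kill and reap the child before propagating the failure, as in the suggested cleanup above.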
```diff
         let _child = Command::new("sh")
             .arg("-c")
             .arg(path)
             .arg(command)
             .stdin(Stdio::null())
             .stdout(Stdio::null())
             .stderr(Stdio::null())
             .spawn()
             .expect("Failed to start server process");
-        std::mem::drop(child);
+        // Intentionally detaching the spawned child; the caller only waits
+        // for the validator services to become available.
```
🧹 Nitpick | 🔵 Trivial
Detached child process has no cleanup path.
When use_surfpool is false, the spawned process is intentionally detached (the handle is assigned to _child and dropped). This means:
- If the test crashes, the validator keeps running as an orphan.
- There's no way for the caller to terminate the validator gracefully.
The comment acknowledges this, but for test robustness you might consider storing the Child handle in a shared structure (e.g., a static or returned handle) so tests can clean up on teardown.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sdk-libs/client/src/local_test_validator.rs` around lines 119 - 128, The
spawned validator process (created with Command::new("sh")... and assigned to
the dropped variable _child when use_surfpool is false) is intentionally
detached and lacks any cleanup path; change the implementation to retain and
expose the Child handle so callers/tests can terminate the validator: either
store the Child in a shared structure (e.g., a static Mutex/OnceCell) or return
it from the function that starts the validator (so callers receive the Child and
can call kill/await), and update any callers/tests to call .kill()/.wait() or
drop appropriately during teardown; ensure the symbol names to change are the
creation site where _child is assigned and the surrounding function that gates
on use_surfpool.
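The registry idea from the comment can be sketched with a shared `OnceLock<Mutex<Vec<_>>>`. To keep the example runnable, a plain `u32` pid stands in for `std::process::Child`; all names here are hypothetical:

```rust
use std::sync::{Mutex, OnceLock};

// Shared registry of spawned children so test teardown can clean them up
// instead of leaking orphans when a test crashes.
static SPAWNED: OnceLock<Mutex<Vec<u32>>> = OnceLock::new();

fn registry() -> &'static Mutex<Vec<u32>> {
    SPAWNED.get_or_init(|| Mutex::new(Vec::new()))
}

fn register_child(pid: u32) {
    registry().lock().unwrap().push(pid);
}

/// Drain the registry on teardown; a real version would call
/// `child.kill()` and `child.wait()` on each handle here.
fn teardown() -> usize {
    registry().lock().unwrap().drain(..).count()
}

fn main() {
    register_child(101);
    register_child(102);
    assert_eq!(teardown(), 2);
    assert_eq!(teardown(), 0); // idempotent once drained
    println!("ok");
}
```

The alternative mentioned in the comment (returning the `Child` from the spawning function) avoids global state entirely, at the cost of touching every caller.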
```rust
let output = std::process::Command::new("git")
    .arg("rev-parse")
    .arg("--show-toplevel")
    .output()
    .expect("Failed to get top-level directory");
let light_protocol_toplevel = std::str::from_utf8(&output.stdout)
    .ok()?
    .trim_end_matches("\r\n")
    .trim_end_matches('\n')
    .to_string();
let light_path = PathBuf::from(format!("{}/target/deploy/", light_protocol_toplevel));
```
Handle git rev-parse failure explicitly before building the path.
This branch does not check output.status.success(). On failure, it can still derive an invalid path (e.g., "/target/deploy/"), and .expect(...) can panic despite the function returning Option<PathBuf>.
Suggested fix

```diff
     let output = std::process::Command::new("git")
         .arg("rev-parse")
         .arg("--show-toplevel")
         .output()
-        .expect("Failed to get top-level directory");
+        .ok()?;
+    if !output.status.success() {
+        return None;
+    }
     let light_protocol_toplevel = std::str::from_utf8(&output.stdout)
         .ok()?
         .trim_end_matches("\r\n")
         .trim_end_matches('\n')
         .to_string();
-    let light_path = PathBuf::from(format!("{}/target/deploy/", light_protocol_toplevel));
+    let light_path = PathBuf::from(light_protocol_toplevel).join("target/deploy");
     Some(light_path)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sdk-libs/client/src/utils.rs` around lines 36 - 46, The code builds
light_path from the git rev-parse output without checking Command::output()
success, so on git failure you may produce an invalid path or panic; update the
block around the std::process::Command::new("git") call to check
output.status.success() (or handle Err from .output()) and return None (or an
appropriate error) if the command failed, only parsing stdout into
light_protocol_toplevel and constructing light_path when the status is success;
reference the existing Command invocation, the light_protocol_toplevel variable
and the light_path PathBuf construction to find and fix the logic.
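The suggested fix can be factored into a pure helper that is trivial to unit-test. `deploy_path` below is an illustrative sketch: it takes the command's success flag and stdout bytes directly instead of a real `std::process::Output` (which cannot be constructed portably in tests):

```rust
use std::path::PathBuf;

// Only build the deploy path when the git command reported success and stdout
// parses as UTF-8; never derive a bogus "/target/deploy" from empty output.
fn deploy_path(success: bool, stdout: &[u8]) -> Option<PathBuf> {
    if !success {
        return None;
    }
    let toplevel = std::str::from_utf8(stdout)
        .ok()?
        .trim_end_matches("\r\n")
        .trim_end_matches('\n')
        .to_string();
    if toplevel.is_empty() {
        return None;
    }
    // `join` avoids the string-format footgun the review calls out.
    Some(PathBuf::from(toplevel).join("target/deploy"))
}

fn main() {
    assert_eq!(
        deploy_path(true, b"/home/user/light-protocol\n"),
        Some(PathBuf::from("/home/user/light-protocol/target/deploy"))
    );
    // A failed command or empty stdout yields None, never an invalid path.
    assert_eq!(deploy_path(false, b"/home/user/light-protocol\n"), None);
    assert_eq!(deploy_path(true, b"\n"), None);
    println!("ok");
}
```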
```rust
match response_result {
    Ok(response_result) => {
        let status = response_result.status();
        let body = response_result.text().await.map_err(|error| {
            IndexerError::CustomError(format!(
                "failed to read prover response body: {error}"
            ))
        })?;

        if status.is_success() {
            return Ok(ValidityProofWithContext {
                accounts: account_proof_inputs,
                addresses: address_proof_inputs,
                proof: build_compressed_proof(&body)?.into(),
            });
```
Don't bypass the retry loop on body/proof decode failures.
Line 2596 and Line 2606 still use ? inside the retry loop, so a transient truncated response or malformed proof payload exits immediately instead of consuming the remaining attempts. These failures should update last_error and continue, just like send/status failures.
🛠️ Suggested fix

```diff
                 match response_result {
                     Ok(response_result) => {
                         let status = response_result.status();
-                        let body = response_result.text().await.map_err(|error| {
-                            IndexerError::CustomError(format!(
-                                "failed to read prover response body: {error}"
-                            ))
-                        })?;
-
-                        if status.is_success() {
-                            return Ok(ValidityProofWithContext {
-                                accounts: account_proof_inputs,
-                                addresses: address_proof_inputs,
-                                proof: build_compressed_proof(&body)?.into(),
-                            });
-                        }
-
-                        let body_preview: String = body.chars().take(512).collect();
-                        last_error = format!(
-                            "prover returned HTTP {status} for validity proof request: {body_preview}"
-                        );
+                        match response_result.text().await {
+                            Ok(body) => {
+                                if status.is_success() {
+                                    match build_compressed_proof(&body) {
+                                        Ok(proof) => {
+                                            return Ok(ValidityProofWithContext {
+                                                accounts: account_proof_inputs,
+                                                addresses: address_proof_inputs,
+                                                proof: proof.into(),
+                                            });
+                                        }
+                                        Err(error) => {
+                                            let body_preview: String =
+                                                body.chars().take(512).collect();
+                                            last_error = format!(
+                                                "failed to parse prover proof response: {error}; body: {body_preview}"
+                                            );
+                                        }
+                                    }
+                                } else {
+                                    let body_preview: String = body.chars().take(512).collect();
+                                    last_error = format!(
+                                        "prover returned HTTP {status} for validity proof request: {body_preview}"
+                                    );
+                                }
+                            }
+                            Err(error) => {
+                                last_error =
+                                    format!("failed to read prover response body: {error}");
+                            }
+                        }
                     }
                     Err(error) => {
                         last_error =
                             format!("failed to contact prover for validity proof: {error}");
                     }
```
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sdk-libs/program-test/src/indexer/test_indexer.rs` around lines 2593 - 2607,
The retry loop currently uses the `?` operator when reading the response body
and when decoding via `build_compressed_proof`, which causes early exit on
transient failures; change the `response_result.text().await.map_err(...) ?` and
`build_compressed_proof(&body)?` sites to explicit error handling: capture the
errors into `last_error` (e.g., set last_error =
Some(IndexerError::CustomError(...)) or wrap the build error similarly) and
continue the loop instead of returning, and only return
`Ok(ValidityProofWithContext { accounts: account_proof_inputs, addresses:
address_proof_inputs, proof: ... })` when both reading and decoding succeed;
ensure you reference `response_result`, `build_compressed_proof`,
`ValidityProofWithContext`, `account_proof_inputs`, `address_proof_inputs`, and
`last_error` when implementing the changes.
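The intended control flow (record `last_error` and keep retrying instead of bailing out with `?`) distills to this generic sketch. The `step` closure stands in for the whole send/read-body/decode-proof pipeline, and all names are illustrative:

```rust
// Every fallible step inside the loop records `last_error` and continues, so
// only a fully successful result exits early.
fn fetch_with_retries(
    attempts: u32,
    mut step: impl FnMut(u32) -> Result<String, String>,
) -> Result<String, String> {
    let mut last_error = String::from("no attempts made");
    for attempt in 0..attempts {
        match step(attempt) {
            Ok(proof) => return Ok(proof),   // only a decoded result returns
            Err(error) => last_error = error, // transient failure: keep retrying
        }
    }
    Err(last_error)
}

fn main() {
    // A truncated body on early attempts no longer aborts the whole loop.
    let result = fetch_with_retries(3, |attempt| {
        if attempt < 2 {
            Err(format!("failed to read prover response body on attempt {attempt}"))
        } else {
            Ok("proof".to_string())
        }
    });
    assert_eq!(result, Ok("proof".to_string()));

    // Exhausted attempts surface the last recorded error.
    let result = fetch_with_retries(2, |_| Err("HTTP 502".to_string()));
    assert_eq!(result, Err("HTTP 502".to_string()));
    println!("ok");
}
```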
```diff
     pub in_tlv: Option<Vec<Vec<ExtensionInstructionData>>>,
     #[cfg(feature = "token")]
-    pub token_seeds: Vec<Vec<u8>>,
+    pub token_seeds: Vec<Vec<Vec<u8>>>,
```
🧹 Nitpick | 🔵 Trivial
Consider naming the new signer-seed shape.
Vec<Vec<Vec<u8>>> is hard to parse in public fields. A small alias or newtype for “one seed group per signer PDA” would make this contract much easier to follow and reduce accidental flattening in future call sites.
Also applies to: 299-299
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sdk-libs/sdk-types/src/interface/program/decompression/processor.rs` at line
129, The public field token_seeds uses the confusing type Vec<Vec<Vec<u8>>>;
define a small alias or newtype (e.g., type SeedsPerSigner = Vec<Vec<u8>>; or
pub struct SignerSeedGroups(pub Vec<SeedsPerSigner>);) to represent “one seed
group per signer PDA” and replace token_seeds: Vec<Vec<Vec<u8>>> with
token_seeds: SeedsPerSignerGroups (or the chosen name) and update the other
occurrence referenced (the second use at the later location) to use the new
alias/newtype so the public API is readable and less error-prone.
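A minimal sketch of the suggested alias/newtype, assuming the names `SeedsPerSigner` and `SignerSeedGroups` (both hypothetical, as in the prompt above):

```rust
// One seed group per signer PDA, made visible at the type level.
type SeedsPerSigner = Vec<Vec<u8>>; // the seed slices for a single PDA signer

#[derive(Default, Debug)]
struct SignerSeedGroups(Vec<SeedsPerSigner>); // one entry per signer PDA

impl SignerSeedGroups {
    /// Copy one borrowed seed group into an owned entry, mirroring the
    /// `seeds.iter().map(|seed| seed.to_vec()).collect()` pattern in the PR.
    fn push_group(&mut self, seeds: &[&[u8]]) {
        self.0.push(seeds.iter().map(|seed| seed.to_vec()).collect());
    }

    fn signer_count(&self) -> usize {
        self.0.len()
    }
}

fn main() {
    let mut groups = SignerSeedGroups::default();
    groups.push_group(&[b"vault".as_slice(), &[42u8]]); // first vault PDA
    groups.push_group(&[b"vault_auth".as_slice()]); // a second, distinct signer PDA
    assert_eq!(groups.signer_count(), 2);
    assert_eq!(groups.0[0].len(), 2); // two seed slices in the first group
    println!("ok");
}
```

The nesting that `Vec<Vec<Vec<u8>>>` hides becomes explicit: outer = signers, middle = seeds of one signer, inner = bytes of one seed.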
```rust
// Push one signer seed group per vault PDA for the later Transfer2 CPI.
ctx.token_seeds
    .push(seeds.iter().map(|seed| seed.to_vec()).collect());
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== PackedLightAccountVariantTrait surface =="
rg -n -C3 'trait\s+PackedLightAccountVariantTrait|fn\s+into_in_token_data|fn\s+seed_refs_with_bump|fn\s+derive_owner' --type rust
echo
echo "== Places that populate MultiInputTokenDataWithContext.owner =="
rg -n -C4 'MultiInputTokenDataWithContext|owner\s*:' --type rust
echo
echo "== Token owner seed generation / macro expansion inputs =="
rg -n -C4 'owner_seeds|token::owner_seeds|derive_owner\(' --type rust

Repository: Lightprotocol/light-protocol
Length of output: 50386
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Full context around lines 140-160 in decompression/token.rs ==="
sed -n '140,160p' sdk-libs/sdk-types/src/interface/program/decompression/token.rs
echo
echo "=== Function signature and broader scope (lines 100-170) ==="
sed -n '100,170p' sdk-libs/sdk-types/src/interface/program/decompression/token.rs
echo
echo "=== How packed.derive_owner() is used ==="
rg -n 'derive_owner\(' sdk-libs/sdk-types/src/interface/program/decompression/token.rs -A2 -B2
echo
echo "=== Into in_token_data usage and return value ==="
rg -n 'into_in_token_data' sdk-libs/sdk-types/src/interface/program/decompression/token.rs -A3 -B1

Repository: Lightprotocol/light-protocol
Length of output: 3779
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Check macro generation for owner_seeds vs token_seeds ==="
rg -n 'owner_seeds|token::owner_seeds' sdk-libs/macros/src/ -B3 -A3 | head -100
echo
echo "=== Check how derive_owner is implemented in variant.rs macro ==="
sed -n '499,502p' sdk-libs/macros/src/light_pdas/program/variant_enum.rs
echo
echo "=== Look for owner_derivation construction ==="
rg -n 'owner_derivation' sdk-libs/macros/src/ -B5 -A5 | head -80
echo
echo "=== Check actual implementation of into_in_token_data in token_seeds.rs ==="
sed -n '265,296p' sdk-libs/sdk-types/src/interface/account/token_seeds.rs

Repository: Lightprotocol/light-protocol
Length of output: 12860
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Check how owner_derivation is constructed for owner_seeds case ==="
sed -n '419,450p' sdk-libs/macros/src/light_pdas/program/variant_enum.rs
echo
echo "=== Check what token_data.owner is and how it's set ==="
rg -n 'struct.*TokenData|pub owner:' sdk-libs/sdk-types/src/interface/account/token_seeds.rs -A2 -B2
echo
echo "=== Check downstream Transfer2 usage in processor.rs ==="
rg -n 'Transfer2|invoke_signed' sdk-libs/sdk-types/src/interface/program/decompression/processor.rs -B3 -A3

Repository: Lightprotocol/light-protocol
Length of output: 4626
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Full owner_derivation code for both backends ==="
sed -n '419,453p' sdk-libs/macros/src/light_pdas/program/variant_enum.rs
echo
echo "=== Confirm: seed_refs_with_bump returns vault seeds, not owner seeds ==="
rg -n 'fn seed_refs_with_bump' sdk-libs/macros/src/light_pdas/program/variant_enum.rs -A10 | head -30
echo
echo "=== How in_token_data.owner is used at line 630 in processor.rs ==="
sed -n '625,655p' sdk-libs/sdk-types/src/interface/program/decompression/processor.rs

Repository: Lightprotocol/light-protocol
Length of output: 4339
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Check if owner seeds are stored anywhere separately ==="
rg -n 'owner_seeds|push.*owner' sdk-libs/sdk-types/src/interface/program/decompression/token.rs -B3 -A3
echo
echo "=== Full context: what gets pushed to ctx.token_seeds vs what gets used in Transfer2 ==="
sed -n '148,160p' sdk-libs/sdk-types/src/interface/program/decompression/token.rs
echo
echo "=== Confirm what data.owner actually is (the u8 index) ==="
sed -n '49,56p' sdk-libs/sdk-types/src/interface/account/token_seeds.rs
echo
echo "=== Check if TokenData owner field stores the derived owner pubkey or just an index ==="
rg -n 'struct Token |pub owner:' sdk-libs/sdk-types/src/ --type rust | grep -A1 'pub struct Token'

Repository: Lightprotocol/light-protocol
Length of output: 1307
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Find where token_data is created/populated in decompression/token.rs ==="
sed -n '1,50p' sdk-libs/sdk-types/src/interface/program/decompression/token.rs
echo
echo "=== Search for Token struct definition ==="
fd -e rs . sdk-libs/sdk-types/src | xargs rg -l 'pub struct Token[^D]' | head -5
echo
echo "=== Look at PackedTokenData and how owner field is set ==="
rg -n 'owner:.*packed\|owner.*index' sdk-libs/sdk-types/src/interface/program/decompression/ -B2 -A2

Repository: Lightprotocol/light-protocol
Length of output: 1903
🏁 Script executed:
#!/bin/bash
set -euo pusixo pipefail
echo "=== Find Token struct definition ==="
rg -n 'pub struct Token' sdk-libs/sdk-types/src/ --type rust -A5
echo
echo "=== How is token_data.owner set - look at into_in_token_data implementations ==="
rg -n 'into_in_token_data' sdk-libs/ --type rust -A15 | head -100
echo
echo "=== Look at create_accounts to understand owner field ==="
sed -n '1,100p' sdk-libs/sdk-types/src/interface/accounts/create_accounts.rs

Repository: Lightprotocol/light-protocol
Length of output: 123
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Find Token struct definition ==="
rg -n 'pub struct Token' sdk-libs/sdk-types/src/ --type rust -A5
echo
echo "=== How is token_data.owner set - look at into_in_token_data implementations ==="
rg -n 'into_in_token_data' sdk-libs/ --type rust -A15 | head -100
echo
echo "=== Look at create_accounts to understand owner field ==="
sed -n '1,100p' sdk-libs/sdk-types/src/interface/accounts/create_accounts.rs

Repository: Lightprotocol/light-protocol
Length of output: 10408
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Check if there's a separate owner_seeds passed to Transfer2 ==="
rg -n 'signer_seed|owner_seed' sdk-libs/sdk-types/src/interface/program/decompression/processor.rs -B2 -A2
echo
echo "=== Full Transfer2 invoke_signed call ==="
sed -n '640,670p' sdk-libs/sdk-types/src/interface/program/decompression/processor.rs
echo
echo "=== What accounts are passed and how is owner marked ==="
sed -n '560,635p' sdk-libs/sdk-types/src/interface/program/decompression/processor.rs | head -60

Repository: Lightprotocol/light-protocol
Length of output: 4180
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Check how owner is set in the packed token data vs what derives owner ==="
rg -n 'owner.*index|owner.*as usize' sdk-libs/sdk-types/src/interface/program/decompression/processor.rs -B3 -A3
echo
echo "=== Verify: does the owner account at data.owner actually need to sign? ==="
rg -n 'is_signer.*true' sdk-libs/sdk-types/src/interface/program/decompression/processor.rs -B5 -A2 | head -40
echo
echo "=== Critical: Check an example where owner_seeds might differ from token::seeds ==="
fd -e rs . sdk-tests/ | xargs rg -l 'token::owner_seeds' | head -3

Repository: Lightprotocol/light-protocol
Length of output: 1638
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Example: Check single-token-test for owner_seeds ==="
rg -n 'token::owner_seeds|#\[light_account' sdk-tests/single-token-test/src/lib.rs -B2 -A5
echo
echo "=== Check pinocchio test ==="
rg -n 'token::owner_seeds|#\[light_account' sdk-tests/pinocchio-light-program-test/src/lib.rs -B2 -A5 | head -80

Repository: Lightprotocol/light-protocol
Length of output: 2629
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Verify: in the macro, does owner_seeds actually create a different PDA? ==="
sed -n '419,453p' sdk-libs/macros/src/light_pdas/program/variant_enum.rs
echo
echo "=== Final confirmation: seed_refs_with_bump vs derive_owner use different seeds ==="
echo "Token seeds example: VAULT_SEED + mint (from token::seeds)"
echo "Owner seeds example: VAULT_AUTH_SEED (from token::owner_seeds)"
echo "These derive DIFFERENT PDAs!"
echo
echo "=== Check: are the stored signer seeds in ctx.token_seeds ever updated to include owner_seeds? ==="
rg -n 'token_seeds|owner_seeds' sdk-libs/sdk-types/src/interface/program/decompression/ --type rust

Repository: Lightprotocol/light-protocol
Length of output: 3757
The seed group stored does not include owner seeds—mark the owner PDA as signer only if it matches the vault PDA.
When token::owner_seeds differs from token::seeds, derive_owner() derives a separate PDA. Line 152 stores only the vault PDA seeds in ctx.token_seeds, but line 631 in processor.rs marks account_metas[data.owner as usize + transfer2_packed_start].is_signer = true. If the owner account is a different PDA (which happens when token::owner_seeds ≠ token::seeds), the Transfer2 CPI will fail because there are no signer seeds for it. Either pass owner_seeds separately to invoke_signed, or ensure the marked owner account is actually the vault PDA itself.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sdk-libs/sdk-types/src/interface/program/decompression/token.rs` around lines
151 - 153, The code currently only pushes the vault PDA seed group into
ctx.token_seeds (ctx.token_seeds.push(seeds.iter().map(|seed|
seed.to_vec()).collect() in the decompression token flow), but when
token::owner_seeds ≠ token::seeds derive_owner() produces a different owner PDA
and processor.rs later marks the owner account (account_metas[data.owner as
usize + transfer2_packed_start].is_signer = true) without providing signer seeds
for it; fix by including the owner PDA seeds as a separate signer seed group
when owner_seeds differs from seeds (i.e., push token::owner_seeds mapped to
Vec<u8> into ctx.token_seeds in addition to the vault seeds, making sure the
order of seed groups matches how account_metas are constructed for the Transfer2
CPI), or alternatively change the signer-marking logic to only mark the owner as
signer when the owner PDA equals the vault PDA.
30f6544 to a9b23b9 Compare
a9b23b9 to bdb7850 Compare
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
forester/src/epoch_manager.rs (1)
570-586: ⚠️ Potential issue | 🔴 Critical

Stop producer tasks before draining `new_tree_workers` to avoid leaked workers on shutdown.
At Line 709 you start shutdown-draining workers, but Line 570’s guard (which aborts `discover_trees_periodically` / other producers) is only dropped at function exit. A producer can still register a new worker after the drain starts, leaving it unmanaged during shutdown.

🔧 Suggested ordering fix

```diff
-        let _guard = scopeguard::guard(
+        let background_tasks_guard = scopeguard::guard(
             (
                 current_previous_handle,
                 tree_discovery_handle,
                 balance_check_handle,
             ),
             |(h2, h3, h4)| {
@@
-        monitor_handle.abort();
-        self.shutdown_new_tree_workers(NEW_TREE_WORKER_SHUTDOWN_TIMEOUT)
+        monitor_handle.abort();
+        drop(background_tasks_guard);
+        self.shutdown_new_tree_workers(NEW_TREE_WORKER_SHUTDOWN_TIMEOUT)
             .await;
         result
```

Also applies to: 707-710
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@forester/src/epoch_manager.rs` around lines 570 - 586, The scopeguard that aborts producer tasks (the guard capturing current_previous_handle, tree_discovery_handle, balance_check_handle which aborts discover_trees_periodically and other producers) runs only at function exit, so producers can spawn new workers while you later drain new_tree_workers; to fix, ensure producers are stopped before draining: explicitly abort or drop the producer handles (current_previous_handle.abort(), tree_discovery_handle.abort(), balance_check_handle.abort() or drop the guard) immediately prior to the code that drains new_tree_workers (the drain at/around the previous Line 709 region), so no new workers can be registered during the drain; keep the scopeguard or a new short-lived guard only for any remaining cleanup that must run at exit.
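The ordering constraint above (stop producers before draining workers) can be modeled without the `scopeguard` crate. Everything below is an illustrative stand-in: a `Drop` impl plays the role of `scopeguard::guard`, and an event log verifies that producers are aborted strictly before the drain runs.

```rust
use std::cell::RefCell;

thread_local! {
    // Records the shutdown sequence so the ordering can be asserted.
    static EVENTS: RefCell<Vec<&'static str>> = RefCell::new(Vec::new());
}

// Stand-in for the scopeguard that aborts producer tasks on drop.
struct ProducerGuard;

impl Drop for ProducerGuard {
    fn drop(&mut self) {
        EVENTS.with(|e| e.borrow_mut().push("producers_aborted"));
    }
}

// Stand-in for shutdown_new_tree_workers.
fn drain_workers() {
    EVENTS.with(|e| e.borrow_mut().push("workers_drained"));
}

fn shutdown() -> Vec<&'static str> {
    EVENTS.with(|e| e.borrow_mut().clear());
    let background_tasks_guard = ProducerGuard;
    // ... epoch processing would run here ...
    drop(background_tasks_guard); // stop producers first,
    drain_workers(); // then drain: no late registrations possible
    EVENTS.with(|e| e.borrow().clone())
}

fn main() {
    assert_eq!(shutdown(), vec!["producers_aborted", "workers_drained"]);
    println!("ok");
}
```

The explicit `drop` is the whole fix: leaving the guard to fall out of scope at function exit would reverse the two events.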
♻️ Duplicate comments (3)
prover/client/tests/batch_address_append.rs (2)
127-127: 🧹 Nitpick | 🔵 TrivialVariable name slightly misleading.
remaining_valuestakes the full slice&new_element_values[..], which includes all 10 elements—not a "remainder" after warmup. Consider renaming tobatch_valuesor usingnew_element_valuesdirectly for clarity.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prover/client/tests/batch_address_append.rs` at line 127, The variable remaining_values is misleading because it is assigned the full slice &new_element_values[..] (all 10 elements) rather than a true "remainder"; rename remaining_values to batch_values (or use new_element_values directly) wherever it's declared and used to accurately reflect it contains the full batch, and update references in the test (e.g., the binding named remaining_values and any subsequent uses) to the new name to improve clarity.
108-123: 🧹 Nitpick | 🔵 Trivial
Warmup result intentionally discarded.
The return value from
The return value from
get_batch_address_append_circuit_inputsduring warmup is.unwrap()ed but not stored. This is intentional—the warmup's purpose is to mutatesparse_merkle_tree,changelog, andindexed_changelogto simulate prior insertions. The circuit inputs themselves aren't needed.Consider adding a brief comment to make this intent explicit, e.g.:
```rust
// Warmup: mutates sparse_merkle_tree, changelog, indexed_changelog;
// returned circuit inputs are intentionally discarded.
```
Verify each finding against the current code and only fix it if needed. In `@prover/client/tests/batch_address_append.rs` around lines 108 - 123, Add a short inline comment where get_batch_address_append_circuit_inputs::<{ DEFAULT_BATCH_ADDRESS_TREE_HEIGHT as usize }>(...) is called and its result .unwrap() is discarded to make the intent explicit: state that this call is a warmup which mutates sparse_merkle_tree, changelog, and indexed_changelog and that the returned circuit inputs are intentionally ignored. Reference the call to get_batch_address_append_circuit_inputs and the mutated variables sparse_merkle_tree, changelog, and indexed_changelog in the comment.forester/src/epoch_manager.rs (1)
943-953: ⚠️ Potential issue | 🟠 Major
Per-tree dedicated runtime creation still bypasses bounded worker orchestration.
Line 945 + Line 950 still create one OS thread plus a dedicated Tokio runtime per discovered tree. Under bursty tree discovery, this can oversubscribe host resources and degrade slot-time reliability; route this path through the same bounded worker model used in active work processing.
Also applies to: 950-954
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@forester/src/epoch_manager.rs` around lines 943 - 953, The current code spawns a new OS thread and builds a dedicated Tokio runtime inside thread_handle (using std::thread::spawn and tokio::runtime::Builder in the closure that references tree_schedule.tree_accounts.merkle_tree, run_id/thread_run_id) which bypasses the bounded worker pool; instead, remove the per-tree std::thread::spawn + tokio::runtime::Builder usage and submit the per-tree work onto the existing bounded worker/orchestrator used for active work processing (use the shared worker pool API the project already uses to schedule tasks or spawn blocking work), passing tree_schedule.tree_accounts.merkle_tree and run_id into that task so we don't create a new runtime or OS thread per discovered tree.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@prover/client/src/proof_types/batch_address_append/json.rs`:
- Around line 22-23: Add the missing LowElementNextIndices field to the
BatchAddressAppendParametersJSON schema so the Go server accepts and forwards
the Rust client's lowElementNextIndices JSON key; update the struct to include a
LowElementNextIndices []string `json:"lowElementNextIndices"` field (and any
corresponding export/serialization logic), then ensure the marshal/unmarshal
helpers and the code that assembles circuit inputs (e.g., wherever
BatchAddressAppendParametersJSON is converted to circuit parameters) propagate
LowElementNextIndices into the circuit input mapping so the Rust
low_element_next_indices values are preserved end-to-end.
In `@prover/client/src/proof.rs`:
- Around line 72-83: The function deserialize_hex_string_to_be_bytes currently
parses the entire hex string into a BigUint before enforcing the 32-byte limit;
preemptively validate trimmed_str length instead to avoid costly parsing of huge
inputs: after computing trimmed_str (from strip_prefix calls) check the hex
digit length and return ProverClientError::InvalidHexString if trimmed_str.len()
> 64 (or if it represents more than 64 hex chars), optionally also
handling/normalizing an odd-length hex string, and only then call
num_bigint::BigUint::from_str_radix and proceed with to_bytes_be().
---
Outside diff comments:
In `@forester/src/epoch_manager.rs`:
- Around line 570-586: The scopeguard that aborts producer tasks (the guard
capturing current_previous_handle, tree_discovery_handle, balance_check_handle
which aborts discover_trees_periodically and other producers) runs only at
function exit, so producers can spawn new workers while you later drain
new_tree_workers; to fix, ensure producers are stopped before draining:
explicitly abort or drop the producer handles (current_previous_handle.abort(),
tree_discovery_handle.abort(), balance_check_handle.abort() or drop the guard)
immediately prior to the code that drains new_tree_workers (the drain at/around
the previous Line 709 region), so no new workers can be registered during the
drain; keep the scopeguard or a new short-lived guard only for any remaining
cleanup that must run at exit.
---
Duplicate comments:
In `@forester/src/epoch_manager.rs`:
- Around line 943-953: The current code spawns a new OS thread and builds a
dedicated Tokio runtime inside thread_handle (using std::thread::spawn and
tokio::runtime::Builder in the closure that references
tree_schedule.tree_accounts.merkle_tree, run_id/thread_run_id) which bypasses
the bounded worker pool; instead, remove the per-tree std::thread::spawn +
tokio::runtime::Builder usage and submit the per-tree work onto the existing
bounded worker/orchestrator used for active work processing (use the shared
worker pool API the project already uses to schedule tasks or spawn blocking
work), passing tree_schedule.tree_accounts.merkle_tree and run_id into that task
so we don't create a new runtime or OS thread per discovered tree.
In `@prover/client/tests/batch_address_append.rs`:
- Line 127: The variable remaining_values is misleading because it is assigned
the full slice &new_element_values[..] (all 10 elements) rather than a true
"remainder"; rename remaining_values to batch_values (or use new_element_values
directly) wherever it's declared and used to accurately reflect it contains the
full batch, and update references in the test (e.g., the binding named
remaining_values and any subsequent uses) to the new name to improve clarity.
- Around line 108-123: Add a short inline comment where
get_batch_address_append_circuit_inputs::<{ DEFAULT_BATCH_ADDRESS_TREE_HEIGHT as
usize }>(...) is called and its result .unwrap() is discarded to make the intent
explicit: state that this call is a warmup which mutates sparse_merkle_tree,
changelog, and indexed_changelog and that the returned circuit inputs are
intentionally ignored. Reference the call to
get_batch_address_append_circuit_inputs and the mutated variables
sparse_merkle_tree, changelog, and indexed_changelog in the comment.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 9d48f8bf-b69c-4a55-bc20-837c6ac8df18
⛔ Files ignored due to path filters (5)
- `program-tests/utils/src/mock_batched_forester.rs` is excluded by none and included by none
- `sdk-tests/sdk-anchor-test/programs/sdk-anchor-test/tests/read_only.rs` is excluded by none and included by none
- `sdk-tests/sdk-anchor-test/programs/sdk-anchor-test/tests/test.rs` is excluded by none and included by none
- `sdk-tests/sdk-native-test/tests/test.rs` is excluded by none and included by none
- `sdk-tests/sdk-pinocchio-v1-test/tests/test.rs` is excluded by none and included by none
📒 Files selected for processing (29)
- `forester/src/compressible/ctoken/compressor.rs`
- `forester/src/compressible/ctoken/state.rs`
- `forester/src/compressible/ctoken/types.rs`
- `forester/src/compressible/mint/compressor.rs`
- `forester/src/compressible/mint/state.rs`
- `forester/src/compressible/mint/types.rs`
- `forester/src/compressible/pda/compressor.rs`
- `forester/src/epoch_manager.rs`
- `forester/src/forester_status.rs`
- `forester/src/metrics.rs`
- `forester/src/priority_fee.rs`
- `forester/src/processor/v2/proof_cache.rs`
- `forester/src/processor/v2/proof_worker.rs`
- `forester/tests/e2e_test.rs`
- `forester/tests/legacy/batched_state_async_indexer_test.rs`
- `forester/tests/test_batch_append_spent.rs`
- `forester/tests/test_compressible_ctoken.rs`
- `forester/tests/test_compressible_mint.rs`
- `prover/client/src/helpers.rs`
- `prover/client/src/proof.rs`
- `prover/client/src/proof_types/batch_address_append/json.rs`
- `prover/client/src/proof_types/batch_append/proof_inputs.rs`
- `prover/client/src/proof_types/combined/v2/json.rs`
- `prover/client/src/proof_types/non_inclusion/v2/json.rs`
- `prover/client/tests/batch_address_append.rs`
- `prover/client/tests/init_merkle_tree.rs`
- `sdk-libs/client/src/interface/initialize_config.rs`
- `sdk-libs/client/src/local_test_validator.rs`
- `sdk-libs/program-test/src/indexer/test_indexer.rs`
```rust
pub fn deserialize_hex_string_to_be_bytes(hex_str: &str) -> Result<[u8; 32], ProverClientError> {
    let trimmed_str = hex_str
        .strip_prefix("0x")
        .or_else(|| hex_str.strip_prefix("0X"))
        .unwrap_or(hex_str);
    let big_uint = num_bigint::BigUint::from_str_radix(trimmed_str, 16)
        .map_err(|error| ProverClientError::InvalidHexString(format!("{hex_str}: {error}")))?;
    let big_uint_bytes = big_uint.to_bytes_be();
    if big_uint_bytes.len() > 32 {
        return Err(ProverClientError::InvalidHexString(format!(
            "{hex_str}: exceeds 32 bytes"
        )));
```
Reject oversized hex strings before BigUint parsing.
At Line 77, parsing happens before enforcing the 32-byte bound. A very large hex string can trigger avoidable CPU/memory pressure before you reject it.
🔧 Suggested hardening diff
```diff
 pub fn deserialize_hex_string_to_be_bytes(hex_str: &str) -> Result<[u8; 32], ProverClientError> {
     let trimmed_str = hex_str
         .strip_prefix("0x")
         .or_else(|| hex_str.strip_prefix("0X"))
         .unwrap_or(hex_str);
+    let normalized = trimmed_str.trim_start_matches('0');
+    if normalized.len() > 64 {
+        return Err(ProverClientError::InvalidHexString(format!(
+            "{hex_str}: exceeds 32 bytes"
+        )));
+    }
     let big_uint = num_bigint::BigUint::from_str_radix(trimmed_str, 16)
         .map_err(|error| ProverClientError::InvalidHexString(format!("{hex_str}: {error}")))?;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
pub fn deserialize_hex_string_to_be_bytes(hex_str: &str) -> Result<[u8; 32], ProverClientError> {
    let trimmed_str = hex_str
        .strip_prefix("0x")
        .or_else(|| hex_str.strip_prefix("0X"))
        .unwrap_or(hex_str);
    let normalized = trimmed_str.trim_start_matches('0');
    if normalized.len() > 64 {
        return Err(ProverClientError::InvalidHexString(format!(
            "{hex_str}: exceeds 32 bytes"
        )));
    }
    let big_uint = num_bigint::BigUint::from_str_radix(trimmed_str, 16)
        .map_err(|error| ProverClientError::InvalidHexString(format!("{hex_str}: {error}")))?;
    let big_uint_bytes = big_uint.to_bytes_be();
    if big_uint_bytes.len() > 32 {
        return Err(ProverClientError::InvalidHexString(format!(
            "{hex_str}: exceeds 32 bytes"
        )));
    }
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@prover/client/src/proof.rs` around lines 72 - 83, The function
deserialize_hex_string_to_be_bytes currently parses the entire hex string into a
BigUint before enforcing the 32-byte limit; preemptively validate trimmed_str
length instead to avoid costly parsing of huge inputs: after computing
trimmed_str (from strip_prefix calls) check the hex digit length and return
ProverClientError::InvalidHexString if trimmed_str.len() > 64 (or if it
represents more than 64 hex chars), optionally also handling/normalizing an
odd-length hex string, and only then call num_bigint::BigUint::from_str_radix
and proceed with to_bytes_be().
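The pre-parse bound check can be shown in isolation. `validate_hex_len` below is a hypothetical standalone helper (the real code returns `ProverClientError` and then parses with `num_bigint`); it only demonstrates why counting significant digits first makes oversized inputs cheap to reject.

```rust
// Hypothetical standalone version of the pre-parse bound check.
fn validate_hex_len(hex_str: &str) -> Result<(), String> {
    let trimmed = hex_str
        .strip_prefix("0x")
        .or_else(|| hex_str.strip_prefix("0X"))
        .unwrap_or(hex_str);
    // Leading zeros don't change the value; count only significant digits.
    let significant = trimmed.trim_start_matches('0');
    if significant.len() > 64 {
        return Err(format!("{hex_str}: exceeds 32 bytes"));
    }
    Ok(())
}

fn main() {
    assert!(validate_hex_len("0xff").is_ok());
    // 64 significant hex digits is exactly 32 bytes: still in range.
    assert!(validate_hex_len(&"f".repeat(64)).is_ok());
    // 65 significant digits is rejected before any big-integer work.
    assert!(validate_hex_len(&format!("0x{}", "f".repeat(65))).is_err());
    // A long run of leading zeros is cheap to skip and still accepted.
    assert!(validate_hex_len(&format!("000000{}", "a".repeat(64))).is_ok());
}
```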
bdb7850 to 72c11ff Compare
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
forester/src/compressible/ctoken/state.rs (1)
86-92: 🧹 Nitpick | 🔵 Trivial

Carry the shared-ownership win to `update_from_token`.

Nice direction, but this still forces an owned `Token` at the boundary and only wraps it in `Arc` at insertion time. If a caller already has an `Arc<Token>` or wants to share the decoded token, it still pays a full clone before calling this method. Taking `impl Into<Arc<Token>>` here would make the clone-pressure reduction end-to-end.

Suggested refactor
```diff
-    pub fn update_from_token(
-        &self,
-        pubkey: Pubkey,
-        ctoken: Token,
-        lamports: u64,
-        account_size: usize,
-    ) -> Result<()> {
+    pub fn update_from_token(
+        &self,
+        pubkey: Pubkey,
+        ctoken: impl Into<Arc<Token>>,
+        lamports: u64,
+        account_size: usize,
+    ) -> Result<()> {
+        let ctoken = ctoken.into();
+
         // Skip mint accounts - only track actual token accounts
         if !ctoken.is_token_account() {
             debug!("Skipping non-token account {}", pubkey);
             return Ok(());
         }

-        let compressible_slot = match calculate_compressible_slot(&ctoken, lamports, account_size) {
+        let compressible_slot =
+            match calculate_compressible_slot(ctoken.as_ref(), lamports, account_size) {
                 Ok(slot) => slot,
                 Err(e) => {
                     warn!(
                         "Failed to calculate compressible slot for {}: {}",
                         pubkey, e
@@
         let state = CTokenAccountState {
             pubkey,
-            account: Arc::new(ctoken),
+            account: ctoken,
             lamports,
             compressible_slot,
             is_ata,
         };
```

Also applies to: 127-133
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@forester/src/compressible/ctoken/state.rs` around lines 86 - 92, The method update_from_token currently accepts an owned Token which forces callers to clone even if they already have shared ownership; change its signature to accept token: impl Into<std::sync::Arc<Token>> (or a concrete Arc<Token>) and convert it to an Arc inside the function so callers can pass either Arc<Token> or Token without extra clones; apply the same change to the other analogous method that currently takes Token (the one referenced around the second occurrence) so both insertion/update paths use Arc<Token> and avoid unnecessary ownership cloning.forester/src/compressible/ctoken/compressor.rs (1)
49-59: 🧹 Nitpick | 🔵 Trivial

Push the shared-signer boundary into `new()`.

Right now the sharing starts only after construction. Any caller creating multiple compressors from the same payer still has to clone the `Keypair` up front to satisfy this owned parameter. Accepting `Arc<Keypair>`, or better, `impl Into<Arc<Keypair>>`, would make the refactor actually remove those upstream clones too.

Suggested refactor
```diff
     pub fn new(
         rpc_pool: Arc<SolanaRpcPool<R>>,
         tracker: Arc<CTokenAccountTracker>,
-        payer_keypair: Keypair,
+        payer_keypair: impl Into<Arc<Keypair>>,
         transaction_policy: TransactionPolicy,
     ) -> Self {
         Self {
             rpc_pool,
             tracker,
-            payer_keypair: Arc::new(payer_keypair),
+            payer_keypair: payer_keypair.into(),
             transaction_policy,
         }
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@forester/src/compressible/ctoken/compressor.rs` around lines 49 - 59, Change the CTokenCompressor::new constructor to accept a shared payer Keypair by taking either Arc<Keypair> or an impl Into<Arc<Keypair>> (instead of an owned Keypair), and store that Arc directly in the struct's payer_keypair field; update the new signature and the payer_keypair initialization (referenced as pub fn new and the payer_keypair field) so callers can pass a cloned Arc rather than cloning the Keypair itself.
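The `impl Into<Arc<T>>` boundary suggested in both nitpicks can be sketched generically. `Token`, `Tracker`, and their fields below are illustrative stand-ins, not the forester's types; the point is that both an owned value and an existing `Arc` satisfy the same parameter without an extra clone.

```rust
use std::sync::Arc;

// Illustrative stand-ins; not the forester's actual types.
struct Token {
    balance: u64,
}

struct Tracker {
    account: Arc<Token>,
}

impl Tracker {
    // `impl Into<Arc<Token>>` accepts an owned Token (wrapped exactly once
    // here) or an existing Arc<Token> (no clone, no re-allocation).
    fn new(token: impl Into<Arc<Token>>) -> Self {
        Self {
            account: token.into(),
        }
    }
}

fn main() {
    // Owned value: the blanket `From<T> for Arc<T>` wraps it once.
    let owned = Tracker::new(Token { balance: 1 });
    assert_eq!(owned.account.balance, 1);

    // Already-shared value: the allocation is reused.
    let shared = Arc::new(Token { balance: 2 });
    let tracker = Tracker::new(Arc::clone(&shared));
    assert_eq!(tracker.account.balance, 2);
    assert_eq!(Arc::strong_count(&shared), 2); // caller + tracker
}
```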
♻️ Duplicate comments (5)
sdk-libs/client/src/local_test_validator.rs (2)
59-60: ⚠️ Potential issue | 🟡 Minor

Return an explicit failure when project root resolution fails.

Right now, `None` from `get_project_root()` causes a silent no-op, which can make test failures hard to diagnose.

Suggested fix
```diff
-pub async fn spawn_validator(config: LightValidatorConfig) {
-    if let Some(project_root) = get_project_root() {
+pub async fn spawn_validator(config: LightValidatorConfig) -> Result<(), String> {
+    let project_root = get_project_root()
+        .ok_or_else(|| "Failed to determine project root; cannot spawn validator".to_string())?;
+    {
         let project_root = project_root.trim_end_matches(['\n', '\r']);
         let executable = format!("{}/cli/test_bin/run", project_root);
         let mut args = vec!["test-validator".to_string()];
         // ...
-    }
+    }
+    Ok(())
 }
```

Also applies to: 134-135
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sdk-libs/client/src/local_test_validator.rs` around lines 59 - 60, When get_project_root() returns None the code silently does nothing; change spawn_validator to surface a failure instead of no-op by updating its signature to return a Result (e.g., pub async fn spawn_validator(config: LightValidatorConfig) -> anyhow::Result<()>) and handle the None case with an explicit error (use anyhow::bail! or return Err(anyhow!(...)) with a clear message like "could not resolve project root"). Apply the same change to the other similar spawn_validator occurrence so both functions consistently propagate or return a clear failure when get_project_root() is None.
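A minimal synchronous sketch of the surfaced error path, assuming stand-in names (the real `spawn_validator` is async, takes a config, and builds CLI args; `get_project_root` here always fails just to exercise the error branch):

```rust
// Stand-in for the real lookup; always fails to show the surfaced error.
fn get_project_root() -> Option<String> {
    None
}

fn spawn_validator() -> Result<(), String> {
    let root = get_project_root()
        .ok_or_else(|| "could not resolve project root".to_string())?;
    let _executable = format!(
        "{}/cli/test_bin/run",
        root.trim_end_matches(&['\n', '\r'][..])
    );
    Ok(())
}

fn main() {
    // Previously a silent no-op; now the caller sees why nothing was spawned.
    assert_eq!(
        spawn_validator().unwrap_err(),
        "could not resolve project root"
    );
}
```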
118-133: ⚠️ Potential issue | 🟠 Major

Tighten non-surfpool process lifecycle checks.
Two reliability gaps remain:
- an early successful exit currently passes, even though validator availability is required;
- the child handle is dropped, so callers cannot deterministically tear down the process.
Suggested fix direction
```diff
-    if let Some(status) = child.try_wait().expect("Failed to poll validator process") {
-        assert!(
-            status.success(),
-            "Validator exited early with error: {}",
-            status
-        );
-    }
+    if let Some(status) = child.try_wait().expect("Failed to poll validator process") {
+        panic!("Validator exited before wait_time elapsed: {}", status);
+    }
+    // Return/store `child` so tests can terminate it during teardown.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sdk-libs/client/src/local_test_validator.rs` around lines 118 - 133, The current spawn logic allows an early successful exit to be treated as OK and then drops the child handle, preventing deterministic teardown; change the post-spawn check so that if child.try_wait() returns Some(_) (any exit) after the initial sleep it is treated as an error (fail the test or return Err) rather than allowing a success, and stop dropping the Child: return or store the spawned Child (the value from Command::new(...).spawn()) to the caller so they can call wait()/kill() deterministically on teardown (or implement Drop for the wrapper to kill the child). Concretely, update the block around Command::new(...).spawn(), replace the assert! that accepts status.success() with an unconditional error when try_wait() yields Some(status), and change the function signature/ownership so the spawned Child is returned or retained for controlled shutdown.sdk-libs/program-test/src/indexer/test_indexer.rs (1)
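Keeping the `Child` for deterministic teardown can be sketched with `std::process` alone. This is an assumption-laden sketch: `sleep` stands in for the validator binary, and `spawn_long_running` is a hypothetical wrapper.

```rust
use std::io;
use std::process::{Child, Command};

// "sleep" stands in for the validator binary; the wrapper returns the Child
// so the caller owns teardown instead of dropping the handle.
fn spawn_long_running() -> io::Result<Child> {
    Command::new("sleep").arg("30").spawn()
}

fn main() -> io::Result<()> {
    let mut child = spawn_long_running()?;

    // Any exit this early is a failure, successful status or not.
    if let Some(status) = child.try_wait()? {
        panic!("process exited before it was ready: {status}");
    }

    // Deterministic teardown: kill and reap instead of leaking the handle.
    child.kill()?;
    let status = child.wait()?;
    assert!(!status.success(), "a killed process should not report success");
    Ok(())
}
```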
2594-2620: ⚠️ Potential issue | 🟠 Major

Early returns in retry loop prevent retries on transient failures.

The `?` operators on lines 2597-2601 (reading the response body) and line 2607 (`build_compressed_proof`) cause immediate exit on failure rather than consuming remaining retry attempts. This is inconsistent with how send failures are handled (lines 2616-2618), which properly set `last_error` and continue.

A truncated HTTP response or temporarily malformed proof payload would skip remaining retries, while a network timeout would correctly retry.
🛠️ Suggested fix to handle all failures uniformly
```diff
             match response_result {
                 Ok(response_result) => {
                     let status = response_result.status();
-                    let body = response_result.text().await.map_err(|error| {
-                        IndexerError::CustomError(format!(
-                            "failed to read prover response body: {error}"
-                        ))
-                    })?;
-
-                    if status.is_success() {
-                        return Ok(ValidityProofWithContext {
-                            accounts: account_proof_inputs,
-                            addresses: address_proof_inputs,
-                            proof: build_compressed_proof(&body)?.into(),
-                        });
-                    }
-
-                    let body_preview: String = body.chars().take(512).collect();
-                    last_error = format!(
-                        "prover returned HTTP {status} for validity proof request: {body_preview}"
-                    );
+                    match response_result.text().await {
+                        Ok(body) => {
+                            if status.is_success() {
+                                match build_compressed_proof(&body) {
+                                    Ok(proof) => {
+                                        return Ok(ValidityProofWithContext {
+                                            accounts: account_proof_inputs,
+                                            addresses: address_proof_inputs,
+                                            proof: proof.into(),
+                                        });
+                                    }
+                                    Err(e) => {
+                                        let body_preview: String = body.chars().take(512).collect();
+                                        last_error = format!(
+                                            "failed to parse prover proof: {e}; body: {body_preview}"
+                                        );
+                                    }
+                                }
+                            } else {
+                                let body_preview: String = body.chars().take(512).collect();
+                                last_error = format!(
+                                    "prover returned HTTP {status} for validity proof request: {body_preview}"
+                                );
+                            }
+                        }
+                        Err(error) => {
+                            last_error = format!("failed to read prover response body: {error}");
+                        }
+                    }
                 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sdk-libs/program-test/src/indexer/test_indexer.rs` around lines 2594 - 2620, The match arm returns early on failures because the code uses the `?` operator when reading the response body and when calling `build_compressed_proof`, which prevents retries; change both to handle errors like the send failures do: replace `response_result.text().await.map_err(...) ?` with an explicit Result match (or `match response_result.text().await { Ok(body) => ..., Err(e) => { last_error = format!("failed to read prover response body: {e}"); continue; } }`) and wrap `build_compressed_proof(&body)` similarly (e.g. `match build_compressed_proof(&body) { Ok(p) => p, Err(e) => { last_error = format!("failed to build compressed proof: {e}"); continue; } }`) so that `ValidityProofWithContext` (using `account_proof_inputs` and `address_proof_inputs`) is only returned on success and all transient failures update `last_error` and allow the retry loop to continue.forester/src/epoch_manager.rs (1)
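The uniform pattern (record `last_error` and continue, instead of `?`) can be sketched with plain `Result` values. `fetch_with_retries` is a hypothetical helper, not the indexer's API; it just shows how every failure mode consumes a retry.

```rust
// Every failure mode records last_error and continues, so a bad body read or
// parse consumes a retry attempt exactly like a send failure does.
fn fetch_with_retries<F>(mut attempt: F, max_retries: u32) -> Result<String, String>
where
    F: FnMut(u32) -> Result<String, String>,
{
    let mut last_error = String::new();
    for i in 0..max_retries {
        match attempt(i) {
            Ok(body) => return Ok(body),
            Err(e) => last_error = format!("attempt {i}: {e}"),
        }
    }
    Err(last_error)
}

fn main() {
    // Fails twice (simulating a truncated body), then succeeds: retries used.
    let result = fetch_with_retries(
        |i| {
            if i < 2 {
                Err("failed to read response body".to_string())
            } else {
                Ok("proof".to_string())
            }
        },
        5,
    );
    assert_eq!(result.unwrap(), "proof");

    // All attempts fail: the final error reports the last failure seen.
    let exhausted = fetch_with_retries(|_| Err("timeout".to_string()), 3);
    assert_eq!(exhausted.unwrap_err(), "attempt 2: timeout");
}
```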
943-954: ⚠️ Potential issue | 🟠 Major

Route new-tree processing through the shared worker limiter instead of per-tree runtimes.

Line 945 still spawns one OS thread and Line 950 builds a new 2-thread Tokio runtime per discovered tree. This bypasses `max_parallel_tree_workers()` and can still oversubscribe CPU/memory during tree discovery bursts.

Also applies to: 945-1015
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@forester/src/epoch_manager.rs` around lines 943 - 954, The code currently spawns a new OS thread (std::thread::spawn) and builds a fresh Tokio runtime per-tree (Builder::new_multi_thread()...build()) which bypasses the shared worker limiter; change this to run tree processing on the existing/shared runtime and gate it with the global worker limiter returned by max_parallel_tree_workers(): instead of creating thread_handle and a per-tree runtime, acquire a permit from the shared limiter (e.g. await limiter.acquire/ acquire_owned), then tokio::spawn an async task that receives cancel_rx and uses completion_tx to signal completion, reusing self_clone/run_id and tree_schedule/tree_accounts.merkle_tree values; keep the oneshot cancel_tx/cancel_rx and completion_tx/completion_rx semantics but remove the per-tree runtime and std::thread::spawn so parallelism is controlled by the shared limiter.prover/client/src/proof.rs (1)
72-83: ⚠️ Potential issue | 🟠 Major

Reject oversized hex before `BigUint` parsing.

At Line 77, parsing happens before enforcing the 32-byte bound. Very large inputs can cause avoidable CPU/memory pressure before rejection.
🔧 Proposed hardening diff
```diff
 pub fn deserialize_hex_string_to_be_bytes(hex_str: &str) -> Result<[u8; 32], ProverClientError> {
     let trimmed_str = hex_str
         .strip_prefix("0x")
         .or_else(|| hex_str.strip_prefix("0X"))
         .unwrap_or(hex_str);
+    let normalized = trimmed_str.trim_start_matches('0');
+    if normalized.len() > 64 {
+        return Err(ProverClientError::InvalidHexString(format!(
+            "{hex_str}: exceeds 32 bytes"
+        )));
+    }
     let big_uint = num_bigint::BigUint::from_str_radix(trimmed_str, 16)
         .map_err(|error| ProverClientError::InvalidHexString(format!("{hex_str}: {error}")))?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prover/client/src/proof.rs` around lines 72 - 83, In deserialize_hex_string_to_be_bytes, reject inputs that would exceed 32 bytes before calling BigUint::from_str_radix: after computing trimmed_str (the hex digits after removing "0x"/"0X"), strip leading '0' characters to find the significant hex length and if that significant length is greater than 64 hex chars (or if trimmed_str.len() > 64 and not just leading zeros) return ProverClientError::InvalidHexString immediately; also handle odd-length hex by allowing a single leading '0' pad rule when checking length so you still reject anything representing >32 bytes without invoking big-int parsing.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@forester/src/epoch_manager.rs`:
- Around line 2542-2548: The call to check_forester_eligibility is always
passing &tree_accounts.merkle_tree, which is incorrect for TreeType::StateV2
because V2 trees use the queue-based scheduling/eligibility key; compute the
proper eligibility key before the call (e.g., match on tree_accounts.tree_type /
TreeType::StateV2 and select the queue-based scheduling/eligibility key for V2,
otherwise use tree_accounts.merkle_tree) and pass that selected key (instead of
always tree_accounts.merkle_tree) into check_forester_eligibility alongside
epoch_pda, current_light_slot, and epoch_info so V2 trees use the queue-derived
eligible index.
In `@forester/tests/test_batch_append_spent.rs`:
- Around line 352-356: The timeout-based wait on the service JoinHandle can leak
a spawned task (and abort() won't stop spawn_blocking tasks); replace the
timeout(Duration::from_secs(5), service_handle).await pattern with an explicit
tokio::select! that awaits either the service_handle completion or a sleep
timeout, and in the timeout branch explicitly send the graceful shutdown signal
(use the existing shutdown_sender.send(())) and then wait a short additional
grace period for the handle to complete; if it still doesn't finish, fail the
test deterministically or refactor to remove the spawn_blocking wrapper around
run_pipeline::<LightClient>() so the handle can be properly cancelled. Ensure
references to service_handle, shutdown_sender.send(()),
run_pipeline::<LightClient>(), and spawn_blocking are used to locate and update
the code.
In `@prover/client/src/proof_types/batch_append/proof_inputs.rs`:
- Around line 131-142: The current batch validation only checks outer vector
lengths and does not validate each inner Merkle proof length, which leads to a
panic at the later try_into().unwrap() call; update the validation loop (the
section iterating over ("merkle_proofs", merkle_proofs.len())) to iterate each
element of merkle_proofs and verify its depth/length matches batch_size
expectations (or the expected depth for conversion) and if any inner proof has
an invalid length return Err(ProverClientError::GenericError(...)) instead of
allowing try_into().unwrap() to panic; ensure you reference and validate the
exact items used by the conversion that calls try_into().unwrap() so the
conversion can safely use try_into() without unwrap.
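Validating each inner proof's length before the fixed-size conversion can be shown generically. `to_fixed` is a hypothetical helper (the real code validates against the circuit's depth); the key point is that `TryFrom<Vec<T>> for [T; N]` fails on length mismatch instead of panicking like `try_into().unwrap()`.

```rust
// Hypothetical helper: validate each inner proof's length so the fixed-size
// conversion returns an error instead of panicking.
fn to_fixed<const H: usize>(
    proofs: Vec<Vec<[u8; 32]>>,
) -> Result<Vec<[[u8; 32]; H]>, String> {
    proofs
        .into_iter()
        .enumerate()
        .map(|(i, p)| {
            let len = p.len();
            // TryFrom<Vec<T>> for [T; N] reports length mismatches as Err.
            p.try_into().map_err(|_| {
                format!("merkle_proofs[{}]: expected depth {}, got {}", i, H, len)
            })
        })
        .collect()
}

fn main() {
    assert!(to_fixed::<2>(vec![vec![[0u8; 32]; 2]]).is_ok());
    let bad = to_fixed::<2>(vec![vec![[0u8; 32]; 3]]);
    assert_eq!(bad.unwrap_err(), "merkle_proofs[0]: expected depth 2, got 3");
}
```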
In `@prover/client/src/proof_types/non_inclusion/v2/json.rs`:
- Around line 28-29: The JSON struct field path_index (pub path_index: u64) and
the builder conversion that produces a u64 (the to_u64() call which currently
returns ProverClientError::IntegerConversion on failure) must be constrained to
the server-expected uint32 range: after converting the BigInt to u64, validate
that the value <= u32::MAX and return a clear ProverClientError if it exceeds
that bound, or else cast/truncate safely to u32 before assigning to path_index
(so the serialized JSON uses a 32-bit-compatible value); update the conversion
logic where input.index_hashed_indexed_element_leaf is converted to u64 to
perform this range check and error handling.
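The uint32 range check on the converted index is a one-liner with `u32::try_from`; `to_u32_index` is a hypothetical helper name used only for illustration.

```rust
// Bound the u64 index to the server's uint32 range instead of truncating.
fn to_u32_index(value: u64) -> Result<u32, String> {
    u32::try_from(value).map_err(|_| format!("path index {value} exceeds u32::MAX"))
}

fn main() {
    assert_eq!(to_u32_index(42).unwrap(), 42);
    assert_eq!(to_u32_index(u64::from(u32::MAX)).unwrap(), u32::MAX);
    assert!(to_u32_index(u64::from(u32::MAX) + 1).is_err());
}
```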
In `@prover/client/tests/init_merkle_tree.rs`:
- Around line 223-225: The test currently calls
.index_hashed_indexed_element_leaf.to_u64().unwrap(), which can panic; change
this to a fallible conversion that returns a Result and propagate or map the
error instead of unwrapping (e.g., replace the unwrap with a call that uses ? or
map_err to produce a descriptive test failure). Locate the use of
index_hashed_indexed_element_leaf and replace the to_u64().unwrap() with a
to_u64() handling path that converts the Option/Result into the test's
Result-returning flow (or maps the conversion error into a test error) so the
test fails deterministically without panicking.
---
Outside diff comments:
In `@forester/src/compressible/ctoken/compressor.rs`:
- Around line 49-59: Change the CTokenCompressor::new constructor to accept a
shared payer Keypair by taking either Arc<Keypair> or an impl Into<Arc<Keypair>>
(instead of an owned Keypair), and store that Arc directly in the struct's
payer_keypair field; update the new signature and the payer_keypair
initialization (referenced as pub fn new and the payer_keypair field) so callers
can pass a cloned Arc rather than cloning the Keypair itself.
In `@forester/src/compressible/ctoken/state.rs`:
- Around line 86-92: The method update_from_token currently accepts an owned
Token which forces callers to clone even if they already have shared ownership;
change its signature to accept token: impl Into<std::sync::Arc<Token>> (or a
concrete Arc<Token>) and convert it to an Arc inside the function so callers can
pass either Arc<Token> or Token without extra clones; apply the same change to
the other analogous method that currently takes Token (the one referenced around
the second occurrence) so both insertion/update paths use Arc<Token> and avoid
unnecessary ownership cloning.
---
Duplicate comments:
In `@forester/src/epoch_manager.rs`:
- Around line 943-954: The code currently spawns a new OS thread
(std::thread::spawn) and builds a fresh Tokio runtime per-tree
(Builder::new_multi_thread()...build()) which bypasses the shared worker
limiter; change this to run tree processing on the existing/shared runtime and
gate it with the global worker limiter returned by max_parallel_tree_workers():
instead of creating thread_handle and a per-tree runtime, acquire a permit from
the shared limiter (e.g. await limiter.acquire/ acquire_owned), then
tokio::spawn an async task that receives cancel_rx and uses completion_tx to
signal completion, reusing self_clone/run_id and
tree_schedule/tree_accounts.merkle_tree values; keep the oneshot
cancel_tx/cancel_rx and completion_tx/completion_rx semantics but remove the
per-tree runtime and std::thread::spawn so parallelism is controlled by the
shared limiter.
In `@prover/client/src/proof.rs`:
- Around line 72-83: In deserialize_hex_string_to_be_bytes, reject inputs that
would exceed 32 bytes before calling BigUint::from_str_radix: after computing
trimmed_str (the hex digits after removing "0x"/"0X"), strip leading '0'
characters to find the significant hex length and if that significant length is
greater than 64 hex chars (or if trimmed_str.len() > 64 and not just leading
zeros) return ProverClientError::InvalidHexString immediately; also handle
odd-length hex by allowing a single leading '0' pad rule when checking length so
you still reject anything representing >32 bytes without invoking big-int
parsing.
In `@sdk-libs/client/src/local_test_validator.rs`:
- Around line 59-60: When get_project_root() returns None the code silently does
nothing; change spawn_validator to surface a failure instead of no-op by
updating its signature to return a Result (e.g., pub async fn
spawn_validator(config: LightValidatorConfig) -> anyhow::Result<()>) and handle
the None case with an explicit error (use anyhow::bail! or return
Err(anyhow!(...)) with a clear message like "could not resolve project root").
Apply the same change to the other similar spawn_validator occurrence so both
functions consistently propagate or return a clear failure when
get_project_root() is None.
- Around line 118-133: The current spawn logic allows an early successful exit
to be treated as OK and then drops the child handle, preventing deterministic
teardown; change the post-spawn check so that if child.try_wait() returns
Some(_) (any exit) after the initial sleep it is treated as an error (fail the
test or return Err) rather than allowing a success, and stop dropping the Child:
return or store the spawned Child (the value from Command::new(...).spawn()) to
the caller so they can call wait()/kill() deterministically on teardown (or
implement Drop for the wrapper to kill the child). Concretely, update the block
around Command::new(...).spawn(), replace the assert! that accepts
status.success() with an unconditional error when try_wait() yields
Some(status), and change the function signature/ownership so the spawned Child
is returned or retained for controlled shutdown.
In `@sdk-libs/program-test/src/indexer/test_indexer.rs`:
- Around line 2594-2620: The match arm returns early on failures because the
code uses the `?` operator when reading the response body and when calling
`build_compressed_proof`, which prevents retries; change both to handle errors
like the send failures do: replace `response_result.text().await.map_err(...) ?`
with an explicit Result match (or `match response_result.text().await { Ok(body)
=> ..., Err(e) => { last_error = format!("failed to read prover response body:
{e}"); continue; } }`) and wrap `build_compressed_proof(&body)` similarly (e.g.
`match build_compressed_proof(&body) { Ok(p) => p, Err(e) => { last_error =
format!("failed to build compressed proof: {e}"); continue; } }`) so that
`ValidityProofWithContext` (using `account_proof_inputs` and
`address_proof_inputs`) is only returned on success and all transient failures
update `last_error` and allow the retry loop to continue.
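The retry pattern the prompt describes, reduced to its skeleton: every fallible step records the error and continues instead of letting `?` abort the loop. `fetch_body` and `build_proof` are hypothetical stand-ins for the real HTTP read and proof decoding.

```rust
// Generic retry sketch: transient failures update last_error and retry;
// only success returns early. Names are illustrative stand-ins.
fn fetch_body(attempt: u32) -> Result<String, String> {
    if attempt < 2 {
        Err(format!("transient read error on attempt {attempt}"))
    } else {
        Ok("proof-bytes".to_string())
    }
}

fn build_proof(body: &str) -> Result<String, String> {
    Ok(format!("proof({body})"))
}

fn get_proof_with_retries() -> Result<String, String> {
    let mut last_error = String::from("no attempt made");
    for attempt in 0..3 {
        // Read the body: on failure, record the error and retry.
        let body = match fetch_body(attempt) {
            Ok(body) => body,
            Err(e) => {
                last_error = format!("failed to read body: {e}");
                continue;
            }
        };
        // Decode the proof: on failure, record the error and retry.
        match build_proof(&body) {
            Ok(proof) => return Ok(proof),
            Err(e) => {
                last_error = format!("failed to build proof: {e}");
                continue;
            }
        }
    }
    Err(last_error)
}

fn main() {
    assert_eq!(get_proof_with_retries().unwrap(), "proof(proof-bytes)");
}
```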
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 629e56fd-8178-4129-a65e-dfe5832c21a3
⛔ Files ignored due to path filters (12)
program-tests/utils/src/mock_batched_forester.rs is excluded by none and included by none
sdk-tests/sdk-anchor-test/programs/sdk-anchor-test/tests/read_only.rs is excluded by none and included by none
sdk-tests/sdk-anchor-test/programs/sdk-anchor-test/tests/test.rs is excluded by none and included by none
sdk-tests/sdk-native-test/tests/test.rs is excluded by none and included by none
sdk-tests/sdk-pinocchio-v1-test/tests/test.rs is excluded by none and included by none
sdk-tests/sdk-token-test/tests/ctoken_pda.rs is excluded by none and included by none
sdk-tests/sdk-token-test/tests/decompress_full_cpi.rs is excluded by none and included by none
sdk-tests/sdk-token-test/tests/pda_ctoken.rs is excluded by none and included by none
sdk-tests/sdk-token-test/tests/test.rs is excluded by none and included by none
sdk-tests/sdk-token-test/tests/test_4_invocations.rs is excluded by none and included by none
sdk-tests/sdk-token-test/tests/test_4_transfer2.rs is excluded by none and included by none
sdk-tests/sdk-token-test/tests/test_deposit.rs is excluded by none and included by none
📒 Files selected for processing (29)
forester/src/compressible/ctoken/compressor.rs
forester/src/compressible/ctoken/state.rs
forester/src/compressible/ctoken/types.rs
forester/src/compressible/mint/compressor.rs
forester/src/compressible/mint/state.rs
forester/src/compressible/mint/types.rs
forester/src/compressible/pda/compressor.rs
forester/src/epoch_manager.rs
forester/src/forester_status.rs
forester/src/metrics.rs
forester/src/priority_fee.rs
forester/src/processor/v2/proof_cache.rs
forester/src/processor/v2/proof_worker.rs
forester/tests/e2e_test.rs
forester/tests/legacy/batched_state_async_indexer_test.rs
forester/tests/test_batch_append_spent.rs
forester/tests/test_compressible_ctoken.rs
forester/tests/test_compressible_mint.rs
prover/client/src/helpers.rs
prover/client/src/proof.rs
prover/client/src/proof_types/batch_address_append/json.rs
prover/client/src/proof_types/batch_append/proof_inputs.rs
prover/client/src/proof_types/combined/v2/json.rs
prover/client/src/proof_types/non_inclusion/v2/json.rs
prover/client/tests/batch_address_append.rs
prover/client/tests/init_merkle_tree.rs
sdk-libs/client/src/interface/initialize_config.rs
sdk-libs/client/src/local_test_validator.rs
sdk-libs/program-test/src/indexer/test_indexer.rs
let batch_len = batch_size as usize;
for (name, len) in [
    ("old_leaves", old_leaves.len()),
    ("leaves", leaves.len()),
    ("merkle_proofs", merkle_proofs.len()),
] {
    if len != batch_len {
        return Err(ProverClientError::GenericError(format!(
            "invalid batch append inputs: {name} len {len} != batch_size {batch_len}"
        )));
    }
}
Validate each Merkle proof depth before conversion.
The new guard checks only the outer batch vector lengths. A malformed inner proof length still panics at try_into().unwrap() (Line 193). Return an error instead of panicking.
🔧 Proposed fix
for (name, len) in [
("old_leaves", old_leaves.len()),
("leaves", leaves.len()),
("merkle_proofs", merkle_proofs.len()),
] {
if len != batch_len {
return Err(ProverClientError::GenericError(format!(
"invalid batch append inputs: {name} len {len} != batch_size {batch_len}"
)));
}
}
+ if let Some((i, proof)) = merkle_proofs
+ .iter()
+ .enumerate()
+ .find(|(_, proof)| proof.len() != HEIGHT)
+ {
+ return Err(ProverClientError::GenericError(format!(
+ "invalid batch append inputs: merkle_proofs[{i}] len {} != expected height {HEIGHT}",
+ proof.len()
+ )));
+ }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@prover/client/src/proof_types/batch_append/proof_inputs.rs` around lines 131
- 142, The current batch validation only checks outer vector lengths and does
not validate each inner Merkle proof length, which leads to a panic at the later
try_into().unwrap() call; update the validation loop (the section iterating over
("merkle_proofs", merkle_proofs.len())) to iterate each element of merkle_proofs
and verify its depth/length matches batch_size expectations (or the expected
depth for conversion) and if any inner proof has an invalid length return
Err(ProverClientError::GenericError(...)) instead of allowing
try_into().unwrap() to panic; ensure you reference and validate the exact items
used by the conversion that calls try_into().unwrap() so the conversion can
safely use try_into() without unwrap.
pub path_index: u64,
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Checking pathIndex field types across client/server..."
rg -n --type=go 'PathIndex\s+u?int(32|64)|json:"pathIndex"' -C2
rg -n --type=rust 'path_index:\s*u(32|64)|pathIndex' -C2
echo
echo "Expected: client and server use the same width for pathIndex."
echo "If server remains uint32, clamp/validate client values to u32::MAX or update server schema."
Repository: Lightprotocol/light-protocol
Length of output: 7048
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Checking non_inclusion v2 JSON builder conversion logic ==="
# Find the full file to see lines 54-62 and builder implementation
wc -l prover/client/src/proof_types/non_inclusion/v2/json.rs
echo
echo "=== Full file content ==="
cat -n prover/client/src/proof_types/non_inclusion/v2/json.rs
echo
echo "=== Searching for From/TryFrom implementations for NonInclusionInputsJSON ==="
ast-grep --pattern 'impl.*From.*NonInclusionInputsJSON {
$$$
}'
echo
echo "=== Searching for IntegerConversion or path_index conversion ==="
rg -n 'IntegerConversion|path_index|PathIndex' prover/client/src/proof_types/non_inclusion/v2/ -A2 -B2
Repository: Lightprotocol/light-protocol
Length of output: 3665
Schema mismatch: client serializes pathIndex as u64 while server expects uint32.
The builder converts to u64 (lines 54–62) and the JSON struct field is declared as u64 (line 28), but all server-side schemas (prover/server/prover/v2/marshal_non_inclusion.go and all v1 variants) define PathIndex uint32. Values above u32::MAX will be serialized to JSON and fail during server unmarshaling.
The conversion from BigInt to u64 is safely handled with to_u64().ok_or_else(...) returning IntegerConversion on failure, but there's no validation that the resulting u64 fits within uint32 bounds before it reaches the server.
Conversion logic (lines 54–62)
path_index: input
.index_hashed_indexed_element_leaf
.to_u64()
.ok_or_else(|| {
ProverClientError::IntegerConversion(format!(
"failed to convert path index {} to u64",
input.index_hashed_indexed_element_leaf
))
        })?,
Either clamp the field to u32 before serialization, validate that the value is ≤ u32::MAX, or update the server schema to accept uint64.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@prover/client/src/proof_types/non_inclusion/v2/json.rs` around lines 28 - 29,
The JSON struct field path_index (pub path_index: u64) and the builder
conversion that produces a u64 (the to_u64() call which currently returns
ProverClientError::IntegerConversion on failure) must be constrained to the
server-expected uint32 range: after converting the BigInt to u64, validate that
the value <= u32::MAX and return a clear ProverClientError if it exceeds that
bound, or else cast/truncate safely to u32 before assigning to path_index (so
the serialized JSON uses a 32-bit-compatible value); update the conversion logic
where input.index_hashed_indexed_element_leaf is converted to u64 to perform
this range check and error handling.
  .index_hashed_indexed_element_leaf
- .to_u32()
+ .to_u64()
  .unwrap(),
Avoid unwrap() in this fallible path.
Line 225 can still panic on conversion failure even though surrounding code uses Result. Prefer explicit error mapping for deterministic test failures.
🔧 Proposed fix
- path_index: merkle_inputs
- .index_hashed_indexed_element_leaf
- .to_u64()
- .unwrap(),
+ path_index: merkle_inputs
+ .index_hashed_indexed_element_leaf
+ .to_u64()
+ .ok_or_else(|| {
+ ProverClientError::IntegerConversion(
+ "failed to convert index_hashed_indexed_element_leaf to u64".to_string(),
+ )
+ })?,🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@prover/client/tests/init_merkle_tree.rs` around lines 223 - 225, The test
currently calls .index_hashed_indexed_element_leaf.to_u64().unwrap(), which can
panic; change this to a fallible conversion that returns a Result and propagate
or map the error instead of unwrapping (e.g., replace the unwrap with a call
that uses ? or map_err to produce a descriptive test failure). Locate the use of
index_hashed_indexed_element_leaf and replace the to_u64().unwrap() with a
to_u64() handling path that converts the Option/Result into the test's
Result-returning flow (or maps the conversion error into a test error) so the
test fails deterministically without panicking.
72c11ff to 0454ae4 Compare
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
prover/client/src/helpers.rs (2)
97-99: ⚠️ Potential issue | 🟡 Minor — Inconsistent error handling: unwrap() can panic on conversion failure.
While bigint_to_u8_32 was refactored to return Result for robustness, create_vec_of_u32 still uses unwrap() on to_u32(). If the BigInt exceeds u32::MAX, this will panic. Consider whether this function should also return Result<Vec<u32>, ProverClientError> for consistency with the error-handling improvements elsewhere in this PR.
🛡️ Proposed fix for consistent error handling
-pub fn create_vec_of_u32(number_of_utxos: usize, element: &BigInt) -> Vec<u32> {
-    vec![element.to_u32().unwrap(); number_of_utxos]
+pub fn create_vec_of_u32(number_of_utxos: usize, element: &BigInt) -> Result<Vec<u32>, ProverClientError> {
+    let value = element.to_u32().ok_or_else(|| {
+        ProverClientError::IntegerConversion(format!(
+            "failed to convert {} to u32",
+            element
+        ))
+    })?;
+    Ok(vec![value; number_of_utxos])
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prover/client/src/helpers.rs` around lines 97 - 99, The function create_vec_of_u32 currently uses element.to_u32().unwrap() which can panic for out-of-range BigInt; change its signature to return Result<Vec<u32>, ProverClientError> and attempt a fallible conversion for each element (e.g., element.to_u32().ok_or(...) or equivalent), map/collect the conversion into a Vec<u32> and propagate a descriptive ProverClientError on failure to match the bigint_to_u8_32 style; update callers accordingly to handle the Result.
50-53: 🧹 Nitpick | 🔵 Trivial — Minor: redundant slice operation.
Line 51 creates a slice of the entire vector, which is unnecessary. This can be simplified.
♻️ Simplify the slice
 let mut array = [0; 32];
-let bytes = &bytes_be[..bytes_be.len()];
-array[(32 - bytes.len())..].copy_from_slice(bytes);
+array[(32 - bytes_be.len())..].copy_from_slice(&bytes_be);
 Ok(array)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prover/client/src/helpers.rs` around lines 50 - 53, The slice bytes_be[..bytes_be.len()] is redundant; change the allocation to use the vector's slice directly (e.g., replace let bytes = &bytes_be[..bytes_be.len()]; with let bytes = bytes_be.as_slice(); or simply use &bytes_be) and then perform array[(32 - bytes.len())..].copy_from_slice(bytes) so variable names array, bytes, and bytes_be are preserved and the redundant slice operation is removed.
♻️ Duplicate comments (9)
prover/client/tests/init_merkle_tree.rs (1)
222-225: ⚠️ Potential issue | 🟡 Minor — Avoid panicking conversion in a Result-returning path.
Line 225 still uses to_u64().unwrap(), which can panic and bypass structured error propagation. Keep this conversion fallible and map it to ProverClientError instead.
🔧 Suggested fix
 path_index: merkle_inputs
     .index_hashed_indexed_element_leaf
     .to_u64()
-    .unwrap(),
+    .ok_or_else(|| {
+        ProverClientError::IntegerConversion(
+            "failed to convert index_hashed_indexed_element_leaf to u64".to_string(),
+        )
+    })?,
#!/bin/bash
# Verify this conversion path and panic usage in the reviewed file.
rg -n -C3 'index_hashed_indexed_element_leaf|to_u64\(\)|unwrap\(\)' prover/client/tests/init_merkle_tree.rs
🤖 Prompt for AI Agents
path_index: merkle_inputs .index_hashed_indexed_element_leaf .to_u64() - .unwrap(), + .ok_or_else(|| { + ProverClientError::IntegerConversion( + "failed to convert index_hashed_indexed_element_leaf to u64".to_string(), + ) + })?,#!/bin/bash # Verify this conversion path and panic usage in the reviewed file. rg -n -C3 'index_hashed_indexed_element_leaf|to_u64\(\)|unwrap\(\)' prover/client/tests/init_merkle_tree.rs🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prover/client/tests/init_merkle_tree.rs` around lines 222 - 225, The code currently calls merkle_inputs.index_hashed_indexed_element_leaf.to_u64().unwrap(), which can panic; change this to a fallible conversion that maps the failure into the existing Result error type (ProverClientError) and returns it instead of panicking—for example replace the unwrap with a safe conversion using to_u64().ok_or_else(...) or map_err(...) producing a ProverClientError with context, and propagate the error via the function's Result; update the construction of path_index to use that fallible result so the test path returns Err rather than panicking.prover/client/tests/batch_address_append.rs (2)
108-123: 🧹 Nitpick | 🔵 Trivial — Warmup phase discards circuit inputs; add a clarifying comment.
The call to get_batch_address_append_circuit_inputs during warmup mutates sparse_merkle_tree, changelog, and indexed_changelog as side effects, but the returned BatchAddressAppendInputs is discarded. This is intentional, but the intent isn't obvious to future readers. A brief comment would prevent confusion:
+ // Warmup: mutates sparse_merkle_tree, changelog, and indexed_changelog
+ // to simulate prior insertions. Returned circuit inputs are not needed.
  get_batch_address_append_circuit_inputs::<{ DEFAULT_BATCH_ADDRESS_TREE_HEIGHT as usize }>(
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prover/client/tests/batch_address_append.rs` around lines 108 - 123, Call to get_batch_address_append_circuit_inputs::<{ DEFAULT_BATCH_ADDRESS_TREE_HEIGHT as usize }>() mutates sparse_merkle_tree, changelog, and indexed_changelog but its returned BatchAddressAppendInputs is intentionally discarded during the warmup phase; add a brief inline comment above this call (referencing get_batch_address_append_circuit_inputs, sparse_merkle_tree, changelog, indexed_changelog, and BatchAddressAppendInputs) stating that the return value is intentionally ignored because the call is only used for its side effects that populate the trees/changelogs during warmup.
127-134: 🧹 Nitpick | 🔵 TrivialVariable name
remaining_valuesis misleading.The variable takes the full
new_element_valuesslice (all 10 elements), not a "remaining" subset after warmup. The warmup phase uses a separateprior_value(999), so nothing is being "left over" here.Consider renaming for clarity:
- let remaining_values = &new_element_values[..]; + let batch_values = &new_element_values[..]; let ( low_element_values, ... - ) = collect_non_inclusion_data(&relayer_merkle_tree, remaining_values); + ) = collect_non_inclusion_data(&relayer_merkle_tree, batch_values);,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prover/client/tests/batch_address_append.rs` around lines 127 - 134, The name remaining_values is misleading because it actually references the full slice &new_element_values[..] used by collect_non_inclusion_data with relayer_merkle_tree; rename the variable (for example to new_values_slice or new_element_values_slice) and update all usages (the binding passed into collect_non_inclusion_data and any subsequent references like low_element_values, low_element_indices, etc.) so the name reflects that it contains the complete new_element_values slice rather than a post-warmup "remaining" subset.sdk-libs/program-test/src/indexer/test_indexer.rs (1)
2586-2628: ⚠️ Potential issue | 🟠 Major — Retry loop still exits early on body/proof decode failures.
The retry block still uses ? for the response body read and proof decoding, so transient payload/read issues abort immediately instead of updating last_error and continuing retries.
Proposed fix
 let mut retries = 3;
 let mut last_error = "Failed to get proof from server".to_string();
 while retries > 0 {
     let response_result = client
         .post(format!("{}{}", SERVER_ADDRESS, PROVE_PATH))
         .header("Content-Type", "text/plain; charset=utf-8")
         .body(json_payload.clone())
         .send()
         .await;
     match response_result {
         Ok(response_result) => {
             let status = response_result.status();
-            let body = response_result.text().await.map_err(|error| {
-                IndexerError::CustomError(format!(
-                    "failed to read prover response body: {error}"
-                ))
-            })?;
-
-            if status.is_success() {
-                return Ok(ValidityProofWithContext {
-                    accounts: account_proof_inputs,
-                    addresses: address_proof_inputs,
-                    proof: build_compressed_proof(&body)?.into(),
-                });
-            }
-
-            let body_preview: String = body.chars().take(512).collect();
-            last_error = format!(
-                "prover returned HTTP {status} for validity proof request: {body_preview}"
-            );
+            match response_result.text().await {
+                Ok(body) => {
+                    if status.is_success() {
+                        match build_compressed_proof(&body) {
+                            Ok(proof) => {
+                                return Ok(ValidityProofWithContext {
+                                    accounts: account_proof_inputs,
+                                    addresses: address_proof_inputs,
+                                    proof: proof.into(),
+                                });
+                            }
+                            Err(error) => {
+                                let body_preview: String =
+                                    body.chars().take(512).collect();
+                                last_error = format!(
+                                    "failed to parse prover proof response: {error}; body: {body_preview}"
+                                );
+                            }
+                        }
+                    } else {
+                        let body_preview: String = body.chars().take(512).collect();
+                        last_error = format!(
+                            "prover returned HTTP {status} for validity proof request: {body_preview}"
+                        );
+                    }
+                }
+                Err(error) => {
+                    last_error =
+                        format!("failed to read prover response body: {error}");
+                }
+            }
         }
         Err(error) => {
             last_error = format!("failed to contact prover for validity proof: {error}");
         }
     }
#!/bin/bash
# Verify retry-loop early-return sites inside validity proof fetch path.
rg -n -C3 'response_result\.text\(\)\.await.*\?|build_compressed_proof\(&body\)\?' sdk-libs/program-test/src/indexer/test_indexer.rs
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sdk-libs/program-test/src/indexer/test_indexer.rs` around lines 2586 - 2628, The retry loop currently uses `?` on `response_result.text().await` and `build_compressed_proof(&body)?`, causing early returns instead of retrying; change both to explicit error handling that sets `last_error` and continues the loop: when calling `response_result.text().await` replace the `map_err(...)?` usage with a match (or map/inspect) that on Err sets `last_error = format!(...)` and continues to the next retry, and when `status.is_success()` try to decode the proof with `build_compressed_proof(&body)` using a match/if-let that on Err updates `last_error` (using `IndexerError::CustomError`-style message) and continues instead of returning; keep the existing `retries`/sleep logic and final `Err(IndexerError::CustomError(last_error))` unchanged.prover/client/src/proof_types/non_inclusion/v2/json.rs (1)
28-28: ⚠️ Potential issue | 🟠 Major — Schema mismatch: path_index serializes as u64 but server expects uint32.
The conversion logic properly handles BigInt→u64 failures, but there's no validation that the resulting value fits within u32::MAX. The server-side schema (as noted in prover/server/prover/v2/marshal_non_inclusion.go) expects PathIndex uint32. Values above u32::MAX will serialize successfully but fail server-side unmarshaling.
🔧 Add u32 range validation
 path_index: input
     .index_hashed_indexed_element_leaf
     .to_u64()
     .ok_or_else(|| {
         ProverClientError::IntegerConversion(format!(
             "failed to convert path index {} to u64",
             input.index_hashed_indexed_element_leaf
         ))
-    })?,
+    })
+    .and_then(|v| {
+        if v > u32::MAX as u64 {
+            Err(ProverClientError::IntegerConversion(format!(
+                "path index {} exceeds u32::MAX",
+                v
+            )))
+        } else {
+            Ok(v)
+        }
+    })?,
u64client-side, consider updating the server schema touint64.Also applies to: 54-62
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prover/client/src/proof_types/non_inclusion/v2/json.rs` at line 28, The struct field path_index (in prover/client/src/proof_types/non_inclusion/v2/json.rs) currently stores a u64 but the server expects a uint32; update the conversion/validation logic where BigInt → u64 is handled to enforce that the value fits within u32::MAX (i.e., check value <= u32::MAX) and return an error if it doesn't, or alternatively change the client-side field to u32 if that matches intended semantics; apply the same u32-range validation to the other similar fields referenced around lines 54-62 to prevent server-side unmarshaling failures.prover/client/src/proof_types/batch_append/proof_inputs.rs (1)
193-207: ⚠️ Potential issue | 🟠 Major — Inner Merkle proof length still vulnerable to panic.
The new outer-length validation (lines 131-142) is helpful, but line 193 still uses try_into().unwrap() without validating that each inner merkle_proof has exactly HEIGHT elements. A malformed proof will panic here rather than returning an error. Note how BatchAppendsCircuitInputs::new (lines 67-72) handles this correctly with map_err. The same pattern should be applied here for consistency.
🔧 Proposed fix
-    let merkle_proof_array = merkle_proof.try_into().unwrap();
+    let merkle_proof_array: [[u8; 32]; HEIGHT] = merkle_proof.try_into().map_err(|_| {
+        ProverClientError::GenericError(format!(
+            "invalid merkle proof length at index {}: expected {}",
+            i, HEIGHT
+        ))
+    })?;
Verify each finding against the current code and only fix it if needed. In `@prover/client/src/proof_types/batch_append/proof_inputs.rs` around lines 193 - 207, The code currently calls merkle_proof.try_into().unwrap() which can panic if an inner merkle_proof does not have exactly HEIGHT elements; change that to attempt the conversion and return a proper error instead (mirror the pattern used in BatchAppendsCircuitInputs::new), e.g. replace the unwrap with try_into().map_err(|e| /* convert to the function's error type with a descriptive message about invalid merkle proof length */)? and propagate that error so compute_root_from_merkle_proof is only called with a valid merkle_proof_array; reference symbols: merkle_proof.try_into().unwrap(), merkle_proof_array, HEIGHT, compute_root_from_merkle_proof, and BatchAppendsCircuitInputs::new for the error-mapping pattern.prover/client/src/proof.rs (1)
72-93: ⚠️ Potential issue | 🟠 Major — Pre-validation of hex string length still missing.
The size check at line 80 occurs after BigUint::from_str_radix parsing at line 77. A maliciously large hex string (e.g., thousands of digits) will still trigger CPU/memory allocation before being rejected. Pre-validating the trimmed string length would prevent this.
🔧 Add pre-validation before parsing
 let trimmed_str = hex_str
     .strip_prefix("0x")
     .or_else(|| hex_str.strip_prefix("0X"))
     .unwrap_or(hex_str);
+// Reject obviously oversized inputs before expensive parsing
+// 32 bytes = 64 hex chars max (leading zeros are fine)
+let normalized = trimmed_str.trim_start_matches('0');
+if normalized.len() > 64 {
+    return Err(ProverClientError::InvalidHexString(format!(
+        "{hex_str}: exceeds 32 bytes"
+    )));
+}
 let big_uint = num_bigint::BigUint::from_str_radix(trimmed_str, 16)
Verify each finding against the current code and only fix it if needed. In `@prover/client/src/proof.rs` around lines 72 - 93, The function deserialize_hex_string_to_be_bytes performs BigUint::from_str_radix on trimmed_str before ensuring it can't exceed 32 bytes; add a pre-parse length check using the trimmed_str (after stripping "0x"/"0X") to reject any hex string longer than 64 hex chars (64 > 32 bytes) and return ProverClientError::InvalidHexString early, so you avoid heavy allocations/parsing for maliciously large inputs; keep the existing post-parse byte-length checks for exactness.forester/src/epoch_manager.rs (2)
2542-2548: ⚠️ Potential issue | 🟠 Major — Use the correct eligibility key for TreeType::StateV2.
This call always uses tree_accounts.merkle_tree. For TreeType::StateV2, eligibility should be computed with the queue-based key, otherwise you can get false “not eligible” results.
🔧 Proposed fix
-    if !self.check_forester_eligibility(
+    let eligibility_pubkey = match tree_accounts.tree_type {
+        TreeType::StateV2 => tree_accounts.queue,
+        TreeType::AddressV2 => tree_accounts.merkle_tree,
+        _ => tree_accounts.queue,
+    };
+    if !self.check_forester_eligibility(
         &epoch_pda,
         current_light_slot,
-        &tree_accounts.merkle_tree,
+        &eligibility_pubkey,
         epoch_info.epoch,
         &epoch_info,
     )? {
Verify each finding against the current code and only fix it if needed. In `@forester/src/epoch_manager.rs` around lines 2542 - 2548, The call to check_forester_eligibility is always passing tree_accounts.merkle_tree but for TreeType::StateV2 you must pass the queue-based eligibility key instead; update the call site (the invocation that currently passes &tree_accounts.merkle_tree) to choose the key based on the tree type (check tree_accounts.tree_type or equivalent) and pass the queue key for TreeType::StateV2 (e.g., tree_accounts.queue or tree_accounts.queue_key) while keeping epoch_pda, current_light_slot and &epoch_info arguments the same.
943-955: ⚠️ Potential issue | 🟠 Major — Bound new-tree worker fan-out with the same shared tree-worker limiter.
The immediate new-tree path still creates one OS thread + dedicated Tokio runtime per tree, bypassing the bounded parallelism used in active-cycle processing. This can still oversubscribe the host during discovery bursts.
Also applies to: 1016-1023
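The bounding pattern this comment asks for can be sketched without the project's types. This is a std-only stand-in: a channel preloaded with N permits plays the role of the shared limiter (the real code would more likely share a tokio `Semaphore` with the active-cycle path); tree counts and the limit are illustrative.

```rust
// Sketch: bound concurrent per-tree workers with a fixed pool of permits.
// A worker must take a permit before doing work and returns it afterwards,
// so at most `max_parallel` workers run at once.
use std::sync::mpsc;
use std::sync::{Arc, Mutex};
use std::thread;

fn run_bounded(max_parallel: usize, trees: usize) -> Vec<usize> {
    let (permit_tx, permit_rx) = mpsc::channel::<()>();
    for _ in 0..max_parallel {
        permit_tx.send(()).unwrap(); // preload the permits
    }
    let permit_rx = Arc::new(Mutex::new(permit_rx));

    let mut handles = Vec::new();
    for tree in 0..trees {
        let permit_rx = Arc::clone(&permit_rx);
        let permit_tx = permit_tx.clone();
        handles.push(thread::spawn(move || {
            // Block until a permit is free, bounding the fan-out.
            permit_rx.lock().unwrap().recv().unwrap();
            // ... per-tree work would run here ...
            permit_tx.send(()).unwrap(); // return the permit
            tree
        }));
    }
    let mut done: Vec<usize> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    done.sort_unstable();
    done
}

fn main() {
    // All five trees complete, but never more than two run concurrently.
    assert_eq!(run_bounded(2, 5), vec![0, 1, 2, 3, 4]);
}
```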
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@forester/src/forester_status.rs`:
- Around line 729-746: Extract a small helper to remove the duplicated parsing
logic: create a function (e.g. parse_v1_queue_info) that accepts the
Option<Account> (queue_account) and a tree_type_label, performs unsafe {
parse_hash_set_from_bytes::<QueueAccount>(&acc.data) }, counts entries where
cell.sequence_number.is_none(), reads hs.get_capacity(), and returns (Some(len),
Some(cap)) on Ok or logs warn!(?error, "Failed to parse {} queue hash set",
tree_type_label) and returns (None, None) on Err; then replace the duplicated
blocks that call parse_hash_set_from_bytes::<QueueAccount> with calls to
parse_v1_queue_info for both StateV1 and AddressV1 occurrences.
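The extraction the prompt describes can be sketched generically. The parser is a hypothetical closure standing in for the unsafe `parse_hash_set_from_bytes::<QueueAccount>` call, and the `(len, capacity)` tuple stands in for counting unsequenced entries and `hs.get_capacity()`.

```rust
// Sketch of the suggested helper: one parse-or-warn function reused for both
// StateV1 and AddressV1. The `parse` closure is a stand-in for the real
// unsafe hash-set parsing.
fn parse_v1_queue_info<F>(
    queue_data: Option<&[u8]>,
    tree_type_label: &str,
    parse: F,
) -> (Option<usize>, Option<usize>)
where
    F: Fn(&[u8]) -> Result<(usize, usize), String>, // -> (len, capacity)
{
    match queue_data {
        Some(data) => match parse(data) {
            Ok((len, cap)) => (Some(len), Some(cap)),
            Err(_error) => {
                // Real code: warn!(?error, "Failed to parse {} queue hash set", tree_type_label)
                eprintln!("Failed to parse {tree_type_label} queue hash set");
                (None, None)
            }
        },
        None => (None, None),
    }
}

fn main() {
    let data = [1u8, 2, 3];
    let ok_parse = |d: &[u8]| -> Result<(usize, usize), String> { Ok((d.len(), 16)) };
    assert_eq!(parse_v1_queue_info(Some(&data[..]), "StateV1", ok_parse), (Some(3), Some(16)));
    let bad_parse = |_: &[u8]| -> Result<(usize, usize), String> { Err("bad bytes".into()) };
    assert_eq!(parse_v1_queue_info(Some(&data[..]), "AddressV1", bad_parse), (None, None));
    assert_eq!(parse_v1_queue_info(None, "StateV1", ok_parse), (None, None));
}
```

Both call sites then shrink to a single call with their own label, which is the deduplication being requested.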
In `@forester/src/processor/v2/proof_worker.rs`:
- Around line 135-155: The constructor new currently bubbles raw errors from
ProofClient::with_config so you can't tell which client failed; for each client
initialization (append_client, nullify_client, address_append_client) call
ProofClient::with_config and convert/map the error to include explicit context
naming the failing endpoint (e.g., "append", "update"/"nullify",
"address-append") and the URL being used before returning with ?, using map_err
or your project's error-context helper so each failure shows which
client/endpoint failed; update the three calls in new accordingly.
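The `map_err`-with-context pattern the prompt asks for looks like this in miniature. `with_config` is a hypothetical stand-in for `ProofClient::with_config`, and the URLs are illustrative, not the real endpoints.

```rust
// Sketch: each client construction names its endpoint and URL in the error,
// so a failure immediately identifies which of the three clients broke.
fn with_config(url: &str) -> Result<String, String> {
    if url.starts_with("http") {
        Ok(format!("client({url})"))
    } else {
        Err("bad url".to_string())
    }
}

fn build_clients() -> Result<(String, String, String), String> {
    let append = with_config("http://prover/append")
        .map_err(|e| format!("failed to init append client (http://prover/append): {e}"))?;
    let nullify = with_config("http://prover/update")
        .map_err(|e| format!("failed to init nullify client (http://prover/update): {e}"))?;
    let address_append = with_config("bad://addr")
        .map_err(|e| format!("failed to init address-append client (bad://addr): {e}"))?;
    Ok((append, nullify, address_append))
}

fn main() {
    // The third client fails, and the error says so explicitly.
    let err = build_clients().unwrap_err();
    assert!(err.contains("address-append"));
    assert!(err.contains("bad://addr"));
}
```

With `anyhow`, the same effect comes from `.with_context(|| ...)` instead of `map_err`.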
In `@forester/tests/e2e_test.rs`:
- Around line 802-817: The test currently creates a nested Tokio runtime inside
tokio::task::spawn_blocking and runs run_pipeline::<LightClient> there; replace
this with running the pipeline on the test's async runtime (use tokio::spawn
with the cloned config and the shutdown/work_report receivers) or, if isolation
is required, create the inner runtime with a matching worker_threads count to
the outer runtime (instead of 2) and avoid spawn_blocking for hosting an async
runtime; locate the spawn_blocking block, the
tokio::runtime::Builder::new_multi_thread() call, and
run_pipeline::<LightClient> invocation to implement the change.
In `@sdk-libs/client/src/local_test_validator.rs`:
- Around line 64-96: The loops currently clone strings even though `config` is
moved in; to avoid unnecessary allocations, iterate by value (e.g., use
into_iter on `config.validator_args`, `config.sbf_programs`, and
`config.upgradeable_programs`) and push the owned strings directly instead of
calling `.clone()` (replace `for arg in config.validator_args.iter() {
args.push(arg.clone()); }` with consuming iteration and likewise consume each
`(sbf_program.0, sbf_program.1)` and each `upgradeable_program.program_id`,
`program_path`, `upgrade_authority`), leaving other flags/conditions unchanged.
---
Outside diff comments:
In `@prover/client/src/helpers.rs`:
- Around line 97-99: The function create_vec_of_u32 currently uses
element.to_u32().unwrap() which can panic for out-of-range BigInt; change its
signature to return Result<Vec<u32>, ProverClientError> and attempt a fallible
conversion for each element (e.g., element.to_u32().ok_or(...) or equivalent),
map/collect the conversion into a Vec<u32> and propagate a descriptive
ProverClientError on failure to match the bigint_to_u8_32 style; update callers
accordingly to handle the Result.
- Around line 50-53: The slice bytes_be[..bytes_be.len()] is redundant; change
the allocation to use the vector's slice directly (e.g., replace let bytes =
&bytes_be[..bytes_be.len()]; with let bytes = bytes_be.as_slice(); or simply use
&bytes_be) and then perform array[(32 - bytes.len())..].copy_from_slice(bytes)
so variable names array, bytes, and bytes_be are preserved and the redundant
slice operation is removed.
---
Duplicate comments:
In `@forester/src/epoch_manager.rs`:
- Around line 2542-2548: The call to check_forester_eligibility is always
passing tree_accounts.merkle_tree but for TreeType::StateV2 you must pass the
queue-based eligibility key instead; update the call site (the invocation that
currently passes &tree_accounts.merkle_tree) to choose the key based on the tree
type (check tree_accounts.tree_type or equivalent) and pass the queue key for
TreeType::StateV2 (e.g., tree_accounts.queue or tree_accounts.queue_key) while
keeping epoch_pda, current_light_slot and &epoch_info arguments the same.
In `@prover/client/src/proof_types/batch_append/proof_inputs.rs`:
- Around line 193-207: The code currently calls merkle_proof.try_into().unwrap()
which can panic if an inner merkle_proof does not have exactly HEIGHT elements;
change that to attempt the conversion and return a proper error instead (mirror
the pattern used in BatchAppendsCircuitInputs::new), e.g. replace the unwrap
with try_into().map_err(|e| /* convert to the function's error type with a
descriptive message about invalid merkle proof length */)? and propagate that
error so compute_root_from_merkle_proof is only called with a valid
merkle_proof_array; reference symbols: merkle_proof.try_into().unwrap(),
merkle_proof_array, HEIGHT, compute_root_from_merkle_proof, and
BatchAppendsCircuitInputs::new for the error-mapping pattern.
In `@prover/client/src/proof_types/non_inclusion/v2/json.rs`:
- Line 28: The struct field path_index (in
prover/client/src/proof_types/non_inclusion/v2/json.rs) currently stores a u64
but the server expects a uint32; update the conversion/validation logic where
BigInt → u64 is handled to enforce that the value fits within u32::MAX (i.e.,
check value <= u32::MAX) and return an error if it doesn't, or alternatively
change the client-side field to u32 if that matches intended semantics; apply
the same u32-range validation to the other similar fields referenced around
lines 54-62 to prevent server-side unmarshaling failures.
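The u32-range validation described above amounts to a checked narrowing. A minimal sketch (the function name and `String` error type are illustrative, not the crate's API):

```rust
// Reject any u64 path index that would not survive the server's uint32
// unmarshaling; `u32::try_from` fails exactly when value > u32::MAX.
fn validate_path_index(value: u64) -> Result<u32, String> {
    u32::try_from(value).map_err(|_| format!("path_index {value} exceeds u32::MAX"))
}

fn main() {
    assert_eq!(validate_path_index(7), Ok(7));
    assert!(validate_path_index(u64::from(u32::MAX) + 1).is_err());
    println!("ok");
}
```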
In `@prover/client/src/proof.rs`:
- Around line 72-93: The function deserialize_hex_string_to_be_bytes performs
BigUint::from_str_radix on trimmed_str before ensuring it can't exceed 32 bytes;
add a pre-parse length check using the trimmed_str (after stripping "0x"/"0X")
to reject any hex string longer than 64 hex chars (64 > 32 bytes) and return
ProverClientError::InvalidHexString early, so you avoid heavy
allocations/parsing for maliciously large inputs; keep the existing post-parse
byte-length checks for exactness.
In `@prover/client/tests/batch_address_append.rs`:
- Around line 108-123: Call to get_batch_address_append_circuit_inputs::<{
DEFAULT_BATCH_ADDRESS_TREE_HEIGHT as usize }>() mutates sparse_merkle_tree,
changelog, and indexed_changelog but its returned BatchAddressAppendInputs is
intentionally discarded during the warmup phase; add a brief inline comment
above this call (referencing get_batch_address_append_circuit_inputs,
sparse_merkle_tree, changelog, indexed_changelog, and BatchAddressAppendInputs)
stating that the return value is intentionally ignored because the call is only
used for its side effects that populate the trees/changelogs during warmup.
- Around line 127-134: The name remaining_values is misleading because it
actually references the full slice &new_element_values[..] used by
collect_non_inclusion_data with relayer_merkle_tree; rename the variable (for
example to new_values_slice or new_element_values_slice) and update all usages
(the binding passed into collect_non_inclusion_data and any subsequent
references like low_element_values, low_element_indices, etc.) so the name
reflects that it contains the complete new_element_values slice rather than a
post-warmup "remaining" subset.
In `@prover/client/tests/init_merkle_tree.rs`:
- Around line 222-225: The code currently calls
merkle_inputs.index_hashed_indexed_element_leaf.to_u64().unwrap(), which can
panic; change this to a fallible conversion that maps the failure into the
existing Result error type (ProverClientError) and returns it instead of
panicking—for example replace the unwrap with a safe conversion using
to_u64().ok_or_else(...) or map_err(...) producing a ProverClientError with
context, and propagate the error via the function's Result; update the
construction of path_index to use that fallible result so the test path returns
Err rather than panicking.
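The fallible-conversion shape described above can be sketched without the `num-bigint` dependency; here `i128` stands in for the `BigInt` used in the prover client (an assumption), and `String` stands in for `ProverClientError`:

```rust
// Map a failed narrowing into a Result with context instead of unwrapping,
// so callers get Err rather than a panic on out-of-range values.
fn index_to_u64(value: i128) -> Result<u64, String> {
    u64::try_from(value)
        .map_err(|_| format!("index_hashed_indexed_element_leaf {value} does not fit in u64"))
}

fn main() {
    assert_eq!(index_to_u64(42), Ok(42));
    assert!(index_to_u64(-1).is_err());
    println!("ok");
}
```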
In `@sdk-libs/program-test/src/indexer/test_indexer.rs`:
- Around line 2586-2628: The retry loop currently uses `?` on
`response_result.text().await` and `build_compressed_proof(&body)?`, causing
early returns instead of retrying; change both to explicit error handling that
sets `last_error` and continues the loop: when calling
`response_result.text().await` replace the `map_err(...)?` usage with a match
(or map/inspect) that on Err sets `last_error = format!(...)` and continues to
the next retry, and when `status.is_success()` try to decode the proof with
`build_compressed_proof(&body)` using a match/if-let that on Err updates
`last_error` (using `IndexerError::CustomError`-style message) and continues
instead of returning; keep the existing `retries`/sleep logic and final
`Err(IndexerError::CustomError(last_error))` unchanged.
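The retry discipline described above (record the failure, keep looping, only fail after all attempts) can be sketched in a self-contained form. The `fetch` function here is a simulated transport, and `String` stands in for `IndexerError`:

```rust
// Simulated transport that fails on the first two attempts, then succeeds.
fn fetch(attempt: u32) -> Result<String, String> {
    if attempt < 2 {
        Err(format!("transient error on attempt {attempt}"))
    } else {
        Ok("proof-body".to_string())
    }
}

// Failures update `last_error` and the loop continues, instead of returning
// early with `?` as the flagged code did.
fn fetch_with_retries(retries: u32) -> Result<String, String> {
    let mut last_error = String::new();
    for attempt in 0..retries {
        match fetch(attempt) {
            Ok(body) => return Ok(body),
            Err(error) => {
                last_error = error; // record and keep retrying
            }
        }
    }
    Err(last_error)
}

fn main() {
    assert_eq!(fetch_with_retries(3), Ok("proof-body".to_string()));
    assert!(fetch_with_retries(1).is_err());
    println!("ok");
}
```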
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 99eed15f-d771-4fb3-ad93-1e597a222eb9
⛔ Files ignored due to path filters (13)
- program-tests/utils/src/mock_batched_forester.rs is excluded by none and included by none
- sdk-tests/sdk-anchor-test/programs/sdk-anchor-test/tests/read_only.rs is excluded by none and included by none
- sdk-tests/sdk-anchor-test/programs/sdk-anchor-test/tests/test.rs is excluded by none and included by none
- sdk-tests/sdk-native-test/tests/test.rs is excluded by none and included by none
- sdk-tests/sdk-pinocchio-v1-test/tests/test.rs is excluded by none and included by none
- sdk-tests/sdk-token-test/tests/ctoken_pda.rs is excluded by none and included by none
- sdk-tests/sdk-token-test/tests/decompress_full_cpi.rs is excluded by none and included by none
- sdk-tests/sdk-token-test/tests/pda_ctoken.rs is excluded by none and included by none
- sdk-tests/sdk-token-test/tests/test.rs is excluded by none and included by none
- sdk-tests/sdk-token-test/tests/test_4_invocations.rs is excluded by none and included by none
- sdk-tests/sdk-token-test/tests/test_4_transfer2.rs is excluded by none and included by none
- sdk-tests/sdk-token-test/tests/test_deposit.rs is excluded by none and included by none
- sdk-tests/sdk-v1-native-test/tests/test.rs is excluded by none and included by none
📒 Files selected for processing (29)
- forester/src/compressible/ctoken/compressor.rs
- forester/src/compressible/ctoken/state.rs
- forester/src/compressible/ctoken/types.rs
- forester/src/compressible/mint/compressor.rs
- forester/src/compressible/mint/state.rs
- forester/src/compressible/mint/types.rs
- forester/src/compressible/pda/compressor.rs
- forester/src/epoch_manager.rs
- forester/src/forester_status.rs
- forester/src/metrics.rs
- forester/src/priority_fee.rs
- forester/src/processor/v2/proof_cache.rs
- forester/src/processor/v2/proof_worker.rs
- forester/tests/e2e_test.rs
- forester/tests/legacy/batched_state_async_indexer_test.rs
- forester/tests/test_batch_append_spent.rs
- forester/tests/test_compressible_ctoken.rs
- forester/tests/test_compressible_mint.rs
- prover/client/src/helpers.rs
- prover/client/src/proof.rs
- prover/client/src/proof_types/batch_address_append/json.rs
- prover/client/src/proof_types/batch_append/proof_inputs.rs
- prover/client/src/proof_types/combined/v2/json.rs
- prover/client/src/proof_types/non_inclusion/v2/json.rs
- prover/client/tests/batch_address_append.rs
- prover/client/tests/init_merkle_tree.rs
- sdk-libs/client/src/interface/initialize_config.rs
- sdk-libs/client/src/local_test_validator.rs
- sdk-libs/program-test/src/indexer/test_indexer.rs
```diff
 let (queue_len, queue_cap) = queue_account
-    .map(|acc| {
-        unsafe { parse_hash_set_from_bytes::<QueueAccount>(&acc.data) }
-            .ok()
-            .map(|hs| {
+    .map(
+        |acc| match unsafe { parse_hash_set_from_bytes::<QueueAccount>(&acc.data) } {
+            Ok(hs) => {
                 let len = hs
                     .iter()
                     .filter(|(_, cell)| cell.sequence_number.is_none())
                     .count() as u64;
                 let cap = hs.get_capacity() as u64;
-                (len, cap)
-            })
-            .unwrap_or((0, 0))
-    })
-    .map(|(l, c)| (Some(l), Some(c)))
+                (Some(len), Some(cap))
+            }
+            Err(error) => {
+                warn!(?error, "Failed to parse AddressV1 queue hash set");
+                (None, None)
+            }
+        },
+    )
     .unwrap_or((None, None));
```
🧹 Nitpick | 🔵 Trivial
Consider extracting a helper to reduce duplication.

This block is structurally identical to the StateV1 parsing at lines 672-689. Both:

- Parse a hash set from bytes via `unsafe { parse_hash_set_from_bytes::<QueueAccount>(...) }`
- Filter by `sequence_number.is_none()` and count
- Return `(Some(len), Some(cap))` on success, `(None, None)` on failure with a warning
You could extract a small helper function to reduce duplication:
♻️ Optional: Extract helper function

```rust
fn parse_v1_queue_info(
    queue_account: Option<Account>,
    tree_type_label: &str,
) -> (Option<u64>, Option<u64>) {
    queue_account
        .map(|acc| {
            match unsafe { parse_hash_set_from_bytes::<QueueAccount>(&acc.data) } {
                Ok(hs) => {
                    let len = hs
                        .iter()
                        .filter(|(_, cell)| cell.sequence_number.is_none())
                        .count() as u64;
                    let cap = hs.get_capacity() as u64;
                    (Some(len), Some(cap))
                }
                Err(error) => {
                    warn!(?error, "Failed to parse {} queue hash set", tree_type_label);
                    (None, None)
                }
            }
        })
        .unwrap_or((None, None))
}
```

Then call it as:

```rust
// StateV1
let (queue_len, queue_cap) = parse_v1_queue_info(queue_account, "StateV1");
// AddressV1
let (queue_len, queue_cap) = parse_v1_queue_info(queue_account, "AddressV1");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@forester/src/forester_status.rs` around lines 729 - 746, Extract a small
helper to remove the duplicated parsing logic: create a function (e.g.
parse_v1_queue_info) that accepts the Option<Account> (queue_account) and a
tree_type_label, performs unsafe {
parse_hash_set_from_bytes::<QueueAccount>(&acc.data) }, counts entries where
cell.sequence_number.is_none(), reads hs.get_capacity(), and returns (Some(len),
Some(cap)) on Ok or logs warn!(?error, "Failed to parse {} queue hash set",
tree_type_label) and returns (None, None) on Err; then replace the duplicated
blocks that call parse_hash_set_from_bytes::<QueueAccount> with calls to
parse_v1_queue_info for both StateV1 and AddressV1 occurrences.
```diff
 fn new(config: &ProverConfig) -> crate::Result<Self> {
     Ok(Self {
         append_client: ProofClient::with_config(
             config.append_url.clone(),
             config.polling_interval,
             config.max_wait_time,
             config.api_key.clone(),
-        ),
+        )?,
         nullify_client: ProofClient::with_config(
             config.update_url.clone(),
             config.polling_interval,
             config.max_wait_time,
             config.api_key.clone(),
-        ),
+        )?,
         address_append_client: ProofClient::with_config(
             config.address_append_url.clone(),
             config.polling_interval,
             config.max_wait_time,
             config.api_key.clone(),
-        ),
+        )?,
-    }
+    })
```
🧹 Nitpick | 🔵 Trivial
Add explicit context to prover-client initialization failures
Great move making this constructor fallible. Please add per-client error context so failures identify which endpoint failed (append/update/address-append) instead of a generic bubbled error.
Proposed diff

```diff
+use anyhow::Context;
@@
 fn new(config: &ProverConfig) -> crate::Result<Self> {
     Ok(Self {
         append_client: ProofClient::with_config(
             config.append_url.clone(),
             config.polling_interval,
             config.max_wait_time,
             config.api_key.clone(),
-        )?,
+        )
+        .context("Failed to initialize append proof client")?,
         nullify_client: ProofClient::with_config(
             config.update_url.clone(),
             config.polling_interval,
             config.max_wait_time,
             config.api_key.clone(),
-        )?,
+        )
+        .context("Failed to initialize nullify proof client")?,
         address_append_client: ProofClient::with_config(
             config.address_append_url.clone(),
             config.polling_interval,
             config.max_wait_time,
             config.api_key.clone(),
-        )?,
+        )
+        .context("Failed to initialize address-append proof client")?,
     })
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@forester/src/processor/v2/proof_worker.rs` around lines 135 - 155, The
constructor new currently bubbles raw errors from ProofClient::with_config so
you can't tell which client failed; for each client initialization
(append_client, nullify_client, address_append_client) call
ProofClient::with_config and convert/map the error to include explicit context
naming the failing endpoint (e.g., "append", "update"/"nullify",
"address-append") and the URL being used before returning with ?, using map_err
or your project's error-context helper so each failure shows which
client/endpoint failed; update the three calls in new accordingly.
```rust
let config = Arc::new(config.clone());
let service_handle = tokio::task::spawn_blocking(move || {
    let runtime = tokio::runtime::Builder::new_multi_thread()
        .worker_threads(2)
        .enable_all()
        .build()?;
    runtime.block_on(run_pipeline::<LightClient>(
        config,
        None,
        None,
        shutdown_receiver,
        Some(shutdown_compressible_receiver),
        Some(shutdown_bootstrap_receiver),
        work_report_sender,
    ))
});
```
🧹 Nitpick | 🔵 Trivial
Nested runtime inside spawn_blocking is unconventional but may be intentional for test isolation.
The pattern here creates a blocking task that spawns its own 2-thread Tokio runtime and runs the async pipeline inside it. While this works, it has some implications worth noting:

- `spawn_blocking` is typically for CPU-bound or blocking sync code, not for hosting another async runtime
- The inner runtime with only 2 worker threads may bottleneck the pipeline if it expects more concurrency
- The outer test runtime's 16 worker threads (line 192) won't be utilized by the pipeline

If the goal is test isolation (preventing the pipeline from interfering with the test's async context), this achieves it. However, if performance matters, consider matching the worker thread count to the outer runtime or using a regular `tokio::spawn` with a shared runtime.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@forester/tests/e2e_test.rs` around lines 802 - 817, The test currently
creates a nested Tokio runtime inside tokio::task::spawn_blocking and runs
run_pipeline::<LightClient> there; replace this with running the pipeline on the
test's async runtime (use tokio::spawn with the cloned config and the
shutdown/work_report receivers) or, if isolation is required, create the inner
runtime with a matching worker_threads count to the outer runtime (instead of 2)
and avoid spawn_blocking for hosting an async runtime; locate the spawn_blocking
block, the tokio::runtime::Builder::new_multi_thread() call, and
run_pipeline::<LightClient> invocation to implement the change.
```rust
if !config.enable_indexer {
    path.push_str(" --skip-indexer");
    args.push("--skip-indexer".to_string());
}

if let Some(limit_ledger_size) = config.limit_ledger_size {
    path.push_str(&format!(" --limit-ledger-size {}", limit_ledger_size));
    args.push("--limit-ledger-size".to_string());
    args.push(limit_ledger_size.to_string());
}

for sbf_program in config.sbf_programs.iter() {
    path.push_str(&format!(
        " --sbf-program {} {}",
        sbf_program.0, sbf_program.1
    ));
    args.push("--sbf-program".to_string());
    args.push(sbf_program.0.clone());
    args.push(sbf_program.1.clone());
}

for upgradeable_program in config.upgradeable_programs.iter() {
    path.push_str(&format!(
        " --upgradeable-program {} {} {}",
        upgradeable_program.program_id,
        upgradeable_program.program_path,
        upgradeable_program.upgrade_authority
    ));
    args.push("--upgradeable-program".to_string());
    args.push(upgradeable_program.program_id.clone());
    args.push(upgradeable_program.program_path.clone());
    args.push(upgradeable_program.upgrade_authority.clone());
}

if !config.enable_prover {
    path.push_str(" --skip-prover");
    args.push("--skip-prover".to_string());
}

if config.use_surfpool {
    path.push_str(" --use-surfpool");
    args.push("--use-surfpool".to_string());
}

for arg in config.validator_args.iter() {
    path.push_str(&format!(" {}", arg));
    args.push(arg.clone());
}
```
🧹 Nitpick | 🔵 Trivial
Clean refactor to argument vector construction.
Building the command as an executable + args vector rather than a shell command string is a solid improvement—it's safer (no shell injection concerns) and easier to debug. The structured approach also makes the logic clearer.
Minor optimization opportunity: since `config` is moved into this function, you could take ownership of the strings instead of cloning. For example:

```rust
for arg in config.validator_args { // into_iter implicit
    args.push(arg);
}
```

This avoids allocating clones, though in a test utility the impact is negligible.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sdk-libs/client/src/local_test_validator.rs` around lines 64 - 96, The loops
currently clone strings even though `config` is moved in; to avoid unnecessary
allocations, iterate by value (e.g., use into_iter on `config.validator_args`,
`config.sbf_programs`, and `config.upgradeable_programs`) and push the owned
strings directly instead of calling `.clone()` (replace `for arg in
config.validator_args.iter() { args.push(arg.clone()); }` with consuming
iteration and likewise consume each `(sbf_program.0, sbf_program.1)` and each
`upgradeable_program.program_id`, `program_path`, `upgrade_authority`), leaving
other flags/conditions unchanged.
0454ae4 to
1041e25
Compare
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
prover/client/src/helpers.rs (1)

97-99: 🧹 Nitpick | 🔵 Trivial

Remaining `unwrap()` could panic on large BigInt values.

While this PR improves error handling in `bigint_to_u8_32` and `compute_root_from_merkle_proof`, `create_vec_of_u32` still uses `.unwrap()` on `to_u32()`. If the `BigInt` exceeds `u32::MAX`, this will panic.

♻️ Consider returning Result for consistency

```diff
-pub fn create_vec_of_u32(number_of_utxos: usize, element: &BigInt) -> Vec<u32> {
-    vec![element.to_u32().unwrap(); number_of_utxos]
+pub fn create_vec_of_u32(number_of_utxos: usize, element: &BigInt) -> Result<Vec<u32>, ProverClientError> {
+    let val = element.to_u32().ok_or_else(|| {
+        ProverClientError::IntegerConversion(format!("failed to convert {} to u32", element))
+    })?;
+    Ok(vec![val; number_of_utxos])
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prover/client/src/helpers.rs` around lines 97 - 99, create_vec_of_u32 currently calls element.to_u32().unwrap() which can panic for values > u32::MAX; change the function to return a Result and propagate a conversion error instead of unwrapping: update the signature of create_vec_of_u32(number_of_utxos: usize, element: &BigInt) to return Result<Vec<u32>, E> (use the crate's existing error type or a suitable error like anyhow::Error/String), call element.to_u32().ok_or_else(|| /* construct descriptive error including element */) to get the u32, then return Ok(vec![converted_value; number_of_utxos]) so large BigInt values yield a controlled error rather than a panic.

forester/src/epoch_manager.rs (1)
1916-1957: ⚠️ Potential issue | 🟠 Major

Run `process_queue()` directly without wrapping in `spawn_blocking`.

`process_queue()` is fully async and contains only async I/O (RPC calls). Wrapping it in `spawn_blocking` + `Handle::block_on()` prevents cancellation: when this task is dropped during shutdown, the blocking thread detaches and continues running, consuming a thread pool slot for the entire active phase. Additionally, holding the semaphore permit across a blocking boundary is unnecessary. Move the semaphore acquire into the async context directly before awaiting `process_queue()`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@forester/src/epoch_manager.rs` around lines 1916 - 1957, the current task spawns a blocking thread and uses runtime_handle.block_on to call process_queue(), which prevents cancellation and holds a semaphore permit across a blocking boundary; change the task to call self_clone.process_queue(...).await directly inside the async closure (remove tokio::task::spawn_blocking and Handle::block_on usage) and move the worker_slots.acquire_owned().await so the permit is acquired in the async context immediately before awaiting process_queue(); ensure the permit is dropped after process_queue completes (do not hold it across any blocking call) and keep the same returned tuple shape (tree_pubkey, result) so the outer code still receives the result.
♻️ Duplicate comments (11)
sdk-libs/program-test/src/indexer/test_indexer.rs (1)
2594-2608: ⚠️ Potential issue | 🟠 Major

Don't bypass the retry loop on body/proof decode failures.

Lines 2597-2601 use `?` to read the response body, and line 2607 uses `?` for `build_compressed_proof`. Both exit the retry loop immediately on failure instead of consuming remaining attempts. A transient truncated response or malformed proof payload should update `last_error` and continue, just like send/status failures do.

🛠️ Suggested fix

```diff
 match response_result {
     Ok(response_result) => {
         let status = response_result.status();
-        let body = response_result.text().await.map_err(|error| {
-            IndexerError::CustomError(format!(
-                "failed to read prover response body: {error}"
-            ))
-        })?;
-
-        if status.is_success() {
-            return Ok(ValidityProofWithContext {
-                accounts: account_proof_inputs,
-                addresses: address_proof_inputs,
-                proof: build_compressed_proof(&body)?.into(),
-            });
-        }
-
-        let body_preview: String = body.chars().take(512).collect();
-        last_error = format!(
-            "prover returned HTTP {status} for validity proof request: {body_preview}"
-        );
+        match response_result.text().await {
+            Ok(body) => {
+                if status.is_success() {
+                    match build_compressed_proof(&body) {
+                        Ok(proof) => {
+                            return Ok(ValidityProofWithContext {
+                                accounts: account_proof_inputs,
+                                addresses: address_proof_inputs,
+                                proof: proof.into(),
+                            });
+                        }
+                        Err(error) => {
+                            let body_preview: String =
+                                body.chars().take(512).collect();
+                            last_error = format!(
+                                "failed to parse prover proof response: {error}; body: {body_preview}"
+                            );
+                        }
+                    }
+                } else {
+                    let body_preview: String = body.chars().take(512).collect();
+                    last_error = format!(
+                        "prover returned HTTP {status} for validity proof request: {body_preview}"
+                    );
+                }
+            }
+            Err(error) => {
+                last_error =
+                    format!("failed to read prover response body: {error}");
+            }
+        }
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sdk-libs/program-test/src/indexer/test_indexer.rs` around lines 2594 - 2608, the response body read and proof decoding currently use `?` inside the retry loop (in the `response_result` handling and `build_compressed_proof(&body)?`), which prematurely exits on transient failures; change these to capture errors into `last_error` (e.g., map the `response_result.text().await` and `build_compressed_proof(&body)` failures into `IndexerError::CustomError`), set `last_error = Some(...)`, and continue the retry loop instead of returning or propagating the error; ensure the successful path still returns `ValidityProofWithContext { accounts: account_proof_inputs, addresses: address_proof_inputs, proof: ... }` when both read and decode succeed.
135-155: ⚠️ Potential issue | 🟡 Minor

Still missing endpoint-specific error context in client initialization.

On Line 137, Line 143, and Line 149, raw `with_config(...)` errors are still propagated without identifying which endpoint failed. This makes startup failures harder to debug.

Suggested patch

```diff
 fn new(config: &ProverConfig) -> crate::Result<Self> {
+    let append_url = config.append_url.clone();
+    let update_url = config.update_url.clone();
+    let address_append_url = config.address_append_url.clone();
+
     Ok(Self {
         append_client: ProofClient::with_config(
-            config.append_url.clone(),
+            append_url.clone(),
             config.polling_interval,
             config.max_wait_time,
             config.api_key.clone(),
-        )?,
+        )
+        .map_err(|e| anyhow::anyhow!("Failed to initialize append proof client (url={}): {}", append_url, e))?,
         nullify_client: ProofClient::with_config(
-            config.update_url.clone(),
+            update_url.clone(),
             config.polling_interval,
             config.max_wait_time,
             config.api_key.clone(),
-        )?,
+        )
+        .map_err(|e| anyhow::anyhow!("Failed to initialize nullify proof client (url={}): {}", update_url, e))?,
         address_append_client: ProofClient::with_config(
-            config.address_append_url.clone(),
+            address_append_url.clone(),
             config.polling_interval,
             config.max_wait_time,
             config.api_key.clone(),
-        )?,
+        )
+        .map_err(|e| anyhow::anyhow!("Failed to initialize address-append proof client (url={}): {}", address_append_url, e))?,
     })
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@forester/src/processor/v2/proof_worker.rs` around lines 135 - 155, the client initializations in the ProofWorker::new constructor propagate raw errors from ProofClient::with_config for append_client, nullify_client, and address_append_client; update each call to wrap or map the error with endpoint-specific context (e.g., include config.append_url / config.update_url / config.address_append_url or descriptive names) so the resulting error identifies which endpoint failed during startup; modify the three ProofClient::with_config invocations in new to convert the ? into a context-adding error wrapper (using map_err, with_context, or anyhow::Context) that includes the endpoint URL/name and then return that enriched error.

forester/tests/test_batch_append_spent.rs (1)
331-356: ⚠️ Potential issue | 🟠 Major

Timeout + `spawn_blocking` can leak the forester task past test shutdown.

On Line 352, `timeout(..., service_handle)` consumes the handle; if elapsed, the handle is dropped while the `spawn_blocking` task can keep running detached. This risks cross-test interference and nondeterministic failures.

Proposed direction

```diff
-    let service_handle = tokio::task::spawn_blocking(move || {
-        let runtime = tokio::runtime::Builder::new_multi_thread()
-            .worker_threads(2)
-            .enable_all()
-            .build()?;
-        runtime.block_on(run_pipeline::<LightClient>(
-            config,
-            None,
-            None,
-            shutdown_receiver,
-            Some(shutdown_compressible_receiver),
-            None, // shutdown_bootstrap
-            work_report_sender,
-        ))
-    });
+    let mut service_handle = tokio::spawn(run_pipeline::<LightClient>(
+        config,
+        None,
+        None,
+        shutdown_receiver,
+        Some(shutdown_compressible_receiver),
+        None, // shutdown_bootstrap
+        work_report_sender,
+    ));
@@
-    let join_result = timeout(Duration::from_secs(5), service_handle)
+    let join_result = timeout(Duration::from_secs(5), &mut service_handle)
         .await
         .expect("forester service did not shut down within timeout");
     let service_result = join_result.expect("forester service task panicked");
     service_result.expect("run_pipeline::<LightClient>() failed");
```

In Tokio (latest), if `tokio::time::timeout(duration, join_handle).await` returns `Err(Elapsed)`, what happens to the wrapped `JoinHandle` future, and does dropping a `JoinHandle` from `tokio::task::spawn_blocking` cancel the underlying task?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@forester/tests/test_batch_append_spent.rs` around lines 331 - 356, the test currently uses timeout(..., service_handle).await which, on timeout, drops the JoinHandle from tokio::task::spawn_blocking and does not cancel the underlying blocking task (spawn_blocking tasks keep running), causing potential leaks; change the logic so that if timeout returns Err(Elapsed) you explicitly cancel and then join the spawned task: call service_handle.abort() (or otherwise signal cancellation) and then await the JoinHandle to completion (optionally with another timeout) to ensure run_pipeline::<LightClient> (the task spawned into service_handle) has terminated; ensure you handle and unwrap the JoinHandle result (service_result) or the abort result to fail the test if the task did not exit.

prover/client/tests/batch_address_append.rs (2)
108-123: 🧹 Nitpick | 🔵 Trivial

Consider adding a comment clarifying the warmup phase's purpose.

The warmup call mutates `sparse_merkle_tree`, `changelog`, and `indexed_changelog` as side effects, but the returned circuit inputs are intentionally discarded. A brief comment would help future readers understand this is by design.

📝 Suggested documentation

```diff
+    // Warmup phase: mutates sparse_merkle_tree, changelog, and indexed_changelog
+    // to simulate prior insertions; the returned circuit inputs are not needed.
     get_batch_address_append_circuit_inputs::<{ DEFAULT_BATCH_ADDRESS_TREE_HEIGHT as usize }>(
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prover/client/tests/batch_address_append.rs` around lines 108 - 123, add a brief inline comment above the call to get_batch_address_append_circuit_inputs::<{ DEFAULT_BATCH_ADDRESS_TREE_HEIGHT as usize }> explaining that this invocation is a "warmup" phase whose return value is intentionally ignored, and that the call is performed solely for its side effects on sparse_merkle_tree, changelog, and indexed_changelog (mutating them with proofs/state) so readers know the behavior is intentional and not a missing use of the result.
127-134: 🧹 Nitpick | 🔵 Trivial

Variable name `remaining_values` is misleading.

`remaining_values` binds to `&new_element_values[..]`, which is the full slice of all 10 elements, not a subset remaining after warmup. Consider renaming to `batch_values` or using `new_element_values` directly.

✏️ Suggested rename

```diff
-    let remaining_values = &new_element_values[..];
+    let batch_values = &new_element_values[..];
     let (
         low_element_values,
         low_element_indices,
         low_element_next_indices,
         low_element_next_values,
         low_element_proofs,
-    ) = collect_non_inclusion_data(&relayer_merkle_tree, remaining_values);
-    let new_element_values = remaining_values
+    ) = collect_non_inclusion_data(&relayer_merkle_tree, batch_values);
+    let new_element_values = batch_values
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prover/client/tests/batch_address_append.rs` around lines 127 - 134, the variable remaining_values is misleading because it's assigned &new_element_values[..] (the full batch); rename it to batch_values (or remove the binding and pass new_element_values directly) where it's used in collect_non_inclusion_data to make intent clear; update the binding name and all references (remaining_values → batch_values) or replace the binding with new_element_values when calling collect_non_inclusion_data to avoid confusion.
131-142:⚠️ Potential issue | 🟠 MajorGood batch-level validation, but inner Merkle proof depths remain unchecked.
The new validation correctly ensures the outer vectors have matching lengths. However, each inner
merkle_proofstill isn't validated againstHEIGHTbefore Line 193'stry_into().unwrap(), which can panic on malformed proofs.🔧 Add inner proof depth validation
for (name, len) in [ ("old_leaves", old_leaves.len()), ("leaves", leaves.len()), ("merkle_proofs", merkle_proofs.len()), ] { if len != batch_len { return Err(ProverClientError::GenericError(format!( "invalid batch append inputs: {name} len {len} != batch_size {batch_len}" ))); } } + if let Some((i, proof)) = merkle_proofs + .iter() + .enumerate() + .find(|(_, proof)| proof.len() != HEIGHT) + { + return Err(ProverClientError::GenericError(format!( + "invalid batch append inputs: merkle_proofs[{i}] len {} != expected height {HEIGHT}", + proof.len() + ))); + }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prover/client/src/proof_types/batch_append/proof_inputs.rs` around lines 131 - 142, The loop validates outer vector lengths but does not check each inner merkle proof depth, so before the code that does try_into().unwrap() on merkle_proof entries you must validate each inner proof length against HEIGHT to avoid panics: iterate over merkle_proofs (and corresponding entries if needed) and ensure each proof.len() == HEIGHT (or the expected constant) and return Err(ProverClientError::GenericError(...)) on mismatch; update the function handling batch append inputs (references: batch_len, merkle_proofs, HEIGHT, and the code path that calls try_into().unwrap()) to perform this check prior to conversion.prover/client/tests/init_merkle_tree.rs (1)
222-225: 🧹 Nitpick | 🔵 Trivial
Test still uses `unwrap()` for fallible conversion.
The change from `to_u32()` to `to_u64()` aligns with the type change in `NonInclusionJsonStruct`, but the `.unwrap()` remains. While acceptable in tests, using explicit error handling would provide clearer failure messages.
♻️ Suggested improvement
```diff
             path_index: merkle_inputs
                 .index_hashed_indexed_element_leaf
                 .to_u64()
-                .unwrap(),
+                .expect("index_hashed_indexed_element_leaf should fit in u64"),
```
Verify each finding against the current code and only fix it if needed. In `@prover/client/tests/init_merkle_tree.rs` around lines 222 - 225, the test currently calls merkle_inputs.index_hashed_indexed_element_leaf.to_u64().unwrap(), which can panic with an opaque message; replace the unwrap with explicit error handling such as using .expect("convert index_hashed_indexed_element_leaf to u64 failed: {value:?}") or mapping the Option/Result to a test-specific error message so failures are clearer. Locate the conversion call on path_index in the test and swap .unwrap() for .expect(...) (or propagate using .ok_or_else/unwrap_or_else with a descriptive message) referencing merkle_inputs and index_hashed_indexed_element_leaf.
prover/client/src/proof.rs (1)
72-94: ⚠️ Potential issue | 🟠 Major
Pre-validate hex string length before BigUint parsing.
The size check happens after `BigUint::from_str_radix` (Line 77), meaning a maliciously large hex string can trigger memory allocation before rejection. Check the trimmed string length first to avoid this.
🔧 Add early length check
```diff
     let trimmed_str = hex_str
         .strip_prefix("0x")
         .or_else(|| hex_str.strip_prefix("0X"))
         .unwrap_or(hex_str);
+    // Reject before parsing: 64 hex chars = 32 bytes max
+    let normalized = trimmed_str.trim_start_matches('0');
+    if normalized.len() > 64 {
+        return Err(ProverClientError::InvalidHexString(format!(
+            "{hex_str}: exceeds 32 bytes"
+        )));
+    }
     let big_uint = num_bigint::BigUint::from_str_radix(trimmed_str, 16)
```
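The length guard itself is independent of `num_bigint` and can be exercised in isolation. A sketch with a hypothetical `check_hex_len` helper and a plain `String` error in place of `ProverClientError::InvalidHexString`:

```rust
/// Strip an optional 0x/0X prefix and reject hex strings whose significant
/// digits exceed 64 chars (32 bytes), before any big-integer allocation.
fn check_hex_len(hex_str: &str) -> Result<&str, String> {
    let trimmed = hex_str
        .strip_prefix("0x")
        .or_else(|| hex_str.strip_prefix("0X"))
        .unwrap_or(hex_str);
    // Leading zeros carry no value, so only count significant digits.
    let normalized = trimmed.trim_start_matches('0');
    if normalized.len() > 64 {
        return Err(format!("{hex_str}: exceeds 32 bytes"));
    }
    Ok(trimmed)
}
```

Because the check runs on the borrowed string, oversized inputs are rejected without allocating a big integer first.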
Verify each finding against the current code and only fix it if needed. In `@prover/client/src/proof.rs` around lines 72 - 94, the function deserialize_hex_string_to_be_bytes currently parses the entire trimmed_str with BigUint before checking byte length; add an early validation on trimmed_str (after computing trimmed_str via strip_prefix) to reject hex strings longer than 64 hex chars (which would exceed 32 bytes) and return ProverClientError::InvalidHexString before calling num_bigint::BigUint::from_str_radix; keep the existing error messages/format, and leave the rest of the logic (big_uint_bytes conversion, padding, and final try_into) unchanged.
prover/client/src/proof_types/non_inclusion/v2/json.rs (1)
27-28: ⚠️ Potential issue | 🟠 Major
Schema mismatch: client serializes `path_index` as `u64` while server expects `uint32`.
The field type widened from `u32` to `u64`, and the conversion properly handles errors. However, the Go server's `NonInclusionJsonStruct` likely still defines `PathIndex` as `uint32`. Values exceeding `u32::MAX` will be serialized correctly but may fail or truncate during server-side deserialization.
Consider either: (1) validating the value fits within `u32` bounds before serialization, or (2) updating the Go server schema to accept `uint64`.

```bash
#!/bin/bash
# Check Go server schema for PathIndex type
rg -n "PathIndex\s+u?int" --type go prover/server/ -A1 -B1
```

Also applies to: 54-62
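Option (1) amounts to a checked narrowing at serialization time. A minimal sketch, assuming a hypothetical `path_index_to_u32` helper with a plain `String` error:

```rust
/// Refuse to serialize a path index the Go server's uint32 field cannot hold.
fn path_index_to_u32(path_index: u64) -> Result<u32, String> {
    u32::try_from(path_index).map_err(|_| {
        format!("pathIndex {path_index} exceeds u32::MAX; server schema expects uint32")
    })
}
```

`u32::try_from` makes the narrowing explicit, so an out-of-range value fails loudly on the client instead of truncating server-side.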
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prover/client/src/proof_types/non_inclusion/v2/json.rs` around lines 27 - 28, the client now stores path_index as u64 (field path_index in the non_inclusion v2 JSON struct) but the Go server expects a uint32; before serializing/send the JSON either enforce the u32 bound or revert the type: add a validation that path_index <= u32::MAX and return an error if it exceeds that limit, then cast to u32 for serialization (or alternatively change the field back to u32 if you want compile-time enforcement); apply this check where the NonInclusion v2 JSON object is constructed/serialized in prover/client/src/proof_types/non_inclusion/v2/json.rs (the path_index field with serde rename "pathIndex").
forester/src/epoch_manager.rs (2)
2542-2548: ⚠️ Potential issue | 🟠 Major
Use the queue key for `StateV2` eligibility.
`check_forester_eligibility()` hashes the supplied pubkey into the eligible index. Passing `merkle_tree` here disagrees with the queue-based scheduling key for `StateV2`, so the forester can be marked ineligible for slots it actually owns. Select the eligibility key by tree type (`queue` for `StateV2`, `merkle_tree` for `AddressV2`).
🔧 Proposed fix
```diff
-        if !self.check_forester_eligibility(
+        let eligibility_pubkey = match tree_accounts.tree_type {
+            TreeType::StateV2 => tree_accounts.queue,
+            TreeType::AddressV2 => tree_accounts.merkle_tree,
+            _ => tree_accounts.merkle_tree,
+        };
+        if !self.check_forester_eligibility(
             &epoch_pda,
             current_light_slot,
-            &tree_accounts.merkle_tree,
+            &eligibility_pubkey,
             epoch_info.epoch,
             &epoch_info,
         )? {
```
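The key selection reduces to a pure function of the tree type. A sketch with a stub enum and byte-array pubkeys; the real `TreeType` and `TreeAccounts` live in the forester codebase and this stub only mirrors the variants named in the review:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum TreeType {
    StateV2,
    AddressV2,
    Other,
}

/// StateV2 schedules by queue; every other tree type keys off the merkle tree.
fn eligibility_key(tree_type: TreeType, queue: [u8; 32], merkle_tree: [u8; 32]) -> [u8; 32] {
    match tree_type {
        TreeType::StateV2 => queue,
        _ => merkle_tree,
    }
}
```

Factoring the selection out this way also makes the per-tree-type keying trivially unit-testable.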
Verify each finding against the current code and only fix it if needed. In `@forester/src/epoch_manager.rs` around lines 2542 - 2548, The call to check_forester_eligibility is hashing the supplied pubkey but currently passes tree_accounts.merkle_tree for all tree types; update the argument selection so StateV2 uses the queue pubkey and AddressV2 uses the merkle_tree pubkey: inspect the tree type (e.g. match on tree_accounts or the enum that distinguishes StateV2 vs AddressV2) and pass tree_accounts.queue when the tree is StateV2 and tree_accounts.merkle_tree when it is AddressV2 to the check_forester_eligibility(...) call that currently receives epoch_pda, current_light_slot, &tree_accounts.merkle_tree, epoch_info.epoch, &epoch_info.
943-954: ⚠️ Potential issue | 🟠 Major
New-tree injection still bypasses the global worker limit.
This path spins up one OS thread plus a dedicated 2-thread runtime per discovered tree and never acquires the shared semaphore used in `perform_active_work()`. A burst of trees can still oversubscribe the host and starve scheduled work. Route immediate new-tree processing through the same shared worker pool/runtime as regular tree processing.
Also applies to: 1016-1023
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@forester/src/epoch_manager.rs` around lines 943 - 954, The new-tree discovery path currently spawns an OS thread and builds a dedicated Tokio runtime (see cancel_tx/cancel_rx, completion_tx/completion_rx, thread_handle, runtime, runtime.block_on) which bypasses the shared worker pool and semaphore used by perform_active_work(); change this to schedule the new-tree processing onto the existing shared runtime/worker pool instead of spawning a thread or creating a per-tree runtime, and ensure the task acquires the same shared semaphore (the one used by perform_active_work()) before doing CPU/IO work so it is counted against the global worker limit and cannot oversubscribe the host.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@forester/src/epoch_manager.rs`:
- Around line 477-480: Make registration atomic with shutdown by holding the
same mutex and checking shutdown_requested before pushing: in
register_new_tree_worker, acquire the mutex used by shutdown_new_tree_workers
(the one protecting new_tree_workers and shutdown_requested), check
shutdown_requested and if set, cancel and join the provided NewTreeWorker
instead of pushing it; otherwise push the worker and release the lock. Mirror
that locking/check logic used in shutdown_new_tree_workers so no worker can be
pushed after a drain; update add_new_tree usage if it currently starts the OS
thread before calling register_new_tree_worker so the thread is started only
after successful registration or ensure started threads are canceled/joined when
registration is rejected. Ensure you reference and use the existing symbols
new_tree_workers, shutdown_requested, register_new_tree_worker,
shutdown_new_tree_workers, and add_new_tree when making changes.
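The register/shutdown race closes when both paths take the same lock. A std-only sketch with a hypothetical `WorkerRegistry`, using a `String` id to stand in for `NewTreeWorker`:

```rust
use std::sync::Mutex;

/// One mutex guards both the worker list and the shutdown flag, so no worker
/// can be registered after shutdown has drained the list.
struct WorkerRegistry {
    inner: Mutex<(Vec<String>, bool)>, // (workers, shutdown_requested)
}

impl WorkerRegistry {
    fn new() -> Self {
        Self { inner: Mutex::new((Vec::new(), false)) }
    }

    /// Returns Err(worker) if shutdown already started; the caller must
    /// cancel and join it instead of leaking the thread.
    fn register(&self, worker: String) -> Result<(), String> {
        let mut guard = self.inner.lock().unwrap();
        if guard.1 {
            return Err(worker);
        }
        guard.0.push(worker);
        Ok(())
    }

    /// Sets the flag and drains workers under the same lock.
    fn shutdown(&self) -> Vec<String> {
        let mut guard = self.inner.lock().unwrap();
        guard.1 = true;
        std::mem::take(&mut guard.0)
    }
}
```

Because the flag check and the push happen under one guard, there is no window in which a worker can slip in between the drain and the flag being set.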
---
Outside diff comments:
In `@forester/src/epoch_manager.rs`:
- Around line 1916-1957: The current task spawns a blocking thread and uses
runtime_handle.block_on to call process_queue(), which prevents cancellation and
holds a semaphore permit across a blocking boundary; change the task to call
self_clone.process_queue(...).await directly inside the async closure (remove
tokio::task::spawn_blocking and Handle::block_on usage) and move the
worker_slots.acquire_owned().await so the permit is acquired in the async
context immediately before awaiting process_queue(); ensure the permit is
dropped after process_queue completes (do not hold it across any blocking call)
and keep the same returned tuple shape (tree_pubkey, result) so the outer code
still receives the result.
In `@prover/client/src/helpers.rs`:
- Around line 97-99: create_vec_of_u32 currently calls element.to_u32().unwrap()
which can panic for values > u32::MAX; change the function to return a Result
and propagate a conversion error instead of unwrapping: update the signature of
create_vec_of_u32(number_of_utxos: usize, element: &BigInt) to return
Result<Vec<u32>, E> (use the crate's existing error type or a suitable error
like anyhow::Error/String), call element.to_u32().ok_or_else(|| /* construct
descriptive error including element */) to get the u32, then return
Ok(vec![converted_value; number_of_utxos]) so large BigInt values yield a
controlled error rather than a panic.
---
Duplicate comments:
In `@forester/src/epoch_manager.rs`:
- Around line 2542-2548: The call to check_forester_eligibility is hashing the
supplied pubkey but currently passes tree_accounts.merkle_tree for all tree
types; update the argument selection so StateV2 uses the queue pubkey and
AddressV2 uses the merkle_tree pubkey: inspect the tree type (e.g. match on
tree_accounts or the enum that distinguishes StateV2 vs AddressV2) and pass
tree_accounts.queue when the tree is StateV2 and tree_accounts.merkle_tree when
it is AddressV2 to the check_forester_eligibility(...) call that currently
receives epoch_pda, current_light_slot, &tree_accounts.merkle_tree,
epoch_info.epoch, &epoch_info.
- Around line 943-954: The new-tree discovery path currently spawns an OS thread
and builds a dedicated Tokio runtime (see cancel_tx/cancel_rx,
completion_tx/completion_rx, thread_handle, runtime, runtime.block_on) which
bypasses the shared worker pool and semaphore used by perform_active_work();
change this to schedule the new-tree processing onto the existing shared
runtime/worker pool instead of spawning a thread or creating a per-tree runtime,
and ensure the task acquires the same shared semaphore (the one used by
perform_active_work()) before doing CPU/IO work so it is counted against the
global worker limit and cannot oversubscribe the host.
In `@forester/src/processor/v2/proof_worker.rs`:
- Around line 135-155: The client initializations in the ProofWorker::new
constructor propagate raw errors from ProofClient::with_config for
append_client, nullify_client, and address_append_client; update each call to
wrap or map the error with endpoint-specific context (e.g., include
config.append_url / config.update_url / config.address_append_url or descriptive
names) so the resulting error identifies which endpoint failed during
startup—modify the three ProofClient::with_config invocations in new to convert
the ? into a context-adding error wrapper (using map_err, with_context, or
anyhow::Context) that includes the endpoint URL/name and then return that
enriched error.
In `@forester/tests/test_batch_append_spent.rs`:
- Around line 331-356: The test currently uses timeout(...,
service_handle).await which, on timeout, drops the JoinHandle from
tokio::task::spawn_blocking and does not cancel the underlying blocking task
(spawn_blocking tasks keep running), causing potential leaks; change the logic
so that if timeout returns Err(Elapsed) you explicitly cancel and then join the
spawned task: call service_handle.abort() (or otherwise signal cancellation) and
then await the JoinHandle to completion (optionally with another timeout) to
ensure run_pipeline::<LightClient> (the task spawned into service_handle) has
terminated; ensure you handle and unwrap the JoinHandle result (service_result)
or the abort result to fail the test if the task did not exit.
In `@prover/client/src/proof_types/batch_append/proof_inputs.rs`:
- Around line 131-142: The loop validates outer vector lengths but does not
check each inner merkle proof depth, so before the code that does
try_into().unwrap() on merkle_proof entries you must validate each inner proof
length against HEIGHT to avoid panics: iterate over merkle_proofs (and
corresponding entries if needed) and ensure each proof.len() == HEIGHT (or the
expected constant) and return Err(ProverClientError::GenericError(...)) on
mismatch; update the function handling batch append inputs (references:
batch_len, merkle_proofs, HEIGHT, and the code path that calls
try_into().unwrap()) to perform this check prior to conversion.
In `@prover/client/src/proof_types/non_inclusion/v2/json.rs`:
- Around line 27-28: The client now stores path_index as u64 (field path_index
in the non_inclusion v2 JSON struct) but the Go server expects a uint32; before
serializing/send the JSON either enforce the u32 bound or revert the type: add a
validation that path_index <= u32::MAX and return an error if it exceeds that
limit, then cast to u32 for serialization (or alternatively change the field
back to u32 if you want compile-time enforcement); apply this check where the
NonInclusion v2 JSON object is constructed/serialized in
prover/client/src/proof_types/non_inclusion/v2/json.rs (the path_index field
with serde rename "pathIndex").
In `@prover/client/src/proof.rs`:
- Around line 72-94: The function deserialize_hex_string_to_be_bytes currently
parses the entire trimmed_str with BigUint before checking byte length; add an
early validation on trimmed_str (after computing trimmed_str via strip_prefix)
to reject hex strings longer than 64 hex chars (which would exceed 32 bytes) and
return ProverClientError::InvalidHexString before calling
num_bigint::BigUint::from_str_radix; keep the existing error messages/format,
and leave the rest of the logic (big_uint_bytes conversion, padding, and final
try_into) unchanged.
In `@prover/client/tests/batch_address_append.rs`:
- Around line 108-123: Add a brief inline comment above the call to
get_batch_address_append_circuit_inputs::<{ DEFAULT_BATCH_ADDRESS_TREE_HEIGHT as
usize }> explaining that this invocation is a "warmup" phase whose return value
is intentionally ignored, and that the call is performed solely for its side
effects on sparse_merkle_tree, changelog, and indexed_changelog (mutating them
with proofs/state) so readers know the behavior is intentional and not a missing
use of the result.
- Around line 127-134: The variable remaining_values is misleading because it’s
assigned &new_element_values[..] (the full batch) — rename it to batch_values
(or remove the binding and pass new_element_values directly) where it’s used in
collect_non_inclusion_data to make intent clear; update the binding name and all
references (remaining_values → batch_values) or replace the binding with
new_element_values when calling collect_non_inclusion_data to avoid confusion.
In `@prover/client/tests/init_merkle_tree.rs`:
- Around line 222-225: The test currently calls
merkle_inputs.index_hashed_indexed_element_leaf.to_u64().unwrap(), which can
panic with an opaque message; replace the unwrap with explicit error handling
such as using .expect("convert index_hashed_indexed_element_leaf to u64 failed:
{value:?}") or mapping the Option/Result to a test-specific error message so
failures are clearer. Locate the conversion call on path_index in the test and
swap .unwrap() for .expect(...) (or propagate using .ok_or_else/unwrap_or_else
with a descriptive message) referencing merkle_inputs and
index_hashed_indexed_element_leaf.
In `@sdk-libs/program-test/src/indexer/test_indexer.rs`:
- Around line 2594-2608: The response body read and proof decoding currently use
`?` inside the retry loop (in the `response_result` handling and
`build_compressed_proof(&body)?`), which prematurely exits on transient
failures; change these to capture errors into `last_error` (e.g., map the
`response_result.text().await` and `build_compressed_proof(&body)` failures into
`IndexerError::CustomError`), set `last_error = Some(...)`, and continue the
retry loop instead of returning or propagating the error; ensure the successful
path still returns `ValidityProofWithContext { accounts: account_proof_inputs,
addresses: address_proof_inputs, proof: ... }` when both read and decode
succeed.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 9d61c4f9-385a-40d6-89de-d2bfba3c4d74
⛔ Files ignored due to path filters (13)
- program-tests/utils/src/mock_batched_forester.rs is excluded by none and included by none
- sdk-tests/sdk-anchor-test/programs/sdk-anchor-test/tests/read_only.rs is excluded by none and included by none
- sdk-tests/sdk-anchor-test/programs/sdk-anchor-test/tests/test.rs is excluded by none and included by none
- sdk-tests/sdk-native-test/tests/test.rs is excluded by none and included by none
- sdk-tests/sdk-pinocchio-v1-test/tests/test.rs is excluded by none and included by none
- sdk-tests/sdk-token-test/tests/ctoken_pda.rs is excluded by none and included by none
- sdk-tests/sdk-token-test/tests/decompress_full_cpi.rs is excluded by none and included by none
- sdk-tests/sdk-token-test/tests/pda_ctoken.rs is excluded by none and included by none
- sdk-tests/sdk-token-test/tests/test.rs is excluded by none and included by none
- sdk-tests/sdk-token-test/tests/test_4_invocations.rs is excluded by none and included by none
- sdk-tests/sdk-token-test/tests/test_4_transfer2.rs is excluded by none and included by none
- sdk-tests/sdk-token-test/tests/test_deposit.rs is excluded by none and included by none
- sdk-tests/sdk-v1-native-test/tests/test.rs is excluded by none and included by none
📒 Files selected for processing (29)
- forester/src/compressible/ctoken/compressor.rs
- forester/src/compressible/ctoken/state.rs
- forester/src/compressible/ctoken/types.rs
- forester/src/compressible/mint/compressor.rs
- forester/src/compressible/mint/state.rs
- forester/src/compressible/mint/types.rs
- forester/src/compressible/pda/compressor.rs
- forester/src/epoch_manager.rs
- forester/src/forester_status.rs
- forester/src/metrics.rs
- forester/src/priority_fee.rs
- forester/src/processor/v2/proof_cache.rs
- forester/src/processor/v2/proof_worker.rs
- forester/tests/e2e_test.rs
- forester/tests/legacy/batched_state_async_indexer_test.rs
- forester/tests/test_batch_append_spent.rs
- forester/tests/test_compressible_ctoken.rs
- forester/tests/test_compressible_mint.rs
- prover/client/src/helpers.rs
- prover/client/src/proof.rs
- prover/client/src/proof_types/batch_address_append/json.rs
- prover/client/src/proof_types/batch_append/proof_inputs.rs
- prover/client/src/proof_types/combined/v2/json.rs
- prover/client/src/proof_types/non_inclusion/v2/json.rs
- prover/client/tests/batch_address_append.rs
- prover/client/tests/init_merkle_tree.rs
- sdk-libs/client/src/interface/initialize_config.rs
- sdk-libs/client/src/local_test_validator.rs
- sdk-libs/program-test/src/indexer/test_indexer.rs
5689e19 to 8c74aa1 Compare
8c74aa1 to c5e3fb5 Compare
Tests