8 changes: 8 additions & 0 deletions Cargo.lock


2 changes: 2 additions & 0 deletions forester/Cargo.toml
@@ -43,6 +43,7 @@ reqwest = { workspace = true, features = ["json", "rustls-tls", "blocking"] }
 futures = { workspace = true }
 thiserror = { workspace = true }
 borsh = { workspace = true }
+bincode = "1.3"
 bs58 = { workspace = true }
 hex = { workspace = true }
 env_logger = { workspace = true }
@@ -61,6 +62,7 @@ itertools = "0.14"
 async-channel = "2.5"
 solana-pubkey = { workspace = true }
 dotenvy = "0.15"
+mwmatching = "0.1.1"
 
 [dev-dependencies]
 serial_test = { workspace = true }
164 changes: 164 additions & 0 deletions forester/docs/v1_forester_flows.md
@@ -0,0 +1,164 @@
# Forester V1 Flows (PR: v2 Nullify + Blockhash)

## 1. Transaction Send Flow (Blockhash)

⚠️ Potential issue | 🟡 Minor (review comment)

Add language identifiers to fenced code blocks: the bare triple-backtick fences in this file trigger markdownlint MD040 (fenced-code-language). Opening each fence with a language tag such as `text` resolves the warning; this also applies to the fences at lines 59, 87, and 131.

```text
┌─────────────────────────────────────────────────────────────────────────────────┐
│ send_batched_transactions │
└─────────────────────────────────────────────────────────────────────────────────┘
┌──────────────────────────────────┐
│ prepare_batch_prerequisites │
│ - fetch queue items │
│ - single RPC: blockhash + │
│ priority_fee (same connection) │
│ - PreparedBatchData: │
│ recent_blockhash │
│ last_valid_block_height │
└──────────────┬───────────────────┘
┌──────────────────────────────────┐
│ for each work_chunk (100 items) │
└──────────────┬───────────────────┘
┌────────────┴────────────┐
│ elapsed > 30s? │
│ YES → refresh blockhash│
│ (pool.get_connection │
│ → rpc.get_latest_ │
│ blockhash) │
│ NO → keep current │
└────────────┬────────────┘
┌──────────────────────────────────┐
│ build_signed_transaction_batch │
│ (recent_blockhash, │
│ last_valid_block_height) │
│ → (txs, chunk_last_valid_ │
│ block_height) │
└──────────────┬───────────────────┘
┌──────────────────────────────────┐
│ execute_transaction_chunk_sending │
│ PreparedTransaction::legacy( │
│ tx, chunk_last_valid_block_ │
│ height) │
│ - send + confirm │
│ - blockhash expiry check via │
│ last_valid_block_height │
└──────────────────────────────────┘
No refetch-before-send. No re-sign.
```
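
The per-chunk staleness check above can be sketched as follows. This is a minimal illustration: `BlockhashCache` and its fields are hypothetical names standing in for the forester's `PreparedBatchData`, and the real code refreshes via `pool.get_connection()` → `rpc.get_latest_blockhash()`.

```rust
use std::time::{Duration, Instant};

/// Hypothetical stand-in for PreparedBatchData: the blockhash fetched in
/// prepare_batch_prerequisites plus the moment it was fetched.
struct BlockhashCache {
    recent_blockhash: [u8; 32],
    last_valid_block_height: u64,
    fetched_at: Instant,
}

/// Refresh threshold applied per work chunk in the diagram above.
const REFRESH_AFTER: Duration = Duration::from_secs(30);

impl BlockhashCache {
    /// YES branch: elapsed > 30s means fetch a fresh blockhash before
    /// signing this chunk; NO branch: keep the current one.
    fn needs_refresh(&self) -> bool {
        self.fetched_at.elapsed() > REFRESH_AFTER
    }
}
```

Because the same `last_valid_block_height` travels with each signed chunk, expiry can later be checked during confirmation without another RPC call.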

## 2. State Nullify Instruction Flow (Legacy vs v2)

```text
┌─────────────────────────────────────────────────────────────────────────────────┐
│ Registry: nullify instruction paths │
└─────────────────────────────────────────────────────────────────────────────────┘
LEGACY (proof in ix data) v2 (proof in remaining_accounts)
─────────────────────── ────────────────────────────────────
create_nullify_instruction()                    create_nullify_2_instruction()
⚠️ Potential issue | 🟠 Major (review comment)

Use the actual v2 SDK function name in this flow: the implementation path uses create_nullify_2_instruction (see forester/src/processor/v1/helpers.rs, line 391), not create_nullify_with_proof_accounts_instruction(). As per coding guidelines, cross-reference mentioned function names, parameters, and return types with the source code.

│ │
│ ix data: [change_log, queue_idx, │ ix data: [change_log, queue_idx,
│ leaf_idx, proofs[16][32]] │ leaf_idx] (no proofs)
│ │
│ remaining_accounts: standard │ remaining_accounts: 16 proof
│ (authority, merkle_tree, queue...) │ account pubkeys (key = node bytes)
│ │
▼ ▼
process_nullify() nullify_2 instruction
(proofs from ix data) - validate: 1 change, 1 queue, 1 index
- validate: exactly 16 proof accounts
- extract_proof_nodes_from_remaining_accounts
- process_nullify(..., vec![proof_nodes])
Forester V1 uses nullify_2 only (create_nullify_2_instruction).
```
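
The v2 encoding above can be sketched as below, where each 32-byte proof node rides along as an account key instead of instruction data. Pubkeys are modeled as raw `[u8; 32]` here, and `extract_proof_nodes` is a hypothetical mirror of `extract_proof_nodes_from_remaining_accounts`, not its real signature.

```rust
/// Pack 16 Merkle proof nodes as "account keys" (key = node bytes), the v2
/// trick that moves proofs out of instruction data and shrinks the payload.
fn proof_nodes_to_account_keys(proof: &[[u8; 32]; 16]) -> Vec<[u8; 32]> {
    proof.to_vec()
}

/// Program side: require exactly 16 proof accounts and read the nodes back
/// out of their keys; any other count is rejected up front.
fn extract_proof_nodes(remaining_accounts: &[[u8; 32]]) -> Result<[[u8; 32]; 16], String> {
    remaining_accounts
        .try_into()
        .map_err(|_| format!("expected 16 proof accounts, got {}", remaining_accounts.len()))
}
```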

## 3. Forester V1 State Nullify Pairing Flow

```text
┌─────────────────────────────────────────────────────────────────────────────────┐
│ build_instruction_batches (state nullify path) │
└─────────────────────────────────────────────────────────────────────────────────┘
fetch_proofs_and_create_instructions
│ For each state item:
│  create_nullify_2_instruction (v2)
│ → StateNullifyInstruction { instruction, proof_nodes, leaf_index }
┌─────────────────────────────────────────────────────────────────────────────┐
│ allow_pairing? │
│ batch_size >= 2 AND should_attempt_pairing() │
└─────────────────────────────────────────────────────────────────────────────┘
│ should_attempt_pairing checks:
│  - pair_candidates = n*(n-1)/2 <= 4950 (MAX_PAIR_CANDIDATES)
│  - state_nullify_count <= 100 (MAX_PAIRING_INSTRUCTIONS)
Comment on lines +105 to +106

⚠️ Potential issue | 🟠 Major (review comment)

Pairing limits in docs are out of sync with code: the docs state <= 2000 candidates and <= 96 instructions, while forester/src/processor/v1/tx_builder.rs enforces MAX_PAIR_CANDIDATES = 4_950 and MAX_PAIRING_INSTRUCTIONS = 100. Documentation should describe the actual behavior, not outdated values.

│ - remaining_blocks = last_valid - current > 25 (MIN_REMAINING_BLOCKS_FOR_PAIRING)
├── NO → each nullify → 1 tx (no pairing)
└── YES → pair_state_nullify_batches
│ For each pair (i,j):
│ - pair_fits_transaction_size(ix_i, ix_j)? (serialized <= 1232)
│ - weight = 10000 + proof_overlap_count
│ Max-cardinality matching (mwmatching)
│ - prioritize number of pairs
│ - then maximize proof overlap (fewer unique accounts)
Output: Vec<Vec<Instruction>>
- paired: [ix_a, ix_b] in one tx
- unpaired: [ix] in one tx
Address updates: no pairing, chunked by batch_size only.
```
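
The gating arithmetic and edge weighting above can be sketched as follows. The constants use the review's corrected values (MAX_PAIR_CANDIDATES = 4_950, MAX_PAIRING_INSTRUCTIONS = 100), and the function names are illustrative rather than the exact tx_builder.rs signatures.

```rust
const MAX_PAIR_CANDIDATES: usize = 4_950;
const MAX_PAIRING_INSTRUCTIONS: usize = 100;
const MIN_REMAINING_BLOCKS_FOR_PAIRING: u64 = 25;

/// Candidate pairs grow quadratically: n items yield n*(n-1)/2 pairs, so
/// 100 instructions lands exactly on the 4_950-candidate ceiling.
fn pair_candidates(n: usize) -> usize {
    n * n.saturating_sub(1) / 2
}

fn should_attempt_pairing(state_nullify_count: usize, remaining_blocks: u64) -> bool {
    state_nullify_count <= MAX_PAIRING_INSTRUCTIONS
        && pair_candidates(state_nullify_count) <= MAX_PAIR_CANDIDATES
        && remaining_blocks > MIN_REMAINING_BLOCKS_FOR_PAIRING
}

/// Edge weight for the max-cardinality matching: the 10_000 base dwarfs any
/// overlap count, so the matcher maximizes the number of pairs first and only
/// then prefers pairs that share more proof accounts.
fn pair_weight(proof_overlap_count: usize) -> usize {
    10_000 + proof_overlap_count
}
```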

## 4. End-to-End Forester V1 State Tree Flow

```text
Queue (state nullifier) Indexer (proofs)
│ │
└──────────┬─────────────────┘
prepare_batch_prerequisites
- queue items
- blockhash + last_valid_block_height
- priority_fee
for chunk in work_items.chunks(100):
refresh blockhash if 30s elapsed
build_signed_transaction_batch
├─ fetch_proofs_and_create_instructions
│ - state: v2 nullify ix (proof in remaining_accounts)
│ - address: update ix
├─ build_instruction_batches
│ - address: chunk by batch_size
│ - state nullify: pair if allow_pairing else 1-per-tx
└─ create_smart_transaction per batch
execute_transaction_chunk_sending
- PreparedTransaction::legacy(tx, chunk_last_valid_block_height)
- send + confirm with blockhash expiry check
```
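
The expiry check in the last step reduces to a single height comparison. This sketch assumes `last_valid_block_height` is the value captured when the chunk's blockhash was fetched; the helper name is illustrative.

```rust
/// A blockhash is usable until the chain reaches its last valid block height;
/// past that, the transaction can never land and must be rebuilt, not resent.
fn blockhash_expired(current_block_height: u64, last_valid_block_height: u64) -> bool {
    current_block_height > last_valid_block_height
}
```

This is why `PreparedTransaction::legacy` carries `chunk_last_valid_block_height`: confirmation can stop retrying as soon as the height passes, without an extra blockhash fetch.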

47 changes: 36 additions & 11 deletions forester/src/processor/v1/helpers.rs
@@ -11,8 +11,8 @@ use forester_utils::{rpc_pool::SolanaRpcPool, utils::wait_for_indexer};
 use light_client::{indexer::Indexer, rpc::Rpc};
 use light_compressed_account::TreeType;
 use light_registry::account_compression_cpi::sdk::{
-    create_nullify_instruction, create_update_address_merkle_tree_instruction,
-    CreateNullifyInstructionInputs, UpdateAddressMerkleTreeInstructionInputs,
+    create_nullify_2_instruction, create_update_address_merkle_tree_instruction,
+    CreateNullify2InstructionInputs, UpdateAddressMerkleTreeInstructionInputs,
 };
 use solana_program::instruction::Instruction;
 use tokio::time::Instant;
@@ -32,14 +32,27 @@ use crate::{
     errors::ForesterError,
 };
 
+#[derive(Clone, Debug)]
+pub enum PreparedV1Instruction {
+    AddressUpdate(Instruction),
+    StateNullify(StateNullifyInstruction),
+}
+
+#[derive(Clone, Debug)]
+pub struct StateNullifyInstruction {
+    pub instruction: Instruction,
+    pub proof_nodes: Vec<[u8; 32]>,
+    pub leaf_index: u64,
+}
+
 /// Work items should be of only one type and tree
 pub async fn fetch_proofs_and_create_instructions<R: Rpc>(
     authority: Pubkey,
     derivation: Pubkey,
     pool: Arc<SolanaRpcPool<R>>,
     epoch: u64,
     work_items: &[WorkItem],
-) -> crate::Result<(Vec<MerkleProofType>, Vec<Instruction>)> {
+) -> crate::Result<(Vec<MerkleProofType>, Vec<PreparedV1Instruction>)> {
     let mut proofs = Vec::new();
     let mut instructions = vec![];
 
@@ -360,7 +373,7 @@
             },
             epoch,
         );
-        instructions.push(instruction);
+        instructions.push(PreparedV1Instruction::AddressUpdate(instruction));
     }
 
     // Process state proofs and create instructions
@@ -375,21 +388,33 @@
     for (item, proof) in state_items.iter().zip(state_proofs.into_iter()) {
         proofs.push(MerkleProofType::StateProof(proof.clone()));
 
-        let instruction = create_nullify_instruction(
-            CreateNullifyInstructionInputs {
+        let instruction = create_nullify_2_instruction(
+            CreateNullify2InstructionInputs {
                 nullifier_queue: item.tree_account.queue,
                 merkle_tree: item.tree_account.merkle_tree,
-                change_log_indices: vec![proof.root_seq % STATE_MERKLE_TREE_CHANGELOG],
-                leaves_queue_indices: vec![item.queue_item_data.index as u16],
-                indices: vec![proof.leaf_index],
-                proofs: vec![proof.proof.clone()],
+                change_log_index: proof.root_seq % STATE_MERKLE_TREE_CHANGELOG,
+                leaves_queue_index: item.queue_item_data.index as u16,
+                index: proof.leaf_index,
+                proof: proof
+                    .proof
+                    .clone()
+                    .try_into()
+                    .map_err(|_| ForesterError::General {
+                        error: "Failed to convert state proof to fixed array".to_string(),
+                    })?,
                 authority,
                 derivation,
                 is_metadata_forester: false,
             },
             epoch,
         );
-        instructions.push(instruction);
+        instructions.push(PreparedV1Instruction::StateNullify(
+            StateNullifyInstruction {
+                instruction,
+                proof_nodes: proof.proof,
+                leaf_index: proof.leaf_index,
+            },
+        ));
     }
 
     Ok((proofs, instructions))