
Implement BDN signature aggregation scheme#34

Open
moshababo wants to merge 6 commits into ChainSafe:main from moshababo:bdn_agg

Conversation

@moshababo
Contributor

@moshababo moshababo commented Oct 12, 2025

Implement BDN signature aggregation scheme

Closes #29

This PR implements the BDN aggregation scheme for BLS signatures, fixing certificate verification failures in the F3 lightclient.

Due to the lack of a Rust implementation of the Blake2X XOF (specifically the 32-bit Blake2Xs variant), this PR uses the pending PR RustCrypto/hashes#704.

Changes

  • Implemented BDN coefficient computation using Blake2Xs XOF
  • Added coefficient weighting for both signature and public key aggregation (a short sketch of the weighting follows this list)
  • Updated Verifier::verify_aggregate() API to accept full power table and signer indices (BDN
    requires all keys for correct coefficient computation)
  • Removed temporary workaround that was silencing verification errors
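
The weighting itself is small. Below is a minimal, illustrative sketch using bare blstrs points; the actual code in blssig/src/bdn/mod.rs works with its own PublicKey/Signature wrappers, and the function names here are illustrative only:

use blstrs::{G1Projective, Scalar};
use group::Group;

/// Per-key BDN terms: term_i = (coef_i + 1) * pk_i, computed as pk_i * coef_i + pk_i
/// so no (coef_i + 1) scalar has to be constructed.
fn calc_terms(pub_keys: &[G1Projective], coefficients: &[Scalar]) -> Vec<G1Projective> {
    pub_keys
        .iter()
        .zip(coefficients)
        .map(|(pk, c)| pk * c + pk)
        .collect()
}

/// Sum only the terms of the actual signers. The coefficients are derived from the
/// *full* power table, which is why verify_aggregate() now needs both the table and
/// the signer indices.
fn aggregate_selected(terms: &[G1Projective], signer_indices: &[usize]) -> G1Projective {
    signer_indices
        .iter()
        .fold(G1Projective::identity(), |acc, &i| acc + terms[i])
}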

Testing

Verified correctness using the certificates_validation example:

  cargo run --example certificates_validation -- calibrationnet
  cargo run --example certificates_validation -- filecoin

Summary by CodeRabbit

  • New Features

    • Signature verification now uses the full power table and signer indices, as required for BDN coefficient computation.
    • Added caching mechanism for aggregate verification operations.
  • Bug Fixes

    • Enhanced error handling with new validation variants for invalid signatures and out-of-range signer indices.
  • Tests

    • Added comprehensive test for aggregate signature verification workflow.


@moshababo moshababo requested a review from a team as a code owner October 12, 2025 20:14
@moshababo moshababo requested review from akaladarshi and elmattic and removed request for a team October 12, 2025 20:14
@coderabbitai

coderabbitai bot commented Oct 12, 2025

📝 Walkthrough

This PR implements the BDN (Boneh-Drijvers-Neven) signature aggregation scheme to prevent rogue public-key attacks. It adds Blake2xs-based coefficient computation, precomputes per-key terms, updates the verification API to use power tables and signer indices, and introduces a caching mechanism for BDNAggregation instances.

Changes

  • Dependency Updates (Cargo.toml, blssig/Cargo.toml): Added blake2 (from git with blake2x feature) and rayon 1.10 to the workspace; blssig now depends on blstrs 0.7 and group 0.13 instead of bls12_381
  • Dictionary Entries (.config/rust-f3.dic): Added three new public data entries: coef_i, sig_i, pub_key_i
  • BDN Aggregation Core (blssig/src/bdn/mod.rs): Implemented per-key coefficient and term precomputation; refactored BDNAggregation to compute and cache coefficients/terms; added calc_coefficients (Blake2xs-based) and calc_terms helper methods; updated aggregate_sigs and aggregate_pub_keys to use index-based aggregation
  • Verifier Implementation (blssig/src/verifier/mod.rs): Updated verify_aggregate to accept power_table and signer_indices; added bdn_cache with RwLock for per-power-table caching; introduced get_or_cache_bdn method; extended BLSError with EmptySigners, InvalidScalar, SignerIndexOutOfRange variants
  • Integration Updates (certs/src/lib.rs): Updated verify_signature to extract the power table and signer indices; modified the verify_aggregate call to pass the new parameters
  • API Contract (gpbft/src/api.rs): Updated the Verifier trait's verify_aggregate signature to accept power_table and signer_indices; extended documentation
  • Test Coverage (blssig/src/verifier/tests.rs): Added test_aggregate_signature_verification to verify BDN aggregation with a subset of signers against the full power table

Sequence Diagram

sequenceDiagram
    participant Client
    participant Verifier as BLSVerifier
    participant Cache as bdn_cache
    participant BDN as BDNAggregation
    participant Blake2 as Blake2xs

    Client->>Verifier: verify_aggregate(payload, agg_sig, power_table, signer_indices)
    Verifier->>Cache: Check cached BDNAggregation for power_table
    alt Cache miss
        Verifier->>BDN: new(power_table)
        BDN->>Blake2: calc_coefficients(power_table)
        Blake2-->>BDN: coefficients: Vec<Scalar>
        BDN->>BDN: calc_terms(power_table, coefficients)
        BDN-->>Verifier: BDNAggregation initialized
        Verifier->>Cache: Store BDNAggregation
    else Cache hit
        Cache-->>Verifier: Cached BDNAggregation
    end
    Verifier->>BDN: aggregate_pub_keys(signer_indices)
    BDN-->>Verifier: aggregated_pub_key
    Verifier->>Verifier: Verify signature against aggregated_pub_key
    Verifier-->>Client: Result

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs

  • Add F3 light client functionality #27: Directly modifies the same blssig BDN and verifier code paths, with coordinated changes to BDNAggregation, aggregation methods, and dependency updates.

Suggested reviewers

  • elmattic

Poem

🐰 Coefficients dance with Blake2's hash,
Terms spring forth in precomputed splash,
BDN guards against rogue keys' attack,
Cached and indexed, there's no looking back! 🔐✨

🚥 Pre-merge checks | ✅ 5 passed

  • Description Check (✅ Passed): Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title Check (✅ Passed): The title accurately describes the primary change: implementing the BDN signature aggregation scheme across the codebase for BLS signatures.
  • Linked Issues Check (✅ Passed): The PR implements the BDN aggregation scheme using Blake2Xs XOF for coefficient computation and updates the verifier API to accept power_table and signer_indices, directly addressing issue #29 requirements.
  • Out of Scope Changes Check (✅ Passed): All changes are within scope: they implement BDN aggregation (coefficient/term computation), update verification APIs, add dependencies (blake2, rayon, blstrs), and include related tests and documentation.
  • Docstring Coverage (✅ Passed): No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.



📜 Recent review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 70a8eb4 and 2f41ed4.

📒 Files selected for processing (1)
  • Cargo.toml
🚧 Files skipped from review as they are similar to previous changes (1)
  • Cargo.toml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: All lint checks



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (6)
blssig/src/bdn/mod.rs (2)

31-39: Precompute terms once; OK. Consider storing projective terms for perf.

Logic is correct. For speed (1k+ keys), store G1Projective terms to avoid repeated decode per aggregation.


120-129: Minor: preallocate and store projective for speed.

Allocate with capacity and consider storing G1Projective to avoid re-decode in aggregate.

-    pub fn calc_terms(pub_keys: &[PublicKey], coefficients: &[Scalar]) -> Vec<PublicKey> {
-        let mut terms = vec![];
+    pub fn calc_terms(pub_keys: &[PublicKey], coefficients: &[Scalar]) -> Vec<PublicKey> {
+        let mut terms = Vec::with_capacity(pub_keys.len());
         for (i, pub_key) in pub_keys.iter().enumerate() {
             let pub_key_point: G1Projective = (*pub_key).into();
             let pub_c = pub_key_point * coefficients[i];
             let term = pub_c + pub_key_point;
             terms.push(term.into());
         }
         terms
     }
certs/src/lib.rs (1)

296-305: Correct: pass full power table and signer indices.

This matches the updated Verifier API and BDN requirements. Minor: this clones all keys; unavoidable given &[PubKey] API.

blssig/src/verifier/mod.rs (3)

36-39: Pre-validate signer_indices and handle u64→usize conversion to avoid panics and wasted work

signer_indices is &[u64] but indices for slices/Vecs are usize; very large u64 values can be truncated when cast to usize on 32-bit targets. Also, early range checks avoid building BDNAggregation when indices are out of range and return precise errors.

Apply near Line 180:

       if signer_indices.is_empty() {
           return Err(BLSError::EmptySigners);
       }
+
+      // Early validation of indices to avoid overflow/panic and wasted work
+      let n = power_table.len();
+      for &idx in signer_indices {
+          // Guard u64 -> usize conversion on 32-bit targets and ensure in-bounds
+          let idx_usize: usize = idx
+              .try_into()
+              .map_err(|_| BLSError::SignerIndexOutOfRange(usize::MAX))?;
+          if idx_usize >= n {
+              return Err(BLSError::SignerIndexOutOfRange(idx_usize));
+          }
+      }

Optional: if BDNAggregation::aggregate_pub_keys already performs these checks safely, you can keep one source of truth but still consider the early check to short-circuit and return the intended BLSError variant faster.

Also applies to: 171-179


182-188: Minor: pre-allocate capacity for typed_pub_keys

Small perf win: avoid reallocations when mirroring power_table.

-        let mut typed_pub_keys = vec![];
+        let mut typed_pub_keys = Vec::with_capacity(power_table.len());

191-193: Performance: consider caching BDNAggregation per power table

BDN setup over the full power table is likely the hot path. Cache precomputed state keyed by a stable hash of power_table (e.g., concatenated bytes or a Merkle root) to amortize cost across certificates within the same epoch/validator set. Keep using the point cache for deserialization; this would add another layer for BDN precomputation.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4520e4c and f2f103f.

📒 Files selected for processing (7)
  • .config/rust-f3.dic (1 hunks)
  • Cargo.toml (1 hunks)
  • blssig/Cargo.toml (1 hunks)
  • blssig/src/bdn/mod.rs (3 hunks)
  • blssig/src/verifier/mod.rs (4 hunks)
  • certs/src/lib.rs (2 hunks)
  • gpbft/src/api.rs (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
gpbft/src/api.rs (3)
blssig/src/verifier/mod.rs (1)
  • verify_aggregate (167-194)
certs/src/lib.rs (1)
  • verify_aggregate (712-720)
lightclient/src/lib.rs (1)
  • power_table (39-42)
blssig/src/verifier/mod.rs (1)
lightclient/src/lib.rs (1)
  • power_table (39-42)
certs/src/lib.rs (1)
lightclient/src/lib.rs (1)
  • power_table (39-42)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: All lint checks
🔇 Additional comments (6)
.config/rust-f3.dic (1)

45-47: LGTM!

The addition of coef_i, sig_i, and pub_key_i to the dictionary is appropriate for the BDN aggregation implementation, which introduces per-key coefficients and indexed signatures/public keys.

blssig/Cargo.toml (1)

13-13: Workspace dep OK; ensure features come through and pin upstream rev at root.

Enabling blake2 via workspace is fine. Since the root sets features, nothing else needed here. No action in this file.

blssig/src/bdn/mod.rs (1)

41-59: Weighted signature aggregation may not interop; verify expected input.

If upstream producers aggregate unweighted signatures (common BDN flow), scaling sigs by (c_i+1) will produce a different aggregate. Keep this method if you also control aggregation, but ensure verify paths assume unweighted sigs (as implemented elsewhere).
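
A minimal sketch of the two conventions, using bare blstrs G2 points as a stand-in for the crate's Signature type (illustrative only, not the crate's API):

use blstrs::{G2Projective, Scalar};
use group::Group;

/// Plain BLS aggregation, the form upstream producers typically emit: sum(sig_i).
fn aggregate_plain(sigs: &[G2Projective]) -> G2Projective {
    sigs.iter().fold(G2Projective::identity(), |acc, s| acc + s)
}

/// BDN-weighted aggregation: sum((coef_i + 1) * sig_i), written as sig * coef + sig.
/// Verifying a plain aggregate against BDN-weighted keys (or vice versa) fails, so the
/// verify path must assume one convention consistently.
fn aggregate_weighted(sigs: &[G2Projective], coefficients: &[Scalar]) -> G2Projective {
    sigs.iter()
        .zip(coefficients)
        .fold(G2Projective::identity(), |acc, (s, c)| acc + (s * c + s))
}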

gpbft/src/api.rs (1)

36-46: Docs and signature update LGTM.

Clearer API: requires power_table plus signer_indices for BDN. Good.

certs/src/lib.rs (1)

712-718: MockVerifier updated correctly.

Signature matches trait; tests should compile.

blssig/src/verifier/mod.rs (1)

20-22: Good addition: explicit EmptySigners error

Clearer failure mode for empty signer sets. LGTM.

coef_i
sig_i
pub_key_i
+ No newline at end of file

⚠️ Potential issue | 🟡 Minor

Remove the stray + character.

The + character does not belong in a spell-check dictionary file. Dictionary files typically contain one term per line without special characters.

Apply this diff to remove it:

-+
📝 Committable suggestion


Suggested change
+
# (line 48 removed; no content remains here)
🤖 Prompt for AI Agents
In .config/rust-f3.dic around line 48, remove the stray "+" character that was
accidentally added; open the file, delete the lone "+" on line 48 so the
dictionary contains only valid entries (one term per line), save the file, and
ensure no other stray punctuation remains.

Comment on lines +85 to +118
    pub fn calc_coefficients(pub_keys: &[PublicKey]) -> Result<Vec<Scalar>, BLSError> {
        let mut hasher = Blake2xs::new(0xFFFF);

        // Hash all public keys
        for pub_key in pub_keys {
            let bytes = pub_key.as_bytes();
            hasher.update(&bytes);
        }

        // Read 16 bytes per public key
        let mut reader = hasher.finalize_xof();
        let mut output = vec![0u8; pub_keys.len() * 16];
        reader.read(&mut output);

        // Convert every consecutive 16 bytes chunk to a scalar
        let mut coefficients = Vec::with_capacity(pub_keys.len());
        for i in 0..pub_keys.len() {
            let chunk = &output[i * 16..(i + 1) * 16];

            // Convert 16 bytes to 32 bytes, for scalar (pad with zeros)
            let mut bytes_32 = [0u8; 32];
            bytes_32[..16].copy_from_slice(chunk);

            // BLS12-381 scalars expects little-endian byte representation
            let scalar = Scalar::from_bytes(&bytes_32);
            if scalar.is_some().into() {
                coefficients.push(scalar.unwrap());
            } else {
                return Err(BLSError::InvalidScalar);
            }
        }

        Ok(coefficients)
    }

🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

Coefficient derivation: tighten XOF usage, domain separation, and scalar mapping.

  • Use exact out_len, not 0xFFFF.
  • Add a domain separation tag.
  • Consider including the message per BDN variant (compatibility dependent).
  • Derive scalars uniformly via from_bytes_wide with 64 bytes per coeff (128‑bit is OK but sub‑optimal).

Apply:

-    pub fn calc_coefficients(pub_keys: &[PublicKey]) -> Result<Vec<Scalar>, BLSError> {
-        let mut hasher = Blake2xs::new(0xFFFF);
+    pub fn calc_coefficients(pub_keys: &[PublicKey]) -> Result<Vec<Scalar>, BLSError> {
+        // 64 bytes per coefficient; uniform reduction via from_bytes_wide
+        let out_len = pub_keys.len() * 64;
+        let mut hasher = Blake2xs::new(out_len);
 
-        // Hash all public keys
+        // Domain separation + all public keys (compressed)
+        hasher.update(b"F3-BDN-COEFF-V1");
         for pub_key in pub_keys {
             let bytes = pub_key.as_bytes();
             hasher.update(&bytes);
         }
 
-        // Read 16 bytes per public key
         let mut reader = hasher.finalize_xof();
-        let mut output = vec![0u8; pub_keys.len() * 16];
+        let mut output = vec![0u8; out_len];
         reader.read(&mut output);
 
-        // Convert every consecutive 16 bytes chunk to a scalar
         let mut coefficients = Vec::with_capacity(pub_keys.len());
-        for i in 0..pub_keys.len() {
-            let chunk = &output[i * 16..(i + 1) * 16];
-
-            // Convert 16 bytes to 32 bytes, for scalar (pad with zeros)
-            let mut bytes_32 = [0u8; 32];
-            bytes_32[..16].copy_from_slice(chunk);
-
-            // BLS12-381 scalars expects little-endian byte representation
-            let scalar = Scalar::from_bytes(&bytes_32);
-            if scalar.is_some().into() {
-                coefficients.push(scalar.unwrap());
-            } else {
-                return Err(BLSError::InvalidScalar);
-            }
+        for i in 0..pub_keys.len() {
+            let chunk = &output[i * 64..(i + 1) * 64];
+            let mut wide = [0u8; 64];
+            wide.copy_from_slice(chunk);
+            let scalar = Scalar::from_bytes_wide(&wide); // uniform mod r
+            coefficients.push(scalar);
         }
 
         Ok(coefficients)
     }

Verification needed: does go-f3 include message in the coefficient hash (L vs L||msg)? Aligning is critical for cross‑impl compatibility.


🌐 Web query:

In go-f3's BDN multi-signature implementation, are coefficients derived from H(L) only, or H(L || msg)? Provide file/line refs if possible.

💡 Result:

Short answer: coefficients are derived from H(L) only (i.e., from the public-key list), not H(L || msg).

Evidence:

  • kyber's BDN docs/state that AggregateSignatures uses c = H(pk) (coefficient = hash of the public key / key list). [1]
  • go-f3 uses a BDN-style BLS implementation in its blssig module (BDN aggregation is used for multisig). [2][3]

References:

  1. kyber bdn package docs — AggregateSignatures: "aggregates the signatures using a coefficient for each one of them where c = H(pk)". (pkg: github.com/.../kyber/sign/bdn). [1]
  2. go-f3 repository — blssig module (uses BDN/BDN-style aggregation). (go-f3/blssig). [2]
  3. Filecoin / bls/BDN references in Filecoin docs (BLS aggregation / BDN multisig). [3]

If you want I can open the exact kyber bdn.go source and paste the exact file+line reference.


Tighten XOF usage and scalar mapping; maintain BDN compatibility

  • Use exact out_len = pub_keys.len() * 64 and initialize Blake2xs with that.
  • Prepend a clear domain-separation tag (e.g. b"F3-BDN-COEFF-V1") before hashing keys.
  • Read 64 bytes per key and derive scalars via Scalar::from_bytes_wide for uniform reduction.
  • Derive coefficients from H(L) only (no message) to match go-f3’s BDN (c = H(pk list) only).
🤖 Prompt for AI Agents
In blssig/src/bdn/mod.rs around lines 85-118, tighten XOF usage and scalar
mapping: initialize the Blake2xs XOF to produce out_len = pub_keys.len() * 64
bytes (not 0xFFFF), prepend a domain-separation tag (e.g. b"F3-BDN-COEFF-V1") to
the hasher before updating with each pubkey, finalize the XOF and read 64 bytes
per public key into the output buffer, and for each 64-byte chunk derive the
coefficient using Scalar::from_bytes_wide (instead of padding 16→32 and
from_bytes) so reduction is uniform; return InvalidScalar on any mapping failure
and ensure only H(L) (the hashed list with the domain tag) is used to derive
coefficients (no extra message).

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
blssig/src/bdn/mod.rs (2)

67-84: Avoid lossy u64→usize cast; check bounds in u64 first.

This is the same issue previously identified: on 32-bit targets, casting a large u64 to usize before the bounds check could result in truncation, allowing an out-of-bounds access.


86-119: Tighten XOF usage, domain separation, and scalar mapping.

This is the same issue previously identified: the coefficient derivation should use an exact output length, include a domain separation tag, read 64 bytes per coefficient, and use Scalar::from_bytes_wide for uniform reduction to align with cryptographic best practices and ensure compatibility with go-f3.

🧹 Nitpick comments (2)
blssig/src/bdn/mod.rs (1)

121-133: Add bounds checking or restrict visibility.

The function is public but performs unchecked indexing of coefficients[i], which will panic if the slices have mismatched lengths. Consider either:

  1. Adding a length check and returning Result<Vec<PublicKey>, BLSError>
  2. Marking the function as pub(crate) if it's only intended for internal use

Apply this diff to add bounds checking:

 pub fn calc_terms(pub_keys: &[PublicKey], coefficients: &[Scalar]) -> Vec<PublicKey> {
+    assert_eq!(
+        pub_keys.len(),
+        coefficients.len(),
+        "pub_keys and coefficients must have the same length"
+    );
     pub_keys
         .par_iter()

Or restrict visibility:

-    pub fn calc_terms(pub_keys: &[PublicKey], coefficients: &[Scalar]) -> Vec<PublicKey> {
+    pub(crate) fn calc_terms(pub_keys: &[PublicKey], coefficients: &[Scalar]) -> Vec<PublicKey> {
blssig/src/verifier/mod.rs (1)

129-155: Cache logic is correct; consider optimization for large power tables.

The caching implementation correctly handles concurrent access and leverages the existing point cache for individual keys.

For large power tables (~1560 entries, ~75KB), the Vec equality comparison on line 133 could be expensive. Consider hashing the power table as the cache key:

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// In struct:
bdn_cache: RwLock<Option<(u64, Arc<BDNAggregation>)>>,

// In get_or_cache_bdn:
let mut hasher = DefaultHasher::new();
power_table.hash(&mut hasher);
let key = hasher.finish();

if let Some((cached_key, cached_bdn)) = self.bdn_cache.read().as_ref() {
    if *cached_key == key {
        return Ok(cached_bdn.clone());
    }
}
// ... cache with (key, bdn)

However, given the comment that power tables rarely repeat, the current approach is acceptable.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5df78a7 and aa0d946.

📒 Files selected for processing (4)
  • Cargo.toml (2 hunks)
  • blssig/Cargo.toml (1 hunks)
  • blssig/src/bdn/mod.rs (3 hunks)
  • blssig/src/verifier/mod.rs (6 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • Cargo.toml
  • blssig/Cargo.toml
🧰 Additional context used
🧬 Code graph analysis (1)
blssig/src/verifier/mod.rs (1)
blssig/src/bdn/mod.rs (1)
  • new (27-40)
🔇 Additional comments (5)
blssig/src/bdn/mod.rs (3)

1-24: LGTM: Imports and struct definition are appropriate.

The Blake2Xs import for XOF-based coefficient derivation, bls12_381 types for elliptic curve operations, and rayon for parallelism are all suitable for the BDN implementation. The struct fields are appropriately scoped as pub(crate).


26-40: LGTM: Construction logic is sound.

The constructor correctly validates inputs, precomputes coefficients and terms for efficient aggregation, and handles errors appropriately.


42-65: LGTM: Signature aggregation correctly implements BDN.

The implementation correctly applies coefficients in G2 projective space, computing sum((coef_i + 1) * sig_i) as required by the BDN scheme.

blssig/src/verifier/mod.rs (2)

1-79: LGTM: Cache structure and error types are appropriate.

The Arc-wrapped BDN cache with single-entry semantics is reasonable given the comment that power tables rarely repeat. The new error variants properly support the BDN operations.


200-218: LGTM: Verification logic correctly implements BDN requirements.

The updated API properly uses the full power table for BDN coefficient computation (as required by the scheme) while aggregating only the specified signer indices. The caching integration is efficient.
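
For orientation, a hedged call-site sketch of the new shape: the argument order follows the sequence diagram in the walkthrough above, while the byte-slice parameter types and the return type are assumptions rather than the crate's exact signature.

// Hypothetical wrapper around the updated API; parameter types are assumed.
fn verify_certificate_signature(
    verifier: &BLSVerifier,
    payload: &[u8],
    agg_sig: &[u8],
    power_table: &[PubKey],
    signer_indices: &[u64],
) -> Result<(), BLSError> {
    verifier.verify_aggregate(payload, agg_sig, power_table, signer_indices)
}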

@karlem karlem force-pushed the bdn_agg branch 2 times, most recently from 70a8eb4 to 0abe7e4 Compare November 3, 2025 19:40
Comment on lines +45 to +47
coef_i
sig_i
pub_key_i

Comment on lines +9 to +10
//! `https://eprint.iacr.org/2018/483.pdf`
//! `https://crypto.stanford.edu/~dabo/pubs/papers/BLSmultisig.html`

It'd be nicer to make those clickable links.

pub fn aggregate_sigs(&self, sigs: Vec<Signature>) -> Result<Signature, BLSError> {
if sigs.len() != self.pub_keys.len() {
/// Aggregates signatures using BDN aggregation with coefficients.
/// Computes: sum((coef_i + 1) * sig_i) for signatures at the given indices

Suggested change
/// Computes: sum((coef_i + 1) * sig_i) for signatures at the given indices
/// Computes: `sum((coef_i + 1) * sig_i)` for signatures at the given indices

this should get rid of spellcheck errors.

for sig in sigs {
let sig: G2Affine = sig.into();
agg_point += sig;
for (sig, &idx) in sigs.iter().zip(indices.iter()) {

does it make sense to add a sanity check asserting that both collections have the same length? might be a debug assertion


I see you're doing this inside the loop, but this could be done before and save some cycles.
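
A sketch of how both points could be handled once, before the loop (names mirror the quoted code, so treat this as illustrative rather than a drop-in patch):

        // Fail fast in debug builds if the two collections drift apart.
        debug_assert_eq!(
            sigs.len(),
            indices.len(),
            "sigs and indices must have the same length"
        );

        // Validate all indices up front, before any scalar multiplications.
        if let Some(&bad) = indices.iter().find(|&&idx| idx >= self.terms.len()) {
            return Err(BLSError::SignerIndexOutOfRange(bad));
        }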

pub fn aggregate_pub_keys(&self) -> Result<PublicKey, BLSError> {
// Standard BLS aggregation
/// Aggregates public keys indices using BDN aggregation with coefficients.
/// Computes: sum((coef_i + 1) * pub_key_i)

Suggested change
/// Computes: sum((coef_i + 1) * pub_key_i)
/// Computes: `sum((coef_i + 1) * pub_key_i)`

Comment on lines +81 to +83
            if idx >= self.terms.len() {
                return Err(BLSError::SignerIndexOutOfRange(idx));
            }

same comment as in the other method.


Do we have a unit test to prove the correctness of this implementation?

            // BLS12-381 scalars expects little-endian byte representation
            let scalar = Scalar::from_bytes(&bytes_32);
            if scalar.is_some().into() {
                coefficients.push(scalar.unwrap());

Let's not unwrap unless it's 100% infallible.
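
One hedged option is to convert the CtOption into an Option and propagate the existing InvalidScalar error (constructor name follows the quoted snippet):

            // Scalar::from_bytes returns a subtle::CtOption; convert it to Option and
            // return InvalidScalar instead of unwrapping.
            let scalar = Option::<Scalar>::from(Scalar::from_bytes(&bytes_32))
                .ok_or(BLSError::InvalidScalar)?;
            coefficients.push(scalar);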

    point_cache: RwLock<LruCache<Vec<u8>, PublicKey>>,
    /// Cache for current power table's BDN aggregation
    /// Only caches the current since power tables don't tend to repeat after rotation
    bdn_cache: RwLock<Option<(Vec<PubKey>, Arc<BDNAggregation>)>>,

nit: Given the doc, I recommend renaming it to current_bdn_cache.


Development

Successfully merging this pull request may close these issues.

Implement BDN aggregation scheme

3 participants