5 changes: 3 additions & 2 deletions .config/rust-f3.dic
@@ -1,4 +1,4 @@
26
23
API/M
APIs
benchmark/GD
@@ -41,4 +41,5 @@ G1
G2
JSON
CBOR
TODO
TODO
+
⚠️ Potential issue | 🟡 Minor

Remove the stray + character.

The + character does not belong in a spell-check dictionary file. Dictionary files typically contain one term per line without special characters.

Apply this diff to remove it:

-+
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
+
# (line 48 removed; no content remains here)
🤖 Prompt for AI Agents
In .config/rust-f3.dic around line 48, remove the stray "+" character that was
accidentally added; open the file, delete the lone "+" on line 48 so the
dictionary contains only valid entries (one term per line), save the file, and
ensure no other stray punctuation remains.

2 changes: 2 additions & 0 deletions Cargo.toml
@@ -14,6 +14,7 @@ ahash = "0.8"
anyhow = "1"
base32 = "0.5"
base64 = "0.22"
blake2 = { git = "https://github.com/huitseeker/hashes.git", rev = "4d3debf264a45da9e33d52645eb6ee9963336f66", features = ["blake2x"] } # PR: https://github.com/RustCrypto/hashes/pull/704
bls-signatures = { version = "0.15" }
bls12_381 = "0.8"
cid = { version = "0.11", features = ["std"] }
@@ -28,6 +29,7 @@ num-bigint = { version = "0.4", features = ["serde"] }
num-traits = "0.2"
parking_lot = "0.12"
rand = "0.8"
rayon = "1.10"
serde = { version = "1", features = ["derive"] }
serde_cbor = { package = "serde_cbor_2", version = "0.13" }
serde_json = { version = "1", features = ["raw_value"] }
3 changes: 3 additions & 0 deletions blssig/Cargo.toml
@@ -10,12 +10,15 @@ edition.workspace = true
rust-version.workspace = true

[dependencies]
blake2.workspace = true
bls-signatures.workspace = true
bls12_381.workspace = true
filecoin-f3-gpbft = { path = "../gpbft", version = "0.1.0" }
hashlink.workspace = true
parking_lot.workspace = true
rayon.workspace = true
thiserror.workspace = true

[dev-dependencies]
hex.workspace = true
rand.workspace = true
121 changes: 100 additions & 21 deletions blssig/src/bdn/mod.rs
Member
Do we have unit tests to prove the correctness of this implementation?

Contributor Author
The coverage was indeed limited; I've now added more here: moshababo@b461096

@@ -2,16 +2,24 @@
// SPDX-License-Identifier: Apache-2.0, MIT

//! BDN (Boneh-Drijvers-Neven) signature aggregation scheme, for preventing rogue public-key attacks.
//! Those attacks could allow an attacker to forge a public-key and then make a verifiable
//! signature for an aggregation of signatures. It fixes the situation by adding coefficients to the aggregate.
//!
//! NOTE: currently uses standard BLS aggregation without coefficient weighting, hence returns incorrect values compared to go-f3.
//! See the papers:
//! - <https://eprint.iacr.org/2018/483.pdf>
//! - <https://crypto.stanford.edu/~dabo/pubs/papers/BLSmultisig.html>
//!
use crate::verifier::BLSError;
use bls_signatures::{PublicKey, Signature};
use bls12_381::{G1Projective, G2Affine, G2Projective};
use blake2::Blake2xs;
use blake2::digest::{ExtendableOutput, Update, XofReader};
use bls_signatures::{PublicKey, Serialize, Signature};
use bls12_381::{G1Projective, G2Projective, Scalar};
use rayon::prelude::*;

/// BDN aggregation context for managing signature and public key aggregation
pub struct BDNAggregation {
pub_keys: Vec<PublicKey>,
pub(crate) coefficients: Vec<Scalar>,
pub(crate) terms: Vec<PublicKey>,
}

impl BDNAggregation {
@@ -20,43 +28,114 @@ impl BDNAggregation {
            return Err(BLSError::EmptyPublicKeys);
        }

        Ok(Self { pub_keys })
        let coefficients = Self::calc_coefficients(&pub_keys)?;
        let terms = Self::calc_terms(&pub_keys, &coefficients);

        Ok(Self {
            coefficients,
            terms,
        })
    }

    /// Aggregates signatures using standard BLS aggregation
    /// TODO: Implement BDN aggregation scheme: https://github.com/ChainSafe/rust-f3/issues/29
    pub fn aggregate_sigs(&self, sigs: Vec<Signature>) -> Result<Signature, BLSError> {
        if sigs.len() != self.pub_keys.len() {
    /// Aggregates signatures using BDN aggregation with coefficients.
    /// Computes: `sum((coef_i + 1) * sig_i)` for signatures at the given indices
    pub fn aggregate_sigs(
        &self,
        indices: &[u64],
        sigs: &[Signature],
    ) -> Result<Signature, BLSError> {
        if sigs.len() != indices.len() {
            return Err(BLSError::LengthMismatch {
                pub_keys: self.pub_keys.len(),
                pub_keys: indices.len(),
                sigs: sigs.len(),
            });
        }

        // Standard BLS aggregation
        for &idx in indices {
            if idx as usize >= self.coefficients.len() {
                return Err(BLSError::SignerIndexOutOfRange(idx as usize));
            }
        }

        let mut agg_point = G2Projective::identity();
        for sig in sigs {
            let sig: G2Affine = sig.into();
            agg_point += sig;
        for (sig, &idx) in sigs.iter().zip(indices.iter()) {
            let coef = self.coefficients[idx as usize];
            let sig_point: G2Projective = (*sig).into();
            let sig_c = sig_point * coef;
            let sig_c = sig_c + sig_point;

            agg_point += sig_c;
        }

        // Convert back to Signature
        let agg_sig: Signature = agg_point.into();
        Ok(agg_sig)
    }

    /// Aggregates public keys using standard BLS aggregation
    /// TODO: Implement BDN aggregation scheme: https://github.com/ChainSafe/rust-f3/issues/29
    pub fn aggregate_pub_keys(&self) -> Result<PublicKey, BLSError> {
        // Standard BLS aggregation
    /// Aggregates the public keys at the given indices using BDN aggregation with coefficients.
    /// Computes: `sum((coef_i + 1) * pub_key_i)`
    pub fn aggregate_pub_keys(&self, indices: &[u64]) -> Result<PublicKey, BLSError> {
        for &idx in indices {
            if idx as usize >= self.terms.len() {
                return Err(BLSError::SignerIndexOutOfRange(idx as usize));
            }
        }

        // Sum of pre-computed terms (which are already `(coef_i + 1) * pub_key_i`)
        let mut agg_point = G1Projective::identity();
        for pub_key in &self.pub_keys {
            let pub_key_point: G1Projective = (*pub_key).into();
            agg_point += pub_key_point;
        for &idx in indices {
            let term_point: G1Projective = self.terms[idx as usize].into();
            agg_point += term_point;
        }

        // Convert back to PublicKey
        let agg_pub_key: PublicKey = agg_point.into();
        Ok(agg_pub_key)
    }
coderabbitai[bot] marked this conversation as resolved.

    pub fn calc_coefficients(pub_keys: &[PublicKey]) -> Result<Vec<Scalar>, BLSError> {
        let mut hasher = Blake2xs::new(0xFFFF);

        // Hash all public keys
        for pub_key in pub_keys {
            let bytes = pub_key.as_bytes();
            hasher.update(&bytes);
        }

        // Read 16 bytes per public key
        let mut reader = hasher.finalize_xof();
        let mut output = vec![0u8; pub_keys.len() * 16];
        reader.read(&mut output);

        // Convert every consecutive 16-byte chunk to a scalar
        let mut coefficients = Vec::with_capacity(pub_keys.len());
        for i in 0..pub_keys.len() {
            let chunk = &output[i * 16..(i + 1) * 16];

            // Convert 16 bytes to 32 bytes for the scalar (pad with zeros)
            let mut bytes_32 = [0u8; 32];
            bytes_32[..16].copy_from_slice(chunk);

            // BLS12-381 scalars expect a little-endian byte representation
            let scalar = Scalar::from_bytes(&bytes_32)
                .into_option()
                .ok_or(BLSError::InvalidScalar)?;
            coefficients.push(scalar);
        }

        Ok(coefficients)
    }
Comment on lines +96 to +127

🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

Coefficient derivation: tighten XOF usage, domain separation, and scalar mapping.

  • Use exact out_len, not 0xFFFF.
  • Add a domain separation tag.
  • Consider including the message per BDN variant (compatibility dependent).
  • Derive scalars uniformly via from_bytes_wide with 64 bytes per coeff (128‑bit is OK but sub‑optimal).

Apply:

-    pub fn calc_coefficients(pub_keys: &[PublicKey]) -> Result<Vec<Scalar>, BLSError> {
-        let mut hasher = Blake2xs::new(0xFFFF);
+    pub fn calc_coefficients(pub_keys: &[PublicKey]) -> Result<Vec<Scalar>, BLSError> {
+        // 64 bytes per coefficient; uniform reduction via from_bytes_wide
+        let out_len = pub_keys.len() * 64;
+        let mut hasher = Blake2xs::new(out_len);
 
-        // Hash all public keys
+        // Domain separation + all public keys (compressed)
+        hasher.update(b"F3-BDN-COEFF-V1");
         for pub_key in pub_keys {
             let bytes = pub_key.as_bytes();
             hasher.update(&bytes);
         }
 
-        // Read 16 bytes per public key
         let mut reader = hasher.finalize_xof();
-        let mut output = vec![0u8; pub_keys.len() * 16];
+        let mut output = vec![0u8; out_len];
         reader.read(&mut output);
 
-        // Convert every consecutive 16 bytes chunk to a scalar
         let mut coefficients = Vec::with_capacity(pub_keys.len());
-        for i in 0..pub_keys.len() {
-            let chunk = &output[i * 16..(i + 1) * 16];
-
-            // Convert 16 bytes to 32 bytes, for scalar (pad with zeros)
-            let mut bytes_32 = [0u8; 32];
-            bytes_32[..16].copy_from_slice(chunk);
-
-            // BLS12-381 scalars expects little-endian byte representation
-            let scalar = Scalar::from_bytes(&bytes_32);
-            if scalar.is_some().into() {
-                coefficients.push(scalar.unwrap());
-            } else {
-                return Err(BLSError::InvalidScalar);
-            }
+        for i in 0..pub_keys.len() {
+            let chunk = &output[i * 64..(i + 1) * 64];
+            let mut wide = [0u8; 64];
+            wide.copy_from_slice(chunk);
+            let scalar = Scalar::from_bytes_wide(&wide); // uniform mod r
+            coefficients.push(scalar);
         }
 
         Ok(coefficients)
     }

Verification needed: does go-f3 include message in the coefficient hash (L vs L||msg)? Aligning is critical for cross‑impl compatibility.


🌐 Web query:

In go-f3's BDN multi-signature implementation, are coefficients derived from H(L) only, or H(L || msg)? Provide file/line refs if possible.

💡 Result:

Short answer: coefficients are derived from H(L) only (i.e., from the public-key list), not H(L || msg).

Evidence:

  • kyber's BDN docs/state that AggregateSignatures uses c = H(pk) (coefficient = hash of the public key / key list). [1]
  • go-f3 uses a BDN-style BLS implementation in its blssig module (BDN aggregation is used for multisig). [2][3]

References:

  1. kyber bdn package docs — AggregateSignatures: "aggregates the signatures using a coefficient for each one of them where c = H(pk)". (pkg: github.com/.../kyber/sign/bdn). [1]
  2. go-f3 repository — blssig module (uses BDN/BDN-style aggregation). (go-f3/blssig). [2]
  3. Filecoin / bls/BDN references in Filecoin docs (BLS aggregation / BDN multisig). [3]

If you want I can open the exact kyber bdn.go source and paste the exact file+line reference.


Tighten XOF usage and scalar mapping; maintain BDN compatibility

  • Use exact out_len = pub_keys.len() * 64 and initialize Blake2xs with that.
  • Prepend a clear domain-separation tag (e.g. b"F3-BDN-COEFF-V1") before hashing keys.
  • Read 64 bytes per key and derive scalars via Scalar::from_bytes_wide for uniform reduction.
  • Derive coefficients from H(L) only (no message) to match go-f3’s BDN (c = H(pk list) only).
🤖 Prompt for AI Agents
In blssig/src/bdn/mod.rs around lines 85-118, tighten XOF usage and scalar
mapping: initialize the Blake2xs XOF to produce out_len = pub_keys.len() * 64
bytes (not 0xFFFF), prepend a domain-separation tag (e.g. b"F3-BDN-COEFF-V1") to
the hasher before updating with each pubkey, finalize the XOF and read 64 bytes
per public key into the output buffer, and for each 64-byte chunk derive the
coefficient using Scalar::from_bytes_wide (instead of padding 16→32 and
from_bytes) so reduction is uniform; return InvalidScalar on any mapping failure
and ensure only H(L) (the hashed list with the domain tag) is used to derive
coefficients (no extra message).


    pub fn calc_terms(pub_keys: &[PublicKey], coefficients: &[Scalar]) -> Vec<PublicKey> {
        pub_keys
            .par_iter()
            .enumerate()
            .map(|(i, pub_key)| {
                let pub_key_point: G1Projective = (*pub_key).into();
                let pub_c = pub_key_point * coefficients[i];
                let term = pub_c + pub_key_point;
                term.into()
            })
            .collect()
    }
}
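For intuition, the precomputed-terms trick in `calc_terms`/`aggregate_pub_keys` above can be mimicked with plain integers standing in for group elements (a toy model only; the real code multiplies `G1Projective` points by `Scalar` coefficients):

```rust
/// Toy model of BDN term precomputation: term_i = (coef_i + 1) * key_i,
/// computed as coef_i * key_i + key_i (mirroring the point arithmetic,
/// where adding the point avoids forming coef + 1 explicitly).
fn calc_terms(keys: &[u64], coefficients: &[u64]) -> Vec<u64> {
    keys.iter()
        .zip(coefficients)
        .map(|(key, coef)| coef * key + key)
        .collect()
}

/// With terms precomputed, aggregation over signer indices is a plain sum.
fn aggregate(terms: &[u64], indices: &[usize]) -> u64 {
    indices.iter().map(|&i| terms[i]).sum()
}

fn main() {
    let keys = [10u64, 20, 30];
    let coefs = [2u64, 3, 4];
    let terms = calc_terms(&keys, &coefs);
    assert_eq!(terms, vec![30, 80, 150]); // (2+1)*10, (3+1)*20, (4+1)*30
    assert_eq!(aggregate(&terms, &[0, 2]), 180);
    println!("{:?}", terms);
}
```

The design choice mirrored here is that `new` pays the per-key scalar multiplication once, so each later `aggregate_pub_keys` call over a subset of signers only performs point additions.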
5 changes: 4 additions & 1 deletion blssig/src/lib.rs
@@ -8,7 +8,10 @@
//! The BDN (Boneh-Drijvers-Neven) scheme is used for signature and public key aggregation
//! to prevent rogue public-key attacks.

pub use verifier::{BLSError, BLSVerifier};

mod bdn;
mod verifier;

pub use verifier::{BLSError, BLSVerifier};
#[cfg(test)]
mod tests;