Contributing to rust-cache-benchmarks

Thanks for your interest in contributing! rust-cache-benchmarks is a benchmark harness that compares throughput, hit rate, and latency across concurrent in-memory caching libraries for Rust under a realistic Zipfian workload. It is intentionally focused — a fair, reproducible comparison tool, not a framework or library. The guidelines below exist to keep it that way.
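To make "Zipfian workload" concrete, here is a minimal, dependency-free sketch of Zipfian key sampling via an inverse CDF. The exponent, RNG (a tiny LCG here), and function names are illustrative assumptions — the harness's actual workload generator may differ in all of these details.

```rust
// Sketch of Zipfian sampling over a key space of size n: P(rank k) ∝ 1/k^s.
// The LCG, seed, and exponent below are assumptions for demonstration only.
fn zipf_counts(n: usize, s: f64, samples: usize, seed: u64) -> Vec<usize> {
    // Precompute the normalized CDF over ranks 1..=n.
    let weights: Vec<f64> = (1..=n).map(|k| 1.0 / (k as f64).powf(s)).collect();
    let total: f64 = weights.iter().sum();
    let mut cdf = Vec::with_capacity(n);
    let mut acc = 0.0;
    for w in &weights {
        acc += w / total;
        cdf.push(acc);
    }

    // Tiny LCG so the sketch stays dependency-free (not the harness's RNG).
    let mut state = seed;
    let mut counts = vec![0usize; n];
    for _ in 0..samples {
        state = state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        let u = (state >> 11) as f64 / (1u64 << 53) as f64;
        // First rank whose cumulative probability reaches u.
        let idx = cdf.partition_point(|&c| c < u).min(n - 1);
        counts[idx] += 1;
    }
    counts
}

fn main() {
    let counts = zipf_counts(1000, 1.0, 100_000, 42);
    // Hot keys dominate: rank 0 is sampled far more often than rank 99.
    println!("rank0={} rank99={}", counts[0], counts[99]);
    assert!(counts[0] > counts[99]);
}
```

The skew is what makes cache comparisons realistic: a small set of hot keys receives most accesses, so eviction policy quality shows up directly in the hit rate.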

How to contribute

Things we will merge

  • New cache implementations that wrap a published crate and follow the existing run_bench pattern in src/caches/
  • Bugfixes in benchmark methodology, workload generation, or metric collection
  • Improvements to benchmark fairness (e.g. ensuring all caches receive identical key sequences)
  • Documentation updates that improve accuracy — both inline comments and README.md
  • Workload additions (new access patterns, key distributions) that are well-motivated and don't break existing comparisons

Things we won't merge

  • Changes that bias results toward or against any particular cache crate
  • Benchmark changes without evidence they improve fairness or correctness
  • Code that introduces new dependencies unrelated to caching or benchmarking
  • Cache implementations that are not published crates (publish your crate first)
  • Code without clear, descriptive commit messages
  • Code that breaks existing benchmarks, cargo clippy -- -D warnings, or cargo fmt --check
  • Documentation changes that are verbose, speculative, or duplicate what the code already makes obvious

Workflow

  1. Fork the repository and create a branch from main.
  2. Make your changes. Keep commits focused; prefer small PRs.
  3. If adding a new cache, follow the Adding a new cache instructions below.
  4. Run the full local check (see Local checks). CI runs the same commands and will fail on warnings.
  5. For new caches or workload changes, include benchmark output in the PR description.
  6. Open a pull request against main.

Local checks

CI runs against Rust 1.94 (the project's MSRV — see rust-toolchain.toml and Cargo.toml's rust-version). All of these must pass:

cargo fmt --check
cargo clippy --all-targets --locked -- -D warnings
cargo build --locked
cargo build --release --locked
cargo test --locked

# Supply-chain audit. CI runs this with `--deny warnings`, so any *new*
# unmaintained-crate or unsoundness advisory will fail the pipeline. Currently
# accepted advisories are listed in .cargo/audit.toml with rationale.
# Install with `cargo install --locked cargo-audit` if you don't have it locally.
cargo audit --deny warnings
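For reference, an ignore entry in `.cargo/audit.toml` looks roughly like the sketch below. The advisory ID is a placeholder, not a real accepted advisory — check the actual file for the current list and rationale.

```toml
[advisories]
# Each accepted advisory carries a rationale comment, e.g.:
# RUSTSEC-0000-0000 (placeholder): hypothetical unmaintained crate that is
# only used by the harness itself, never in a measured code path.
ignore = ["RUSTSEC-0000-0000"]
```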

Running benchmarks

The benchmark binary is built from src/main.rs:

cargo run --release

Run benchmarks on a quiet machine with no other load. When comparing results, run the suite both before and after your change on the same machine and paste both summaries into the PR description.

Adding a new cache

Each cache lives in its own file under src/caches/<name>.rs and exposes a single entry point that the benchmark harness calls:

pub async fn run_bench(
    cfg: Arc<BenchConfig>,
    value_pool: Arc<Vec<Arc<String>>>,
    key_pool: Arc<Vec<String>>,
) -> BenchResults

All cache files follow an identical warmup → barrier → measurement-loop structure. Use any existing file (e.g. src/caches/moka.rs) as a template.
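The warmup → barrier → measurement-loop structure can be sketched as follows. This is a simplified, synchronous stand-in: the real `run_bench` is async, wraps a published cache crate rather than a `Mutex<HashMap>`, and the `BenchConfig`/`BenchResults` fields shown here are assumptions, since this guide does not specify them.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Barrier, Mutex};
use std::thread;

// Hypothetical, simplified stand-ins for the harness's config/result types.
pub struct BenchConfig {
    pub threads: usize,
    pub ops_per_thread: usize,
}

#[derive(Debug)]
pub struct BenchResults {
    pub total_ops: usize,
    pub hits: usize,
}

// Synchronous sketch of the warmup → barrier → measurement-loop shape.
pub fn run_bench(cfg: Arc<BenchConfig>, key_pool: Arc<Vec<String>>) -> BenchResults {
    let cache = Arc::new(Mutex::new(HashMap::<String, String>::new()));

    // Warmup: populate the cache before any timing starts.
    for k in key_pool.iter() {
        cache.lock().unwrap().insert(k.clone(), format!("v:{k}"));
    }

    // Barrier: every worker begins its measurement loop at the same moment,
    // so no thread gets a head start.
    let barrier = Arc::new(Barrier::new(cfg.threads));
    let mut handles = Vec::new();
    for t in 0..cfg.threads {
        let (cache, barrier, keys, cfg) =
            (cache.clone(), barrier.clone(), key_pool.clone(), cfg.clone());
        handles.push(thread::spawn(move || {
            barrier.wait();
            let mut hits = 0usize;
            // Measurement loop: a deterministic key sequence per thread.
            for i in 0..cfg.ops_per_thread {
                let key = &keys[(t + i) % keys.len()];
                if cache.lock().unwrap().get(key).is_some() {
                    hits += 1;
                }
            }
            hits
        }));
    }

    let hits: usize = handles.into_iter().map(|h| h.join().unwrap()).sum();
    BenchResults {
        total_ops: cfg.threads * cfg.ops_per_thread,
        hits,
    }
}

fn main() {
    let cfg = Arc::new(BenchConfig { threads: 4, ops_per_thread: 1000 });
    let keys: Arc<Vec<String>> = Arc::new((0..64).map(|i| format!("key{i}")).collect());
    println!("{:?}", run_bench(cfg, keys));
}
```

The barrier is the fairness-critical piece: without it, early-starting threads would measure against a less-contended cache than late-starting ones.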

  1. Add the crate to [dependencies] in Cargo.toml.
  2. Create src/caches/<crate_name>.rs exposing the run_bench function above.
  3. Register the module in src/caches/mod.rs (pub mod <crate_name>;).
  4. Add the cache name in two places in src/main.rs:
    • Append it to the ALL_CACHES constant (this drives --caches validation).
    • Add a matching arm in dispatch().
  5. Run the local checks plus the full benchmark suite (cargo run --release) and verify the new cache appears in the output.
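Step 4's two touch points can be sketched like this. The contents of `ALL_CACHES`, the `dispatch` signature, and the return values are hypothetical — in the real src/main.rs the match arms call each module's `run_bench` rather than returning strings.

```rust
// Hypothetical sketch of the two src/main.rs touch points.
const ALL_CACHES: &[&str] = &["moka", "my_new_cache"];

fn dispatch(name: &str) -> Result<String, String> {
    match name {
        // One arm per registered cache; the real arms invoke the
        // corresponding module's run_bench instead of returning a string.
        "moka" => Ok("moka bench".to_string()),
        "my_new_cache" => Ok("my_new_cache bench".to_string()),
        other => Err(format!("unknown cache: {other}; expected one of {:?}", ALL_CACHES)),
    }
}

fn main() {
    // --caches validation checks requested names against ALL_CACHES
    // before dispatch() is ever called.
    assert!(ALL_CACHES.contains(&"my_new_cache"));
    println!("{}", dispatch("my_new_cache").unwrap());
}
```

Forgetting either touch point is the common failure mode: a cache missing from `ALL_CACHES` is rejected by `--caches` validation, and a missing match arm means it never runs.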

License

By contributing, you agree that your contributions will be licensed under the MIT license.