Thanks for your interest in contributing! rust-cache-benchmarks is a benchmark harness that compares throughput, hit rate, and latency across concurrent in-memory caching libraries for Rust under a realistic Zipfian workload. It is intentionally focused — a fair, reproducible comparison tool, not a framework or library. The guidelines below exist to keep it that way.
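For context, a Zipfian workload means a few hot keys dominate traffic while a long tail of keys is accessed rarely. A minimal sketch of how such a rank sampler can be built via an inverse-CDF lookup — this is an illustration only, not the harness's actual generator, and every name here is hypothetical:

```rust
/// Illustrative Zipfian rank sampler (not the harness's real generator).
/// Precomputes a cumulative distribution over ranks 1..=n with exponent `s`.
struct Zipf {
    cdf: Vec<f64>, // cumulative probabilities over ranks
}

impl Zipf {
    fn new(n: usize, s: f64) -> Self {
        let weights: Vec<f64> = (1..=n).map(|k| 1.0 / (k as f64).powf(s)).collect();
        let total: f64 = weights.iter().sum();
        let mut acc = 0.0;
        let cdf = weights
            .iter()
            .map(|w| {
                acc += w / total;
                acc
            })
            .collect();
        Zipf { cdf }
    }

    /// Map a uniform sample in [0, 1) to a 0-based key rank:
    /// the first rank whose cumulative probability reaches `u`.
    fn rank(&self, u: f64) -> usize {
        self.cdf.partition_point(|&c| c < u)
    }
}
```

With `s = 1.0` and a small key space, rank 0 (the hottest key) absorbs roughly a third of all samples, which is what stresses a cache's hot path.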
## What we welcome

- New cache implementations that wrap a published crate and follow the existing `run_bench` pattern in `src/caches/`
- Bugfixes in benchmark methodology, workload generation, or metric collection
- Improvements to benchmark fairness (e.g. ensuring all caches receive identical key sequences)
- Documentation updates that improve accuracy — both inline comments and `README.md`
- Workload additions (new access patterns, key distributions) that are well-motivated and don't break existing comparisons
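On the fairness point above: one way to guarantee identical key sequences is to pre-generate the entire sequence from a fixed seed and hand the same vector to every cache under test. A hedged sketch — the function name and LCG constants are illustrative, not the harness's actual code:

```rust
/// Hypothetical sketch: derive one deterministic key sequence from a seed
/// so every cache benchmarks against identical accesses.
fn make_key_sequence(seed: u64, len: usize, key_space: usize) -> Vec<String> {
    let mut state = seed;
    (0..len)
        .map(|_| {
            // Simple deterministic LCG step; the real harness may instead
            // drive a seeded Zipfian sampler.
            state = state
                .wrapping_mul(6364136223846793005)
                .wrapping_add(1442695040888963407);
            format!("key-{}", (state >> 33) as usize % key_space)
        })
        .collect()
}
```

Because the sequence depends only on the seed, two runs (or two caches) given the same seed see byte-identical key streams, which removes one source of run-to-run noise.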
## What we won't accept

- Changes that bias results toward or against any particular cache crate
- Benchmark changes without evidence they improve fairness or correctness
- Code that introduces new dependencies unrelated to caching or benchmarking
- Cache implementations that are not published crates (publish your own crate first)
- Code without clear, descriptive commit messages
- Code that breaks existing benchmarks, `cargo clippy -- -D warnings`, or `cargo fmt --check`
- Documentation changes that are verbose, speculative, or duplicate what the code already makes obvious
## How to contribute

- Fork the repository and create a branch from `main`.
- Make your changes. Keep commits focused; prefer small PRs.
- If adding a new cache, follow the "Adding a new cache" instructions below.
- Run the full local check (see "Local checks"). CI runs the same commands and will fail on warnings.
- For new caches or workload changes, include benchmark output in the PR description.
- Open a pull request against `main`.
## Local checks

CI runs against Rust 1.94 (the project's MSRV — see `rust-toolchain.toml` and `Cargo.toml`'s `rust-version`). All of these must pass:

```sh
cargo fmt --check
cargo clippy --all-targets --locked -- -D warnings
cargo build --locked
cargo build --release --locked
cargo test --locked

# Supply-chain audit. CI runs this with `--deny warnings`, so any *new*
# unmaintained or unsoundness advisory will fail the pipeline. Currently
# accepted advisories are listed in .cargo/audit.toml with rationale.
# `cargo install --locked cargo-audit` if you don't have it locally.
cargo audit --deny warnings
```

## Running benchmarks

The benchmark binary lives at `src/main.rs`:

```sh
cargo run --release
```

Run benchmarks on a quiet machine with no other load. When comparing results, run both before and after on the same machine and paste the summary into the PR.
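A before/after comparison might look like the following (the output file names are just examples, not anything the harness requires):

```sh
# On main: capture a baseline summary.
cargo run --release | tee baseline.txt

# ...switch to your branch and apply your change...

# On your branch: capture the changed summary on the same machine.
cargo run --release | tee changed.txt
```

Paste both summaries (or a diff of them) into the PR description.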
## Adding a new cache

Each cache lives in its own file under `src/caches/<name>.rs` and exposes a single entry point that the benchmark harness calls:

```rust
pub async fn run_bench(
    cfg: Arc<BenchConfig>,
    value_pool: Arc<Vec<Arc<String>>>,
    key_pool: Arc<Vec<String>>,
) -> BenchResults
```

All cache files follow an identical warmup → barrier → measurement-loop structure. Use any existing file (e.g. `src/caches/moka.rs`) as a template.
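The warmup → barrier → measurement-loop shape can be sketched roughly as below. This is a simplified, synchronous stand-in using threads and `std::sync::Barrier` — the real files are async and perform actual cache operations, and every name here is hypothetical:

```rust
use std::sync::{Arc, Barrier};
use std::thread;
use std::time::Instant;

/// Simplified stand-in for the per-cache bench structure: each worker warms
/// up, waits at a barrier so all workers start measuring together, then runs
/// the measured loop. Returns each worker's elapsed nanoseconds.
fn bench_shape(workers: usize, warmup_ops: usize, measured_ops: usize) -> Vec<u128> {
    let barrier = Arc::new(Barrier::new(workers));
    let handles: Vec<_> = (0..workers)
        .map(|_| {
            let barrier = Arc::clone(&barrier);
            thread::spawn(move || {
                // Warmup: populate state; results are discarded.
                let mut acc = 0u64;
                for i in 0..warmup_ops {
                    acc = acc.wrapping_add(i as u64); // stand-in for cache ops
                }
                // Barrier: no worker starts measuring before the others.
                barrier.wait();
                let start = Instant::now();
                for i in 0..measured_ops {
                    acc = acc.wrapping_add(i as u64); // measured loop
                }
                std::hint::black_box(acc); // keep the work from being optimized out
                start.elapsed().as_nanos()
            })
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}
```

The barrier is what keeps the comparison fair: without it, early-starting workers would measure against an unloaded cache.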
- Add the crate to `[dependencies]` in `Cargo.toml`.
- Create `src/caches/<crate_name>.rs` exposing the `run_bench` function above.
- Register the module in `src/caches/mod.rs` (`pub mod <crate_name>;`).
- Add the cache name in two places in `src/main.rs`:
  - Append it to the `ALL_CACHES` constant (this drives `--caches` validation).
  - Add a matching arm in `dispatch()`.
- Run the local checks plus the full benchmark suite (`cargo run --release`) and verify the new cache appears in the output.
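The two `src/main.rs` registration points might look roughly like this. The signatures are simplified guesses: in the real binary, `dispatch()` would run the cache's `run_bench` and return its `BenchResults`, and the entries shown here are examples:

```rust
/// Hypothetical sketch of the two registration points in src/main.rs.
const ALL_CACHES: &[&str] = &["moka", "quick_cache", "my_new_cache"];

/// Validation driven by ALL_CACHES (what `--caches` would check against).
fn validate_caches(requested: &[&str]) -> Result<(), String> {
    for name in requested {
        if !ALL_CACHES.contains(name) {
            return Err(format!("unknown cache: {name}"));
        }
    }
    Ok(())
}

/// Each arm would call caches::<name>::run_bench in the real binary.
fn dispatch(name: &str) -> Result<String, String> {
    match name {
        "moka" => Ok("ran moka".to_string()),
        "quick_cache" => Ok("ran quick_cache".to_string()),
        "my_new_cache" => Ok("ran my_new_cache".to_string()),
        other => Err(format!("unknown cache: {other}")),
    }
}
```

Forgetting either spot is the usual failure mode: a cache missing from `ALL_CACHES` is rejected at validation, and one missing from `dispatch()` never runs.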
## License

By contributing, you agree that your contributions will be licensed under the MIT license.