brittonr/aspen

Aspen

Heavy development. Expect breakages and outdated docs.

Distributed systems primitives in Rust, built on iroh P2P QUIC. An ordered transactional KV store (Raft consensus) sits at the bottom; everything else is built on top of it as key reads and writes.

Aspen's source lives in its own Git forge, built by its own CI, deployed to its own cluster. nix run .#dogfood-local runs this end-to-end.

Architecture

Applications     Forge, CI/CD, Secrets, DNS, Automerge, FUSE
Coordination     Locks, Elections, Queues, Barriers, Semaphores, Counters
Core             KV Store (Raft) + Blob Store (iroh-blobs) + Docs (iroh-docs)
Consensus        OpenRaft (vendored) + Redb (single-fsync writes)
Transport        Iroh QUIC (ALPN multiplexing, gossip, mDNS, DHT)

A single QUIC endpoint handles Raft replication, client RPCs, blob transfer, gossip, and federation through ALPN routing. No HTTP, no REST -- all communication goes through iroh.

Storage uses redb with the Raft log and state machine applied in a single transaction (one fsync per write batch). Concurrent client writes get batched into one Raft proposal. OpenRaft is vendored at openraft/openraft for direct patching.
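The batching step can be sketched generically. Everything below (the channel, the Write type) is illustrative rather than Aspen's actual API; it only shows the shape of coalescing whatever writes are currently queued into a single proposal:

```rust
use std::sync::mpsc;

// A pending client write (illustrative; not Aspen's real types).
#[derive(Debug, Clone, PartialEq)]
struct Write {
    key: String,
    value: Vec<u8>,
}

// Drain every write currently queued into one batch, so the whole
// batch can become a single Raft proposal (one fsync on apply).
fn drain_batch(rx: &mpsc::Receiver<Write>) -> Vec<Write> {
    let mut batch = Vec::new();
    while let Ok(w) = rx.try_recv() {
        batch.push(w);
    }
    batch
}

fn main() {
    let (tx, rx) = mpsc::channel();
    for i in 0..3u8 {
        tx.send(Write { key: format!("k{i}"), value: vec![i] }).unwrap();
    }
    let batch = drain_batch(&rx);
    println!("{} concurrent writes coalesced into 1 proposal", batch.len());
}
```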

Core traits are ClusterController (membership) and KeyValueStore (KV ops), both implemented by RaftNode.
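As a rough illustration of that trait split (the trait names come from this README, but the signatures and the toy implementation below are assumptions, not Aspen's real API):

```rust
use std::collections::BTreeMap;

// Illustrative shapes only: ClusterController and KeyValueStore are
// named above, but these method signatures are assumptions.
trait ClusterController {
    fn members(&self) -> Vec<u64>;
}

trait KeyValueStore {
    fn put(&mut self, key: &str, value: Vec<u8>);
    fn get(&self, key: &str) -> Option<Vec<u8>>;
}

// A toy node implementing both traits behind one type, mirroring how
// a single RaftNode serves both membership and KV operations.
struct ToyNode {
    members: Vec<u64>,
    kv: BTreeMap<String, Vec<u8>>,
}

impl ClusterController for ToyNode {
    fn members(&self) -> Vec<u64> {
        self.members.clone()
    }
}

impl KeyValueStore for ToyNode {
    fn put(&mut self, key: &str, value: Vec<u8>) {
        self.kv.insert(key.to_string(), value);
    }
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.kv.get(key).cloned()
    }
}

fn main() {
    let mut node = ToyNode { members: vec![1, 2, 3], kv: BTreeMap::new() };
    node.put("greeting", b"hello".to_vec());
    println!("{} members, greeting = {:?}", node.members().len(), node.get("greeting"));
}
```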

Coordination

Distributed primitives on the KV layer's compare-and-swap. All linearizable through Raft.

  • DistributedLock / DistributedRwLock -- mutual exclusion with fencing tokens, reader-writer with fairness
  • LeaderElection -- lease-based with automatic renewal
  • DistributedBarrier -- N-party synchronization
  • Semaphore, AtomicCounter, SequenceGenerator -- bounded access, counters, monotonic IDs
  • DistributedRateLimiter -- token bucket
  • QueueManager -- FIFO with visibility timeout, ack/nack, dead letter queue
  • ServiceRegistry -- discovery with health checks
  • WorkerCoordinator -- work stealing, load balancing, failover

let lock = DistributedLock::new(store, "my_lock", "client_1", LockConfig::default());
let guard = lock.acquire().await?;
let token = guard.fencing_token(); // pass to external services
// released on drop

let election = LeaderElection::new(store, "service-leader", "node-1", ElectionConfig::default());
let handle = election.start().await?;
if handle.is_leader() { /* lead */ }
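All of these primitives reduce to compare-and-swap on the KV layer. A minimal sketch of how a lock with fencing tokens can be built on CAS (the in-memory store, key names, and token scheme here are illustrative stand-ins, not Aspen's implementation):

```rust
use std::collections::HashMap;

// An in-memory stand-in for the KV layer's compare-and-swap.
struct CasStore {
    map: HashMap<String, String>,
    fencing_counter: u64,
}

impl CasStore {
    fn new() -> Self {
        Self { map: HashMap::new(), fencing_counter: 0 }
    }

    // Set `key` to `new` only if its current value matches `expected`.
    // In Aspen this check-and-set would be linearized through Raft.
    fn compare_and_swap(&mut self, key: &str, expected: Option<&str>, new: &str) -> bool {
        if self.map.get(key).map(String::as_str) == expected {
            self.map.insert(key.to_string(), new.to_string());
            true
        } else {
            false
        }
    }
}

// Take the lock only if no holder is recorded, handing back a
// monotonically increasing fencing token on success.
fn try_acquire(store: &mut CasStore, lock: &str, owner: &str) -> Option<u64> {
    if store.compare_and_swap(lock, None, owner) {
        store.fencing_counter += 1;
        Some(store.fencing_counter)
    } else {
        None
    }
}

fn main() {
    let mut store = CasStore::new();
    let first = try_acquire(&mut store, "my_lock", "client_1");
    let second = try_acquire(&mut store, "my_lock", "client_2");
    println!("client_1: {:?}, client_2: {:?}", first, second);
}
```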

Pure business logic in src/verified/, with formal proofs in verus/ covering properties like fencing token monotonicity and mutual exclusion.
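One way to see why the fencing-token monotonicity property matters: an external service that remembers the highest token it has seen can reject writes from a stale lock holder. A generic sketch of that check (not Aspen's code):

```rust
// An external resource that defends itself against stale lock holders
// by requiring each fencing token to be strictly newer than the last.
struct Resource {
    last_token: u64,
}

impl Resource {
    fn write(&mut self, token: u64) -> Result<(), &'static str> {
        if token <= self.last_token {
            return Err("stale fencing token rejected");
        }
        self.last_token = token;
        Ok(())
    }
}

fn main() {
    let mut r = Resource { last_token: 0 };
    // A current holder's write succeeds; a replayed token is rejected.
    println!("first write:  {:?}", r.write(1));
    println!("stale write:  {:?}", r.write(1));
    println!("newer write:  {:?}", r.write(2));
}
```

Because tokens only increase, a client that paused (GC, network partition) and lost its lock can never overwrite work done by a later holder.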

Forge

Git hosting on Aspen's storage layers. Git objects go into iroh-blobs (BLAKE3), refs go into Raft KV. Issues, patches, and reviews stored as immutable DAGs. iroh-gossip announces new commits to peers.

git remote add aspen aspen://cluster-ticket/my-repo
git push aspen main

CI/CD

Pipelines auto-trigger on Forge pushes. Three executor backends:

  • Shell -- host-level, fast builds in pre-isolated environments
  • Nix -- sandbox, reproducible flake builds, artifacts to iroh-blobs + binary cache
  • VM -- Cloud Hypervisor microVM for untrusted workloads

Pipelines defined in Nickel (.aspen/ci.ncl). Jobs distributed across the cluster via the Raft-backed job queue.

Federation

Independent clusters can sync over P2P. Each cluster runs its own consensus and works offline. Discovery uses BitTorrent Mainline DHT (BEP-44). Clusters identify with Ed25519 keypairs. Within a cluster: strong consistency via Raft. Across clusters: pull-based sync with cryptographic verification, eventual consistency.

Sync is application-level. Two Forge instances sync repos. Two CI systems share artifacts. The core provides transport and blob transfer; applications decide what to sync and when.

See Federation Guide.

AspenFS

FUSE filesystem that mounts a cluster as a POSIX directory. Paths map to KV keys.

aspen-fuse --mount-point /mnt/aspen --ticket <cluster-ticket>

echo "hello" > /mnt/aspen/myapp/config    # KV write
cat /mnt/aspen/myapp/config                # KV read
ls /mnt/aspen/myapp/                       # virtual directory from key prefixes
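The virtual-directory behavior of that ls can be sketched as a pure function over flat keys: the immediate children of a prefix are the distinct next path segments. The key names here are illustrative:

```rust
use std::collections::BTreeSet;

// Derive a directory listing from flat KV keys: collect the first
// path segment of every key under `prefix`, deduplicated and sorted.
fn list_dir(keys: &[&str], prefix: &str) -> Vec<String> {
    let mut entries = BTreeSet::new();
    for key in keys {
        if let Some(rest) = key.strip_prefix(prefix) {
            let segment = rest.split('/').next().unwrap_or(rest);
            if !segment.is_empty() {
                entries.insert(segment.to_string());
            }
        }
    }
    entries.into_iter().collect()
}

fn main() {
    let keys = ["myapp/config", "myapp/logs/today", "other/x"];
    // "myapp/" contains two entries: config and logs.
    println!("{:?}", list_dir(&keys, "myapp/"));
}
```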

Also ships a VirtioFS backend for Cloud Hypervisor and QEMU. The VM CI executor uses this to give build jobs direct access to cluster storage.

Usage

nix develop                    # dev shell
cargo build                    # build
cargo build --features full    # everything
# run a node
cargo run --features jobs,docs,blob,hooks,automerge \
  --bin aspen-node -- --node-id 1 --cookie my-cluster

# CLI
cargo run -p aspen-cli -- kv get mykey

# 3-node local cluster
nix run .#cluster

# self-hosted build pipeline
nix run .#dogfood-local

Testing

cargo nextest run                                    # all tests
cargo nextest run -P quick                           # skip slow tests
cargo nextest run -E 'test(/raft/)'                  # filter
nix build .#checks.x86_64-linux.kv-operations-test   # NixOS VM test
nix run .#verify-verus                               # Verus proofs

  • madsim -- deterministic simulation
  • proptest / Bolero -- property-based testing and fuzzing
  • NixOS + QEMU -- full-cluster VM tests
  • buggify -- fault injection
  • Verus -- formal verification (specs in verus/, verified code in src/verified/)

Docs

License

Aspen: AGPL-3.0-or-later. Vendored OpenRaft: MIT OR Apache-2.0.

About

Hybrid-consensus distributed systems framework in Rust built on iroh, using local-first Raft coordination with eventual peer-to-peer convergence and federated cluster replication.
