Under heavy development. Expect breakages and outdated docs.
Distributed systems primitives in Rust, built on iroh P2P QUIC. An ordered, transactional KV store sits at the bottom (Raft consensus); everything else is built on top as key reads and writes.
Aspen's source lives in its own Git forge, is built by its own CI, and is deployed to its own cluster. `nix run .#dogfood-local` runs this end-to-end.
| Layer | Components |
|---|---|
| Applications | Forge, CI/CD, Secrets, DNS, Automerge, FUSE |
| Coordination | Locks, Elections, Queues, Barriers, Semaphores, Counters |
| Core | KV Store (Raft) + Blob Store (iroh-blobs) + Docs (iroh-docs) |
| Consensus | OpenRaft (vendored) + Redb (single-fsync writes) |
| Transport | Iroh QUIC (ALPN multiplexing, gossip, mDNS, DHT) |
A single QUIC endpoint handles Raft replication, client RPCs, blob transfer, gossip, and federation through ALPN routing. No HTTP, no REST -- all communication goes through iroh.
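The single-endpoint design can be sketched as a dispatch on the connection's ALPN string. This is an illustrative sketch only: the ALPN identifiers and the `Service` enum below are assumptions, not Aspen's actual protocol strings.

```rust
// Sketch of ALPN-based multiplexing: every incoming QUIC connection
// carries an ALPN string that selects which protocol handler serves it.
// The ALPN constants here are hypothetical, not Aspen's real ones.
#[derive(Debug, PartialEq)]
enum Service {
    Raft,
    ClientRpc,
    Blobs,
    Gossip,
    Federation,
}

fn route(alpn: &[u8]) -> Option<Service> {
    match alpn {
        b"aspen/raft/1" => Some(Service::Raft),
        b"aspen/rpc/1" => Some(Service::ClientRpc),
        b"aspen/blobs/1" => Some(Service::Blobs),
        b"aspen/gossip/1" => Some(Service::Gossip),
        b"aspen/fed/1" => Some(Service::Federation),
        _ => None, // unknown ALPN: reject the connection
    }
}

fn main() {
    assert_eq!(route(b"aspen/raft/1"), Some(Service::Raft));
    assert_eq!(route(b"http/1.1"), None); // no HTTP handler exists
}
```

One endpoint, one port: adding a protocol means adding an ALPN string and a handler, not a new listener.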
Storage uses redb, with the Raft log and state machine applied in a single transaction (one fsync per write batch). Concurrent client writes are batched into one Raft proposal. OpenRaft is vendored at `openraft/openraft` for direct patching.
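The batching idea can be shown with a toy store: many queued client writes are drained into one proposal and committed with a single durable flush. All names here are hypothetical; this is a sketch of the pattern, not Aspen's code.

```rust
use std::collections::BTreeMap;

// Toy illustration of write batching: apply a whole batch of client
// writes in one "transaction", paying one fsync for the batch instead
// of one per write. Struct and field names are illustrative.
struct Store {
    data: BTreeMap<String, String>,
    fsyncs: usize,
}

impl Store {
    fn apply_batch(&mut self, batch: Vec<(String, String)>) {
        for (k, v) in batch {
            self.data.insert(k, v);
        }
        self.fsyncs += 1; // one durable commit for the whole batch
    }
}

fn main() {
    let mut store = Store { data: BTreeMap::new(), fsyncs: 0 };
    let queued = vec![
        ("a".into(), "1".into()),
        ("b".into(), "2".into()),
        ("c".into(), "3".into()),
    ];
    store.apply_batch(queued); // three client writes, one fsync
    assert_eq!(store.fsyncs, 1);
    assert_eq!(store.data.get("a").map(String::as_str), Some("1"));
}
```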
Core traits are `ClusterController` (membership) and `KeyValueStore` (KV ops), both implemented by `RaftNode`.
Distributed primitives on the KV layer's compare-and-swap. All linearizable through Raft.
- `DistributedLock` / `DistributedRwLock` -- mutual exclusion with fencing tokens, reader-writer with fairness
- `LeaderElection` -- lease-based with automatic renewal
- `DistributedBarrier` -- N-party synchronization
- `Semaphore`, `AtomicCounter`, `SequenceGenerator` -- bounded access, counters, monotonic IDs
- `DistributedRateLimiter` -- token bucket
- `QueueManager` -- FIFO with visibility timeout, ack/nack, dead-letter queue
- `ServiceRegistry` -- discovery with health checks
- `WorkerCoordinator` -- work stealing, load balancing, failover
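The common mechanism underneath these primitives can be sketched as a compare-and-swap loop over the KV store: acquire succeeds only if the lock key transitions from absent to our holder id, and each successful acquire mints a strictly larger fencing token. The types and method names below are hypothetical; the real primitives route every CAS through Raft for linearizability.

```rust
use std::collections::HashMap;

// Toy KV with compare-and-swap, plus a lock built on top of it.
// In-memory stand-in for the Raft-backed store; names are illustrative.
struct Kv {
    map: HashMap<String, String>,
    token: u64,
}

impl Kv {
    fn compare_and_swap(&mut self, key: &str, expected: Option<&str>, new: &str) -> bool {
        if self.map.get(key).map(String::as_str) == expected {
            self.map.insert(key.to_string(), new.to_string());
            true
        } else {
            false
        }
    }

    // Acquire the lock iff nobody holds it; return a fencing token.
    fn try_lock(&mut self, key: &str, holder: &str) -> Option<u64> {
        if self.compare_and_swap(key, None, holder) {
            self.token += 1;
            Some(self.token) // strictly increasing across acquisitions
        } else {
            None
        }
    }
}

fn main() {
    let mut kv = Kv { map: HashMap::new(), token: 0 };
    assert_eq!(kv.try_lock("my_lock", "client_1"), Some(1));
    assert_eq!(kv.try_lock("my_lock", "client_2"), None); // already held
}
```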
```rust
let lock = DistributedLock::new(store, "my_lock", "client_1", LockConfig::default());
let guard = lock.acquire().await?;
let token = guard.fencing_token(); // pass to external services
// released on drop
```

```rust
let election = LeaderElection::new(store, "service-leader", "node-1", ElectionConfig::default());
let handle = election.start().await?;
if handle.is_leader() { /* lead */ }
```

Pure business logic lives in `src/verified/`, with formal proofs in `verus/` covering properties such as fencing token monotonicity and mutual exclusion.
Git hosting on Aspen's storage layers. Git objects go into iroh-blobs (BLAKE3), refs go into Raft KV. Issues, patches, and reviews stored as immutable DAGs. iroh-gossip announces new commits to peers.
```shell
git remote add aspen aspen://cluster-ticket/my-repo
git push aspen main
```

Pipelines auto-trigger on Forge pushes. Three executor backends:
- Shell -- host-level, fast builds in pre-isolated environments
- Nix -- sandbox, reproducible flake builds, artifacts to iroh-blobs + binary cache
- VM -- Cloud Hypervisor microVM for untrusted workloads
Pipelines are defined in Nickel (`.aspen/ci.ncl`). Jobs are distributed across the cluster via the Raft-backed job queue.
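The queue semantics this implies can be sketched as the visibility-timeout pattern: a dequeued job becomes invisible for a window, and if the worker never acks it, the job becomes visible again for redelivery. Struct and method names are illustrative, and time is a plain counter for determinism; this is not Aspen's `QueueManager` API.

```rust
use std::collections::VecDeque;

// Toy job queue with a visibility timeout. A dequeued job is hidden
// until `invisible_until`; an un-acked job is redelivered after that.
struct Job {
    id: u32,
    invisible_until: u64,
}

struct Queue {
    jobs: VecDeque<Job>,
    timeout: u64,
}

impl Queue {
    fn dequeue(&mut self, now: u64) -> Option<u32> {
        for job in self.jobs.iter_mut() {
            if job.invisible_until <= now {
                job.invisible_until = now + self.timeout;
                return Some(job.id);
            }
        }
        None
    }

    fn ack(&mut self, id: u32) {
        self.jobs.retain(|j| j.id != id);
    }
}

fn main() {
    let mut q = Queue {
        jobs: VecDeque::from(vec![Job { id: 7, invisible_until: 0 }]),
        timeout: 30,
    };
    assert_eq!(q.dequeue(0), Some(7));  // worker A takes the job
    assert_eq!(q.dequeue(10), None);    // still invisible to others
    assert_eq!(q.dequeue(31), Some(7)); // A never acked: redelivered
    q.ack(7);
    assert_eq!(q.dequeue(62), None);    // done
}
```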
Independent clusters can sync over P2P. Each cluster runs its own consensus and works offline. Discovery uses BitTorrent Mainline DHT (BEP-44). Clusters identify with Ed25519 keypairs. Within a cluster: strong consistency via Raft. Across clusters: pull-based sync with cryptographic verification, eventual consistency.
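The pull-based model can be sketched as: a cluster compares the remote's advertised content hashes against what it already holds, pulls only the missing blobs, and verifies each blob against its hash on arrival. A toy FNV-style hash stands in for BLAKE3 here, and all names are illustrative.

```rust
use std::collections::HashMap;

// Toy content hash (FNV-1a); a stand-in for BLAKE3 in this sketch.
fn toy_hash(data: &[u8]) -> u64 {
    data.iter().fold(14695981039346656037u64, |h, b| {
        (h ^ *b as u64).wrapping_mul(1099511628211)
    })
}

// Pull every blob the remote advertises that we lack, verifying each
// one against its content hash. Returns how many blobs were pulled.
fn pull_missing(
    local: &mut HashMap<u64, Vec<u8>>,
    remote: &HashMap<u64, Vec<u8>>,
) -> usize {
    let mut pulled = 0;
    for (hash, blob) in remote {
        if !local.contains_key(hash) {
            assert_eq!(toy_hash(blob), *hash); // verification stand-in
            local.insert(*hash, blob.clone());
            pulled += 1;
        }
    }
    pulled
}

fn main() {
    let blob = b"commit data".to_vec();
    let mut remote = HashMap::new();
    remote.insert(toy_hash(&blob), blob);
    let mut local = HashMap::new();
    assert_eq!(pull_missing(&mut local, &remote), 1); // first sync pulls it
    assert_eq!(pull_missing(&mut local, &remote), 0); // idempotent
}
```

Because content is addressed by hash, repeated syncs are idempotent and a malicious peer cannot substitute a different blob for an advertised hash.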
Sync is application-level. Two Forge instances sync repos. Two CI systems share artifacts. The core provides transport and blob transfer; applications decide what to sync and when.
See Federation Guide.
FUSE filesystem that mounts a cluster as a POSIX directory. Paths map to KV keys.
```shell
aspen-fuse --mount-point /mnt/aspen --ticket <cluster-ticket>
echo "hello" > /mnt/aspen/myapp/config   # KV write
cat /mnt/aspen/myapp/config              # KV read
ls /mnt/aspen/myapp/                     # virtual directory from key prefixes
```

Also ships a VirtioFS backend for Cloud Hypervisor and QEMU. The VM CI executor uses this to give build jobs direct access to cluster storage.
```shell
nix develop                      # dev shell
cargo build                      # build
cargo build --features full      # everything

# run a node
cargo run --features jobs,docs,blob,hooks,automerge \
    --bin aspen-node -- --node-id 1 --cookie my-cluster

# CLI
cargo run -p aspen-cli -- kv get mykey

# 3-node local cluster
nix run .#cluster

# self-hosted build pipeline
nix run .#dogfood-local
```

```shell
cargo nextest run                                   # all tests
cargo nextest run -P quick                          # skip slow tests
cargo nextest run -E 'test(/raft/)'                 # filter
nix build .#checks.x86_64-linux.kv-operations-test  # NixOS VM test
nix run .#verify-verus                              # Verus proofs
```

madsim for deterministic simulation, proptest/Bolero for property-based testing and fuzzing, NixOS+QEMU for full cluster VM tests, buggify for fault injection, Verus for formal verification (`verus/` specs, `src/verified/` code).
- Deploy
- Federation
- Forge
- Host ABI
- Identity Persistence
- KV Branching
- Plugin Development
- SOPS Secrets
- Tiger Style
- VM Jobs
Aspen: AGPL-3.0-or-later. Vendored OpenRaft: MIT OR Apache-2.0.
