A full-scale testnet deployment framework to spin up validator nodes across teams using cloud infrastructure.
This project automates the process of defining, validating, and deploying validator nodes for a shared consensus testnet. It is designed to scale, support team-specific configuration, and deploy to GCP in a reproducible way.
Each team must add their validator configuration under the `validators/<team>` directory and submit a pull request to `main`.
✅ Need a working example? Check out PR #1 for a complete example of adding validator nodes and a boot node for your team.
- `validator_0xNNNN.json`: Metadata for each validator
- `id_0xNNNN.json`: libp2p identity keypair
- `run_validator.yaml`: Runtime Docker config for your validator node (see template)
- (Optional) `boot.json`: Metadata for your boot node (one per team)
- (Optional) `id_boot.json`: Identity for your boot node
- (Optional) `run_boot.yaml`: Runtime Docker config for your boot node (see template)
Once merged to main, a CI workflow will validate and aggregate all validator files.
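The aggregation step can be sketched roughly as below. The file-naming conventions come from this README, but the actual CI script and output path may differ, so treat this as an illustration rather than the real workflow:

```python
import json
from pathlib import Path

def aggregate_validators(root: str = "validators") -> list[dict]:
    """Collect every per-team validator_0xNNNN.json into a single list."""
    entries = []
    for meta in sorted(Path(root).glob("*/validator_0x*.json")):
        data = json.loads(meta.read_text())
        # Each file's "team" field must match the directory it lives in.
        assert data["team"] == meta.parent.name, f"team mismatch in {meta}"
        entries.append(data)
    return entries
```

The result would then be written to `network-config/validators.json` by CI.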
🛑 Do not modify `network-config/validators.json` manually; it is automatically generated from the per-team files during CI.
To prevent collisions and make validator ownership clear, each team is assigned a hex address range:
| Team | Address Range (Hex) | Prefix |
|---|---|---|
| Apollo | 0x1000 – 0x10FF | 0x1000 |
| Juno | 0x2000 – 0x20FF | 0x2000 |
| Madara | 0x3000 – 0x30FF | 0x3000 |
| Pathfinder | 0x4000 – 0x40FF | 0x4000 |
Each validator metadata file must use an address from your team's assigned range.
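The range check is simple to express in code. The following helper mirrors the table above (team names lowercased to match directory slugs); it is a sketch, not part of the repo's tooling:

```python
# Each team owns a 256-address block, per the assignment table above.
TEAM_RANGES = {
    "apollo":     (0x1000, 0x10FF),
    "juno":       (0x2000, 0x20FF),
    "madara":     (0x3000, 0x30FF),
    "pathfinder": (0x4000, 0x40FF),
}

def address_in_range(team: str, address: str) -> bool:
    """True if a hex address string falls inside the team's assigned block."""
    lo, hi = TEAM_RANGES[team]
    return lo <= int(address, 16) <= hi
```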
For complete validator setup documentation, see validators/README.md.
Each team creates a subdirectory under `validators/` with:

- `validator_0xNNNN.json` - Validator metadata (address, peer_id, etc.)
- `id_0xNNNN.json` - libp2p identity keypair
- `run_validator.yaml` - Runtime Docker configuration
- Optional: `boot.json`, `id_boot.json`, `run_boot.yaml` for boot nodes
- `team` (string): Team slug matching directory name
- `node_name` (string): DNS-safe, unique name (e.g., `pathfinder-alice`)
- `address` (string): Hex address from your team's assigned range
- `peer_id` (string): libp2p PeerId from your identity file
- `listen_addresses` (string[]): libp2p multiaddrs for P2P networking
```json
{
  "team": "pathfinder",
  "node_name": "pathfinder-alice",
  "address": "0x4001",
  "listen_addresses": ["/ip4/0.0.0.0/tcp/50001"],
  "peer_id": "12D3KooWDJryKaxjwNCk6yTtZ4GbtbLrH7JrEUTngvStaDttLtid"
}
```

Boot nodes help validators discover peers. They are optional: if none are configured, validators will bootstrap from other validators.
- `boot.json` - Boot node metadata
- `run_boot.yaml` - Runtime configuration (copy from `run_boot.template.yaml`)
- `id_boot.json` - Boot node identity
For detailed boot node configuration, see validators/README.md.
Validators can now automatically download and extract database snapshots during deployment, significantly reducing sync time. This feature is completely optional and maintains full backward compatibility.
For detailed configuration and examples, see validators/README.md.
Add a `snapshot` section to your `run_validator.yaml`:

```yaml
snapshot:
  url: "https://example.com/snapshot.sqlite.zst"
  extract_command: "zstd -T0 -d (unknown) -o {target}"
  target_path: "mainnet.sqlite"
```

The system automatically downloads, extracts, and places snapshots before starting your validator containers.
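The download-and-extract step can be sketched as follows. How the real deployer names the downloaded archive, and whether it substitutes placeholders beyond `{target}`, are assumptions here:

```python
import shlex
import subprocess
import urllib.request
from pathlib import Path

def fetch_snapshot(url: str, extract_command: str, target_path: str,
                   workdir: str = ".") -> Path:
    """Download a snapshot archive and run the configured extract command.

    {target} in extract_command is replaced with target_path, matching the
    placeholder shown in the run_validator.yaml example above.
    """
    archive = Path(workdir) / Path(url).name   # archive name taken from the URL
    urllib.request.urlretrieve(url, archive)   # fetch the archive
    cmd = shlex.split(extract_command.format(target=target_path))
    subprocess.run(cmd, cwd=workdir, check=True)  # e.g. zstd -T0 -d ... -o mainnet.sqlite
    return Path(workdir) / target_path
```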
Deployment is handled via `tools/deploynet.py`, which provisions GCP resources and deploys validator containers using team configs.
```shell
cd tools
pip install -r requirements.txt
export GCP_PROJECT=<your-gcp-project-id>
export GCP_ZONE=<your-preferred-zone>  # e.g. europe-west1-b
export GOOGLE_APPLICATION_CREDENTIALS=/absolute/path/to/your/service-account.json
export NETWORK_NAME=sepolia-testnet  # project-wide; all nodes should use the same value
```

You can run provisioning and app deployment separately or together.
- Infra only: `python3 tools/deploynet.py --stage infra`
- App only (uses previously saved state): `python3 tools/deploynet.py --stage app`
- All (infra + app): `python3 tools/deploynet.py`

What happens:
- Infra:
  - Creates/reuses GCP instances (tagged `validator`)
  - Creates/reuses/attaches persistent disks (validators only)
  - Resolves and saves public IPs to `.deployed-state.json`
  - Creates a GCP firewall rule `allow-validator-p2p` that allows the ports present in `listen_addresses` between instances
- App:
  - Deploys boot nodes first (if any), then validators
  - Uploads identity files
  - Mounts disks (validators) and pulls images
  - Downloads and extracts database snapshots if configured in `run_validator.yaml`
  - Starts each node container with team-specific `run_*` files
  - Injects bootstrap peers via `{{bootstrap_addrs}}` (boot nodes if present; otherwise other validators)
  - Injects validator set via `{{validator_addrs}}` (CSV of other validators' addresses)
  - Injects `{{network}}` from `NETWORK_NAME` (default `sepolia-testnet`)
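The placeholder injection can be pictured as a plain string substitution. This is a sketch based only on the placeholder list above; the deployer's actual templating may differ:

```python
import os

def render_run_config(template: str, validators: list[dict], self_address: str,
                      bootstrap_addrs: list[str]) -> str:
    """Fill the {{...}} placeholders injected into run_* files."""
    # CSV of the *other* validators' addresses, excluding this node's own.
    others = ",".join(v["address"] for v in validators
                      if v["address"] != self_address)
    return (template
            .replace("{{bootstrap_addrs}}", ",".join(bootstrap_addrs))
            .replace("{{validator_addrs}}", others)
            .replace("{{network}}", os.environ.get("NETWORK_NAME",
                                                   "sepolia-testnet")))
```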
✅ Re-running is safe: existing instances/disks are reused, containers are restarted cleanly.
The deployer writes `.deployed-state.json` with instance IPs and metadata.
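Other tooling can read that state file back. The schema below (node name mapped to an object with an `ip` field) is an assumption for illustration; check the actual file the deployer produces:

```python
import json

def load_deployed_ips(path: str = ".deployed-state.json") -> dict[str, str]:
    """Map node name -> public IP from the deployer's saved state."""
    with open(path) as f:
        state = json.load(f)
    return {name: info["ip"] for name, info in state.items()}
```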