Read this first (please): before you touch anything here, read the companion theoretical design page. That page explains why every choice was made (isolation, reproducibility, observability, ethics). This document is the how: a safe, practical, reproducible blueprint for building a local lab to learn both network/infrastructure concepts and OS-level security (Linux + Windows mindset) in a fully isolated environment. TL;DR: this guide shows you how to build a safe lab for study and research only. If your VM "explodes," don't panic; celebrate the snapshot you took.
Contents:
1- Topology & naming conventions (if I were you, I'd skip this)
2- Component breakdown: Bastion, Attacker, Targets, Monitoring
3- Docker + Compose starter (safe defaults & snippets)
4- VM option: when to use Vagrant / Packer / Ansible
5- Snapshot strategy, retention, and cleanup policy
6- Observability, artefacts, and naming conventions
7- Experiment template (text-only mini-report)
8- Secrets & OPSEC practical rules
9- What to screenshot / demo for reviewers

Requirements & resource planning
Minimum host (toy lab):
- Linux / macOS / Windows (WSL2) host with Docker & Docker Compose
- Git
- 8 GB RAM, 2 CPU cores
Recommended (comfortable):
- 16 GB+ RAM, 4+ CPU cores, SSD, virtualization support (VirtualBox / libvirt)
- Optional: Packer + Ansible for reproducible VM images

Budgeting tip: plan for snapshots and PCAPs; storage fills faster than your willpower.
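Before building anything, a quick preflight check saves debugging later; a minimal sketch for Linux/WSL2 hosts (the script name and thresholds simply mirror the minimums above):

#!/usr/bin/env bash
# preflight.sh - sanity-check the host before building the lab
set -euo pipefail

command -v docker >/dev/null || { echo "docker not found"; exit 1; }
docker compose version >/dev/null 2>&1 || { echo "docker compose v2 not found"; exit 1; }
command -v git >/dev/null || { echo "git not found"; exit 1; }

# toy-lab minimums: 8 GB RAM, 2 CPU cores
mem_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
[ "$mem_kb" -ge $((8 * 1024 * 1024)) ] || echo "warning: less than 8 GB RAM"
[ "$(nproc)" -ge 2 ] || echo "warning: fewer than 2 CPU cores"

echo "preflight OK"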
Topology & naming conventions
Keep names human-readable, and keep IPs in private ranges that don't overlap with your real network. Logical plan (conceptual):
Lab network: 10.200.0.0/24 (lab_net)
- bastion: 10.200.0.10
- attacker: 10.200.0.20
- web_demo: 10.200.0.30
- legacy_srv: 10.200.0.31
- os_concept_vm: 10.200.0.40
- monitoring: 10.200.0.90
File & image naming:
Kebab-case directories: lab-bastion, lab-attacker, lab-web-demo
Artefacts: YYYYMMDD_experiment_slug_type.ext e.g. 20251112_weblab_pcap.pcap
Bastion: single control plane
Purpose: centralize control, coordinate snapshots, host a small status UI, and act as the only place where controlled egress (if ever) is allowed. Why: one place to check logs means fewer mysteries. If something goes sideways, you don't hunt through twelve containers like it's a scavenger hunt. Minimal tech: a static server (Caddy/Nginx) for the status page, plus small control scripts on the host.
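The status page can be a static file regenerated by a host-side script; a minimal sketch, assuming you point Caddy's site root at ./bastion-data (the stock caddy image serves /usr/share/caddy by default, so adjust the volume or Caddyfile accordingly; the script name and paths are illustrative):

#!/usr/bin/env bash
# gen-status.sh - regenerate the static status page the bastion serves
set -euo pipefail
out=./bastion-data/status.html

{
  echo "<h1>Lab status - $(date -u +%Y-%m-%dT%H:%MZ)</h1><pre>"
  docker ps --format '{{.Names}}\t{{.Status}}'   # running lab containers
  echo
  echo "Recent artefact dirs:"
  ls -1t ./artifacts 2>/dev/null | head -n 5     # newest experiments first
  echo "</pre>"
} > "$out"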
Attacker: analysis workstation
Purpose: workstation for analysis. Teach network/OS reasoning, not button-mashing. Toolset (example, with justification):
tcpdump - packet capture learning
curl / httpie - HTTP basics
python3 - parsing logs & quick scripts
git - versioning notes & artifacts
Run as a non-root user. Document why each tool exists in attacker-setup.md. This shows maturity, and reviewers like maturity.
Suggested targets (conceptual):
Web Demo: demonstrates input / processing / logs / response encoding. (Teaches encoding and observable server traits.)
Legacy Service: simulates odd protocol behavior and parsing quirks for forensic exercises.
OS Concept VM: demonstrates permission models, audit logs, and file system artifacts. Use VMs here if you need real kernel behavior.
Each target includes: Objective, Expected Artefacts, Reset instructions.
Monitoring: collect logs, metrics, and PCAPs so every experiment produces evidence.
Minimal stack:
Grafana for dashboards (host port, secure password)
Loki + Promtail for log aggregation (lightweight alternative to ELK)
tcpdump on a central interface for PCAPs
Store artefacts in ./artifacts/<YYYYMMDD_slug>/ on the host. Keep the retention policy sane.
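To capture PCAPs from the host, attach tcpdump to the bridge Docker creates for lab_net; a minimal sketch (the interface name varies per Compose project, so look yours up first; the helper script name is illustrative):

#!/usr/bin/env bash
# capture.sh <slug> - write a timestamped PCAP into the experiment's artefact dir
set -euo pipefail
slug=${1:?usage: capture.sh <slug>}
dir=./artifacts/$(date +%Y%m%d)_${slug}
mkdir -p "$dir"

# Docker usually names the bridge br-<first 12 chars of the network ID>;
# find yours with: docker network ls  and  ip link
sudo tcpdump -i br-XXXXXXXXXXXX -w "$dir/$(date +%Y%m%d)_${slug}_pcap.pcap"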
Docker + Compose starter (safe defaults & snippets)
Below are safe, copy-paste snippets. They are starter templates: read the comments and replace placeholders, and do not blindly run anything if you are unsure about network isolation.
Important safety mantra: internal: true is your friend. It reduces the chance your lab phones home.
# docker-compose.yml - lab starter
networks:
  lab_net:
    driver: bridge
    internal: true  # prevents direct egress by default

services:
  bastion:
    image: caddy:2-alpine
    container_name: lab_bastion
    networks: [lab_net]
    ports: ["127.0.0.1:8080:80"]  # management UI, bound to the host only
    volumes: ["./bastion-data:/data"]

  attacker:
    build: ./attacker
    container_name: lab_attacker
    networks: [lab_net]
    tty: true
    stdin_open: true
    volumes: ["./shared:/home/student/shared"]
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M

  web_demo:
    build: ./targets/web-demo
    container_name: lab_web_demo
    networks: [lab_net]
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 256M
# attacker/Dockerfile
# Base image assumed; any slim Debian/Ubuntu works.
FROM debian:bookworm-slim

ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y --no-install-recommends \
    curl wget git python3 python3-pip tcpdump jq vim iproute2 \
    && rm -rf /var/lib/apt/lists/*
RUN useradd -m -s /bin/bash student
USER student
WORKDIR /home/student
CMD ["/bin/bash"]
# targets/web-demo/Dockerfile
# Base image assumed; any recent Python slim image works.
FROM python:3.12-slim

WORKDIR /app
COPY app/ /app/
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 8000
CMD ["python3", "app.py"]
VM option: when to use Vagrant / Packer / Ansible
Use VMs when you need realistic OS artefacts (Windows Event Logs, SUID behavior, systemd interactions).
High-level approach:
Packer builds a repeatable image (VirtualBox / libvirt).
Vagrant orchestrates local VMs (vagrant up).
Ansible provisions idempotently.
# Vagrantfile (os_concept_vm)
Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-22.04"
  config.vm.network "private_network", ip: "10.200.0.40"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"
    vb.cpus = 2
  end
  config.vm.provision "ansible_local" do |ansible|
    ansible.playbook = "ansible/playbooks/target.yml"
  end
end
Snapshot strategy, retention, and cleanup policy
Snapshot cadence:
Baseline: after provisioning (baseline_<slug>).
Pre-experiment: pre_<slug>.
Post-experiment: post_<slug> (if you want to preserve outcomes). For the VM option, see the sketch below.
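With the Vagrant option above, this cadence maps directly onto vagrant snapshot; a minimal sketch (the snapshot names are examples following the convention above):

# baseline right after provisioning
vagrant snapshot save baseline_os-concept

# bracket each experiment
vagrant snapshot save pre_weblab
# ... run the experiment ...
vagrant snapshot save post_weblab

# reset to a known-good state
vagrant snapshot restore pre_weblab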
Retention recommendations:
Pre & baseline: keep indefinitely (they are small).
Post: keep 30 days, then archive or delete.
PCAPs: 14–30 days depending on storage.
Add an automated cron job on the bastion/host that prunes artefacts older than retention thresholds. Always document the policy.
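A minimal pruning sketch, assuming the retention windows above and the ./artifacts layout from the next section (script name and ages are illustrative; archive anything you want to keep before enabling it):

#!/usr/bin/env bash
# prune-artifacts.sh - enforce retention; run daily from cron, e.g.:
#   0 3 * * * /path/to/prune-artifacts.sh
set -euo pipefail

# PCAPs: 14-30 days depending on storage; 14 used here
find ./artifacts -name '*.pcap' -mtime +14 -print -delete

# whole experiment dirs: delete after 30 days
find ./artifacts -mindepth 1 -maxdepth 1 -type d -mtime +30 -print -exec rm -rf {} +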
Observability, artefacts, and naming conventions
For each experiment, create artifacts/<YYYYMMDD_slug>/ and store:
experiment.md (metadata & findings)
pcap.pcap (if capturing network)
service.log (server-side logs)
response.txt or trace.txt (example responses)
screenshot.png / demo.gif (20–30s; no secrets)
snapshot-id.txt (snapshot reference)
Index idea: artifacts/index.md with executive summaries linking to each artefact.
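A small helper keeps the layout consistent; a minimal sketch, assuming the template lives at docs/experiment-template.md as introduced below (the script name is illustrative):

#!/usr/bin/env bash
# new-experiment.sh <slug> - scaffold a per-experiment artefact directory
set -euo pipefail
slug=${1:?usage: new-experiment.sh <slug>}
dir=artifacts/$(date +%Y%m%d)_${slug}
mkdir -p "$dir"

cp docs/experiment-template.md "$dir/experiment.md"  # metadata & findings
echo "TODO: record snapshot id" > "$dir/snapshot-id.txt"
echo "created $dir"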
Experiment template (text-only mini-report)
Every experiment is a mini research paper. Use this required template in docs/experiment-template.md:
- Date:
- Author:
- Objective:
- Environment snapshot:
- Hypothesis:
- Expected artefacts:
- High-level steps (no exploit code):
- Evaluation rubric (pass/fail):
- Cleanup / restore snapshot:
- Learning points:
Example action wording: "Issue a controlled HTTP request with a controlled input and observe the server logs and the HTTP response for encoding behaviour." Observe; don't instruct exploitation.
Secrets & OPSEC practical rules
Do not commit private keys. Keep them in ~/.ssh/lab/ and add that path to .gitignore.
Use a .env file (gitignored) for local passwords; require operators to populate it.
For teams, prefer a secrets manager. For solo practice, store secrets on the host and rotate them after high-impact experiments.
Short text to paste into docs/ops/secrets.md:
Use dedicated lab keys. Never push private keys to git. Place keys in ~/.ssh/lab/, add .gitignore entries, and rotate keys regularly.
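Setting that up takes two commands; a minimal sketch (the key filename is an example):

# dedicated lab-only keypair, kept out of the repo
mkdir -p ~/.ssh/lab && chmod 700 ~/.ssh/lab
ssh-keygen -t ed25519 -f ~/.ssh/lab/lab_ed25519 -C "lab-only key"

# belt and braces: ignore local secrets in git
printf '%s\n' '.env' '*.pem' '*_ed25519*' >> .gitignore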
What to screenshot / demo for reviewers
Capture these to make your repo sing:
Topology diagram (SVG/PNG)
Bastion UI showing snapshot timestamps
Attacker tool list (text screenshot)
Targets matrix (document)
Monitoring Grafana panel (logs + simple graph)
Closing thoughts
If you simply downloaded a Kali ISO and called it a "lab," you've done the Internet a favor: it now has one less overly dramatic VM lying around. If you follow this guide, you'll have something better: a reproducible, auditable, ethical lab that tells a story and can pass a faculty review or a recruiter's sniff test. Build with care, document like a scientist, and when in doubt: snapshot, then brag about the snapshot.