A SYSTRON Lab project
from the Department of Computer Science at the University of York
In this repository, we present TERIS, a Tool for Emulated Routing at Internet Scale. Using TERIS, you can generate a feasible, scalable and representative virtual testbed, producing router and policy configurations for each AS that can then be deployed through Docker or Kubernetes.
Fundamental to TERIS are two components:
- The internetemulator Python library, which provides an interface for importing Internet topology data and generating an extended networkx graph, with the ability to specify desired topology characteristics.
- The internetemulator.generator module, which initialises each router using either the pre-prepared Jinja2 templates from the templates/ folder or your own configurations.
We also provide instructions to help you deploy these generated scenarios.
Our emulation scenarios are deployed using:
- Kathará (uses Docker on a single host, best for smaller emulations)
- Megalos (uses a Kubernetes cluster, best for larger emulations)
You can install the required software by following the KatharaFramework installation guides.
The Kubernetes deployment requires more configuration than the smaller Kathará setup, but allows for the emulation of a vastly greater number of routers.
In our installation, we have previously configured Docker to be able to run Kathará emulations, and therefore we use dockerd.
There are two primary container runtimes:
- Emulation with dockerd (which also requires the Mirantis/cri-dockerd adapter).
- Emulation with containerd.
A variety of deployment mechanisms support both dockerd and containerd, such as the more production-focused Kubespray, which provides an Ansible-based abstraction over Kubernetes deployment tools.
We instead use native Kubernetes tools:
- kubeadm, which provides a command-line interface for cluster bootstrapping.
- kubelet, which runs the containers on each node.
- kubectl, which provides a command-line interface to the cluster once it is running.
You can install kubeadm by following the Kubernetes documentation. We suggest using the apt package index to install and manage the necessary packages and dependencies.
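As a sketch of that apt-based installation (following the upstream Kubernetes documentation; the v1.32 repository path below is an assumption and should be replaced with the current minor release):

# Prerequisites for fetching the Kubernetes apt repository key
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

# Add the Kubernetes apt repository (v1.32 is an example release line)
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install the cluster tools and pin their versions
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl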
Then install the CNI Network Plugins binaries, which are prerequisites for the Flannel CNI:
ARCH=$(uname -m)
case $ARCH in
armv7*) ARCH="arm";;
aarch64) ARCH="arm64";;
x86_64) ARCH="amd64";;
esac
mkdir -p /opt/cni/bin
curl -O -L https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-$ARCH-v1.6.2.tgz
tar -C /opt/cni/bin -xzf cni-plugins-linux-$ARCH-v1.6.2.tgz

Before starting the cluster, it is important to disable swap:
$ sudo swapoff -a

(To persist this across reboots, also comment out any swap entries in /etc/fstab.) If you are using dockerd as the container runtime, it is also necessary to add the Flannel CNI configuration manually, as there is a bug where it is not added automatically. Create the file /run/flannel/subnet.env on the host and paste in the contents:
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true

Then start the cluster using:
$ kubeadm init --cri-socket=RUNTIME --apiserver-advertise-address=ADDRESS

Where:
- RUNTIME is one of unix:///var/run/cri-dockerd.sock or unix:///var/run/containerd/containerd.sock.
- ADDRESS is the preferred IP address of the host on the subnet of the local cluster.
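For example, a concrete invocation with containerd as the runtime might look like the following (192.168.1.10 is a placeholder for your own host address):

$ kubeadm init --cri-socket=unix:///var/run/containerd/containerd.sock --apiserver-advertise-address=192.168.1.10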
Then follow the instructions produced by kubeadm init to add other nodes to the cluster (supplementing with --cri-socket=RUNTIME if needed).
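The join command printed by kubeadm init takes roughly the following form (the address, token and hash below are placeholders, not real values):

$ kubeadm join 192.168.1.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

You can check node status with kubectl get nodes. When all cluster nodes are in the Ready state, apply the relevant CNI implementations on the control host: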
$ kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
$ kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset-thick.yml
$ kubectl apply -f https://raw.githubusercontent.com/KatharaFramework/Megalos-CNI/master/kathara-daemonset.yml

This installs the Flannel, Multus and Megalos CNIs respectively. Notably, the default memory and CPU limits for Multus are insufficient for any meaningful emulation, so we use a customised Multus 'thick' deployment that increases these limits.
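Before continuing, it is worth confirming that the CNI daemonsets have started on every node; this is a standard kubectl check rather than anything TERIS-specific:

$ kubectl get pods -A -o wide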
Next, to collect resource data, it is useful to install cadvisor, a daemon for Kubernetes that collects and exports information about running containers. Install it by setting VERSION to the latest release number from the google/cadvisor releases:
$ kubectl kustomize "https://github.com/google/cadvisor/deploy/kubernetes/base?ref=${VERSION}" | kubectl apply -f -

Collect data from the cadvisor daemon with Prometheus, a monitoring platform that scrapes the cadvisor REST endpoints. Download the latest version of Prometheus and then:
tar xvfz prometheus-*.tar.gz
cd prometheus-*
./prometheus

You can modify the default settings in prometheus.yml to change things like the scrape interval and the details of the cadvisor target, and then run with this revised configuration using:
./prometheus --config.file=prometheus.yml

The cadvisor endpoint IP address can be retrieved with kubectl -n cadvisor get pods and kubectl -n cadvisor describe pod <PODNAME>. The default cadvisor port is 8080.
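As an illustrative sketch, the relevant part of prometheus.yml might then look like this (10.244.1.5 is a placeholder for the pod IP retrieved above, and the scrape interval is an arbitrary choice):

global:
  scrape_interval: 15s
scrape_configs:
  - job_name: cadvisor
    static_configs:
      - targets: ['10.244.1.5:8080']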
Then access the Prometheus interface at localhost:9090.
If you intend to use the primary (control-plane) node as a pod host, you also need to run:

$ kubectl taint nodes --all node-role.kubernetes.io/control-plane-

to remove the default 'taint' and enable scheduling on that node.
Then you can use our internetemulator Python library. Install Python dependencies using the requirements.txt file:
$ pip install -r requirements.txt

You can then use our library in your project: copy the internetemulator directory and its subdirectories, and add to your file:
import internetemulator # For Internet topology graph functionality
import internetemulator.generator # To develop emulation configurations
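As an illustrative sketch only (the function and argument names below are hypothetical, not the confirmed API; see example.ipynb for real usage), a typical workflow imports topology data, builds the graph, and generates per-router configurations:

# Hypothetical sketch; actual internetemulator API names may differ (see example.ipynb)
import internetemulator
import internetemulator.generator

# Build an extended networkx graph from CAIDA AS relationship data
# (load_topology is an illustrative name, not a confirmed function)
graph = internetemulator.load_topology("source-data/YYYYMMDD.asrel2.txt")

# Render per-router configurations from the Jinja2 templates
# (generate_configs and its parameters are likewise illustrative)
internetemulator.generator.generate_configs(graph, template_dir="templates/", output_dir="lab/")

When you've set your configuration requirements, these are then deployed using Kathará or Megalos. For instance, to deploy a Kathará emulation: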
$ kathara lstart --noterminals

We provide an example emulation scenario based on those discussed in our upcoming ANTS 2025 paper, TERIS: a Tool for Emulated Routing at Internet Scale. For each scenario, we assume the following source data is present within a source-data/ directory:
- CAIDA AS Relationships Dataset (a file of the form YYYYMMDD.asrel2.txt)
- CAIDA RouteViews Prefix2AS Dataset (two files of the form routeviews-rv[2/6]-YYYYMMDD-HHMM.pfx2as, where 2 denotes the IPv4 prefixes and 6 the IPv6 prefixes)
- bgp.tools ASNs and Table (a file of the form asns-DD-MM-YY.csv and a file of the form table-DD-MM-YY.txt)
You can later choose between using the CAIDA Prefix2AS data or the bgp.tools routing table.
Use the Jupyter Notebook example.ipynb, where we present a scenario built from collected Internet topology data and configured as a "default-free" Internet using Quagga. In this example, every AS shares (and accepts) routes from every other, with no route filtering.
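If Jupyter is not already set up, one way to open the notebook (assuming Jupyter is installed, e.g. via pip install notebook) is:

$ jupyter notebook example.ipynb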