that also has access to the Pure CSI driver (PSO) to enable persistent storage (PVCs), as well as the Pure PSO Explorer web app, which gives the user a much easier way to understand all the storage and pods they have running in the wild.
For those interested in the inner workings of Vagrant, I have listed below what the Vagrantfile actually does:
7. Create and copy the SSH key to all machines so you can SSH to any node from the master. Add names and IPs to the local hosts file on each master and node. Create an alias for kubectl in the vagrant home directory so you can just use k (a rough sketch of this step is shown below).
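As an illustration only (this is not the repo's exact provisioning code), the hosts-file and alias step might look roughly like the following, assuming the node names and IPs shown later in this guide:

# hypothetical sketch of the hosts-file and alias provisioning described above
cat <<EOF | sudo tee -a /etc/hosts
10.0.0.10 master
10.0.0.11 node1
10.0.0.12 node2
10.0.0.13 node3
EOF
# let the vagrant user type 'k' instead of 'kubectl'
echo "alias k=kubectl" >> /home/vagrant/.bashrc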
Since this lab is designed to be portable, it can be run on an Ubuntu 20.04 installation, including one running under a local hypervisor (Fusion, Workstation, VirtualBox, etc.).
The prerequisites listed below need to be installed before you can download this repo and run it locally.
Remember that while it will run anywhere, it will not have access to any persistent storage (PVCs) without access to a Pure array.
When you want to use this demo, find the Vhd1purevm1 machine in the Pure folder in vCenter, right-click the machine, and choose Manage Snapshots.
From the snapshot menu, you can choose from three states:
- BASE Ubuntu image: follow all the steps below starting at line 43 (longest setup)
- cdw_lab demo ready: skip to line 57 (long setup)
- cluster running: skip to line 89 (short setup)
Choose the one you want and click Revert To from the vCenter menu.
Launch the remote console from vCenter, then open a terminal window from the Linux GUI to proceed.
You should install VirtualBox and Vagrant before you start.
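On Ubuntu 20.04, one way to install both is from the standard repositories (package names assumed to be available in your configured archives; you can also install the upstream builds directly):

$ sudo apt-get update
$ sudo apt-get install -y virtualbox vagrant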
Open a shell and install the vagrant-disksize plugin:
$ vagrant plugin install vagrant-disksize
Install git if you don't already have it.
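On Ubuntu that is typically:

$ sudo apt-get install -y git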
$ git clone https://github.com/brucmod/cdw
$ cd cdw
You can run the install_K8.sh script from this repo or create your own script from its contents.
$ ./install.sh
You can create the cluster with:
$ vagrant up
You can delete the cluster with:
$ vagrant destroy -f
SSH to the master and other nodes:
$ vagrant ssh master
Get the status of the nodes:
$ k get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master Ready master 16m v1.17.4 10.0.0.10 <none> Ubuntu 18.04.4 LTS 4.15.0-88-generic docker://19.3.6
node1 Ready <none> 11m v1.17.4 10.0.0.11 <none> Ubuntu 18.04.4 LTS 4.15.0-88-generic docker://19.3.6
node2 Ready <none> 6m31s v1.17.4 10.0.0.12 <none> Ubuntu 18.04.4 LTS 4.15.0-88-generic docker://19.3.6
node3 Ready <none> 102s v1.17.4 10.0.0.13 <none> Ubuntu 18.04.4 LTS 4.15.0-88-generic docker://19.3.6
SSH to other nodes in the cluster from the master:
$ ssh node1
$ ssh node2
$ ssh node3
Once all the nodes are up and running, from within the master node, download another git repo to get the YAML demo files, Ansible demo files, and the PSO and PSO Explorer files:
git clone https://github.com/cdw_post
cd cdw_post
./master.sh
Watch for the PSO Explorer service to come up:
kubectl get svc --namespace psoexpl -w pso-explorer
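If you prefer not to wait for an external IP, one generic way to reach the PSO Explorer web UI from the master node is a port-forward; the service port here is an assumption, so check the actual port in the get svc output first:

$ kubectl port-forward svc/pso-explorer -n psoexpl 8080:80
Then browse to http://localhost:8080 from the master's GUI.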
The demo scripts are located in the kubernetes directory. They are designed to be run in order as there may be dependencies.
As you run them in order, they will create a 14 GB PVC on the local Pure FlashArray, then start up a container/pod running MinIO and start the pod's service (a hypothetical sketch of these objects follows the command below).
kubectl get svc
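For reference, here is a minimal sketch of the kind of objects those scripts create. The storage class name, size, labels, and ports are assumptions for illustration, not the repo's actual manifests:

kubectl apply -f - <<EOF
# hypothetical PVC backed by the Pure FlashArray via PSO (storage class name assumed)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 14Gi
  storageClassName: pure-block
---
# hypothetical MinIO pod mounting that PVC
apiVersion: v1
kind: Pod
metadata:
  name: minio
  labels:
    app: minio
spec:
  containers:
  - name: minio
    image: minio/minio
    args: ["server", "/data"]
    ports:
    - containerPort: 9000
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: minio-pvc
---
# hypothetical service exposing the MinIO pod
apiVersion: v1
kind: Service
metadata:
  name: minio
spec:
  selector:
    app: minio
  ports:
  - port: 9000
    targetPort: 9000
EOF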