This is a basic playbook and roles collection.
- Requirements
- Project structure
- Usage
- Tailscale integration
- Limitations and specifics
- Issues
- Development
- Skeleton
- Standalones
Requirements

Ansible control node:
- OS: any that supports ansible installation
- ansible
- openssh-client
- bash (optional, in case you want to take advantage of helper bash scripts)
- sshpass (optional, for password-based ssh connections)
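On a Debian-like control node, for example, these can be installed roughly as follows (package names are the usual Debian/Ubuntu ones; adjust for your distro):

# Example for a Debian-like control node; sshpass is only needed for password-based ssh
sudo apt-get update
sudo apt-get install -y ansible openssh-client sshpass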
Ansible managed nodes:
- OS: latest versions of Alpine, Debian or similar (for Ubuntu, the latest LTS version), CentOS or similar
- openssh-server (running)
- ansible user (must be sudoer)
- python
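A managed node prepared this way is then addressed from the inventory with that user. A minimal illustrative snippet (YAML inventory; the file name, host name and address are placeholders, not taken from this project's samples):

# hosts.yaml (illustrative)
all:
  hosts:
    my-server:
      ansible_host: 192.168.1.10
      ansible_user: USERNAME   # <- The sudoer user prepared as shown in the Tips below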
Tips
Debian-like managed node:
# Performed by root user
# Install prereqs
apt-get update
apt-get install -y openssh-server python3 sudo
# Ensure ssh server running
systemctl enable --now sshd
# Create ansible user, replace USERNAME
useradd -m USERNAME
passwd USERNAME
# Make ansible user sudoer
usermod -aG sudo USERNAME

Alpine managed node:
# Performed by root user
# Install prereqs
apk add --update --no-cache openssh-server python3 sudo shadow
# Ensure ssh server running
rc-update add sshd
rc-service sshd start
# Create ansible user, replace USERNAME
useradd -m USERNAME
passwd USERNAME
# Make ansible user sudoer
echo '%wheel ALL=(ALL) ALL' > /etc/sudoers.d/wheel
usermod -aG wheel USERNAME

Project structure

├── .dev # <- Development convenience scripts
├── ansible.cfg
├── bin # <- Playbook convenience scripts
├── playbook.yaml
├── requirements.yaml # <- Required roles and collections declarations
├── roles # <- Categorized roles
├── sample # <- Configuration samples
├── vaulted.txt # <- (optional) Create this empty file to trigger vault pass prompt
└── vendor # <- Vendor directory
Usage

- Copy configurations from the sample directory and edit them:
  cp -rp ./sample/* ./
- Use the ansible-playbook wrapper scripts from the bin directory to deploy (a rough manual equivalent is sketched below).
  To trigger the vault password prompt in the bin scripts, create a vaulted.txt file in the project root directory:
  touch ./vaulted.txt
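The wrappers are only a convenience. A rough manual equivalent (an assumption about what they boil down to, not their exact flags; presumes the inventory is resolved via ansible.cfg):

# Prompt for the vault password and run the playbook
ansible-playbook playbook.yaml --ask-vault-pass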
Tailscale integration

Some services can be proxied via Tailscale. Mostly these are traffic-intensive ones, and they gain in efficiency when routed via the tailnet (vs subnet routing).
Tailscale is just my personal preference. I don't get paid by them (wouldn't mind though 😏).
Limitations and specifics

- base/snapd role is not supported by Alpine. In LXC context, installation of snaps on non-Ubuntu-like systems fails with the error:
  error: system does not fully support snapd: cannot mount squashfs image using "squashfs" ...
  So I limited it in all contexts to Ubuntu-like only.
- base/tmuxp role is not supported by Alpine.
- desktop/* roles are mostly oriented to Debian-based (sometimes narrowed to Ubuntu-based) distros.
- service/* roles are primarily deployed with docker.
Issues

- Because keyserver.ubuntu.com sometimes switches the comment in PGP keys, all tasks that download keys from there are changed_when: false.
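A hypothetical task illustrating the pattern (module choice, URL parameters and destination are placeholders, not copied from the actual roles):

- name: Download signing key from keyserver.ubuntu.com
  ansible.builtin.get_url:
    url: 'https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xKEYID'
    dest: /etc/apt/keyrings/some-app.asc
    mode: '0644'
  # The exported key text may differ only in its comment, so never report "changed"
  changed_when: false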
Development

- If your playbook development is under git source control, run the .dev/dev-init.sh script to ensure hooks.
- See the base/docker and base/demo-noapp roles for demos on how to write roles.
- Run .dev/sample-vars.sh when completed to add configurations to the sample vars file.
The goal of all these *_done variables is to reduce noise in the ansible-playbook log when roles are played more than once.
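A hypothetical sketch of that pattern (role and variable names are made up, not taken from the repo):

- name: Demo-app setup
  when: not (demo_app_done | default(false))
  block:
    - name: Run the actual (noisy) setup tasks
      ansible.builtin.include_tasks: setup.yaml

    - name: Mark the role as applied for the rest of the run
      ansible.builtin.set_fact:
        demo_app_done: true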
Skeleton

A lot of roles depend on data collected by the factum role, so it is better to have it as the first role in the playbook. It provides the following:
getent_passwd:
  root:
    - ...   # <- Useless
    - UID   # [1]
    - GID   # [2]
    - ...   # <- Useless
    - HOME  # [4]
    - SHELL # [5]
  # ...     # <- Other users
factum_os_family: # <- Lowercase ansible_os_family
factum_os_like: # <- More prioritized OS like
factum_ubuntu_codename: # <- Ubuntu code name for Ubuntu-like distros
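For example, a task in a later role could consume these facts like this (hypothetical task, not from the repo):

- name: Show root's home directory and the detected OS family
  ansible.builtin.debug:
    msg: "HOME={{ getent_passwd['root'][4] }}, OS family={{ factum_os_family }}"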
The init role performs all necessary initialization tasks (factum included), so it basically must be the first role in the playbook:

- name: 'MyBook'
  hosts: all
  roles:
    - { role: init, tags: [always] } # <- Required initial tasks
    # ... more roles

Deployment scripts for a testing environment of LXC containers in PVE are available in ./tools/test-env. They work in conjunction with the devenv.sh script.
In order to create a project cloned from this one, issue
curl -fsSL https://github.com/spaghetti-coder/ansible-bookshelf/raw/master/.dev/remote-proj.sh | bash -s -- install \
  path/to/new/project \
  master `# <- Optional tree-ish`

To sync libraries and tools in the new project with the upstream, issue:
# Always `git commit` before this action
.dev/remote-proj.sh pull-upstream

Standalones

- Installation
  sudo mkdir -p /etc/tmux
  sudo curl -o /etc/tmux/default.conf CONFIG_FILE_URL
  echo 'source-file /etc/tmux/default.conf' >> ~/.tmux.conf