Easy VM updates with Ansible
Oct 05, 2023
I have a load of VMs running on a physical server in my house. I started with a few VMs and a manual process, but now I have many VMs and I'm tired of updating them all.
I knew Ansible was a popular, easy-to-use tool for configuring infrastructure, so I decided to put it to work on this project.
I'll be creating a Docker container to run my Ansible scripts from, as well as my Ansible configuration and playbook.
Ansible configuration
I need to create two files to configure my Ansible project: ansible.cfg (general config) and inventory.ini (my list of VMs).
ansible.cfg
[defaults]
inventory = inventory.ini
remote_user = james
host_key_checking = False ; see below
I've disabled host_key_checking, which isn't too bad for a set of hobby VMs on a home network, but should be avoided in any production environment. @davidolrik has a good solution on StackOverflow which I'd recommend reading.
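If you'd rather keep host key checking enabled, one rough alternative (my own workaround, not necessarily what that answer suggests) is to pre-populate known_hosts on whatever machine Ansible runs from, before the first run:
# Collect and hash the host keys for each VM so checking can stay on
ssh-keyscan -H 192.168.99.10 192.168.99.20 >> ~/.ssh/known_hosts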
inventory.ini
[vms]
vm-1 ansible_host=192.168.99.10
vm-2 ansible_host=192.168.99.20
; ... and so on
This file contains all the VMs I want to address with Ansible. They're stored in the vms group for easy referencing.
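As a quick sanity check (assuming SSH access to the VMs is already working and Ansible is installed wherever you're running it from), an ad-hoc ping against the group confirms everything in it is reachable:
# Ping every host in the 'vms' group defined above
ansible vms -m ping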
The Docker container
Creating a Docker container for this might seem like overkill; Ansible is easy to install on most platforms. However, I generally set up anything I'm working on in containers. This helps me keep my host system free of extraneous dependencies, and gives me some confidence that whatever I build will still work when I come back to it months or years later.
I'm going to create a very simple Docker container which has access to my Ansible configuration, and is able to connect to my VMs to run Ansible scripts.
The first step is the Dockerfile:
# Use a base image with Python (Ansible's requirement)
FROM python:3.8-slim-buster
# Install Ansible and the OpenSSH client
RUN pip install ansible && \
    apt-get update && \
    apt-get install -y openssh-client && \
    rm -rf /var/lib/apt/lists/*
# Set up our user and link in our SSH key
RUN useradd -m user
RUN mkdir -p /home/user/.ssh && \
    ln -s /run/secrets/user_ssh_key /home/user/.ssh/id_ed25519 && \
    chown -R user:user /home/user/.ssh
USER user
This is a simple Dockerfile - it installs a couple of dependencies, and links in some SSH information from /run/secrets/. We'll get onto the secrets in a moment. This SSH information is linked so that we can supply an SSH key to our container, allowing it to authenticate with the various VMs.
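If you want to sanity-check the image on its own before wiring up Compose, a plain build and a quick version check work fine (the image tag here is just an example, and the SSH secret isn't needed for this):
# Build the image and confirm Ansible is installed and on the PATH
docker build -t ansible-manager .
docker run --rm ansible-manager ansible --version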
Secrets are a feature of Docker Compose that allow us to work more safely with, well, secrets, such as SSH information. They can be passed in through Docker Compose, and my docker-compose.yml ends up being quite simple indeed.
docker-compose.yml
version: '3.3'
services:
  ansible-manager:
    build: .
    volumes:
      - ./ansible.cfg:/etc/ansible/ansible.cfg
      - ./inventory.ini:/etc/ansible/inventory.ini
      - ./playbooks:/playbooks
    secrets:
      - user_ssh_key
secrets:
  user_ssh_key:
    file: ~/.ssh/id_ed25519
As you can see, I'm mounting the two config files (and a playbooks/ directory, which we'll get to in the next section) into the container, and passing in a secret called user_ssh_key. The docker-compose.yml file sets the source of this secret (~/.ssh/id_ed25519, a key I generated earlier) and attaches it to the container with the secrets instruction. This secret is then available at /run/secrets/user_ssh_key within the container.
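For completeness, generating and distributing a key like that might look something like this (the filename and addresses are just examples matching the rest of this post; adjust to taste):
# Generate an ed25519 keypair; no passphrase here so runs can be unattended
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""
# Install the public key on each VM so Ansible can log in as 'james'
ssh-copy-id -i ~/.ssh/id_ed25519.pub james@192.168.99.10
ssh-copy-id -i ~/.ssh/id_ed25519.pub james@192.168.99.20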
The playbook
The last step is the Ansible playbook. Ansible comes with some useful built-in modules, so this playbook was very simple to create.
playbooks/update-vms.yml
- name: Update and Restart All VMs
  hosts: vms    # Target the 'vms' group
  become: yes   # Run tasks with sudo
  tasks:
    - name: Update apt cache and upgrade packages
      apt:
        update_cache: yes
        upgrade: yes
    - name: Reboot the server
      reboot:
        msg: "Rebooting after upgrades"
        reboot_timeout: 300 # Wait up to 5 minutes for the reboot
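Before letting this loose on every VM at once, it's worth a cautious first pass. Something like the following (a suggestion, not part of my usual routine) runs the playbook in check mode against a single host; it works the same from inside the container once it's built in the next section:
# Dry run against one VM; --check reports what would change without upgrading or rebooting anything
ansible-playbook /playbooks/update-vms.yml --check --limit vm-1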
Execution
Now all I need to do is build my image and run the container with the correct playbook.
docker compose build
docker compose run \
  ansible-manager \
  ansible-playbook /playbooks/update-vms.yml
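Once the playbook has finished, a quick ad-hoc command (just a sanity check I like, nothing the playbook depends on) confirms every VM has come back up from its reboot:
# Run 'uptime' on each host in the vms group; freshly rebooted machines should report only a few minutes
docker compose run ansible-manager ansible vms -a uptime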