Testing Ansible Playbooks with Docker
As long as I have been working with configuration management tools (Puppet & Ansible) there hasn’t really been a good way to test the units you’ve written. Up until recently my experience has been something like this: work on a feature on the master branch, in the best case get someone to lend their eyes to the changes, run the changes directly on the target environment and hope everything works without spewing errors. Besides being an embarrassing workflow, this imposes a risk: there’s no way of knowing that your changes won’t set the target environment on fire, and either way, testing in production (or any other target environment for that matter) is unacceptable!
Operations has come a long way in embracing good software development practices, so there should be a way for us to embrace them when writing and testing playbooks/modules/cookbooks as well. What I have in mind is hardly revolutionary (basically just a feedback loop):
Work on a feature on a branch -> commit and push the changes -> automatically trigger a run on the CI server -> the CI server checks syntax, lints and runs the actual playbook -> if successful, the branch can be merged to master; if it fails, redo the process until it succeeds
Let’s look at a couple of alternatives for setting this up. Since config management tools change state, I ideally want something that I can spin up, test on and later tear down without changing the state of anything persistent.
Alternatives
Locally. Running playbooks locally is indeed a possibility. However, from my point of view it’s not really a viable alternative, as my host operating system would need to be the same as the target environment’s OS, with the same configuration and repos. And I don’t want to mess around installing and configuring all of that on my workstation.
Vagrant. The old workhorse. Vagrant is definitely a viable solution. The major selling point, since it’s full virtualization, is that you can mirror your target environment exactly. The drawbacks are that it’s pretty slow to spin up, and I haven’t found a good way to integrate Vagrant with CI servers (Jenkins).
Docker. Docker is an interesting alternative: containers are blazingly fast to spin up and tear down, and integration with a CI server is really easy. Docker best practice is to run one process per container, and if we’re going to use Docker containers to test Ansible playbooks we’re going to violate that (since we need systemd as an init system plus an OpenSSH server). With that in mind, let’s try it anyway!
Dockerfile
I’ll be using CentOS 7 here. The first problem is to find an image that has both systemd and sshd installed. I couldn’t find a good one, so we’re going to build it ourselves. Take a look at the Dockerfile below:
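Here’s a minimal sketch of what it can look like (the systemd cleanup hack is the one from the Docker documentation; the script names and COPY paths are assumptions matching the layout used later in this post):

```dockerfile
FROM centos:7
ENV container docker

# Ugly hack to get systemd working inside a container (see the Docker docs):
# strip out every systemd unit that doesn't make sense in a container
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do \
    [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
    rm -f /lib/systemd/system/multi-user.target.wants/*; \
    rm -f /etc/systemd/system/*.wants/*; \
    rm -f /lib/systemd/system/local-fs.target.wants/*; \
    rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
    rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
    rm -f /lib/systemd/system/basic.target.wants/*; \
    rm -f /lib/systemd/system/anaconda.target.wants/*
VOLUME [ "/sys/fs/cgroup" ]

# openssh-server, passwd and sudo let us create an SSH user for Ansible
RUN yum -y install openssh-server passwd sudo && yum clean all

# Start sshd at boot
RUN systemctl enable sshd.service

# Create the SSH user and add it to sudoers
COPY docker/start.sh /start.sh
RUN chmod +x /start.sh && /start.sh

# Public key injected at container start; run.sh writes it to authorized_keys
ENV AUTHORIZED_KEYS ""
COPY docker/run.sh /run.sh
RUN chmod +x /run.sh

EXPOSE 22
CMD ["/run.sh"]
```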
Explanation
- We base this on the official centos7 image
- The ugly hack starting at the RUN command and below is needed to get systemd working, docs here
- We install openssh-server, passwd and sudo (this enables us to create an SSH user)
- We enable the sshd service with systemctl so it starts at boot
- We run a script called start.sh (see below) that creates a user, called user, and adds it to sudoers
- We create an environment variable, AUTHORIZED_KEYS, into which we inject our public SSH key when the container starts; Ansible will use it to authenticate
- run.sh (see below) checks if the AUTHORIZED_KEYS environment variable is set; if so, it takes the value and populates the authorized_keys file, then runs exec /usr/sbin/init
start.sh
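A sketch of what start.sh can look like (the password and the NOPASSWD sudoers line are assumptions; passwd --stdin is why the passwd package is installed):

```bash
#!/bin/bash
set -e

# Create the user that Ansible will connect as
adduser user

# Set a password so the account isn't locked out of SSH
echo "password" | passwd --stdin user

# Passwordless sudo so Ansible can escalate privileges
echo "user ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers

# Prepare the .ssh directory that run.sh will populate
mkdir -p /home/user/.ssh
chmod 700 /home/user/.ssh
chown -R user:user /home/user/.ssh
```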
run.sh
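And a sketch of run.sh, following the description above:

```bash
#!/bin/bash

# If AUTHORIZED_KEYS was passed to the container, populate the user's
# authorized_keys file so Ansible can authenticate with its private key
if [ -n "$AUTHORIZED_KEYS" ]; then
    echo "$AUTHORIZED_KEYS" > /home/user/.ssh/authorized_keys
    chmod 600 /home/user/.ssh/authorized_keys
    chown user:user /home/user/.ssh/authorized_keys
fi

# Hand over to systemd as PID 1
exec /usr/sbin/init
```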
Putting it all together
Our project should look something like this now:
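(A sketch; the exact layout just follows the paths used in the commands below.)

```
.
├── ansible
│   ├── env
│   │   └── local-test
│   ├── httpd.yml
│   ├── roles
│   │   └── httpd
│   │       └── tasks
│   │           └── main.yml
│   └── ssh
│       ├── id_rsa
│       └── id_rsa.pub
├── container-start-and-playbook-run.sh
└── docker
    ├── Dockerfile
    ├── run.sh
    └── start.sh
```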
First things first, let’s build our Docker image (I won’t bore you with the output): docker build -f docker/Dockerfile -t local/centos7-systemd .
Then let’s create an SSH key pair for Ansible to use when SSHing into the container: ssh-keygen -t rsa, then put the keys in the ssh dir (I chose to put the ssh dir in the ansible dir, but you could place them somewhere else).
Now let’s fix the Ansible part. I’m going to create a very simple httpd role that installs httpd and checks that it’s running. But first we need to add details to the inventory file (env/local-test) to let Ansible know how it can access the container:
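Something along these lines (the group name webservers is my choice; port 5000 matches the port mapping used further down):

```ini
[webservers]
localhost ansible_port=5000 ansible_user=user ansible_ssh_private_key_file=ssh/id_rsa
```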
Now let’s take a look at our httpd role at roles/httpd/tasks/main.yml:
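A sketch of the tasks described below, using stock Ansible modules:

```yaml
---
- name: Install httpd
  yum:
    name: httpd
    state: present

- name: Enable and start httpd
  service:
    name: httpd
    state: started
    enabled: yes

- name: Check that httpd is listening on port 80
  wait_for:
    port: 80
    timeout: 5
```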
As you can see, nothing revolutionary is going on here; it’s just instructions to install httpd, enable it and check that port 80 is open. Now let’s take a look at httpd.yml:
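A sketch of the playbook, assuming the webservers group from the inventory above:

```yaml
---
- hosts: webservers
  become: true   # privilege escalation so tasks are executed as root
  roles:
    - httpd
```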
Again, this is nothing out of the ordinary. An important thing to note is that we’re setting up privilege escalation so we can execute commands as root.
Cool, now we’re ready to start a container from the image we created and run ansible-playbook against it! For convenience I’ll put all the commands in a shell script, container-start-and-playbook-run.sh; that way it’s easy to chain everything together. Here’s the content of the shell script:
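A sketch of what the script can look like (the container name ansible-test and the sleep are my additions; the cgroup mount is the one mentioned under Cons below):

```bash
#!/bin/bash
set -e

# Start the container: map container port 22 to host port 5000 (matching the
# inventory), mount the host's systemd cgroups and inject our public SSH key
docker run -d --name ansible-test \
    -p 5000:22 \
    -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
    -e "AUTHORIZED_KEYS=$(cat ansible/ssh/id_rsa.pub)" \
    local/centos7-systemd

# Give sshd a few seconds to come up
sleep 3

# Run the playbook against the container; skip host key checking since the
# container gets a fresh host key on every run
cd ansible
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i env/local-test httpd.yml

# Tear down the container afterwards
docker rm -f ansible-test
```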
This should be pretty self-explanatory by now. We map port 22 inside the container to port 5000 on the host, which is also what we defined in Ansible’s inventory file. Let’s go ahead and run the script:
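(On a successful run, the play recap at the end of the ansible-playbook output reports failed=0.)

```sh
$ ./container-start-and-playbook-run.sh
```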
It worked! Great success! :)
Pros and Cons
Pros
- Fast and flexible
- Easy integration with CI servers, so the setup can support a nice workflow
- Very little extra configuration needed: just SSH keys and another environment file; otherwise the playbook can be run as is
Cons
- The container needs to mount the host’s systemd cgroups as a volume, so it cannot run on distros using another init system, or on OS X
- Setup feels a bit hackish