Using Packer and Ansible to Build Immutable Infrastructure


At Codeship we run immutable servers which we internally call Checkbot. These are the machines responsible for running your tests, deploying your software and reporting the results back to our web application. Of course, there are constant changes to the setup of these images. New software needs to be installed, packages upgraded, old software versions removed. Let’s see how we do that!

Vagrant and Packer Workflow

The software stack used for building and testing these images in our current workflow consists of Vagrant for development, Packer for the actual image generation, and a series of shell scripts for provisioning. This has worked fine for the last few years, but as our team grows and more people make changes to the scripts, things can easily get out of hand and become confusing. So we were looking for a lightweight tool to replace our shell scripts. As we didn’t want an agent running to watch over the host, most configuration management tools were not an acceptable solution.

Using Ansible

Ansible, with its YAML-based syntax and agentless model, fits quite nicely. We are still in the process of getting started, but the experience has been so good that I couldn’t wait to share my findings. Maybe this post can convince you to take a look at Ansible and get started with configuration management yourself.

Getting started with Ansible

According to their website, “Ansible is the simplest way to automate IT”. You could compare it to other configuration management systems like Puppet or Chef, but those are complicated to set up and require the installation of an agent on every node. Ansible is different: you simply install it on your machine, and every command you issue is run via SSH on your servers. There is nothing to install on your servers and no running agents either.

> # Ansible installation via pip
> $ sudo pip install ansible
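Once installed, every command runs over plain SSH. A quick way to see this agentless model in action is an ad-hoc command against an inventory file (the file contents and host name below are hypothetical, just for illustration):

```shell
# inventory: a plain text file listing your servers
$ cat inventory
checkbot.example.com

# ping every host in the inventory over SSH -- no agent required
$ ansible all -i inventory -m ping
```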

Something that took me a while to appreciate is the fact that Ansible playbooks (the counterpart to Chef cookbooks or Puppet modules) are plain YAML files. This makes certain aspects a bit harder, but keeps the playbooks simple and easy to understand, even for somebody who doesn’t know a lot about Ansible. (Try writing complicated shell commands with multiple levels of quoting and you will see what I mean.) For a more thorough introduction, please see the Ansible homepage and don’t forget to check out the fantastic docs available there.
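To illustrate how little ceremony is involved, a minimal playbook is nothing more than a YAML list of plays and tasks (the host group and package here are placeholders, not part of our actual setup):

```yaml
# file: site.yml -- a minimal example playbook
- hosts: all
  sudo: yes
  tasks:
    - name: Install git
      apt: pkg=git state=present
```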

Building Immutable Infrastructure with Ansible

I started with the default integrations in Packer and Vagrant, which are straightforward to set up and require just a few lines of configuration.


    "provisioners": [
        {
            "type": "shell",
            "execute_command": "echo 'vagrant' | {{ .Vars }} sudo -E -S sh '{{ .Path }}'",
            "inline": [
                "sleep 30",
                "apt-add-repository ppa:rquillo/ansible",
                "/usr/bin/apt-get update",
                "/usr/bin/apt-get -y install ansible"
            ]
        },
        {
            "type": "ansible-local",
            "playbook_file": "../ansible/checkbot.yml",
            "role_paths": [
                "../ansible/roles/*"
            ]
        }
    ]

Update 2014-05-12: Specifying a glob in the role_paths variable is not yet possible with Packer v0.6. Instead, you have to specify each role individually. A pull request adding this feature is already merged on GitHub and will probably be released with the next version.


# Provisioning with ansible
config.vm.provision "ansible" do |ansible|
    ansible.inventory_path = "ansible/inventory"
    ansible.playbook = "ansible/checkbot.yml"
    ansible.sudo = true
end

But I decided to replace those with a couple of shell scripts to get more flexibility when calling Ansible. This also lets me compensate for certain differences in the way Ansible is integrated with Packer and Vagrant, as removing any possible differences is key to avoiding subtle bugs in testing vs. production. As an example, take our current code for creating an LXC container and configuring some basic settings. I’m sure that, even without any further explanation, you can quite easily figure out what each item is supposed to do.
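A wrapper along these lines (a sketch only; the filename and flags are assumptions, not our actual script) keeps the Ansible invocation identical no matter whether Packer or Vagrant triggers the provisioning:

```shell
#!/bin/sh
# provision.sh - single entry point for both Packer and Vagrant
set -e
ansible-playbook \
    -i ansible/inventory \
    ansible/checkbot.yml \
    --sudo "$@"
```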


# Template used to create this container: /usr/share/lxc/templates/lxc-ubuntu
# Parameters passed to the template:
# For additional config options, please look at lxc.conf(5)
# Common configuration
lxc.include = /usr/share/lxc/config/ubuntu.common.conf
# Container specific configuration
lxc.rootfs = /var/lib/lxc/{{lxc_container}}/rootfs
lxc.mount = /var/lib/lxc/{{lxc_container}}/fstab
lxc.utsname = {{lxc_container}}
lxc.arch = amd64
# Network configuration
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.hwaddr = 00:16:3e:11:f6:6c
# cgroup configuration
lxc.cgroup.memory.limit_in_bytes = {{lxc_memory_limit}}M
# Hooks
lxc.hook.pre-start = /var/lib/lxc/{{lxc_container}}/pre-start


# file: host/defaults/main.yml
lxc_container: codeship
lxc_memory_limit: 15360


# file: host/tasks/lxc.yml
- name: LXC | Installation
  apt:
    pkg: "{{item}}"
    state: present
  with_items:
    - lxc
    - lxc-templates
    - debootstrap
    - bridge-utils
    - socat
- name: LXC | Check configuration
  command: lxc-checkconfig
- name: LXC | Create new container
  command: "lxc-create -n {{lxc_container}} -t ubuntu creates=/var/lib/lxc/{{lxc_container}}/"
- template: src=lxc/config.j2 dest=/var/lib/lxc/{{lxc_container}}/config
- template: src=lxc/pre-start.j2 dest=/var/lib/lxc/{{lxc_container}}/pre-start mode=0744 owner=root group=root


# setup ssh access for the root user
mkdir -p /var/lib/lxc/{{lxc_container}}/rootfs/root/.ssh/
cp ~ubuntu/.ssh/authorized_keys /var/lib/lxc/{{lxc_container}}/rootfs/root/.ssh/authorized_keys
# setup ssh access for the rof user
if [ -d "/var/lib/lxc/{{lxc_container}}/rootfs/home/rof/" ]; then
  mkdir -p /var/lib/lxc/{{lxc_container}}/rootfs/home/rof/.ssh/
  cp ~ubuntu/.ssh/authorized_keys /var/lib/lxc/{{lxc_container}}/rootfs/home/rof/.ssh/authorized_keys
fi
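Since the pre-start hook is plain shell, its copy logic is easy to exercise locally before baking it into an image, by pointing it at a throwaway directory instead of the real container rootfs (every path below is a stand-in for the templated ones above):

```shell
# Stand-in for /var/lib/lxc/<container>/rootfs
ROOTFS=$(mktemp -d)/rootfs
# Stand-in for ~ubuntu/.ssh with a dummy key
KEYDIR=$(mktemp -d)
echo "ssh-rsa AAAA...dummy-key user@host" > "$KEYDIR/authorized_keys"

# setup ssh access for the root user
mkdir -p "$ROOTFS/root/.ssh/"
cp "$KEYDIR/authorized_keys" "$ROOTFS/root/.ssh/authorized_keys"

# setup ssh access for the rof user (skipped here: home dir absent in the stand-in)
if [ -d "$ROOTFS/home/rof/" ]; then
  mkdir -p "$ROOTFS/home/rof/.ssh/"
  cp "$KEYDIR/authorized_keys" "$ROOTFS/home/rof/.ssh/authorized_keys"
fi

ls "$ROOTFS/root/.ssh/"   # → authorized_keys
```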

This is only the beginning and a small step in configuring a whole build system for use by Codeship, but it shows the beauty of Ansible. It is extremely simple to understand. It provides a good abstraction of commonly needed patterns, like package installation, templates for configuration files, variables to be used by playbooks or configuration files, and a lot more. And it doesn’t require any software installation on the host except an SSH server, which is pretty standard anyway.

And in combination with Packer, we have an environment that lets us build our production system running on EC2 as simply as a box used for development with Vagrant. And that’s great, because it makes our team more productive.


What’s possible with Ansible

Nevertheless, we are far from finished. I am just starting to learn what is possible with Ansible and what modules are available. Some of the items on my checklist for the next few months include:

  • running multiple playbooks in parallel to speed up provisioning
  • getting to know the module system a lot better, and possibly write some modules myself
  • fine-tuning the output generated by Ansible
  • converting all the remaining shell scripts to playbooks, which is going to be the biggest part
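On the parallelism item: ansible-playbook already fans one playbook out across hosts via forks, while truly independent playbooks need separate processes. A rough sketch (playbook and inventory names are hypothetical):

```shell
# Forks parallelize a single playbook across many hosts:
ansible-playbook -i inventory site.yml --forks 10

# Independent playbooks can simply run as background jobs:
ansible-playbook -i inventory web.yml &
ansible-playbook -i inventory db.yml &
wait
```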

What do YOU think about Ansible? If you have ideas or suggestions to improve our workflow, please let us know in the comments!


Join the Discussion

Leave us some comments on what you think about this topic or if you’d like to add something.

  • Spencer

    In your packer json file you put:
    “role_paths”: [

    I can’t get that to work and have to enumerate every role directory individually. How do you get that to work?

    • Daniel Lang

      See the role paths in your ansible.cfg file.

Hi Spencer, you are absolutely correct, you aren’t (yet*) allowed to specify globs in your role_path. Sorry I missed this. For some other reasons I ended up using a file directive to upload the whole directory and run Ansible locally, and just assumed it would work (like it does for file uploads).

      Thanks for spotting this (and sorry for my late reply!)

      * See for an already merged PR implementing this feature!

P.S.: I also updated the post accordingly!

  • Just curious, I thought one of the selling points of Ansible was the zero-footprint. Why then are you adding an apt-get repository and installing it in the inline section within the provisioner section?

    • You’re right, you wouldn’t need to install it on the machine we are building via packer. This is something I’m definitely going to work upon and improve in the future, but I wanted to get started quickly and running locally was the path of least resistance. (We are using packer to build AMIs for EC2 and therefore we don’t know the IP in advance)

      • No problem. I’ll always put getting things working ahead of trying to follow some sort of dogma. Just wanted to make sure I wasn’t missing something.

  • Noah F. San Tsorbutz

    Re: ” I’m sure that, even without any further explanation, you can quite easily figure out what each item is supposed to do.”
    Uh, no.
You’re assuming that your readers are experienced admins, not newbies trying to learn, who are being thwarted at every turn by the assumptions of the folks trying to help them. C’est la vie.


  • feniix

    I think you may have a typo in all your code examples as it seems that you are adding a javascript shebang “#!javascript” even on non javascript examples

    • Whoops, you’re right. An update to our blogging system must have messed with the code blocks. I’ll get this fixed, thanks for letting us know!
