About Immutable Infrastructure

Industry, Operations

Update: We have released a free ebook about our workflow: Efficiency in Development Workflows.

Last week we talked about Deployment Pipelines and Zero Downtime Deployment.


After reading Chad Fowler’s excellent blog post about immutable deployments at 6Wunderkinder, we wanted to share our views on immutability in infrastructure.

Our definition of Immutable Infrastructure:

  1. Automate the setup and deployment for every part and every layer of your infrastructure.
  2. Never change any part of your system once it is deployed. If you need to change it, deploy a new system.

For example, instead of deploying into an existing EC2 instance, start a new server, deploy to it, and point your load balancer at the new server. Then remove the old one.
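A minimal sketch of this swap with boto3 (Python), assuming a classic Elastic Load Balancer. The AMI ID, instance type, load balancer name and old instance ID are placeholders; this is an illustration of the idea, not our actual deployment scripts.

    import boto3

    ec2 = boto3.client("ec2")
    elb = boto3.client("elb")  # classic Elastic Load Balancing

    # 1. Launch a fresh server from a pre-built image instead of patching the old one.
    reservation = ec2.run_instances(ImageId="ami-12345678", InstanceType="m3.medium",
                                    MinCount=1, MaxCount=1)
    new_id = reservation["Instances"][0]["InstanceId"]

    # 2. Wait until the new instance is running before sending traffic to it.
    ec2.get_waiter("instance_running").wait(InstanceIds=[new_id])

    # 3. Point the load balancer at the new server ...
    elb.register_instances_with_load_balancer(
        LoadBalancerName="my-load-balancer",
        Instances=[{"InstanceId": new_id}])

    # 4. ... then take the old server out of rotation and remove it.
    elb.deregister_instances_from_load_balancer(
        LoadBalancerName="my-load-balancer",
        Instances=[{"InstanceId": "i-oldserver"}])
    ec2.terminate_instances(InstanceIds=["i-oldserver"])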

Replacing a system at the lowest level possible forces you to automate every deployment step.

Immutable Infrastructure and Continuous Deployment work great together. Completely replacing an existing part of your infrastructure, instead of updating it, makes your deployments less complex.

Test-Driven Development, Continuous Deployment and Immutable Infrastructure are strategies we have been using on Codeship for a long time.

For Immutable Infrastructure you need cloud servers and a virtualised environment.

Cloud servers are building blocks

In his AWS re:Invent keynote, Werner Vogels talked about cloud servers as building blocks for larger systems. Jamie Begin wrote a great blog post on cloud servers as building blocks, based on that keynote.

Today cloud instances are still used the way physical hardware was in the past: you set them up once and update them whenever necessary. The problem is that individual cloud servers are not meant to be reliable or durable.

Their advantage is that they are standardised and easy to replace. Cloud servers are like Lego pieces that can be changed whenever necessary. If you want to have a different color or the Lego piece breaks, just put in a new one. You wouldn’t repair a Lego piece, would you?

Immutable Infrastructure is like building with Lego blocks: you wouldn’t repair a Lego piece, just grab a new one.

Our Experience with Immutable Infrastructure

Our web application, the Mothership, is hosted on Heroku and has therefore always been immutable. Whenever we deploy a new version, Heroku builds the slug and replaces the current instances with it. We have enabled Heroku’s Zero Downtime support.

Our test server infrastructure, the Checkbot, has been hosted on AWS since August 2012. Whenever we want to change the test servers, we build a completely new Amazon AMI, test it and replace the old machines with the new AMI. We will go into more detail about this in our next blog post.
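The exact workflow is the topic of that next post; purely as an illustration of the "bake a new image, never patch the old one" step, here is a minimal boto3 sketch. The instance ID and image name are hypothetical placeholders.

    import time
    import boto3

    ec2 = boto3.client("ec2")

    # Create an AMI from an instance that has just been provisioned and tested.
    image = ec2.create_image(
        InstanceId="i-checkbot-build",          # placeholder build instance
        Name="checkbot-%d" % int(time.time()),  # version the image by timestamp
    )

    # Wait until the image is available; new test servers can then be launched
    # from it and the old ones terminated, as in the load balancer sketch above.
    ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])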

By replacing every part of our infrastructure, often several times a day, we feel very comfortable with releasing changes. This workflow allows us to improve our service very quickly.

Advantages of an Immutable Infrastructure

Immutable Infrastructure has many more advantages than the following, but these are the ones we have found most important:

  • Going back to an old version is easy, as you have the old image available.
  • Every change to the infrastructure needs to be in a script, because any server can be removed at any time and would take any manual changes with it.
  • It’s easy to have a production-like system on development machines.
  • You have an incentive to speed up the time it takes to build your servers. We will talk about this in future blog posts.
  • Setting up staging systems is easy and can be automated.
  • Testing the new infrastructure in isolation is possible.

Challenges with Immutable Infrastructure

Of course this approach also has its challenges, especially around tooling.

  • Better and standardised tooling is necessary, although new tools like Packer are making this easier.
  • Setting up the automation for an immutable infrastructure has higher upfront costs.
  • Fixing problems is slower, as you can’t just SSH into an existing server and patch it; the fix needs to be baked into a new image and redeployed.
  • There needs to be a way to reliably replace a server without impacting the whole system. Queuing and proxies that can store requests for a while are helpful (see the sketch after this list).
  • Replacing databases continuously is hard.
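One way to approach the graceful-replacement point on AWS, again sketched with boto3 and placeholder names, is to enable connection draining on a classic Elastic Load Balancer so in-flight requests are finished before the old server leaves rotation.

    import boto3

    elb = boto3.client("elb")  # classic Elastic Load Balancing

    # Let the load balancer finish in-flight requests for up to 60 seconds
    # after an instance is deregistered.
    elb.modify_load_balancer_attributes(
        LoadBalancerName="my-load-balancer",
        LoadBalancerAttributes={
            "ConnectionDraining": {"Enabled": True, "Timeout": 60},
        },
    )

    # The old server can now be taken out of rotation without cutting off requests.
    elb.deregister_instances_from_load_balancer(
        LoadBalancerName="my-load-balancer",
        Instances=[{"InstanceId": "i-oldserver"}],
    )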


Conclusions

Fixing broken servers instead of replacing them is a waste of time. It slows down the development and deployment cycle.

Test-Driven Development, Continuous Deployment and Immutable Infrastructure are practices every team should use. Together these practices help build reliable and high quality software that can be changed at any time. Being able to go back to an old version of your system in seconds allows you to experiment and innovate at a much faster pace.

Over the last few months, tools like Packer and Docker have been released that make Immutable Infrastructure a lot easier.

In our next blog post we will show you in detail how we deploy our testing infrastructure several times a day. In future blog posts we will introduce Packer, Docker and other tools and show you how to rebuild your infrastructure constantly. Stay tuned!



Join the Discussion

Leave us a comment on what you think about this topic or if you would like to add something.

  • Sunil
    • Florian Motlik (codeship.io)

      Where do you mean it is missing?

    • Manuel Weiss (codeship.io)

      I’ve added the link to part 5 at the end of this blog post. Thanks Sunil!


  • Adam

Doesn’t replacing broken servers end up costing more than performing repairs? Sure, if an external company like Heroku is hosting on such a large scale it’s clearly going to be beneficial, but what about a small startup using their own infrastructure/colocation?

    • Florian Motlik (@codeship)

      As a small startup you shouldn’t have your own infrastructure or colo in the first place. It’s never going to pay off unless you are the 0.1 percent of companies that need all the bare metal they can get. And even then it’s probably wrong to do it.

      And if you have your own metal (and you shouldn’t), going with Docker to provide a PaaS on top of your servers is the way to go imho.

      • Adam

        Cool, thanks for the reply Florian.

      • ferakpeter

        Agree. Maybe it was explained too well why this makes so much sense. In the modern dev world you want to be as flexible as you can, and that means especially being flexible with infrastructure: scaling up and scaling down on demand. The easiest way to do this is by utilizing the cloud tools. Think about it: if you run a prototype on a single machine and it works out, you get customers and the load increases, so you’ll have to scale up quickly, especially as a startup. And you don’t want to wait around for additional machines.


  • ferakpeter

    Hi Flo. Great articles. I’m interested in your thoughts on where continuous delivery is going. What challenges lie ahead and need to be solved? Are there new trends or tools that are becoming easier to use and more useful?