Reading Time: 4 minutes
Codeship was at DockerCon 2015! This week, we’ll be providing summaries on our blog of some of the talks we attended at this two-day conference in San Francisco. If you are interested in Docker support from Codeship, click here.
As a site reliability engineer at GrubHub, Valeo walked his audience through the workflow that enables the restaurant delivery service to continuously deploy in a repeatable, safe, and quick manner. It’s an achievement, he said, that represents not only a huge technical change but a cultural change as well.
Valeo stated that GrubHub’s goal was to improve its automated continuous deployment. Devs should be able to deploy their code as safely and quickly as possible to production in a repeatable way.
Their measures of success would be a quick turnaround time for issues and bugs and deploy times of less than one hour. That way, they’d be able to keep their focus on their consumers: get things out faster and keep everything stable.
But a few questions had to be cleared up first. How, exactly, would they achieve quick and safe deployment? Which tools should they use? Valeo said they looked at several options, including rsync, SSH, AMIs/images, and Docker.
Docker was one of the most interesting options. But before they fully bought into the new technology, Valeo’s team needed to evaluate it:
- Is there a performance overhead to running Docker? The team experimented with running multiple Docker containers, testing Docker and the services running on it extensively to gain more insight into the new system. The result was very satisfying: no to minimal overhead added.
- Are Docker and its tools production ready? For GrubHub’s purposes, yes. Container orchestration tools are just about where they need to be.
But perhaps the biggest question that GrubHub threw at Docker was how, exactly, does Docker enable continuous deployment?
Fortunately, Valeo explored that question at length.
Valeo’s team managed to come up with a continuous deployment pipeline that frees developers from worrying about deployments so they can focus on their code instead.
Each push to master kicks off the build and deployment pipeline and builds and tests the Docker containers.
- Images are pushed to S3
- Deployment job is kicked off:
  a. Artifacts stored in S3
  b. Starts EC2 instances (one per container)
  c. Starts instances with local registry, pulls containers, and runs them
- Executes instance tests, service level test, and “big test”
- Outputs test results back to the user and collects all the service logs
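The steps above can be sketched as a small pipeline driver. This is a minimal illustration only, not GrubHub’s actual tooling; the stage names, commands, and the `pipeline_stages` function are assumptions for the sake of example.

```python
# Minimal sketch of a push-to-master deployment pipeline.
# Stage names and commands are illustrative assumptions,
# not GrubHub's actual tooling.

def pipeline_stages(git_sha, services):
    """Return the ordered pipeline stages for a given commit."""
    stages = []
    # Build, test, and push an image for each service.
    for svc in services:
        image = f"{svc}:{git_sha}"
        stages.append(("build", f"docker build -t {image} ./{svc}"))
        stages.append(("test", f"docker run --rm {image} ./run_tests.sh"))
        stages.append(("push", f"store image {image} in S3"))
    # One EC2 instance per container, each pulling from a local registry.
    for svc in services:
        stages.append(("deploy", f"start EC2 instance, pull {svc}:{git_sha}, run"))
    # Instance tests, service-level tests, and the "big test".
    stages.append(("verify", "instance tests, service-level tests, big test"))
    # Report results and collect service logs for the user.
    stages.append(("report", "collect service logs, report results to user"))
    return stages

for name, cmd in pipeline_stages("abc123", ["orders", "search"]):
    print(f"{name}: {cmd}")
```

The point of modeling the pipeline this way is that every push to master runs the same ordered stages, so a deployment failure is a pipeline bug to fix rather than a one-off manual mistake.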
Valeo pointed out that GrubHub is able to run the entire stack locally in the same way it runs in production. There’s less concern about the host operating system: different teams run different services with different requirements, and the Docker pipeline makes it easier to deal with those disparities.
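One common way to run a multi-container stack locally the same way it runs in production (the talk doesn’t specify GrubHub’s exact tooling, so this is an assumption) is a Docker Compose file; the service names, images, and ports below are purely illustrative:

```yaml
# Hypothetical docker-compose.yml (2015-era v1 format) for running
# a two-service stack locally. Names, images, and ports are
# illustrative assumptions, not GrubHub's actual services.
orders:
  image: registry.local/orders:latest
  ports:
    - "8080:8080"
search:
  image: registry.local/search:latest
  links:
    - orders
```

Because each service is pinned to an image, the same containers that pass tests locally are the ones that run in production.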
Valeo said that with Docker, it doesn’t matter where you want to run your environment. Docker APIs make it easy to manipulate physical and cloud data centers in the same way, which just makes it easier all around to use.
GrubHub’s Docker-centric workflow also helps devs move away from focusing on deployments and gets them back to focusing on code. Previously, many small teams each worried about deployment. Now, automated deployments simply happen; if there are problems, Valeo said, they are problems with the pipeline, which can be addressed and fixed quickly.
Of course, Valeo admitted, there are always lessons to learn with such a big change in technical process: how to manage Dockerfiles and images (one Dockerfile per service, one per service type, and tests to cover failures and issues with Dockerfiles), registry considerations (local vs. S3), and the fact that not everything needs to run on Docker.
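To make the “one Dockerfile per service” convention concrete, a single service’s Dockerfile might look like the sketch below; the base image, paths, port, and entrypoint are assumptions for illustration, not GrubHub’s actual setup:

```dockerfile
# Hypothetical Dockerfile for a single service ("orders").
# Base image, jar path, port, and entrypoint are illustrative assumptions.
FROM java:8
COPY build/orders.jar /app/orders.jar
EXPOSE 8080
CMD ["java", "-jar", "/app/orders.jar"]
```

Keeping one small Dockerfile per service, plus tests that exercise the image itself, makes Dockerfile breakage show up in the pipeline instead of in production.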
And then there’s the challenge of troubleshooting in production. Debugging is a different animal with Docker than it was with a more traditional infrastructure.
But, with proper testing and safety nets in place, GrubHub has increased its confidence in deployments with its continuous deployment pipeline based on Docker. With the right workflow in place, continuous deployment is achievable for any size company, enabling the company as a whole to pick up its pace and add value for its users.