Let’s learn how to set up continuous deployment to Kubernetes for your Docker apps. Specifically, we’re going to look at automating the management, deployment, and scaling of your containerized applications.
In a nutshell, Kubernetes is an open-source system for automating the management, deployment, and scaling of containerized applications, such as those packaged with Docker. It’s an incredibly powerful tool that we’ll look at more closely over the next few weeks.
What Is Kubernetes?
According to its website, Kubernetes is a system that groups containers into logical units, which makes management of containers across multiple nodes “as simple as managing containers on a single system.” Kubernetes essentially acts as a digital datacenter, allowing you to seamlessly manage hundreds of servers across as many nodes without ever having to set foot inside an overly air-conditioned clean room.
Beyond simply managing a complex container architecture, Kubernetes also packs some powerful automated deployment and scaling functionality, giving you the ability to roll out new code and resize your datacenter with minimal configuration.
Because Kubernetes introduces a relatively new way to interact with a cluster of containers, there are likely some new terms that I will mention in this series. These can be somewhat ambiguous when you’re just starting out, so to help visualize their definitions, I’ll borrow a diagram from the Kubernetes documentation.
In a nutshell, here’s what this diagram is showing:
- A Cluster is a collection of physical and/or virtual machines called Nodes.
- Each Node is responsible for running a set of Pods.
- A Pod is a group of networked Docker-based containers.
Outside of the parent-child chain are Deployments (which I’ll get to below) and Services. A Service is a logical set of Pods together with a defined policy for accessing them (read: microservice). A Service can span multiple Nodes within a Kubernetes Cluster.
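As a quick sketch, a Service selects its Pods by label rather than by Node, which is what lets it span the Cluster. The names and port numbers below are placeholders I’ve made up for illustration, not from any real project:

```yaml
# Hypothetical Service: routes traffic on port 80 to any Pod
# labeled app=my-app, on whichever Node that Pod happens to run.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app        # matches Pods by label, not by Node
  ports:
    - port: 80         # port the Service exposes
      targetPort: 8080 # port the container listens on
```

Any Pod carrying the `app: my-app` label automatically joins (or leaves) the set as it comes and goes.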
Deployments (Uppercase) Versus deployments (Lowercase)
When it comes to actually launching containers, Kubernetes provides tools to automatically roll out new code by updating Deployment definitions.
It’s important to mention here that “Deployment” in Kubernetes-speak is really just a fancy (and a bit ambiguous) word for a recipe that describes how containers should be configured and launched. Because this series deals with delivering and launching updated Docker images to Kubernetes using Codeship, there is bound to be some confusion over terminology. So, to keep things clear(ish), I’ll be using the lowercase deployment to refer to the act of delivering product, and the uppercase Deployment to refer to the Kubernetes definition of the word.
In Kubernetes, updating a Deployment involves rolling out an updated Docker image to a previously defined Deployment. Kubernetes makes it clear in its documentation that an automated rollout is triggered only when the Deployment’s Pod template changes, for example its labels or container image. This means that simply pushing a new Docker image to the registry won’t trigger a Deployment update unless we specifically tell the Deployment about it.
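To make that concrete, here’s a minimal, hypothetical Deployment manifest (the names and registry are placeholders). The `image` field inside the Pod template is the part whose change triggers a rollout:

```yaml
# Hypothetical Deployment: three replicas of one container image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:                # the Pod template; changes here trigger a rollout
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0  # bump this tag to roll out
```

Pushing `my-app:1.0.1` to the registry does nothing on its own; the rollout begins once the Deployment is pointed at the new tag, for example with `kubectl set image deployment/my-app my-app=registry.example.com/my-app:1.0.1`.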
Don’t worry if this seems a bit confusing at first; I’ll be going into more detail about how this whole process works later.
I should point out that, even though Deployment updates need to be triggered in a specific way, there is very little risk of downtime in a multi-container environment. Thanks to the way Deployments work, Kubernetes avoids downtime by taking down only a fraction of the Pods at a time while it brings up their replacements.
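That behavior is tunable through the Deployment’s update strategy. The fragment below is illustrative only; the percentages shown mirror Kubernetes’ defaults rather than anything from a real config:

```yaml
# Fragment of a Deployment spec: a rolling update that keeps most
# Pods serving traffic while the new ones come up.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%  # at most a quarter of Pods down at once
      maxSurge: 25%        # at most a quarter extra Pods during the update
```

Tightening `maxUnavailable` trades update speed for capacity during the rollout, and vice versa.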
While the load won’t necessarily be as efficiently distributed during these updates, the consumers of your application won’t suffer any outages.
Check in next week, when we’ll get started with integrating Codeship into the workflow.
This has been Part One of a series about Kubernetes, Docker and Codeship. Can’t wait for Parts Two and Three? Download our free ebook, Continuous Deployment for Docker Apps to Kubernetes.