In the first post of this series, we introduced using Kubernetes for deployments; in the second, we got started with integrating Codeship into the Kubernetes CI workflow. Now, we’ll wrap things up with how to update Kubernetes Deployments.
Updating Kubernetes Deployments
Once our push step is defined, we need to tell Kubernetes to update the appropriate Deployment to roll out the new image. This is where the previously defined
google_cloud_deployment service comes into play. Thanks to this service, we are able to easily run authenticated commands against Google Cloud Platform without any additional overhead, which means that manipulating our Kubernetes platform from within Codeship is no different than working with it directly.
Before we set up the Codeship step, though, let’s take a look at how updating a Kubernetes Deployment actually works. According to the Kubernetes documentation (and as touched on earlier in this series), triggering a Deployment update is as simple as updating the Deployment’s defined label or container image.
For now, let’s assume that we already have a defined Deployment for an Nginx server as per the documentation. All we have to do to roll out an updated Docker image to the Deployment is to change the defined image using the kubectl command like so:
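A minimal sketch of that command, assuming the `nginx-deployment` Deployment from the Kubernetes documentation (the Deployment name, container name, and image tag are placeholders to adapt to your own setup):

```shell
# Update the "nginx" container in the "nginx-deployment" Deployment
# to a new image tag; Kubernetes rolls out the change automatically.
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
```

Kubernetes notices the changed image in the Deployment spec and performs a rolling update, replacing Pods one at a time so the service stays available.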
Because we went through the work of tagging our image pushes earlier, this type of update will be relatively easy for us to set up. But this is just a command — it doesn’t show us how to actually update Deployments from Codeship.
All it takes to accomplish this is a small script to run the few necessary commands to authenticate to the Google Cloud Platform and trigger a Kubernetes Deployment update.
Thankfully, due to the work put in by Codeship already, the script we need to write consists of only a small handful of commands:
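Here is a hypothetical sketch of that script. The cluster name, compute zone, project ID, and image path are placeholders, and `CI_TIMESTAMP` is assumed to be the build-timestamp environment variable Codeship provides — substitute the values from your own project:

```shell
#!/bin/bash
set -e

# Authenticate against Google Cloud Platform using the credentials
# configured on the google_cloud_deployment service.
codeship_google authenticate

# Set the default compute zone (change to suit your needs).
gcloud config set compute/zone us-central1-a

# Point kubectl at the target Kubernetes cluster.
gcloud container clusters get-credentials my-cluster

# Trigger the Deployment update, rolling out the image that the earlier
# push step tagged with the build timestamp.
kubectl set image deployment/nginx-deployment \
  nginx=gcr.io/my-project/nginx:${CI_TIMESTAMP}
```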
You can find the complete repository including the
codeship-steps.yml and the
codeship-services.yml files here: https://github.com/codeship/codeship-kubernetes-demo.
Let’s quickly step through the script above. The first important command is the authentication piece. The
google_cloud_deployment service needs to be authenticated with the Google Cloud Platform before we can run any commands.
Since we set up the necessary environment variables already, all it takes to authenticate is to run the
codeship_google authenticate command at the beginning of our script.
Next, we need to set the compute zone. This example shows
us-central1-a, but you should change this to suit your needs. The next set of commands is the actual Kubernetes interactions.
The first sets the Kubernetes cluster that we need to interact with, while the second is the actual Deployment update command. As you can see, it’s not very different from the example provided by Kubernetes itself. It’s important to note here that Codeship provides an environment variable of the current build’s timestamp, which allows us to correlate the Kubernetes command with the registry push step above.
Now that we have our deployment script set up (I’ve saved mine to the root of my project as
deploy.sh), all we have left to do is add a step to the
codeship-steps.yml file that calls it:
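A sketch of what that final step might look like in `codeship-steps.yml`, assuming the `google_cloud_deployment` service defined earlier and a `deploy.sh` script at the project root:

```yaml
# Hypothetical deployment step; the step name and script path
# are placeholders for your own setup.
- name: deploy
  service: google_cloud_deployment
  command: /deploy.sh
```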
Fortunately for us, most of the heavy lifting for this integration has been done by Codeship already, which means that interacting with the Google Cloud Platform during our CI/CD process is as simple as running a command.
Thanks to the flexibility of Codeship, interacting with any cloud platform that we choose is an incredibly straightforward process. Because we are only limited by the capabilities of Docker itself, our deployment workflows are completely customizable, so we can get our process just right.
Interested in learning how you can deploy your apps with Kubernetes and Codeship Pro? Learn more here.
This has been Part Three of a series about Kubernetes, Docker, and Codeship. Want to read all three parts together? Download our free ebook, Continuous Deployment for Docker Apps to Kubernetes.