Deploying Docker Containers to AWS using CloudBees CodeShip, CodeDeploy and Chef



In this article, we will see how we can deploy a web application as a Docker container to Amazon AWS EC2 instances using CloudBees CodeShip’s integration with AWS CodeDeploy. We will integrate Chef configuration management with AWS CodeDeploy to perform the deployment.

To follow along with this article, I recommend cloning the Git repository. In addition, creating a fork of the repository will allow you to deploy the sample web application on your own AWS infrastructure via your own continuous integration (CI) setup. That, of course, assumes you have:

  • An AWS account setup
  • A CloudBees CodeShip Pro account (Free tier is sufficient)
  • Terraform 0.11.x installed locally

Let’s get started.

Sample web application

Our web application is a simple HTTP server written in Golang that listens on port 8080 and returns a simple text response when the index page is requested. If you have Docker installed, you can build the Docker image and run the container, binding host port 80 to container port 8080.

In one terminal:

$ cd <repository root>
$ cd webapp
$ docker build -t amitsaha/webapp-demo:golang .
..
$ docker run -d -p 80:8080 amitsaha/webapp-demo:golang 

In a different terminal:

$ curl 127.0.0.1
Hello, world! I am a web application

Our goal for this article is to deploy this Docker container to an AWS EC2 instance. We will push the built image to Docker Hub as part of our CI setup.

CodeDeploy configuration

To be able to deploy the web application container using AWS CodeDeploy, we specify an appspec.yml, stored in the webapp/deployment sub-directory, which is as follows:

version: 0.0
os: linux

hooks:
  ApplicationStop:
    - location: ./cd_eventhandler.bash
  AfterInstall:
    - location: ./cd_eventhandler.bash
  ApplicationStart:
    - location: ./cd_eventhandler.bash
  ValidateService:
    - location: ./cd_eventhandler.bash

Hooks in the AppSpec file specify the actions to be taken during the stages of the deployment lifecycle. The lifecycle stages and their order vary based on the deployment platform we select, which in our case is EC2. We will be using an in-place deployment without a load balancer. Details on the hooks and the various lifecycle stages can be found in the AWS documentation.

The cd_eventhandler.bash script, stored in the same deployment sub-directory, is as follows:

#!/bin/bash
set -eu

# We need to do the below steps once per deployment and not every lifecycle stage
# Hence, we use the first lifecycle event, APPLICATION_STOP to do this.

if [ "$LIFECYCLE_EVENT" == "ApplicationStop" ]; then 
   pushd /etc 
   aws s3 cp s3://aws-codedeploy-chef-demo/chef.zip /etc/ 
   rm -rf chef 
   unzip -o chef.zip 
   pushd chef 
   chef-solo -c ./solo.rb --override-runlist "recipe[webapp::base]" 
   popd 
   popd 
fi

# For the first deployment to an EC2 instance, the ApplicationStop lifecycle
# hook is not executed, hence, we download the chef cookbooks if the /etc/chef
# directory does not exist and execute the base recipe

if [ ! -d "/etc/chef" ]; then 
   aws s3 cp s3://aws-codedeploy-chef-demo/chef.zip /etc/ 
   pushd /etc 
   unzip -o chef.zip 
   pushd chef 
   chef-solo -c ./solo.rb --override-runlist "recipe[webapp::base]" 
   popd 
   popd 
fi

pushd /etc/chef 
chef-solo -c ./solo.rb --override-runlist "recipe[webapp::CDHook_$LIFECYCLE_EVENT]" 
popd

For the ApplicationStop lifecycle event, which is the first event executed during a deployment, we download a zip file containing the Chef cookbooks, unzip it, and execute the base recipe in the webapp cookbook via chef-solo. For all other lifecycle stages, we execute the recipe named CDHook_<lifecycle event> from the same cookbook. As the comment in the script notes, the ApplicationStop hook is never executed during the very first deployment to an EC2 instance, so for every lifecycle stage we also check whether the /etc/chef directory exists. If it doesn't, we download the zip file, extract it and run the base recipe from the webapp cookbook.

Chef cookbooks

The Chef cookbook for deploying our sample web application can be found in the chef/cookbooks/ sub-directory. The webapp/recipes sub-directory contains the recipes for our application:

- `base.rb`
- `CDHook_ApplicationStop.rb`
- `CDHook_AfterInstall.rb`
- `CDHook_ApplicationStart.rb`
- `CDHook_ValidateService.rb`

The base recipe installs the Docker engine (if it isn't already installed), while each of the subsequent recipes corresponds to an AWS CodeDeploy lifecycle hook we care about. The CDHook_ApplicationStop recipe currently stops the container named service and then removes it. In a more practical scenario, before the container is stopped we would take the application instance out of the application pool so that new requests are not sent to this instance and existing requests can finish processing.
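
The actual recipe lives in the repository's cookbook; purely as an illustration, a minimal ApplicationStop recipe built on Chef's execute resource might look like the sketch below (the container name service comes from our setup, everything else is an assumption and may differ from the repository's recipe):

# CDHook_ApplicationStop.rb -- illustrative sketch only
execute 'stop-webapp-container' do
  command 'docker stop service'
  # only attempt the stop if a container named "service" is running
  only_if 'docker ps -q --filter name=service | grep -q .'
end

execute 'remove-webapp-container' do
  command 'docker rm service'
  # remove the container if it exists, running or stopped
  only_if 'docker ps -aq --filter name=service | grep -q .'
end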

The CDHook_AfterInstall recipe, which runs after ApplicationStop, pulls the latest Docker image for the web application and creates a new service container from this image.
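
Again as a sketch only (the image name matches the one we build in CI; the repository's recipe may be written differently, for example using the community docker cookbook's resources):

# CDHook_AfterInstall.rb -- illustrative sketch only
execute 'pull-latest-webapp-image' do
  command 'docker pull amitsaha/webapp-demo:golang'
end

execute 'create-webapp-container' do
  # create and start a fresh container named "service", publishing host port 80
  command 'docker run -d --name service -p 80:8080 amitsaha/webapp-demo:golang'
end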

The CDHook_ApplicationStart recipe, as it stands now, just sleeps for an arbitrary number of seconds to give our application time to become ready. We then validate that the application is ready by making an HTTP request to port 80 in the CDHook_ValidateService recipe.
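
A sketch of what these two recipes might look like (the sleep duration and retry values here are arbitrary, illustrative choices):

# CDHook_ApplicationStart.rb -- illustrative sketch only
execute 'wait-for-webapp' do
  # give the container a few seconds to start accepting connections
  command 'sleep 10'
end

# CDHook_ValidateService.rb -- illustrative sketch only
execute 'validate-webapp' do
  # curl --fail exits non-zero on an HTTP error, which fails the Chef run
  # and hence the ValidateService lifecycle hook
  command 'curl --fail --silent http://127.0.0.1:80/'
  retries 3
  retry_delay 5
end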

Setting up an AWS infrastructure

We will need to set up a few things in our AWS infrastructure before we can run a trial deployment of our application:

- VPC and subnets
- CodeDeploy application and associated resources
- S3 bucket for storing deployment artifacts
- AutoScaling group, launch configuration and security groups
- IAM profiles, roles and policies

The infra sub-directory has the Terraform code for creating the entire infrastructure needed for this article. You will need AWS admin-level privileges to create all the resources successfully. Once you have Terraform installed and your AWS credentials supplied via one of the supported means, run terraform apply, like so:


$ cd <repo root> 
$ cd infra 
$ terraform apply --var-file=input.tfvars

There are two variables defined in variables.tf:

variable "ssh_pub_key_path" {} 
variable "ssh_whitelist_cidrs" { 
    type = "list" default = [] 
}

The ssh_pub_key_path variable must be set to the path of the SSH public key you want to use to SSH into your EC2 instances. By default, it is set to ~/.ssh/id_rsa.pub. You can change it in input.tfvars or provide it by one of the other supported means. The ssh_whitelist_cidrs variable is a list of IPv4 CIDR blocks that the security group will allow SSH connections from. Since we can change this at runtime without recreating an EC2 instance, it defaults to an empty list, and we can add addresses to it if we need to SSH into an instance.
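
For example, an input.tfvars could look like this (the values below are placeholders; substitute your own key path and CIDR blocks):

ssh_pub_key_path    = "~/.ssh/id_rsa.pub"
ssh_whitelist_cidrs = ["203.0.113.10/32"]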

Once the terraform apply has completed, it will output the public IPv4 address of the instance that is created. We can always retrieve it by running terraform apply again at a later time.

Now that our infrastructure is set up, let’s create our continuous integration setup on CloudBees CodeShip Pro.

Setting up continuous integration in CloudBees CodeShip Pro

The key files for CloudBees CodeShip Pro are codeship-services.yml and codeship-steps.yml. The codeship-services.yml file defines the Docker containers (services) that we will use in the codeship-steps.yml file, which is where all the different build steps are defined.

We will go through the different sections of codeship-services.yml next. First, we configure how our application's container image is built:


myapp: 
  build: 
    image: amitsaha/webapp-demo:golang 
    context: webapp 
    dockerfile: Dockerfile

The context specifies the webapp directory, which is where our web application's source code and Dockerfile are.

Next, we define the container we will use to build our Chef artifact:


chef_builder: 
  build: 
    image: amitsaha/chef-builder 
    context: . 
    dockerfile: Dockerfile.chef 
  encrypted_env_file: aws-deployment.env.encrypted 
  volumes: 
    - ./:/deploy

The Dockerfile.chef builds a Docker image based on Ubuntu 18.04 with Chef Workstation installed. We specify our encrypted AWS access credentials file via the encrypted_env_file setting. To create this encrypted file, we follow the instructions in the CloudBees CodeShip documentation, using the jet CLI.
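
Assuming the plaintext credentials live in a local file named aws-deployment.env (that filename is our choice) and the project's AES key has been saved as codeship.aes in the repository root, the encryption step looks roughly like this:

$ jet encrypt aws-deployment.env aws-deployment.env.encrypted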

The access keys correspond to the IAM user webapp, which we create specifically for this application. We can generate an access key for it using:


$ aws iam create-access-key --user-name webapp 
..

The IAM policy attached to this user allows it to create a deployment in the CodeDeploy deployment group and to perform all operations on the S3 bucket we create for storing the deployment artifacts. The IAM user and policy creation is configured in infra/codedeploy_user.tf and looks as follows:


{
  "Version": "2012-10-17",
  "Statement": [
    {
        "Sid": "1",
        "Effect": "Allow",
        "Action": "codedeploy:CreateDeployment",
        "Resource": "arn:aws:codedeploy:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:deploymentgroup:${aws_codedeploy_app.webapp.name}/${aws_codedeploy_deployment_group.deploy_group.deployment_group_name}"
    },
    {
        "Sid": "2",
        "Effect": "Allow",
        "Action": "codedeploy:GetDeployment",
        "Resource": "arn:aws:codedeploy:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:deploymentgroup:${aws_codedeploy_app.webapp.name}/${aws_codedeploy_deployment_group.deploy_group.deployment_group_name}"
    },
    {
        "Sid": "3",
        "Effect": "Allow",
        "Action": "codedeploy:RegisterApplicationRevision",
        "Resource": "arn:aws:codedeploy:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:application:${aws_codedeploy_app.webapp.name}"
    },
    {
        "Sid": "4",
        "Effect": "Allow",
        "Action": "codedeploy:GetDeploymentConfig",
        "Resource": "arn:aws:codedeploy:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:deploymentconfig:CodeDeployDefault.OneAtATime"
    },
    {
        "Sid": "5",
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": "${aws_s3_bucket.deployment_artifacts.arn}"
    },
    {
        "Sid": "6",
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": "${aws_s3_bucket.deployment_artifacts.arn}/*"
    }
]
}

We provide the AWS credentials so that we can publish the Chef cookbooks to the S3 bucket set up for the deployment.

The next container is the curl container which we use for running a sanity check on the built web application container:

curl: 
  image: pstauffer/curl:latest 
  depends_on: ["myapp"]

Finally, we configure the codeship/aws-deployment container, which we use to create a CodeDeploy deployment:


awsdeployment: 
   image: codeship/aws-deployment
   encrypted_env_file: aws-deployment.env.encrypted
   volumes: 
     - ./:/deploy 
   environment: 
     - AWS_DEFAULT_REGION=ap-southeast-2

Once again, we specify the aws-deployment.env.encrypted file for this container since we will be performing a deployment using this Docker image.

The codeship-steps.yml file defines the various build steps. First, we push the Docker image for our web application to Docker Hub:


- service: myapp
  name: Push docker image for application
  type: push
  image_name: amitsaha/webapp-demo
  image_tag: golang
  encrypted_dockercfg_path: dockercfg.encrypted

In the next two steps, we push our chef_builder Docker image and then use it to build and upload the Chef artifact:


- service: chef_builder
  name: Push chef builder
  type: push
  image_name: amitsaha/chef-builder
  encrypted_dockercfg_path: dockercfg.encrypted

- service: chef_builder
  name: Build and deploy chef artifact
  command: bash /deploy/build_chef.sh

You should replace the Docker image names and Docker Hub credentials above with your own to perform these steps successfully.

In the final step, we use CloudBees CodeShip’s manual approval feature to wait for an approval before deploying our web application:


- type: manual
  tag: master
  steps:
    - service: awsdeployment
      name: Deploy
      command: codeship_aws codedeploy_deploy /deploy/webapp/deployment webapp webapp-test aws-codedeploy-chef-demo

(Screenshot: Example deployment blocked on user approval)

The parameters to the codedeploy_deploy command are:

  • /deploy/webapp/deployment: The deployment directory for our web application. A zip file is created from this directory and uploaded to the S3 bucket aws-codedeploy-chef-demo.
  • webapp: The CodeDeploy application name.
  • webapp-test: The CodeDeploy deployment group.
  • aws-codedeploy-chef-demo: The S3 bucket used for storing the CodeDeploy artifacts.
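
In other words, the invocation above follows the general form:

codeship_aws codedeploy_deploy <deployment directory> <CodeDeploy application> <deployment group> <S3 bucket>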

Setting up your own project

We have now created the AWS infrastructure and set up continuous integration and deployment in CloudBees CodeShip Pro. If you have created a fork of the demo repository, you will need to update a few things to match your own setup.

  • The Docker Hub image name and credentials should be replaced to match your account.
  • The Chef recipes must be updated to use your Docker image.
  • The AWS credentials must be updated to match your account’s credentials.
  • Create your own CloudBees CodeShip Pro project.

Once you have done the above, you should be all set. If you go to http://<IP>, where <IP> is the public IP address of your EC2 instance, you should see: Hello, world! I am a web application.

Conclusion

In this article, we saw how to take advantage of CloudBees CodeShip Pro's AWS CodeDeploy integration to deploy a web application as a Docker container to AWS EC2 instances. We also saw how to use Chef cookbooks with AWS CodeDeploy, allowing us to integrate configuration management into our deployment pipeline. In a practical scenario, the Chef cookbooks and infrastructure code would, of course, be managed separately.

The GitHub repository has all the resources to help you achieve the same for your own projects.

