Automate Your Hugo Site with Codeship and Terraform


Reading Time: 10 minutes

A couple of months ago, I was looking at my personal blog and realized that even though I was using WordPress, I wrote so infrequently that it might as well be a static site. I’d heard of a hot new, crazy-fast static website generator called Hugo and thought it’d be worth checking out.

I’ll briefly walk you through my experience setting up Hugo, then we’ll make the whole infrastructure configurable with Terraform, and finally we’ll set up Git and Codeship so you can easily push updates to your website.

Getting Started with Hugo

So I ran through the really hard installation process of running brew install hugo, followed by the create-new-site command, hugo new site, and just like that I had a skeleton site in place that I could start working with. If you’re brand new to Hugo, I recommend reading the official quick start guide.

With Hugo, you author content in Markdown files, and with a single command it renders your full site in a couple of seconds (even for hundreds or thousands of pages). Hugo also has a nice built-in web server that rebuilds and serves your site on every file save, so you get immediate feedback and can preview your updates as you make them. Thankfully my site wasn’t too large, so porting my content over from WordPress to Hugo took only a couple of hours of copy/paste/tweak.

Moving From Shared Hosting to S3

Once I’d converted it, I wanted to get it off a shared hosting service and onto Amazon S3 so I wouldn’t have to maintain servers and would pay next to nothing to run it. S3 supports serving static content and even has a basic ability to configure redirects, so when my URLs changed a little from WordPress to Hugo, I was able to configure permanent redirects (HTTP 301) from old to new URLs.
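For illustration, a 301 from an old WordPress permalink to its new Hugo path can be expressed as an S3 routing rule on the bucket’s website configuration. The paths and bucket name here are hypothetical, and this is only a sketch of the mechanism, not my exact configuration:

```hcl
# Hypothetical example: permanently redirect an old WordPress-style
# URL to its new Hugo location using an S3 website routing rule.
resource "aws_s3_bucket" "site" {
  bucket = "example-bucket"

  website {
    index_document = "index.html"

    routing_rules = <<EOF
[{
  "Condition": { "KeyPrefixEquals": "2017/01/old-post/" },
  "Redirect": {
    "ReplaceKeyWith": "posts/old-post/",
    "HttpRedirectCode": "301"
  }
}]
EOF
  }
}
```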

Configuring S3 for static website hosting is fairly simple using Amazon’s web interface, but if you want to support the friendly URLs of Hugo or redirect old URLs to new ones, it takes some special configuration.

Without boring you with the details: you can access S3 content using at least three different hostnames, but only the website endpoint supports redirect rules, so ensuring you configure CloudFront to access content on the right origin is important and not intuitive.
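To make the pitfall concrete, here is roughly what the correct CloudFront origin looks like in Terraform. The bucket name and region are placeholders, and this is just a fragment of a full distribution, the kind of thing the module below takes care of for you:

```hcl
# Sketch only: point CloudFront at the S3 *website* endpoint
# (bucket.s3-website-<region>.amazonaws.com), which is the only
# endpoint that honors S3 redirect rules, not the REST endpoint
# (bucket.s3.amazonaws.com).
origin {
  domain_name = "example-bucket.s3-website-us-east-1.amazonaws.com"
  origin_id   = "s3-website"

  # Website endpoints speak plain HTTP only, so a custom origin
  # config is required rather than an s3_origin_config block.
  custom_origin_config {
    http_port              = 80
    https_port             = 443
    origin_protocol_policy = "http-only"
    origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
  }
}
```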

After going through the process and figuring out what I had done wrong multiple times, I realized this configuration should just be templatized so I (and others) don’t have to figure it out again.

You do not need to use Route 53 for DNS, but for this example, I will use it to keep everything on AWS and configurable by Terraform. The AWS services I’ll use are S3 for content hosting and web serving, CloudFront for a global CDN and SSL support on a custom domain name, AWS Certificate Manager for an SSL certificate, and Route 53 for DNS.

Rather than explain every step of the Terraform code, I’ll just provide it here with inline comments. If you’d like to learn more about Terraform and how I use it at work to manage Docker-based workloads, check out my three-part article series starting here.

Establishing Infrastructure with Terraform

I keep my Terraform code in a terraform/ folder in the same directory as my Hugo project. Here is an example of my folder structure:
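A layout along these lines works well. The .tf file names are my own convention rather than anything Terraform prescribes, since Terraform loads every .tf file in the directory regardless of name:

```text
my-site/
├── config.toml
├── content/
├── themes/
├── public/          # generated output from running `hugo`
└── terraform/
    ├── main.tf
    ├── outputs.tf
    ├── providers.tf
    └── variables.tf
```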

Here are the contents for each of the Terraform files:

# Change the bucket name to your own bucket. I recommend not using the same
# bucket as your website, to prevent accidental exposure of Terraform state.
# Also change profile to the AWS credentials profile you want to use.
terraform {
  backend "s3" {
    bucket         = "fillupio-terraform"
    key            = "terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-lock"
    profile        = "fillupio"
  }
}

# Create S3 bucket and CloudFront distribution using a Terraform module
# designed for S3/CloudFront configuration of a Hugo site.
module "hugosite" {
  source         = "fillup/hugo-s3-cloudfront/aws"
  version        = "1.0.1"
  aliases        = ["${var.aliases}"]
  aws_region     = "${var.aws_region}"
  bucket_name    = "${var.bucket_name}"
  cert_domain    = "${var.cert_domain_name}"
  cf_default_ttl = "0"
  cf_max_ttl     = "0"
}

# Create IAM user with limited permissions for Codeship to deploy the site to S3
resource "aws_iam_user" "codeship" {
  name = "${var.codeship_username}"
}

resource "aws_iam_access_key" "codeship" {
  user = "${aws_iam_user.codeship.name}"
}

data "template_file" "policy" {
  template = "${file("${path.module}/bucket-policy.json")}"

  vars {
    bucket_name = "${var.bucket_name}"
  }
}

resource "aws_iam_user_policy" "codeship" {
  policy = "${data.template_file.policy.rendered}"
  user   = "${aws_iam_user.codeship.name}"
}
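The bucket-policy.json template referenced above is not shown; a minimal version granting only what a deploy needs might look like the following. The exact action list is my assumption, so adjust it to your needs. The ${bucket_name} placeholder is filled in by the template_file data source:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::${bucket_name}"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:PutObjectAcl", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::${bucket_name}/*"
    }
  ]
}
```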

# Add record to Route 53
resource "aws_route53_record" "www" {
  zone_id = "${var.aws_zone_id}"
  name    = "${var.hostname}"
  type    = "CNAME"
  ttl     = "300"
  records = ["${module.hugosite.cloudfront_hostname}"]
}

output "codeship_access_key_id" {
  value       = "${aws_iam_access_key.codeship.id}"
  description = "AWS Access Key ID for Continuous Delivery user"
}

output "codeship_access_key_secret" {
  value       = "${aws_iam_access_key.codeship.secret}"
  description = "AWS Access Key Secret for Continuous Delivery user"
}

output "cloudfront_hostname" {
  value       = "${module.hugosite.cloudfront_hostname}"
  description = "CloudFront DNS hostname to create a CNAME to with DNS provider"
}

# Change profile to the AWS credentials profile you want to use.
provider "aws" {
  region  = "${var.aws_region}"
  profile = "fillupio"
}

variable "aliases" {
  type        = "list"
  default     = ["", ""]
  description = "List of hostname aliases"
}

variable "aws_region" {
  default = "us-east-1"
}

variable "bucket_name" {
  default = ""
}

variable "codeship_username" {
  default = "codeship"
}

variable "cert_domain_name" {
  default = "*"
}

variable "aws_zone_id" {
  default     = ""
  description = "AWS Route 53 Zone ID for DNS"
}

variable "hostname" {
  default     = ""
  description = "Full hostname for Route 53 entry"
}

Running it

The Terraform configuration demonstrated here uses S3 to store its state file and DynamoDB to provide a state-file locking mechanism. So before you can run Terraform, you need to create a separate S3 bucket and a DynamoDB table.

I do not recommend using the same S3 bucket for storing your Terraform state as well as your Hugo website. It would be fairly easy to mess up on permissions and expose your private Terraform state file to the world.

Create an S3 bucket that does not have static website hosting enabled, with default permissions of owner-full-control. I would also enable versioning for extra backups. In my example code above, I named it fillupio-terraform. Next, create a DynamoDB table named terraform-lock with a primary key named LockID:
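If you prefer to script this bootstrap step as well, a separate one-off Terraform configuration could create both resources. This is only a sketch: it must be run with local state (the S3 backend can’t exist before its own bucket does), and you can just as easily click through the AWS console instead:

```hcl
# One-time bootstrap, run with local state: creates the state bucket
# and lock table that the s3 backend above depends on.
resource "aws_s3_bucket" "terraform_state" {
  bucket = "fillupio-terraform"
  acl    = "private"

  # Versioning gives you extra backups of the state file.
  versioning {
    enabled = true
  }
}

resource "aws_dynamodb_table" "terraform_lock" {
  name           = "terraform-lock"
  hash_key       = "LockID"
  read_capacity  = 1
  write_capacity = 1

  attribute {
    name = "LockID"
    type = "S"
  }
}
```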

Create DynamoDB table

If you have not already installed Terraform, go ahead and do so. Next, open your Terminal and change into the terraform directory. Terraform needs to initialize its state before you can apply your configuration, so run terraform init:

$ terraform init
Initializing modules...
- module.hugosite

Initializing the backend...

Initializing provider plugins...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.aws: version = "~> 1.5"
* provider.template: version = "~> 1.0"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Next you can run terraform plan to ensure everything looks okay for applying (trimmed for brevity):

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.template_file.policy: Refreshing state...
data.template_file.bucket_policy: Refreshing state...
data.aws_acm_certificate.cert: Refreshing state...


An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_iam_access_key.codeship

  + aws_iam_user.codeship

  + aws_iam_user_policy.codeship

  + aws_route53_record.www

  + module.hugosite.aws_cloudfront_distribution.hugo

  + module.hugosite.aws_s3_bucket.hugo

Plan: 6 to add, 0 to change, 0 to destroy.

If you have any errors, go back and fix them. One potential cause is if you’ve used the name of an existing S3 bucket in the variable bucket_name or if the DNS record to be created already exists. Terraform can only create/modify/destroy resources it knows about, so if you try to create something that already exists it will error out as a conflict.

Assuming you have no errors at this point, you can run terraform apply to create everything (trimmed for brevity):

$ terraform apply
data.template_file.policy: Refreshing state...
data.template_file.bucket_policy: Refreshing state...
data.aws_acm_certificate.cert: Refreshing state...

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

Plan: 6 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_iam_user.codeship: Creating...
module.hugosite.aws_s3_bucket.hugo: Creating...
aws_iam_user.codeship: Creation complete after 0s (ID: codeship)
aws_iam_user_policy.codeship: Creating...
aws_iam_access_key.codeship: Creating...
aws_iam_access_key.codeship: Creation complete after 0s (ID: AKAI)
aws_iam_user_policy.codeship: Creation complete after 0s (ID: codeship:terraform-20180204174258111000000001)
module.hugosite.aws_s3_bucket.hugo: Creation complete after 5s (ID:
module.hugosite.aws_cloudfront_distribution.hugo: Creating...
module.hugosite.aws_cloudfront_distribution.hugo: Creation complete after 4s (ID: E1HGG)
aws_route53_record.www: Creating...
aws_route53_record.www: Creation complete after 2s (ID: abc123)

Apply complete! Resources: 6 added, 0 changed, 0 destroyed.


Outputs:

cloudfront_hostname =
codeship_access_key_id = abc123
codeship_access_key_secret = abc123

That’s it — within a few minutes of that process completing, you would be able to access your website at the domain you specified, assuming your files were there already. Now that the S3 bucket for hosting your site exists, we can set up our Codeship build and deployment pipeline to automate deployment of your site.

Automating Deployment with Codeship

Codeship’s basic continuous integration and deployment (CI/CD) service already has Go available, so installing and running Hugo in a CI fashion to build your site and then deploy it is pretty straightforward.

The first thing you need to do is link your Git repository with a new project on Codeship (if you don’t already have a Codeship account, go get one; it’s free and only takes a moment).

Connect GitHub repository

With Codeship Basic, you’ll be able to configure your “test” setup instructions as well as configure deployment options based on source code branches.

Continuous integration doesn’t always have to mean running unit tests, and in the case of a Hugo site, there are no tests to run. We’re simply building the static version of the site to be deployed and, if successful, deploying it.

So for the Setup Command section, we provide the commands to download and extract Hugo, and for Test Pipelines, we tell Hugo to build the site:

Configure your tests

Setup Commands:

curl -LO https://github.com/gohugoio/hugo/releases/download/v0.32.2/hugo_0.32.2_Linux-64bit.tar.gz
tar -vzxf hugo_0.32.2_Linux-64bit.tar.gz
chmod a+x hugo

Test Pipelines:

./hugo
With the site built, we can configure automated deployment to S3. Conveniently, Codeship has S3 integration built in.

Go to the Deployment settings page and add a pipeline for whatever branch you want to deploy from; for example, master. You’ll see a list of built-in deployment options — click on the S3 option and then fill in the form with the appropriate values:

Configure your deployment pipelines

Once saved, you should be all set. Make a change to your project source code, then commit and push the update. Now you can watch the build kick off on Codeship and see Hugo build the site and Codeship push it to S3. After that completes, you should be able to view your website at the URL you configured. Here is an example of a build and deployment for my own site:

Sample build and deploy


That’s it — anytime you want to update your website, you just commit and push your changes to Git, and Codeship will take care of the rest.

It’s also really convenient that you can use the web interface of GitHub or Bitbucket to edit your content, so you don’t even need to be at your own computer to author content on your site. Of course you can’t use the Hugo CLI to create new pages via GitHub or Bitbucket, but since pages are just files in a directory, you can manually create new files in the right directory and go from there.
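For instance, a new post is just a Markdown file (e.g., under content/posts/) with front matter at the top. The field names and values below are a minimal, hypothetical example; your theme may expect different fields:

```markdown
+++
title = "A Post Written from the GitHub Web UI"
date = "2018-02-04"
draft = false
+++

Regular Markdown content goes here.
```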

I hope this quick tutorial is able to help some of you spare yourselves the confusion of which S3 endpoint to use for redirect support and even get started with Codeship for automating your builds and deployments!
