Using Middleman to Deploy Static Pages to Amazon S3


We always want to make our system easier to use, so we recently launched our new documentation. You can read more about this in our last blog post. In this post we will go into the details of how we implemented our docs and where we deploy them.

Problems with Documentation and Current Infrastructure

In the past our documentation was part of our main application. Every change deployed to our main application needs to be thoroughly tested and has to go through code review. This made updating our documentation a major pain and very time-consuming, and it kept us from getting updates to our docs out quickly.

When we decided to redo our docs, we wanted to have them hosted externally. We had some experience with Jekyll in the past, so static pages looked like a great fit. We looked into Jekyll and Middleman, and Middleman seemed like the better option to us, as it is easier to extend.

Using Static Pages and Amazon S3

Static pages are easy to build and design. We have full control over the layout, and no complicated infrastructure is required to serve the pages. Delivering static pages from S3 is cheap and fast, which makes it a good option for our docs.

Deployment to S3 is very easy and everything can be included in a build on Codeship.

Setting up Middleman App

Our documentation is all open source on GitHub. For our configuration, take a look at our config.rb. We use the blog extension, as it enables us to set up proper permalinks and adds a few other nice options.
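
A minimal sketch of what the blog extension setup can look like in config.rb (the permalink and source patterns here are illustrative, not necessarily the ones from our repository):

# config.rb (sketch)
activate :blog do |blog|
  # Clean, stable URLs for each article
  blog.permalink = "{title}.html"
  # Where the article source files live
  blog.sources = "articles/{title}.html"
end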

To build the Middleman app we run

bundle install
bundle exec middleman build

which creates a build folder that will later be deployed to S3.

Deploying to Amazon S3

Of course, here at Codeship everything is continuously deployed, including our docs. We decided early on to go with static pages hosted on S3, as it is fast, easy, and cheap. We wanted to have the documentation available under our own domain, which meant we had to proxy the pages through our application. Here's how we did it.

Installing the Deployment Tools

As a first step we wanted to use the official awscli tool provided by Amazon. It can be installed through pip, so pip install awscli is all we needed.
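
For reference, the install step as shell commands (the version check is just a sanity check, not part of our actual build):

pip install awscli
aws --version   # make sure the CLI is on the PATH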


AWS Access Keys

To be able to interact with the AWS API we needed to set environment variables for our access keys. You can do this by simply exporting the following variables (or by setting them on the Environment tab on Codeship).
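
A minimal sketch, assuming the standard AWS CLI variable names (the values are placeholders):

export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_DEFAULT_REGION=<your-bucket-region>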


Removing Old Content from our Page

As a next step we wanted to make sure all docs were removed from the S3 bucket before we deployed the new ones, so we wouldn't run into any leftovers.

aws s3 rm s3://<your-bucket> --recursive

Now the bucket was ready for the new docs.

Deploying to the S3 Bucket

Now we were ready for the deployment step. The following command syncs the build folder Middleman created to S3.

aws s3 sync build s3://<your-bucket> --acl public-read --cache-control "public, max-age=86400"

As static pages typically don't change very often, we wanted to have caching in place so the page loads quickly. By setting the Cache-Control header we make sure all parts of our documentation are cached for one day by the browser. The --acl public-read flag makes sure all files in the bucket are accessible to everyone.
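
One quick way to verify the headers after a deploy is a HEAD request against the bucket (the URL below is a placeholder and assumes the objects are public, which --acl public-read takes care of):

curl -I http://<your-bucket>.s3.amazonaws.com/index.html
# The response should include: Cache-Control: public, max-age=86400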

Amazon S3 Static Website Hosting

Now that we had deployed to our S3 bucket, we needed to make sure it is treated as a static website by AWS. In our AWS Console we went to our S3 bucket and enabled static website hosting.

Settings for our Bucket on S3

You can read more about this in the AWS documentation about static website hosting.
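
The same setting can also be applied from the command line; here is a sketch with the AWS CLI (the bucket name is a placeholder and the error document name is an assumption):

aws s3 website s3://<your-bucket>/ --index-document index.html --error-document 404.html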

Setting up Application Forwarding

So now we had our documentation served from the S3 bucket's static website endpoint. Next we needed to make it available under /documentation on our own domain.

Choosing rack-reverse-proxy

In the end we decided to go with rack-reverse-proxy by Jon Swope. In our Rack configuration we added the following as the first Rack middleware, with the S3 website endpoint as the proxy target (shown as a placeholder below):

require 'rack/reverse_proxy'

use Rack::ReverseProxy do
  # Forward /documentation/* requests to the S3 website endpoint and
  # pass the rest of the path through as $1.
  reverse_proxy /^\/documentation\/?(.*)$/, 'http://<your-s3-website-endpoint>/$1'
end

This simply forwards every request under /documentation to the S3 bucket and returns the result.
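
For context, this is roughly where the middleware sits; a sketch assuming a Rails-style config.ru (the endpoint is again a placeholder):

# config.ru (sketch)
require ::File.expand_path('../config/environment', __FILE__)
require 'rack/reverse_proxy'

# Register the proxy before the application so /documentation requests
# are answered from S3 and never hit the app itself.
use Rack::ReverseProxy do
  reverse_proxy /^\/documentation\/?(.*)$/, 'http://<your-s3-website-endpoint>/$1'
end

run Rails.application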

Set Asset Host to CDN for HTTPS

We didn't want assets to be routed through our application, so we set up a CloudFront distribution to serve the files from the S3 bucket.


It has the added benefit that it works over HTTPS as well. In our config.rb we set the asset host to the CloudFront domain:

activate :asset_host, host: 'https://<your-cloudfront-domain>'

Using Load Tests

To make sure our reverse proxy works well and doesn't take down our application, we ran a couple of load tests against it. The results were great, especially considering that those requests would be cached the second time.
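
A quick way to run a comparable check is with a generic HTTP benchmarking tool, for example ApacheBench (not necessarily the tool we used; the URL is a placeholder):

# 1,000 requests, 50 concurrent, against the proxied docs
ab -n 1000 -c 50 https://<your-domain>/documentation/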

Conclusions

Using S3 for static pages made our documentation a lot better. We can change it easily at any time while paying next to nothing for it. We will definitely use S3 in the future for other pages we want to deploy.



Join the Discussion

Leave us some comments on what you think about this topic or if you'd like to add something.

  • Giovani

    Very interesting information. Thank you!
    What about this blog? How does it work? WordPress?

    • Florian Motlik @codeship

      Hi Giovani,

      Our blog is hosted with WPEngine (so WordPress), as we wanted the power of WordPress and for our marketing guys to have an easier time than with Middleman or other tools.


  • Sam Marley-Jarrett

    You should consider skipping the CloudFront implementation and going straight to serving from S3 if you're just going to proxy it anyway, since the proxy negates the CDN benefit.

  • Ryan Hubbard

    You guys have a great blog, and I appreciate you and CodeShip sharing.

    There's a 404 on the link to the CodeShip new documentation post in the first paragraph, and I wasn't able to find the post. I'd be interested in reading it.

  • Clemens Helm

    Have you discussed using Why didn’t you use it?

  • Nazar I

    Good article, thanks for posting.
    You have a wrong link to in your Conclusions section. It actually points to Codeship.

    • Manuel Weiss @codeship

      Thank you Nazar! I updated the link.

  • skeller88

    This deployment process will result in 1) a period of downtime when the old docs are removed and the new docs haven’t been uploaded, and 2) potential downtime or broken links if the deployment of the new docs doesn’t work completely. Unless there’s some logic in the CLI `sync` command that rolls back the sync if syncing of any file fails?

    Did you explore uploading to a new folder in the bucket, pointing the website to that new folder, and then rewriting the old folder and pointing the website back to it? I'm trying to figure that out right now to make deployments fail-safe.

    • Vo Nguyen Thien An

      I also find downtime to be a problem when using an S3 static website; it seems designed for very static sites.
      Instead of purging all old content and pushing the new files, I think we could use `s3cmd sync --delete-removed …`, and put the S3 bucket in the same region as the build server to reduce sync time.
      For rolling back, it seems we have to cook up our own solution.