Using Docker for Rails Development

Over the first weekend in October, more than two hundred developers gathered in Ghent for ArrrrCamp, a serious-yet-pirate-themed Ruby conference. I was happy to deliver a talk on using Docker for Rails development. Below is a condensed version of the talk, which covers an introduction to containerization and the Docker ecosystem, as well as some examples of running Rails applications in Docker containers.

If you want to follow along with the code examples, you can find the source on GitHub; the slides and a video recording of the talk are also available online.

Containerization

Chances are that if you’ve just started working with containers, you’ve worked with them via Docker. Containerization has become more widely used over the last two years because Docker has removed barriers and made it much easier to integrate containerization into your development workflow.

One important thing to remember is that Docker != containers, and in fact there are other ways to use containers than with Docker. But since Docker makes it so easy, it’s almost a no-brainer to use their tools.

One of the main benefits of containers is that they're nimble: they boot quickly, with relatively little overhead in terms of time and disk space. Another benefit is their limited scope.

A container is a self-contained execution environment, meaning that all of a container’s dependencies are contained within the container itself. This gives each of your services the autonomy to use the appropriate toolset for its role in your application without worry of conflicting with another component. You’re saved from dependency spaghetti since the dependencies only exist within the container.

Very simply put, containers are just a layer of virtualization. They don’t take the place of virtual machines, and you can even use VMs and containers together.

The point is not to stop using virtual machines altogether, but rather to increase service density by adding containers to the mix. Instead of running three services on three virtual machines, you can run the same three services — in containers — on one virtual machine. This means less money and less time to maintain your infrastructure since there are fewer VMs.

Getting started with containers may initially seem more complex, but they greatly reduce the amount of time and space needed to run your application. Spend less time provisioning, rebooting, and fighting with dependencies, and more time building what you want.

Installing Docker

If you run Linux, install via the official packages.

If you’re on OS X or Windows, you can install Docker via Docker Toolbox. It might help to get acquainted with the Docker tools by reading about how the tools in Docker Toolbox work together.

Note that if you are using Docker Toolbox, you’ll need to run eval $(docker-machine env machine_name) in order to run commands from your terminal.

The Docker Ecosystem

What started as a small project has grown to be a very powerful ecosystem of many different types of tools.

Docker focuses on three main functions: Build, Ship, and Run. The ‘Build’ part is what we’ll focus on for now, as it’s likely to be of the most immediate concern for you as a developer.

The first thing you'll need to get acquainted with as you start to run Dockerized services is the Docker image. An image and a container are two different things: a container is a running instance of an image. You can think of an image as being like a class, and a container as being like an instance of that class.

You can find public images on the Docker Hub, which houses Docker’s public registry. There are over 15,000 images that you can pull down and use in your own projects. You can have private repos on the hub as well. If you’re concerned about proprietary code, you can run your own registry (and you can find the registry image on the Docker Hub).
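
For example (this isn't from the talk, and the container name is just a placeholder), you could pull the official registry image and run a private registry locally:

docker pull registry:2
docker run -d -p 5000:5000 --name my_registry registry:2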

There are a few different styles of images available on the Docker Hub. The first is what I’ve called a “service image,” which is an image that you can pull down, run in a container, and have a working service to consume. A good example of this type of image is a database. You can run the database in a container and start working with it.

Another type of image is a “project base image.” These are meant to set up an environment in a container that can then run your own code. Language images (like the Ruby or Golang images) fall into this category. Just running the image won’t get you very far without adding your own code to it. It’s just meant to be a base for your own project.

You can also find official images on the Hub. These are images that are maintained by either companies or open-source communities, and they’re a good starting point for your project.

To pull down an image from the Docker Hub, you can say docker pull image_name.
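
For example, to pull the official Postgres image, or the Rails base image we'll use later on:

docker pull postgres
docker pull rails:4.2.4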

Building Your Own Docker Images

Of course, you can build your own Docker images. To do this, you need a Dockerfile. You can find the Dockerfile reference within Docker’s documentation.

Here’s a simple Rails Dockerfile:

FROM rails:4.2.4
MAINTAINER Laura Frank <laura@codeship.com>
RUN mkdir -p /var/app
COPY . /var/app
WORKDIR /var/app
RUN bundle install
CMD rails s -b 0.0.0.0

Once I’ve written my Dockerfile, I can build it by saying:

docker build -t image_name . # don’t forget the dot!

Each of the uppercase words in the Dockerfile is an instruction. In this Dockerfile, we're using a Rails base image and then copying an existing Rails application (in our current directory) to the newly created /var/app directory in the container. This is a static copy; once the copy is complete, the only way to update code within the container is to rebuild the image. When I run a container with this image, it will start with the CMD, or command, of rails s -b 0.0.0.0.

To see the images you have available on your Docker host — either from docker pull or from docker build — run docker images.
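
And to run a container from the image we just built, a minimal example looks like this (the -p port flag is explained in more detail below):

docker run -p 3000:3000 image_name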

Building a Rails Application with Docker

We have a few goals as we start a Rails application with Docker:

  • view the app running in the browser
  • edit files in a local environment and see the changes
  • run rake tasks like migrations
  • see log output

Basically, we want to take advantage of the speed and isolation that Docker provides, but still have a development environment that feels natural to us.

In the previous Dockerfile example, we would have to rebuild the image every time we changed the code in order to see the changes running in a container. This is a huge pain, and you can get around it using a volume mount.

Instead of statically copying all of the code inside the container, you’ll mount your working directory as a volume inside the container (think of it like syncing folders), and then you can edit code and see the changes running inside the container without having to rebuild the image.

Here’s what a Dockerfile and docker run string would look like when using a mounted volume for your application directory.

FROM rails:4.2.4
MAINTAINER Laura Frank <laura@codeship.com>
RUN mkdir -p /var/app
COPY Gemfile /var/app/Gemfile
COPY Gemfile.lock /var/app/Gemfile.lock
WORKDIR /var/app
RUN bundle install
CMD rails s -b 0.0.0.0

Instead of copying everything, we’ll just copy the Gemfile and Gemfile.lock, then bundle install.

We still have to get the rest of the code inside the container, though. This is done at runtime with a -v flag.

docker run -v local/project/path:/var/app -p 3000:3000 my_image_name

Full docker run reference is available in the Docker docs.

A couple of important flags that you'll use pretty frequently:

  • -p 3000:3000 creates a port binding rule; the full form is -p ip:hostPort:containerPort
  • -v local/path:/path/in/container mounts a volume; the full form is -v hostPath:containerPath

That docker run string is a little long, and you may not want to type it each time you run your application.

Docker Compose is an application templating tool that allows you to specify all of your application's configuration in a YAML file. Then, instead of running your application directly with Docker, you can simply run it with docker-compose up.

A Rails container identical to the one we ran above with docker run looks like this with Docker Compose:

web:
  build: .
  ports:
    - '3000:3000'
  volumes:
    - 'local/project/path:/var/app'

This instructs Docker to build the image from the Dockerfile in the directory, and it also specifies the port mapping and volume mounting rules just as in the previous docker run string. This application template is stored in a file called docker-compose.yml. Docker Compose is especially helpful when your application has more than one container.
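
With that file in place, starting the app and checking its log output (one of the goals from the list above) is just a couple of commands, for example:

docker-compose up       # runs in the foreground and streams the logs
docker-compose up -d    # or run in the background...
docker-compose logs     # ...and view the logs separately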

Let’s create a Rails application with an external Postgres database container.

db:
  image: postgres

web:
  build: .
  ports: 
    - '3000:3000'
  volumes:
    - 'local/project/path:/var/app'
  command: rails s -b '0.0.0.0'
  links:
    - db

Running docker-compose up will pull down the Postgres image and run it in a container, as well as run the Rails application (which we’ve named ‘web’) in the same way as the previous examples.

We've also declared the web container's dependency on the db container by specifying a link. This means that the web container will wait to run until the db container is running, and linking also adds some special environment variables (like the IP address of the container running the database) and an entry in the web container's /etc/hosts file.
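
Just for illustration (the address below is made up), a link with the alias db to a Postgres container exposing port 5432 gives the web container environment variables along these lines, plus a db entry in its /etc/hosts file:

DB_PORT=tcp://172.17.0.5:5432
DB_PORT_5432_TCP_ADDR=172.17.0.5
DB_PORT_5432_TCP_PORT=5432
DB_PORT_5432_TCP_PROTO=tcp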

In this example, you'll still need to monkey with the database.yml file in the same way you would if you were running this outside of a Docker container. You can see an example of this in the GitHub repo.
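
As a rough sketch (not necessarily the version in the repo, and with a placeholder database name), the development section of config/database.yml just needs to use the link alias db as its host, along with the postgres image's default user:

development:
  adapter: postgresql
  encoding: unicode
  host: db
  username: postgres
  database: myapp_development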

Running One-off Tasks with Docker Compose

What happens when you’re developing and need to run a task against one of your services running in a container? In the Rails world, a common example of this would be running a database migration.

Luckily, you can use docker-compose run to execute one-off commands within a container. To run rake db:migrate in the Rails container, say docker-compose run web rake db:migrate. You’ll see the migration running, and then you can continue on your merry way.

Note that you can only run docker-compose commands in the directory with the corresponding docker-compose.yml file. To run the above command, you’ll probably have to jump into a new terminal tab (and run eval $(docker-machine env default) if you’re using Docker Toolbox).
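
A few other one-off commands follow the same pattern (the service name web is the one defined in docker-compose.yml):

docker-compose run web rake db:create
docker-compose run web rails console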

Ship It!

If your application is running in a container, deploying it to production will involve building a new image and then distributing the new image to your hosts.

Codeship supports Docker, and you can learn more about it on our blog. Happy shipping!


Join the Discussion

  • In this simple example, the server in the container is the one started by `rails s -b 0.0.0.0`, which is Webrick by default. Of course, it could be configured to use something like Unicorn instead.

    I was wondering, being a “Rails in Docker” newbie, does it still make sense to have a fancy server (such as Unicorn) in your production docker container? Or is Webrick good enough, and you then handle availability and rolling restarts at an upper level, by having several docker instances available behind a reverse proxy/load balancer for example?

    I could not easily find any best practices about this on the Internet, and this has been my main concern that prevented me from going from “Docker+Rails as a toy on my local environment” to “having production-grade Docker+Rails on our servers right now”. I am interested in your thoughts on this!

    • Laura Frank

      Hey Jerome, you’re right — using Webrick for a heavy production load is not a super great idea. You could certainly use a different server, but you’ll probably have a better time following the reverse proxy/load balancer pattern you mentioned. At a really large scale, there are orchestration services like Kubernetes that can help take care of the high availability part for you. On a smaller scale, check out https://blog.codeship.com/deploying-docker-rails-app/ for an example using nginx.

  • Nice article! You should check “onbuild” flags / tags. Much smaller images, faster work…

    – rails docker onbuild -> https://github.com/docker-library/rails/blob/9fb5d2b7e0f2e7029855028e07e86ab7ec54abaa/onbuild/Dockerfile
    – more details here -> https://hub.docker.com/_/rails/

  • Vikram

    I have tried this, and it gets stuck on the problem of storing data on a host like Windows through a MySQL or Postgres container.

  • We were discussing this just last night at our monthly Rails.MN meetup!! Great timing. :)

  • Christoph Blank

    I noticed that the Gemfile is always copied in the Dockerfile (also on https://docs.docker.com/compose/rails/) but isn’t this obsolete when mounting the volume?

    • Laura Frank

      Hi Christoph, in order to start the container with “rails s,” you need to run `bundle install` first. Since the Gemfile/Gemfile.lock don’t frequently change, it’s easiest and fastest to `COPY` them into the image so they can be cached, and then run `bundle install`, and the result of the bundle will also be stored in a layer. If you waited until runtime to mount the Gemfile, you would have to manually run the `bundle install`, then manually run the start command. You would also lose the benefit of caching.

      • Christoph Blank

        Thanks a lot Laura, I understood now :)

  • limitarc

    Excellent article! I was just getting started with Docker containers for Rails dev and this helped a lot.

    The only hiccup I had was mounting the volume, as I am using Docker Machine on Windows. It wouldn't find the Gemfile, so I did some digging around and found I needed to move my project folder. It would only mount from within C:\Users for me. Once I stuck my project in C:\Users, everything worked. My projects are normally stored on another drive.

    Hopefully this info will be useful to others having the issue.

  • Marvin

    Great article, thanks. But I have some question about hosting a ruby on rails application.

    Which is the best rails server to host in production environment with docker?

    – Nginx + Puma
    – Nginx + Unicorn
    – Nginx + Passenger

    Do you have any examples?

  • blackjid

    Thanks for the article! Have you had any problems with speed when using volumes to develop the Rails app? On OS X, VirtualBox shared folders are pretty slow. Do you have any alternative so the container sees the changes in the app?

  • Chen Kinnrot

    Hi,

    I tried to go through your workflow with my app. The docker build passes, but when I try running the server I'm getting:

    /usr/local/bundle/gems/bundler-1.11.2/lib/bundler/spec_set.rb:94:in `block in materialize': Could not find CFPropertyList-2.3.1 in any of the sources (Bundler::GemNotFound)
    from /usr/local/bundle/gems/bundler-1.11.2/lib/bundler/spec_set.rb:87:in `map!'
    from /usr/local/bundle/gems/bundler-1.11.2/lib/bundler/spec_set.rb:87:in `materialize'
    from /usr/local/bundle/gems/bundler-1.11.2/lib/bundler/definition.rb:137:in `specs'
    from /usr/local/bundle/gems/bundler-1.11.2/lib/bundler/definition.rb:182:in `specs_for'
    from /usr/local/bundle/gems/bundler-1.11.2/lib/bundler/definition.rb:171:in `requested_specs'
    from /usr/local/bundle/gems/bundler-1.11.2/lib/bundler/environment.rb:18:in `requested_specs'
    from /usr/local/bundle/gems/bundler-1.11.2/lib/bundler/runtime.rb:13:in `setup'
    from /usr/local/bundle/gems/bundler-1.11.2/lib/bundler.rb:92:in `setup'
    from /usr/local/bundle/gems/bundler-1.11.2/lib/bundler/setup.rb:18:in `'
    from /usr/local/lib/ruby/site_ruby/2.2.0/rubygems/core_ext/kernel_require.rb:126:in `require'
    from /usr/local/lib/ruby/site_ruby/2.2.0/rubygems/core_ext/kernel_require.rb:126:in `rescue in require'
    from /usr/local/lib/ruby/site_ruby/2.2.0/rubygems/core_ext/kernel_require.rb:39:in `require'
    from /usr/local/bundle/bin/rails:14:in `'

    The bundler completed successfully on docker build.

    • Chen Kinnrot

      Looks like I just needed to update some gems on my local machine…

  • Really amazing tutorial, thank you Laura.
    Congratulations :)

    http://www.onebitcode.com

  • dorelly2

    Very bad explanation. Everything remained unclear, and nothing works. Lady, you definitely should not write any articles.