Running Your Phoenix Tests Using Docker


UPDATE: As of January 1st, 2017, we rebranded our hosted CI platform for Docker from “Jet” to what is now known as “Codeship Pro”. Please be aware that the name “Jet” is now used only for our local development CLI tool. The Jet CLI is used to locally debug and test builds for Codeship Pro, as well as to assist with several important tasks like encrypting secure credentials.

One of the many benefits of using Docker is that it can make testing your applications much easier. Want to test out your Phoenix app on the newest version of Elixir? Easy — just change one line in the Dockerfile.

In this post, we’ll walk through setting up a Phoenix project with Docker, which will allow us to run our application’s tests using Docker. We’ll be working with just a basic, scaffolded Phoenix app, as the specifics of the app are not important for the purposes of this post. Extrapolating out to a real-world app should be straightforward, so we’ll be focusing on the Docker side of things.

This post assumes a basic familiarity with Phoenix as well as Docker. You will need to have Docker running on your development machine, but that’s it. Everything else will be taken care of within the context of Docker.

Dockerizing Phoenix

We’ll start from just an empty directory here. Go ahead and create the directory and move into it: mkdir phoenix_docker && cd phoenix_docker.

Now add a basic Dockerfile with just the following contents:

FROM elixir:onbuild

MAINTAINER Your Name <your.email@example.com>

RUN mix local.hex --force

RUN mix archive.install --force https://github.com/phoenixframework/archives/raw/master/phoenix_new.ez

WORKDIR /app

This Dockerfile builds off of the official Elixir base image. We then install the Hex package manager and Phoenix archive locally before setting the working directory to /app.

Next let’s go ahead and set up Docker Compose. Add a docker-compose.yml file with the following contents:

web:
  build: .
  ports:
    - "4000:4000"
  command: mix phoenix.server
  environment:
    - MIX_ENV=dev
    - PORT=4000
  volumes:
    - .:/app

This will build from our Dockerfile and run the command mix phoenix.server by default. However, we can also specify a different command to run in the container. This is exactly what we will do in order to bootstrap our Phoenix app:

docker-compose run --rm web mix phoenix.new . --app phoenix_docker --no-brunch

This will create a new Phoenix application in the working directory. Furthermore, since we specified a volume in the docker-compose.yml, this generated app will persist to our local phoenix_docker directory. That is, we were able to generate our Phoenix application without needing to download and install anything other than Docker on our local machine. This is a great start, but in order to actually run the application, we are going to need a database.

Adding Postgres

Phoenix works with PostgreSQL by default, so let’s get that up and running. Docker Compose makes it super easy to set up Postgres and link our application to it.

First, update the docker-compose.yml to include the db service:

web:
  build: .
  ports:
    - "4000:4000"
  command: mix phoenix.server
  environment:
    - MIX_ENV=dev
    - PORT=4000
  volumes:
    - .:/app
  links:
    - db
db:
  image: postgres
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_HOST=db

Notice that we added the db link in our web service. This makes Postgres reachable from our app at the hostname db. Linking also injects the db container’s POSTGRES_* environment variables into the web service under a DB_ENV_ prefix (for example, DB_ENV_POSTGRES_USER). Therefore, we can now update our dev Phoenix database configuration in config/dev.exs with the following:

config :phoenix_docker, PhoenixDocker.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: System.get_env("DB_ENV_POSTGRES_USER"),
  password: System.get_env("DB_ENV_POSTGRES_PASSWORD"),
  hostname: System.get_env("DB_ENV_POSTGRES_HOST"),
  database: "phoenix_docker_dev",
  pool_size: 10
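
Note that the DB_ENV_* variables only exist inside the containers that Compose manages. If you occasionally want to run mix tasks directly on your host as well, one option is to fall back to local defaults when those variables are unset. The following is just an optional sketch, assuming a local Postgres with the stock postgres/postgres credentials:

# config/dev.exs, optional variant with host fallbacks (assumes a local Postgres
# with the default postgres/postgres credentials when the DB_ENV_* variables
# injected by the Docker link are not set)
config :phoenix_docker, PhoenixDocker.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: System.get_env("DB_ENV_POSTGRES_USER") || "postgres",
  password: System.get_env("DB_ENV_POSTGRES_PASSWORD") || "postgres",
  hostname: System.get_env("DB_ENV_POSTGRES_HOST") || "localhost",
  database: "phoenix_docker_dev",
  pool_size: 10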

Now let’s install dependencies:

docker-compose run --rm web mix deps.get

compile:

docker-compose run --rm web mix compile

create the database:

docker-compose run --rm web mix ecto.create

and migrate the database:

docker-compose run --rm web mix ecto.migrate

Note that we use docker-compose run --rm rather than docker-compose exec for these one-off commands: each one starts a throwaway container (with the db link available), which works even though the web service isn’t running yet. Finally, let’s start the web service:

docker-compose up -d web

You should now be able to access the running application on port 4000! Let’s move on to testing the application with Docker.


Running Your Tests

Now that we have the basic Docker setup for our Phoenix application, running tests is easy. The quickest way to get started is to just run tests the same way you would locally:

docker-compose run --rm -e "MIX_ENV=test" web mix test

This works perfectly fine but isn’t terribly extensible. Another option is to set up a dedicated test service which we will do here. Update docker-compose.yml to the following:

web:
  build: .
  ports:
    - "4000:4000"
  command: mix phoenix.server
  environment:
    - MIX_ENV=dev
    - PORT=4000
  volumes:
    - .:/app
  links:
    - db
db:
  image: postgres
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_HOST=db
test:
  image: phoenixdocker_web
  command: mix test
  environment:
    - MIX_ENV=test
  volumes_from:
    - web
  links:
    - db

The test service reuses the phoenixdocker_web image that Compose builds for the web service; by default, Compose names built images after the project (the directory name with non-alphanumeric characters stripped, so phoenix_docker becomes phoenixdocker) followed by the service name. If the image doesn’t exist yet, running docker-compose build web will create it. Then update the test Phoenix database configuration in config/test.exs to be:

config :phoenix_docker, PhoenixDocker.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: System.get_env("DB_ENV_POSTGRES_USER"),
  password: System.get_env("DB_ENV_POSTGRES_PASSWORD"),
  hostname: System.get_env("DB_ENV_POSTGRES_HOST"),
  database: "phoenix_docker_test",
  pool: Ecto.Adapters.SQL.Sandbox
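
As an aside, the pool: Ecto.Adapters.SQL.Sandbox setting works together with the sandbox mode that the Phoenix generator configures in test/test_helper.exs. That file should already contain something like the following (shown here for reference only; you should not need to add it yourself):

# test/test_helper.exs, as generated by mix phoenix.new
ExUnit.start

# Put the repo's pool into manual sandbox mode; the generated ConnCase/ModelCase
# setup then checks out a connection per test so database changes are rolled
# back between tests
Ecto.Adapters.SQL.Sandbox.mode(PhoenixDocker.Repo, :manual)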

With that, we can now run our tests by simply running:

docker-compose run --rm test

If you want to test a specific file, you can just override the command:

docker-compose run --rm test mix test test/controllers/page_controller_test.exs

Running our tests in this way has the added benefit that if we need some test-only dependencies, we don’t need to add them to our web service. This can be especially useful for something like browser testing.

Where to Go From Here

So far, we’ve set up a Phoenix application with Docker that uses a Postgres database. We use Docker to run the tests for this application in their own service. The application in this walkthrough is very simple, but extending this process to a more complicated app should require very little extra work.

Now that we are using Docker to run our Phoenix tests, extending our test suite to include other types of tests is very easy. For example, we can run acceptance tests using something like Hound together with PhantomJS. All we need to do is install PhantomJS in our Dockerfile (perhaps creating a test-specific Dockerfile) and add Hound to our Phoenix dependencies in mix.exs.
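
To make that concrete, here is a rough sketch of what such an acceptance test could look like. It is not part of the scaffolded app; it assumes Hound has been added to mix.exs (for example, {:hound, "~> 1.0", only: :test}), that config/test.exs sets config :hound, driver: "phantomjs" and runs the endpoint with server: true, and that test/test_helper.exs calls Application.ensure_all_started(:hound):

# test/acceptance/home_page_test.exs, a minimal (hypothetical) Hound test
defmodule PhoenixDocker.HomePageTest do
  use ExUnit.Case, async: false
  use Hound.Helpers

  # Starts a browser session before each test and ends it afterwards
  hound_session()

  test "visitor sees the default Phoenix welcome page" do
    # 4001 is the default Phoenix test port from config/test.exs
    navigate_to("http://localhost:4001/")
    assert page_source() =~ "Welcome to Phoenix!"
  end
end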

Another natural next step is to utilize Codeship’s Docker-based CI platform, Codeship Pro (formerly Jet). With the start we have here, it’s very easy to get up and running. You can read more about it here and here.


Join the Discussion


  • ciastek

    Why is it necessary to set MIX_ENV when running tests? Phoenix’s guide (http://www.phoenixframework.org/docs/introduction) simply runs mix test.
    The final docker-compose.yml declares the test service’s image as phoenixdocker_web, but no such image is available on hub.docker.com.

    • I believe `mix test` will implicitly set the MIX_ENV to ‘test’. So it isn’t strictly necessary there. However, we will want to set it explicitly here in case we want to run something other than the default `mix test` command.

      phoenixdocker_web is actually referencing the ‘web’ service in the same docker-compose.yml file. Sorry for the confusion there!

      • Rob Marscher

        And the name of the image and containers with docker-compose is based on the parent directory name by default. Since the directory was named `phoenix_docker`, the image name was prefixed with `phoenixdocker_`. If you run `docker-compose build web` or run the web container by itself first, that should make the image available for the test container to use. Alternatives to naming your parent directory that way are to use the `-p` switch of the `docker-compose` CLI or to set the COMPOSE_PROJECT_NAME environment variable.

  • Yevhenii Kurtov

    Hello Jason, it’s a nice article!
    A good option would be to use a shared volume to cache dependencies and build artifacts. Our average speed gain after doing that is around 2 minutes.

  • Rob Marscher

    I think you probably want to add the `--rm` flag to the `docker-compose run` commands. For example, `docker-compose run --rm test`. Otherwise, those containers will remain on the docker host in an exited state. Another option would be to use `docker-compose exec` if you already have an instance of the service running.

    • You are absolutely correct Rob. Thanks for pointing that out! Just updated the post.

  • Donald Piret

    Doesn’t this mean that your tests will be run every time you do a `docker-compose up` though? It doesn’t look like compose has a way to select which services you want to start automatically when using the `up` command, so we’ve had to work with a separate compose file just for tests. Is there a better way of doing this?

  • zubair alam

    When does the Phoenix Framework make more sense than traditional Django/Rails/Laravel?

    • Atinder

      When you need a framework like Rails (dev happiness) but 10x faster, more concurrent, and realtime.

      • zubair alam

        Those concurrency features can be built as independent microservices using core Erlang/Elixir libraries or Golang, so why base the whole application architecture on Phoenix? Keeping those pieces separate would make devs happier with a more manageable application architecture.

  • Tim Pelgrim

    I have mostly the same setup as in your example, except that I run an umbrella app with 2 databases. The (development) server runs fine, but every way in which I try to run `mix test` throws a lot of errors like these:

    18:04:25.611 [error] Postgrex.Protocol (#PID) failed to connect: ** (DBConnection.ConnectionError) tcp connect: connection refused – :econnrefused

    Any idea what I might be doing wrong?

  • wdiechmann

    lovely write-up

    but :(

    1) docker-compose up -d web will not run before you have mix’ed deps.get

    2) docker-compose run (not exec) --rm web mix deps.get

    3) docker-compose run --rm web mix compile
    (which, on macOS with the Docker dmg, halts at Compiled src/fs.erl)

    dang! I really was looking forward to trying out ‘install-less’. Well, more or less ;)