Using Docker Compose for NodeJS Development


Docker is an amazing tool for developers. It allows us to build and replicate images on any host, removing the inconsistencies of dev environments and reducing onboarding timelines considerably.

To provide an example of how you might move to containerized development, I built a simple todo API with NodeJS, Express, and PostgreSQL, using Docker Compose for development, testing, and eventually in my CI/CD pipeline.

In a two-part series, I will cover the development and pipeline creation steps. In this post, I will cover the first part: developing and testing with Docker Compose.

Requirements for This Tutorial

This tutorial requires you to have a few items before you can get started.

The todo app here is essentially a stand-in, and you could replace it with your own application. Some of the setup here is specific to this application, and the needs of your application may not be covered, but it should be a good starting point for you to get the concepts needed to Dockerize your own applications.

Once you have everything set up, you can move on to the next section.

Creating the Dockerfile

At the foundation of any Dockerized application, you will find a Dockerfile. The Dockerfile contains all of the instructions used to build out the application image. You could set this up by installing NodeJS and all of its dependencies yourself; however, the Docker ecosystem has an image repository (the Docker Store) with a NodeJS image already created and ready to use.

In the root directory of the application, create a new Dockerfile.

/> touch Dockerfile

Open the newly created Dockerfile in your favorite editor. The first instruction, FROM, will tell Docker to use the prebuilt NodeJS image. There are several choices, but this project uses the node:7.7.2-alpine image. For more details about why I’m using alpine here over the other options, you can read this post.

FROM node:7.7.2-alpine

If you run docker build ., you will see something similar to the following:

Sending build context to Docker daemon 249.3 kB
Step 1/1 : FROM node:7.7.2-alpine
7.7.2-alpine: Pulling from library/node
709515475419: Pull complete
1a7746e437f7: Pull complete
662ac7b95f9d: Pull complete
Digest: sha256:6dcd183eaf2852dd8c1079642c04cc2d1f777e4b34f2a534cc0ad328a98d7f73
Status: Downloaded newer image for node:7.7.2-alpine
 ---> 95b4a6de40c3
Successfully built 95b4a6de40c3

With only one instruction in the Dockerfile, this doesn’t do much, but it does let you see the build process at its simplest. At this point, you have an image created, and running docker images will show you the images you have available:

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
node                7.7.2-alpine        95b4a6de40c3        6 weeks ago         59.2 MB

The Dockerfile needs more instructions to build out the application. Currently it’s only creating an image with NodeJS installed, but we still need our application code to run inside the container. Let’s add some more instructions to do this and build this image again.

This particular Dockerfile uses RUN, COPY, and WORKDIR. You can read more about those on Docker’s reference page to get a deeper understanding.

Let’s add the instructions to the Dockerfile now:

FROM node:7.7.2-alpine

WORKDIR /usr/app

COPY package.json .
RUN npm install --quiet

COPY . .

Here is what is happening:

  • Set the working directory to /usr/app
  • Copy the package.json file to /usr/app
  • Install node_modules
  • Copy all the files from the project’s root to /usr/app
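Before that last COPY, it is worth adding a .dockerignore file so the build context stays small and host-only artifacts stay out of the image. The entries below are a suggested sketch, not taken from the repo:

```
node_modules
npm-debug.log
.git
```

With node_modules ignored, the final COPY cannot clobber the modules that were just installed inside the image.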

You can now run docker build . again and see the results:

Sending build context to Docker daemon 249.3 kB
Step 1/5 : FROM node:7.7.2-alpine
  ---> 95b4a6de40c3
Step 2/5 : WORKDIR /usr/app
 ---> e215b737ca38
Removing intermediate container 3b0bb16a8721
Step 3/5 : COPY package.json .
 ---> 930082a35f18
Removing intermediate container ac3ab0693f61
Step 4/5 : RUN npm install --quiet
 ---> Running in 46a7dcbba114


 ---> 525f662aeacf
 ---> dd46e9316b4d
Removing intermediate container 46a7dcbba114
Step 5/5 : COPY . .
 ---> 1493455bcf6b
Removing intermediate container 6d75df0498f9
Successfully built 1493455bcf6b

You have now successfully created the application image using Docker. Currently, however, our app won’t do much since we still need a database, and we want to connect everything together. This is where Docker Compose will help us out.

Docker Compose Services

Now that you know how to create an image with a Dockerfile, let’s create an application as a service and connect it to a database. Then we can run some setup commands and be on our way to creating that new todo list.

Create the file docker-compose.yml:

/> touch docker-compose.yml

The Docker Compose file defines the services and runs the containers based on this configuration. We are using compose file version 2 syntax, and you can read up on it on Docker’s site.

An important concept to understand is that Docker Compose spans “buildtime” and “runtime.” Up until now, we have been building images using docker build ., which is “buildtime.” This is when our images are actually built. We can think of “runtime” as what happens once our containers are created from those images and are being used.

Compose triggers “buildtime” — instructing our images to build — but it also populates data used at “runtime,” such as env vars and volumes. This is important to be clear on. For instance, when we add things like volumes and command, they will override the same things that may have been set up via the Dockerfile at “buildtime.”
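To make that override behavior concrete, here is a hypothetical pair, not taken from this project’s files: a Dockerfile default and the Compose entry that wins over it at runtime:

```yaml
# Suppose the Dockerfile ended with:
#   CMD ["npm", "start"]
# Then this service entry replaces that default whenever the container runs:
services:
  web:
    build: .
    command: npm run dev   # overrides the Dockerfile's CMD at runtime
```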

Open your docker-compose.yml file in your editor and copy/paste the following lines:

version: '2'
services:
  web:
    build: .
    command: npm run dev
    volumes:
      - .:/usr/app/
      - /usr/app/node_modules
    ports:
      - "3000:3000"
    depends_on:
      - postgres
    environment:
      DATABASE_URL: postgres://todoapp@postgres/todos
  postgres:
    image: postgres:9.6.2-alpine
    environment:
      POSTGRES_USER: todoapp
      POSTGRES_DB: todos

This will take a bit to unpack, but let’s break it down by service.

The web service

The first directive in the web service is to build the image based on our Dockerfile. This will recreate the image we used before, but it will now be named according to the project we are in, nodejsexpresstodoapp. After that, we are giving the service some specific instructions on how it should operate:

  • command: npm run dev – Once the image is built, and the container is running, the npm run dev command will start the application.
  • volumes: – This section will mount paths between the host and the container.
  • .:/usr/app/ – This will mount the root directory to our working directory in the container.
  • /usr/app/node_modules – This mounts node_modules as its own volume inside the container, so the node_modules installed at buildtime is not overwritten by the host mount above.
  • environment: – The application itself expects the environment variable DATABASE_URL to run. This is set in db.js.
  • ports: – This will publish the container’s port, in this case 3000, to the host as port 3000.

The DATABASE_URL is the connection string. postgres://todoapp@postgres/todos connects using the todoapp user, on the host postgres, using the database todos.

The Postgres service

Like the NodeJS image we used, the Docker Store has a prebuilt image for PostgreSQL. Instead of using a build directive, we can use the name of the image, and Docker will grab that image for us and use it. In this case, we are using postgres:9.6.2-alpine. We could leave it like that, but it has environment variables to let us customize it a bit.

  • environment: – This particular image accepts a couple of environment variables so we can customize things to our needs.
  • POSTGRES_USER: todoapp – This creates the user todoapp as the default user for PostgreSQL.
  • POSTGRES_DB: todos – This creates the default database named todos.

Running The Application

Now that we have our services defined, we can build the application using docker-compose up. This will show the images being built and eventually starting. After the initial build, you will see the names of the containers being created:

Pulling postgres (postgres:9.6.2-alpine)...
9.6.2-alpine: Pulling from library/postgres
627beaf3eaaf: Pull complete
e351d01eba53: Pull complete
cbc11f1629f1: Pull complete
2931b310bc1e: Pull complete
2996796a1321: Pull complete
ebdf8bbd1a35: Pull complete
47255f8e1bca: Pull complete
4945582dcf7d: Pull complete
92139846ff88: Pull complete
Digest: sha256:7f3a59bc91a4c80c9a3ff0430ec012f7ce82f906ab0a2d7176fcbbf24ea9f893
Status: Downloaded newer image for postgres:9.6.2-alpine
Building web
Creating nodejsexpresstodoapp_postgres_1
Creating nodejsexpresstodoapp_web_1
web_1       | Your app is running on port 3000

At this point, the application is running, and you will see log output in the console. You can also run the services as a background process, using docker-compose up -d. During development, I prefer to run without -d and create a second terminal window to run other commands. If you want to run it as a background process and view the logs, you can run docker-compose logs.

At a new command prompt, you can run docker-compose ps to view your running containers. You should see something like the following:

            Name                           Command        State    Ports
-----------------------------------------------------------------------------------------
nodejsexpresstodoapp_postgres_1    postgres       Up       5432/tcp
nodejsexpresstodoapp_web_1         npm run dev    Up       0.0.0.0:3000->3000/tcp

This tells you the name of each service, the command used to start it, its current state, and its ports. Notice nodejsexpresstodoapp_web_1 lists its port as 0.0.0.0:3000->3000/tcp. This tells us that you can access the application using localhost:3000/todos on the host machine.

/> curl localhost:3000/todos


The package.json file has a script to automatically build the code and migrate the schema to PostgreSQL. The schema and all of the data in the container will persist as long as the postgres container is not removed.
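For reference, those scripts live in the scripts section of package.json. The names npm run dev and npm run watch-tests appear elsewhere in this post; the exact commands below are placeholders, since the real file may differ:

```json
{
  "scripts": {
    "dev": "nodemon index.js",
    "watch-tests": "jest --watchAll",
    "migrate": "node migrate.js"
  }
}
```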

Eventually, however, it would be good to check how your app will build with a clean setup. You can run docker-compose down, which removes the built containers and lets you start over fresh.

Feel free to check out the source code, play around a bit, and see how things go for you.


Testing the Application

The application itself includes some integration tests built using jest. There are various ways to go about testing, including creating something like Dockerfile.test and docker-compose.test.yml files specific for the test environment. That’s a bit beyond the current scope of this article, but I want to show you how to run the tests using the current setup.

The current containers are running using the project name nodejsexpresstodoapp. This default comes from the directory name. If we run commands against the same project, the existing containers will be reused and restarted. This is what we don’t want.

Instead, we will use a different project name to run the application, isolating the tests into their own environment. Since containers are ephemeral (short-lived), running your tests in a separate set of containers makes certain that your app is behaving exactly as it should in a clean environment.

In your terminal, run the following command:

/> docker-compose -p tests run -p 3000 --rm web npm run watch-tests

You should see jest run through integration tests and wait for changes.

The docker-compose command accepts several options, followed by a command. In this case, you are using -p tests to run the services under the tests project name. The command being used is run, which will execute a one-time command against a service.

Since the docker-compose.yml file publishes port 3000, we use -p 3000 to publish the container’s port 3000 on a random host port instead, preventing a collision with the already-running app. The --rm option removes the containers once they exit. Finally, we run npm run watch-tests in the web service.


At this point, you should have a solid start using Docker Compose for local app development. In the next part of this series about using Docker Compose for NodeJS development, I will cover integration and deployments of this application using Codeship.

Is your team using Docker in its development workflow? If so, I would love to hear about what you are doing and what benefits you see as a result.


Join the Discussion

Leave us some comments on what you think about this topic or if you’d like to add something.

  • CWSpear

    Your Dockerfile could just look like this:

    FROM node:7.7.2-alpine

    WORKDIR /usr/app/

    COPY ./package.json ./
    RUN npm install --quiet

    COPY ./ ./

    There’s no need for the temporary directory. This benefits from the caching you mention in the same way. It also makes for a smaller image (even considering the apk update which doesn’t really seem necessary either way?).

    You should also mention it’s pretty important to put node_modules in the .dockerignore (as well as a build/dist directory if you have a build step for your project).

    • Great feedback. I like the simpler handling, and it makes complete sense.

      The apk update is not required here, but I install something on the next part and didn’t delete the line.

      I left out explaining the Dockerfile mostly for length. The repo has it in there already, but adding a note doesn’t seem out of line.

      Thanks again for reading through.

    • Ben Nadel

      As someone very new to Docker, why do you do the COPY of the package.json first, and then do a full directory COPY afterwards? Why not just do:

      COPY ./ ./
      RUN npm install --quiet

      What is the benefit of doing the npm install first?

      • CWSpear

        Docker caches layers. Each command in a Dockerfile will generate a new layer. If nothing has changed, it won’t rebuild that layer; it reuses the cache.

        So in this example, if no layers before RUN npm install --quiet have changed, then it will use the cached layer from the npm install. Effectively, in this case, it will always be able to use the cached layer unless package.json changes.

        End result: every subsequent build after your first one will be much faster unless package.json changes and it has to re-install packages. If we did it the way you proposed, it’d have to re-install the package when any file changes, making subsequent builds always take a decent amount of time.

        tl;dr: it makes subsequent builds faster thanks to layer caching.

        • Ben Nadel

          Ahhh, ok that makes (some) sense. So, it’s an optimization step, but not a requirement to make it work. Very nice, thanks for the insight.

          • CWSpear

            Yep! Not required, but considered best practices. You’ll thank yourself if you create Dockerfiles to be cache-aware.

            But consider side effects: as stated, if the package.json doesn’t change, then it doesn’t re-install packages. So if you have a package defined as ^5.0.0, it will be stuck with the latest matching version at the time the layer was originally cached! So it’s a good idea to do something like:

            COPY ./package.json ./package-lock.json ./

            (or ./yarn.lock instead of ./package-lock.json if you’re using yarn). That will make sure your image always has exactly the desired packages.

          • Ben Nadel

            Oh good to know. I tried taking Kelly’s demo, and adding `nodemon` in front of the start script, instead of just `node`, and it keeps complaining that it can’t find `nodemon`. I wonder if I’m running into a related issue. Anyway, will tinker after work. Thanks again.

          • That’s so weird, since I have nodemon as a dependency. You might try adding the folder path as well, `./node_modules/.bin/nodemon`, and see if it finds it then. I don’t think node_modules is in the path on the container.

          • Ben Nadel

            It was my bad — my node_modules path had a misspelling in it. So, my root directory was overwriting the node_modules in the container when the volume was mounted. Fixed the misspelling, all is good :)


  • I find it dangerous to mount the local node_modules to the host. This will seem to work just fine; however, native node modules may misbehave even if your dev machine is a linux box and not specifically alpine. Mileage will vary on Macs, and thinking about Windows makes me nauseous.

    Unless there’s something I’m missing here, I think development/build concerns should be separated from prod-bound concerns (i.e., building the docker image). I have a full-stack TypeScript MEAN project that shows a barebones approach to docker-compose, also applying NodeSource security best practices in baking a Node image. I’d appreciate any feedback if my concerns are off-base here.

    • Thanks for the comment! I did it this way because I wanted to be able to edit my files on my host machine; however, if you mount the folder without also mounting the node_modules volume, the buildtime `node_modules` would be overwritten.

      What I don’t go into detail here, is adding new `node_modules` – which I do directly in the container and not from my local machine. I was saving that for a later post.

      Love your example – really good stuff there.

      • Sounds good! Looking forward to the next blog post. If possible, please emphasize the importance of `npm shrinkwrap` when running `npm install` on different environments to achieve repeatable results.

        • Matt Welke

          Wouldn’t using yarn also achieve this?

          • Christian Katzorke

            Yes, sure. Besides the lockfile, yarn also provides consistency/security by using checksums (and an enhanced local registry). But if you can’t or don’t want to (for any reason) use yarn, *please* use at least shrinkwrap. Any project that lasts longer than a week will otherwise end up with unexpected errors across different environments/stages/continuous integration scenarios.

      • marcoandremartins

        you can use .dockerignore for that too

        • Tomaz Strukelj

          using .dockerignore doesn’t seem to provide the same functionality. If I add `/usr/app/node_modules` to .dockerignore and comment out the line `- /usr/app/node_modules` in docker-compose.yml, the container doesn’t start, because /usr/app/node_modules inside the container is empty (as it is empty on the host).

      • Jörn Zaefferer

        Have you ever written that 2nd part? I couldn’t find any links from this post nor find it directly on the blog index.

        I’m specifically interested in your workflow for adding more dependencies. Adding them inside the container sounds interesting – how would you combine that with using a lock file (using yarn or npm@5)?

        • I have a second part to this post, but it doesn’t include how to add additional modules –

          Couple ways you can do this:
          `docker-compose exec web npm install module-name-here` – adds directly to the running container. Probably the easiest method. Run from your terminal.

          When developing, I don’t always run `docker-compose up`; instead I do `docker-compose run --rm web sh` and then I start the app or update modules right there as if it were my own terminal.

          Since the volumes are mapped, whenever you update something in the container, it will also update locally. The next time you build the container, it will run everything. I believe you should see the same with the lock files as well.

          Let me know if that helps.

    • Dmitry Kirilyuk

      It’s ok to mount container node_modules to the host in most cases. This will let you debug node_modules in your host IDE. And how often do you need to debug binaries? :) But this article is a bit outdated, because node_modules will not be mapped from the container by using /usr/app/node_modules as a volume in the docker-compose file; it will be empty on the host. Don’t know a nice workaround for now. What I do now is manually install node_modules on the host machine when I need to debug it.

      • I work directly in the containers to debug, because my host machine will have things that are not always available on the production server or containers. I much prefer to jump into the shell and figure things out there if need be.

        You are right – it is an empty folder, and I could change that wording a bit. The important thing is that when you copy over everything in the Dockerfile, you don’t want to lose the installed node_modules. That is why you mount the folder without specifying the host folder. This comes in handy when I’m developing something in, for example, Python. I don’t have all the proper tooling installed locally, so doing everything in Docker makes life easier.

        Happy coding!

  • Alejandro Bar Acedo

    Hi there. I have some problems trying to run a container to use gulp. I have the same content in my Dockerfile and docker-compose.yml. When I run docker-compose run --rm service_name gulp gulp_task_name I get an error saying that the command gulp is not found in $PATH. I thought I could run any command inside a node image as it does with the esw and jest in this tutorial.

    • The local `node_modules/.bin/` folder is not in the path. You can resolve this by adding a line in the Dockerfile to update the path, installing gulp globally, or using `docker-compose run --rm service_name ./node_modules/.bin/gulp gulp_task_name`.

      • Alejandro Bar Acedo

        I see. I was looking at how things work with your sample project: if I try to run the tests using esw or jest directly it doesn’t work, but if I try with the script from package.json it works.

  • mc18

    Thanks for the tutorial! Everything worked great, but I’m stuck on something. If I change something in my main server.js file, I have to actually run “docker-compose build” again to see the reflected changes.

  • Martin Pultz

    I’ve been playing with docker and docker-compose recently and I can do the basics, and this article has helped my understanding even more, but the only thing I can’t seem to figure out is how to migrate the database on `docker-compose -f up` when node and postgres are in separate containers. I’m trying to dockerize a project that already has scripts to do this, like `npm run db:migrate:dev`, but I can’t figure out where this can be executed; even though I’ve bridged the containers, it seems to throw an error doing `ENTRYPOINT ["npm", "run", "db:migrate:dev"]`. Any suggestions?

  • Rob Brennan

    This is great!! I totally wasn’t expecting the idea of having a temporary container to run tests, but damn – that is 100% spot on. Excellent post. If you’re ever in Seattle, lemme know – I’d love to buy you an IPA (or beverage of your choice)

  • Since Docker 17, you no longer need docker-compose. Docker now has built-in stacks support that is compatible with docker-compose file declarations. It also offers extra features that haven’t been implemented in Compose.

    • Nice – thanks – I’ll have to try it that way as well.

  • Michal Vodička

    – node:7.7.2-alpine doesn’t have git, so it can’t install git dependencies with npm install
    – you really should include package-lock.json before npm install
    – do not use “volume node_modules” for sure!! you should expect node_modules doesn’t exist locally because the target device doesn’t have npm and node installed
    – you could use .dockerignore where you ignore node_modules and source code
    – you could use a multi-stage build

  • Hurtox

    This was a really helpful and well composed article! No pun intended haha
