Running a MEAN web application in Docker containers on AWS


Reading Time: 11 minutes

The rate of adoption of Docker as a containerization solution is soaring. Many companies now use Docker containers to run their applications, and in many scenarios a container is a better fit than spinning up a full-blown virtual machine.

In this post, I’ll break down all the steps I took to successfully install and run a web application built on the MEAN stack (MongoDB, Express, AngularJS, and Node.js). I hosted the application in Docker containers on Amazon Web Services (AWS).

Also, I ran the MongoDB database and the web application in separate containers. There are lots of benefits to this approach of having isolated environments:

  1. Since each container has its own runtime environment, it’s easy to modify the environment of one application component without affecting other parts. We can change the installed software or try out different software versions until we figure out the best possible setup for that specific component.
  2. Since our application components are isolated, security issues are easy to deal with. If a container is attacked or a malicious script ends up being inadvertently run as part of an update, our other containers are still safe.
  3. Since it is easy to switch out and change the connected containers, testing becomes a lot easier. For example, if we want to test our web application with different sets of data, we can do that easily by connecting it to different containers set up for different database environments.

MEAN Web Framework

The web application that we’re going to run is the framework code for MEAN.JS. This full-stack JavaScript solution builds fast, robust, and maintainable production web applications using MongoDB, Express, AngularJS, and Node.js.

Another great advantage of MEAN.JS is that we can use Yeoman generators to create the scaffolding for our application in minutes. It also has CRUD generators, which I used heavily when adding new features to the application. The best part is that it is already well set up to support Docker deployment: it comes with a Dockerfile that can be built into a container image, although we will use a prebuilt image to get going even faster (more on this later).
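
For context, a Dockerfile like that describes how the container image gets built. The sketch below is purely illustrative and is not the actual Dockerfile that ships with MEAN.JS; the base image tag and the exact commands are assumptions:

```dockerfile
# Illustrative sketch only -- not the actual MEAN.JS Dockerfile.
# The base image tag and commands here are assumptions.
FROM node:0.12

WORKDIR /usr/src/app

# Copy the dependency manifests first so Docker can cache these layers
COPY package.json bower.json ./
RUN npm install -g bower grunt-cli && \
    npm install && \
    bower install --allow-root

# Copy the application source and build the production assets
COPY . .
RUN grunt build

# The app listens on 3000; we map it to host port 80 at run time
EXPOSE 3000
CMD ["node", "server.js"]
```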

Running Docker on an Amazon Instance

You might already be aware that you can use basic AWS services for free for a full year. The following steps walk you through how to configure and run a virtual machine on AWS with the Docker service:

  1. To begin, create your free AWS account. Make sure to choose the Basic (Free) support plan. You will be redirected to the AWS welcome page.
  2. On this page, click on Launch Management Console.
  3. We will first go through the steps to create a user with the credentials required to manage our instance. Go to Services and then IAM.
  4. Go to the Users link on the left.
  5. Click Create New Users.
  6. Create a new user by providing a username. Make sure the checkbox to Generate an access key for each user is checked.
  7. Once the user is created, you will get the option to Download Credentials for this user. Download the file. It contains the Access Key Id and the Secret Access Key for this user. We will need these credentials to connect remotely to AWS.
  8. The recommended approach for using credentials when connecting remotely to AWS is to keep them in a file at the path ~/.aws/credentials. Create this directory if it does not exist on your local computer, and then create a file named credentials inside it. The content of this file should look like this:
[default]
aws_access_key_id = [access key from the downloaded credentials file]
aws_secret_access_key = [secret access key from the downloaded credentials file]

After creating the user and storing its credentials locally, we also need to grant the required permissions to this user in AWS.

  1. Go to Services->IAM->Groups. Click Create New Group.
  2. Go through the wizard steps to create the new user group. Enter the group name and click Next.
  3. You will then see the option to attach a policy. Check the first option, Administrator Access, and click Next.
  4. You will see a Review screen. Click Create Group to finally create the group.
  5. Once the group is created, click on its name to see a screen where you can manage users, permissions, and so on. Add the user that you previously created to this group by clicking Add Users to Group.
  6. We will also need a vpc-id to launch our instance. While still logged in to the AWS Console, click Services->VPC. Click on the link to see your VPC details, and copy the vpc-id from the grid view that appears. Now we are all set to launch our instance.

To launch a new instance on AWS remotely from your computer, open a new shell window. Then use the docker-machine create command to create a new EC2 instance. Use the --amazonec2-vpc-id flag to specify the vpc-id and the --amazonec2-zone flag to supply the availability zone in which your instance should run. Finally, specify the name by which you would like to refer to the instance.

$ docker-machine create --driver amazonec2  --amazonec2-vpc-id [vpc-id-copied-in-previous-step] --amazonec2-zone c aws07
Running pre-create checks...
Creating machine...
(aws07) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env aws07

Once this instance is launched and ready for use, we can tell Docker to execute commands against aws07 as follows:

$ eval $(docker-machine env aws07)
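
What does that env command emit? It prints shell exports that point your local Docker client at the remote daemon over TLS. The values below are illustrative placeholders (your IP address and cert path will differ):

```shell
# Illustrative output of `docker-machine env aws07`; the IP and paths will differ
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://52.0.0.1:2376"
export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/aws07"
export DOCKER_MACHINE_NAME="aws07"
```

Evaluating these exports is exactly what the eval command above does; every docker command you run in this shell afterwards talks to the remote engine.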

Running MongoDB Database as a Container

Now that we have Docker running on our Amazon instance, we can go ahead and run our containers.

As I mentioned before, we’re going to run our MongoDB database and our web application in separate containers. I chose the official Mongo repo on Docker Hub. We can pull this image and run it as a detached container in one simple step:

$ docker run --name mymongodb -d mongo

The last argument, mongo, is the name of the image from which the container should be created. Docker first searches for this image locally; when it doesn’t find it, it downloads the image and all the base images it depends on. Convenient!

Docker then runs this image as a container. The -d flag ensures that it runs in detached mode (in the background) so that we can use this same shell for our other commands. We can do a quick check afterwards to make sure that the container is up and running by using the docker ps command:

$ docker ps -a
CONTAINER ID  IMAGE         COMMAND  CREATED             STATUS             PORTS      NAMES
2f93a31d4a3d  mongo:latest  "/ mong  About a minute ago  Up About a minute  27017/tcp  mymongodb

The startup script for this image already runs the mongo service, listening on port 27017 by default. So there was literally nothing else to do here except run that one docker run command.

Running the MEAN Stack Container

The next phase of this project is to run our web application as a separate container.

The MEAN stack code base has a lot of dependencies like Node, Bower, Grunt, etc. But once again, we don’t need to worry about installing them if we have an image that already has all of them. It turns out there is an image on Docker Hub that already has everything we need.

Once again, we will pull it in and run it with just one command:

$ docker run -i -t --name mymeanjs --link mymongodb:db_1 -p 80:3000 maccam912/meanjs:latest bash 
Status: Downloaded newer image for maccam912/meanjs:latest 

Now there is a lot going on with this single command. To be honest, it took me some time to get it exactly right.

  1. The most important piece here is the --link mymongodb:db_1 argument. It adds a link between this container and our mymongodb container. This way, our web application is able to connect to the database running on the mymongodb container. db_1 is the alias name that we’re choosing to reference this connected container. Our MEAN application is set to use db_1, so it’s important to keep that name.

  2. Another important argument is -p 80:3000, where we’re mapping the 3000 port on the container to port 80 on the host machine. We know that web applications are accessed through the default port of 80 using the HTTP protocol. Our MEAN application is set to run on port 3000. This mapping enables us to access the same application from outside the container over the host port 80.

  3. We of course have to specify the image from which the container should be built. As we discussed before, maccam912/meanjs:latest is the image we’ll use for this container.

  4. The -i flag is for interactive mode, and -t allocates a pseudo-terminal. Together they connect our terminal to the stdin and stdout streams of the container.

  5. The final argument, bash, drops us into a shell inside the container, where we will run the commands needed to get our MEAN application running. We could also open a shell in an already running container, but here we are doing it all with just one command.
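
To make the linking concrete: the legacy --link mymongodb:db_1 flag adds a db_1 entry to the container’s /etc/hosts and injects environment variables such as DB_1_PORT_27017_TCP_ADDR and DB_1_PORT_27017_TCP_PORT. Here is a sketch of how an app could derive its Mongo URL from them (the helper name and the 'mean' database name are assumptions for illustration, not actual MEAN.JS code):

```javascript
// Sketch: derive a MongoDB URL from the env vars injected by `--link mymongodb:db_1`.
// Docker's legacy link convention upper-cases the alias, giving DB_1_* variables.
function mongoUrlFromLink(env) {
  var host = env.DB_1_PORT_27017_TCP_ADDR || 'db_1'; // the alias also resolves as a hostname
  var port = env.DB_1_PORT_27017_TCP_PORT || '27017';
  return 'mongodb://' + host + ':' + port + '/mean'; // 'mean' db name is an assumption
}

console.log(mongoUrlFromLink(process.env));
```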


Building and Running our MEAN Application

Now that we’re inside our container, running the ls command shows us many folders, including one called Development. We will use this folder for our source code.

cd into this folder and run git clone to get the source code for our MEAN.JS application from GitHub:

root@7f4e72af1cf0:/# cd Development/ 
root@7f4e72af1cf0:/Development# git clone meanjs
Cloning into 'meanjs'...
Checking connectivity... done.

cd into our MEAN.JS folder. We can run npm install to download all the package dependencies:

root@7f4e72af1cf0:/Development# cd meanjs 
root@7f4e72af1cf0:/Development/meanjs# ls 
Dockerfile Procfile app bower.json config fig.yml gruntfile.js karma.conf.js package.json public scripts server.js 
root@7f4e72af1cf0:/Development/meanjs# npm install

A couple of hiccups to watch out for: for some reason, my npm install hung during a download. I used Ctrl+C to terminate it, deleted all the downloaded packages to start from scratch, and ran npm install again. Thankfully, this time it worked:

root@7f4e72af1cf0:/Development/meanjs# rm -rf node_modules/ 
root@7f4e72af1cf0:/Development/meanjs# npm install

Next, install the front-end dependencies by running bower install. Since I’m logged in as the superuser, bower doesn’t like it, but it does offer the --allow-root option to run it anyway:

root@7f4e72af1cf0:/Development/meanjs# bower install 
bower ESUDO Cannot be run with sudo 
You can however run a command with sudo using --allow-root option
root@7f4e72af1cf0:/Development/meanjs# bower install --allow-root

Run the grunt build task to lint and minify the JS and CSS files:

root@7f4e72af1cf0:/Development/meanjs# grunt build 
Done, without errors.

Now we are ready to run our application. Our MEAN app looks for an environment variable called NODE_ENV, which we will set to production, and then we use the default grunt task to start the application. If you did all the steps right, you should see this final output:

root@7f4e72af1cf0:/Development/meanjs# NODE_ENV=production grunt 
MEAN.JS application started 
Environment: production 
Port: 3000 
Database: mongodb://
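
The NODE_ENV switch works because the app picks a different configuration object per environment. Below is a minimal sketch of that pattern; the database names and defaults here are assumptions for illustration, not the actual MEAN.JS config files:

```javascript
// Minimal sketch of NODE_ENV-based config selection (not the actual MEAN.JS config).
var configs = {
  development: { port: 3000, db: 'mongodb://db_1/mean-dev' },
  production:  { port: 3000, db: 'mongodb://db_1/mean' }
};

function loadConfig(env) {
  // Fall back to development when NODE_ENV is unset or unrecognized
  return configs[env] || configs.development;
}

var config = loadConfig(process.env.NODE_ENV);
console.log('Environment:', process.env.NODE_ENV || 'development');
console.log('Port:', config.port);
```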

Validating Our Application from the Browser

Our application would have given errors if there was some problem running it or if the database connection failed. Since everything looks good, it’s time to finally access our web application through the browser.

But first, we’ll need to open our virtual machine’s port 80 to HTTP traffic.

  1. Go back to the EC2 dashboard.
  2. Click on the security group link for the given instance. You should see the settings page for the security group.
  3. Click the “Inbound” tab at the bottom, and then click the “Edit” link. You should see that SSH is already added. Now we need to add HTTP to the list of inbound rules.
  4. Click “Add Rule.”
  5. Select HTTP from the dropdown menu and leave the default setting of port 80 for the Port Range field. Click “Save.”
  6. Pick up the URL to our instance from the Public DNS column and hit that URL in the browser. You should see the homepage of our fabulous application. You can validate it by creating some user accounts and signing in to the app.

So that’s it. We’ve managed to run our application on AWS inside isolated Docker containers. There were a lot of steps involved, but at the crux of it all, we really only needed two smart docker run commands to containerize our application.



Join the Discussion

Leave us some comments on what you think about this topic or if you’d like to add something.

  • EnvyAndroid

    Would it not be better to use coreos for hosting docker containers?

    You could also use the --restart=always flag to automatically restart your mongo container on reboot.

    • Haven’t worked with CoreOS. Can you elaborate on a few key points on why it would be a better choice than AWS?

      I haven’t covered the topic of keeping the containers running even after crashes or restarts, but sure, --restart=always or using a process manager can make that happen. Thanks for bringing it up.

      Apart from the container, it would be prudent to keep the node applications running at all times as well. The forever node package is one option to achieve that.


  • Jota Feldmann

    Excellent article. Really nice. My first time with AWS, Docker, and, for a FE dev starting a BE path, I’m feeling great! Thanks for the words.

  • Kevin Truckenmiller

    What about using persistent data?

    • Not sure what you mean. The data here is persisted to the MongoDB database.

      • Kevin Truckenmiller

        I guess I’m wondering what happens to the data if the ephemeral instance goes away. Is there a good way for other containers in other instances to connect to that data?

        • Proper backups can ensure we do not lose the data if the container instance is somehow lost. Of course we can save the whole container state as an image with one command, but the appropriate thing to do here is to back up the files where the mongod service is flushing the data. You would want those files to be on a mounted shared folder that the Docker container and its mongod service have access to. This way the persisted data is independent of the container and its state. You can then spin up a new container or a different mongod service to access the same file data as needed.

  • Sarath

    Great article.. very nice step by step illustration.. thank you.. In final step when running grunt I am getting this error.. What could have been wrong ?

    Could not connect to MongoDB!

    Error: failed to connect to [localhost:27017]

    • Hi Sarath,
      It clearly means that the connection to the MongoDB instance could not be established. Things to check: 1) Is the MongoDB container up and running? 2) Check the command you used to link the MeanJS container to the Mongo container when you started the MeanJS container. 3) The MeanJS site can run in dev and production; make sure the connection info in the config for the environment you are running in matches the MongoDB instance.

  • Guy Ellis

    Thanks! Looking at the docker file for MEANJS it looks like the MEANJS container installs MongoDB in its container as well. I completely agree with you that Node and Mongo should be in separate containers. I’m guessing that maccam912 is just making it easier for users to get started with that container as a completely self contained solution to running MEANJS. I’m wondering if that was part of your space problems? i.e. 2 installations of MongoDB.

    • Didn’t realize that his docker file had MongoDB as well :)
      It definitely increases the space, but can’t confirm if it is significant enough.

  • Wilson Novido

    Got this error when running npm install

    > utf-8-validate@1.1.0 install /Development/meanjs/node_modules/

    > node-gyp rebuild

    gyp ERR! configure error

    gyp ERR! stack Error: “pre” versions of node cannot be installed, use the --nodedir flag instead

    • Hmm. The Docker images still work; I used them a couple of days back, so node-gyp should work, unless it errored out because of a resource problem (most probably storage). How much volume space did you allocate to your instance?

      • Wilson Novido


        • Sorry for the delayed response Wilson. I had to find the time to run these steps again and see what’s going on.
          Turns out you are right. The MEAN.JS project that I referenced in this blog post is undergoing heavy development, and the source code has changed since it was written.
          The good news is that, even though I got that node-gyp error, I was still able to run the application with some workarounds. It seems to be an optional dependency. After the npm install, I did a bower install. One more thing: it looks like Ruby and Sass have also been added as new dependencies, so if you run grunt as the next step, it will complain and ask you to install them. I bypassed that by running “node server.js” directly instead of the grunt command. Hopefully the source code will stabilize in the near future.

          • GameKyuubi

            I got it working by installing nvm and then forcing a stable version of node instead of a pre build, and then installing ruby-compass.

          • Wilson Novido

            Thanks, I will try to do this.


  • Thomas McKay

    Ignoring the npm ‘gyp’ errors and starting via ‘node server.js’ seems to work. I’m new to AWS so could be missing something obvious, but I can connect to neither the IP nor the public DNS. Are the instructions above complete?

    • Steps are complete. Did you open port 80 for the instance? Did you map port 3000 on the container to port 80 on the host?

      • Thomas McKay

        Yes, as part of the docker run command listed in steps.

  • GameKyuubi

    Alright so this is all well and good, but now can you recommend a way to develop for MEAN once it’s set up like this?

    • All the individual technologies of MEAN take a bit of time to master. The site recommends the best resources for that. You can maybe start with an overall introduction; here is a two-hour tutorial I created a couple of months back: You will also find a lot of good blog posts explaining it.

      • GameKyuubi

        I mean more specifically how to work with it in a familiar dev environment once it’s set up via Docker like this.

        • I keep the code in my git repository, and my code editor is outside the Docker container. After my changes, I check in the code, then SSH into the Docker container and do a git pull from the appropriate folder.


  • darren

    Thanks for this. I started to work through a tutorial with all MEAN components running on my host but it seems far preferable, and not that much more difficult, to put it up on someone else’s hosted servers – especially given that I want to write an actual single page app at some point. Some questions:

    1. this was totally free on aws for one year? When I looked I got the feeling that there were some other costs for hosting mongodb etc

    2. in terms of maintenance do you have a feel for how much work is needed in the background to ensure that the various component frameworks are kept secure and up to date?

    3. is it fairly straightforward to move content off of aws to another hosted solution when the year is up? what aspects should one bear in mind when moving to a different provider’s platform (equivalency of MEAN components etc)?

    4. I assume it’s fairly trivial to buy a domain name and then register it with DNS to point to my free aws server?

    Sorry if these are a bit naive as I’m new to this whole MEAN area. I’ve run a small website on a service provider’s hosting platform before. The details are different but the concepts seem pretty much the same, albeit I’m taking care of more of it myself.


  • Chip Pinkston

    I just tried to get this up and running last night but I ran into a bunch of issues with npm and bower. Based on the comments here, I went the route of installing nvm, and tweaking a couple of other things to make the whole process work. The following are all of the commands I ran after the docker command to create (and connect to) the mymeanjs container. I did all of these before the git clone.

    //install nvm to fix issues with node
    curl -o- | bash

    //exit to pick up nvm

    //restart mymeanjs container
    docker start mymeanjs

    //connect to the container
    docker exec -i -t mymeanjs bash

    //Get node 5.4.1
    nvm install 5.4.1

    //install rubygems
    sudo apt-get install rubygems-integration

    //install compass to fix issue with Ruby / Sass
    sudo gem install compass

    //Bower was failing to get files over git - switched the protocol to https - source:
    git config --global url."https://".insteadOf git://

    I think the exit/restart/connection bit to pick up nvm could probably be replaced with:
    source ~/.bash_profile
    But I didn’t take the time to try it out.

    Hope this helps someone else.


  • DeanC

    Great article thanks.

    FYI: --link is deprecated. Should use network port mapping now.

  • deepak papola

    what if we update the docker image, should we create a new instance every time or something else?

  • deepak papola

    i cant login/sign up

    console says that


  • Sam Sam

    Hi, can you please explain what the MongoDB connection string declared in the Node.js/Express app will look like (with the app running in a different container from the MongoDB container)?

    This confused me a lot as we are migrating from traditional MEAN apps deployment to containerize environment.

    Also we have to take note that in docker swarm mode … the multiple web instances run in multiple node hosts !

  • deepak papola

    after deploying successfully how can i connect to that instance? because it doesn’t give me a pem file while creating an instance.

