A Nonprofit Case for Docker


Even in discussions with people pretty familiar with Docker, I find many are convinced it is only needed for those with significant scaling issues. It’s true that Docker is fantastic for scaling services to any level, and it can be ideal for microservice architectures. But those aren’t the only valuable use cases for it.

In this article, I’ll present the value of Docker from the opposite perspective. Rather than the scaling-up use case, let’s talk about the case for scaling down.

Monoliths vs. Microservices: Separation of Concerns

I work for a nonprofit organization called SIL International where I lead our IT Application Development team. As the IT department, we’re responsible for several dozen applications used primarily by an internal user base distributed all over the globe. As a result, many of the applications we develop use the same technologies.

Since we have a very small budget (as most nonprofits do), we found it cheapest and most convenient to run many of our applications on the same servers. This led to a tangled mess of monolithic servers, each running several web applications, their databases, and other random services.

By today’s standards, everyone would agree that is a bad way to operate.

Since problems with one application would affect the others, we decided to invest in separating our concerns and moving each application to its own set of servers. At the same time, we thought it would be more cost effective for us to move to the cloud rather than run and manage our own data centers. After some price comparisons and consulting with other nonprofits, we decided to go with Amazon and their OpsWorks service to help with automation.

Initially this was great. We moved several of the applications to their own “stacks” in OpsWorks, and everyone was happy. Our developers enjoyed the automated deployments we were able to achieve, and our Ops folks were happy to no longer support the servers themselves and to have the apps separated. However, this turned out to be more costly than we anticipated, as well as a poor use of resources.

With a small user base distributed across all time zones, we saw consistently low server utilization. Each of our OpsWorks stacks included one elastic load balancer, one NAT server, one micro EC2 instance, and one micro RDS instance. This issue was compounded because we also wanted a staging environment, so each app really got two of these stacks. We’d have been fine sharing some of the resources for a staging environment, but with OpsWorks, each stack went into its own virtual private cloud (VPC), so its resources were segmented away from the others.

With one-year reserved pricing, this configuration cost almost $800 per stack, or $1600 per app. While it solved our separation concern, it started to get costly and felt wasteful as our servers sat nearly idle 24/7.

Server Density

Wouldn’t it be nice if there was a way we could keep our apps separated yet take better advantage of the resources available to us?

Enter Docker.

Like good nerds, when we caught wind of the Docker buzz a couple of years ago, we started checking it out. Everything sounded great, but it didn’t seem applicable to us because we didn’t have scaling issues, and most of our applications were not built as microservices. Eventually, we realized that the same means by which Docker scales up so well could just as easily be used to scale down. Thus began our move to Docker.

Now that we’re running many of our apps with Docker on AWS using their EC2 Container Service, our average application uses between 1/6 and 1/4 of a micro instance for compute and memory and runs just fine. We now have the added benefit of running a second instance of each application behind a load balancer for improved availability at minimal cost.
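
To make this concrete, here’s a minimal sketch (hypothetical names and sizes, not our actual task definitions) of how an ECS task definition reserves only a fraction of an instance’s CPU and memory, which is what lets several small apps share one micro instance:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # region is illustrative

ecs.register_task_definition(
    family="example-app",  # hypothetical app name
    containerDefinitions=[
        {
            "name": "web",
            "image": "example-org/example-app:latest",  # hypothetical image
            "cpu": 256,     # about 1/4 of one vCPU (ECS uses 1024 units per core)
            "memory": 128,  # hard limit in MiB; several of these fit on a micro
            "portMappings": [
                # hostPort 0 asks for an ephemeral host port, so many
                # containers can expose container port 80 on one instance
                {"containerPort": 80, "hostPort": 0, "protocol": "tcp"}
            ],
        }
    ],
)
```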

So we’re getting much better utilization of the servers we’re paying for, AND we’re able to separate our applications to the degree we wanted. We’re now able to run a cluster of Docker host servers in a single virtual private cloud so we can share services where appropriate and separate them where needed.

For example, my team’s staging cluster has a single elastic load balancer, a single NAT instance, a single RDS instance, and four small EC2 instances to support 19 applications. All that costs about $1200 a year. So the per app cost for staging is roughly $63. That is a LOT better than $800 for a dedicated OpsWorks stack.

In our production environment, each app gets a dedicated elastic load balancer and RDS instance to avoid single points of failure. So the per app cost goes up a little, but it is still less than it was with dedicated stacks because they can share a NAT instance.

Not only are we saving money now, but we also have the benefit of redundant instances for increased availability.

It’s Not Just About Money

While I understand that money talks, and that changes like adopting Docker will certainly come with costs of their own, there are other, intangible reasons to consider it.

As an employee of a nonprofit, I work for the organization because I believe in its mission, not because the pay is good. But as a nonprofit, my organization also has the motivation and freedom to explore new technologies to find ways to improve our services and reduce costs. It is this motivation and freedom that leads the organization to invest in its employees in a way I never experienced before, having come from a large tech company in Silicon Valley.

As a developer, I love to learn and try new things, and as a manager, I want to give my staff the ability to do the same. So I believe there is not only an opportunity to save money by adopting Docker as we have, but also an opportunity to invest in employees’ personal growth by encouraging them to learn something as valuable as Docker.

Join the Discussion

  • Thanks for the article. I have not used ELB or NAT before. I think I understand ELB (it distributes traffic coming into one address among several instances of an app?), but what does NAT do? One thing I could use is the ability to run multiple web app containers, all using port 80 or 443, on the same EC2 instance. Does NAT handle that routing?

    • pshipley

      The NAT (network address translation) instance is for routing outbound traffic from a private subnet in a VPC (virtual private cloud). If you only have a public subnet, you do not need a NAT, but if you want to put your servers in a private subnet so they are not publicly addressable, you’ll need a NAT so they can route to the outside world.
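
      A minimal sketch of wiring that up with boto3 (all IDs here are placeholders, not from the article):

      ```python
      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      # A NAT instance forwards traffic on behalf of other machines, so EC2's
      # default source/destination check must be disabled on it.
      ec2.modify_instance_attribute(
          InstanceId="i-0123456789abcdef0",      # the NAT instance (placeholder)
          SourceDestCheck={"Value": False},
      )

      # Point the private subnet's default route at the NAT instance so
      # outbound traffic can reach the internet.
      ec2.create_route(
          RouteTableId="rtb-0123456789abcdef0",  # private subnet's route table
          DestinationCidrBlock="0.0.0.0/0",
          InstanceId="i-0123456789abcdef0",
      )
      ```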

      As for your question about running multiple web apps on the same instance: if you want them all to be accessible on a standard port like 443 for HTTPS, you’ll either need multiple ELBs so each can proxy the traffic to the different container ports, or you’ll need to run your own reverse proxy to do that. We use multiple ELBs for production, one for each application, and in staging we run an instance of haproxy to route based on the hostname to a pre-configured port.

      • I think I get it. Is this correct: for staging you have an haproxy (in a container?) listening on 80 and 443, and you point multiple domains at it. The haproxy looks at the hostname in the request, and requests for staging.app-a.com go to port 3000, staging.app-b.com to port 3030, and so on? Is the haproxy container on the same box as the apps? I’m keen to get something like this set up, as it would enable some other stuff, like the Amazon WAF. Thanks again for your help!

        • pshipley

          You are right. We have haproxy behind an ELB. We use the ELB to terminate the SSL and route to port 80 of the haproxy container. haproxy is configured to know about our various apps and what host ports they are listening on. We have a script that, on startup of the haproxy service, scans the ECS cluster to find all the EC2 instances in it and then updates the haproxy configuration to check each app on them at the appropriate port (see the sketch after this reply).

          We use Codeship for CI/CD of the haproxy configuration, so we only need to push an updated config to the git repo; Codeship checks the syntax, pushes it to AWS, and restarts the haproxy service.

          It is a bit complicated, but it solves the problem for us. Hopefully, with the new Docker 1.12 features, we’ll be able to remove some of this complexity, but we’ll see how AWS implements support for the new service features in 1.12.
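
          For the curious, a rough boto3 sketch of that startup script (hypothetical cluster name, apps, and ports, not the actual code):

          ```python
          import boto3

          ecs = boto3.client("ecs", region_name="us-east-1")
          ec2 = boto3.client("ec2", region_name="us-east-1")

          CLUSTER = "staging"   # hypothetical ECS cluster name
          APPS = {              # app name -> host port it is published on
              "app-a": 3000,
              "app-b": 3030,
          }

          # Find the EC2 instances currently registered to the ECS cluster.
          arns = ecs.list_container_instances(cluster=CLUSTER)["containerInstanceArns"]
          detail = ecs.describe_container_instances(
              cluster=CLUSTER, containerInstances=arns
          )
          ids = [ci["ec2InstanceId"] for ci in detail["containerInstances"]]

          # Resolve each instance's private IP address.
          reservations = ec2.describe_instances(InstanceIds=ids)["Reservations"]
          ips = [i["PrivateIpAddress"] for r in reservations for i in r["Instances"]]

          # Emit one haproxy backend per app, health-checking every cluster
          # instance on that app's pre-configured host port.
          for app, port in APPS.items():
              print(f"backend {app}")
              for n, ip in enumerate(ips):
                  print(f"    server {app}-{n} {ip}:{port} check")
          ```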

  • Roger Leite

    Hi Phillip, great post.

    Deis Workflow https://deis.com/workflow seems a good fit for your case. If your applications follow “The Twelve-Factor App” http://12factor.net, a simple “git push” can build and deploy your app. It also supports Docker images.

    We’ve been running Deis v1 (the old version, before Workflow) in production for more than a year, and we’re happy with it.

    Have you tried Deis or something similar? For a nonprofit case, Deis seems cool because you can try different providers, like DigitalOcean, and cut costs even further.