When it comes to application infrastructure, containerization is king. What was once the hot new thing in the DevOps wild west is now maturing into a more stable and widely adopted technology.
But despite this growing popularity, containers are still a young technology, and as with anything new, adoption can feel overwhelming, especially for large companies that already have well-established workflows built around alternative technologies, like virtualization.
But that does not mean that Docker containers are fit only for use by smaller organizations. On the contrary, enterprises are already using containers. And when they take advantage of the cloud, adopting containers becomes even easier. This article explains what enterprises can gain from migrating workloads to Docker containers, and how the cloud can help them make the move.
Benefits of Using Docker
Why are large companies like ADP and Spotify using Docker? We’ll cover their specific use cases at the end of this post, but right now let’s cover a few general advantages that are important for the enterprise.
Return on investment
The biggest driver of most management decisions when selecting a new product is the return on investment. The more a solution can drive down costs while simultaneously raising profits, the better a solution it is — especially for large, established companies, which need to generate steady revenue over the long term.
At a high level, Docker helps deliver this kind of savings by dramatically reducing infrastructure requirements. Because containers share the host operating system's kernel rather than each running a full guest OS, far more workloads fit on the same hardware, so organizations can save on everything from server costs to the staff needed to maintain those servers.
By bringing a more developer-oriented methodology to infrastructure management, Docker allows engineering teams to be smaller and more effective. But even when you factor in training costs, there are a number of additional ways Docker can help the bottom line.
One of the biggest advantages of a Docker-based architecture is standardization. Part of Vagrant's popularity comes from its ability to provide repeatable development environments. Thanks to the reduced footprint of containers, Docker takes that concept ten steps further by providing repeatable development, build, test, and production environments.
Standardizing service infrastructure across the entire pipeline lets every team member work in a production-parity environment. As a result, engineers are better equipped to diagnose and fix bugs in the application efficiently, which reduces the time wasted on defects and increases the time available for feature development.
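As a minimal sketch of what that standardization looks like in practice, a single Dockerfile can pin the same environment for every developer, CI runner, and production host. (The base image, file names, and port below are hypothetical, not taken from any case study in this article.)

```dockerfile
# Hypothetical Dockerfile: everyone builds the exact same environment from it.
FROM node:18-alpine            # pin the runtime version for the whole team
WORKDIR /app
COPY package*.json ./
RUN npm ci                     # reproducible installs from the lockfile
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Because the runtime version and dependency installation are declared once in this file, "works on my machine" differences largely disappear: development, CI, and production all build from the same recipe.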
One of the direct benefits of environment standardization is the increased reliability and efficiency of the CI/CD pipeline.
As seen in the Spotify case study, through the use of services like Codeship, Docker enables you to build a container image and use that same image across every step of the deployment process. A huge benefit of this is the ability to separate non-dependent steps and run them in parallel.
With careful tuning, the time it takes to go from build to production can be cut to as little as a tenth of what it was.
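To illustrate the build-once, promote-everywhere idea along with parallel steps, here is a sketch in the style of a Codeship Pro codeship-steps.yml file. The service name, commands, and registry are hypothetical; consult Codeship's documentation for the exact schema.

```yaml
# codeship-steps.yml sketch: every step reuses the one image built for
# the "app" service, and non-dependent steps run in parallel.
- name: checks
  type: parallel
  steps:
    - name: unit_tests
      service: app
      command: npm test
    - name: lint
      service: app
      command: npm run lint
- name: push_image          # only runs if all parallel checks pass
  service: app
  type: push
  image_name: registry.example.com/app
  registry: registry.example.com
```

Since the unit tests and the linter have no dependency on each other, running them in parallel shortens the pipeline, and the image that is pushed at the end is the same one that passed every check.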
A more direct effect of using Docker is reduced resource consumption. The streamlined nature of containers means fewer resources are needed to run the same application, and when demand grows, orchestrators like Kubernetes or Docker Swarm can scale the infrastructure automatically to meet it.
This is in stark contrast to running bare-metal or virtualized services, which effectively waste money on underutilized servers.
If you want to take things further, a datacenter operating system like Mesosphere's DC/OS can be used in conjunction with Kubernetes to optimize load-balanced infrastructure and distribute containers across a datacenter as efficiently as possible.
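As a sketch of what automatic scaling looks like on the Kubernetes side, a HorizontalPodAutoscaler can grow and shrink a Deployment's replica count as load changes. (The Deployment name and thresholds here are hypothetical.)

```yaml
# Hypothetical HorizontalPodAutoscaler: Kubernetes adds or removes
# container replicas as average CPU utilization crosses 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 2        # keep a small baseline during quiet periods
  maxReplicas: 20       # cap spend during traffic spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

This is exactly the contrast with bare-metal servers drawn above: instead of provisioning for peak load and paying for idle capacity, the cluster holds only as many replicas as current demand justifies.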
Parity means maintainability
One benefit the entire team will appreciate is parity. Parity, in Docker terms, means your images run the same no matter which server, or whose laptop, they run on. For developers, that means less time spent setting up environments and debugging environment-specific issues, and a more portable, easier-to-set-up codebase.
It’s not just developers who benefit from parity, though. Parity also makes your production infrastructure more reliable and easier to maintain. Reusable Docker images that execute the same way everywhere let you reproduce problems more quickly and easily, decreasing downtime and the cost of production setup.
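One common way to make that parity concrete is a Compose file that both a developer's laptop and a production host can point at the same pinned image. (The image name, tag, and ports below are hypothetical.)

```yaml
# docker-compose.yml sketch: the same pinned image runs identically
# wherever Docker is installed.
services:
  app:
    image: registry.example.com/app:1.4.2   # pin a tag, never "latest"
    ports:
      - "8080:3000"
    environment:
      - NODE_ENV=production
```

Because the tag is pinned rather than floating, an engineer reproducing a production bug can pull the exact image that is misbehaving, which is what makes problems faster to recreate and fix.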
Real-World Examples of Docker in the Enterprise
While early critics claimed that container technology like Docker is better suited to smaller organizations with limited resources, the reality is that a significant and growing number of enterprise-level organizations are using Docker to streamline everything from CI/CD pipelines to developer onboarding. Here are a couple of examples.
A great example of an organization that is using Docker to better manage their application infrastructure is ADP. ADP is the largest global provider of cloud-based human resources (HR) services. From payroll to benefits, ADP handles HR for more than 600,000 clients, which has presented a unique challenge in terms of security and scalability.
To address these challenges, ADP utilizes Docker Datacenter and Docker Swarm to create signed servers and automatically scalable infrastructure that can adapt to the needs of the application as the load changes.
One thing worth mentioning is that ADP is utilizing a hybrid Docker methodology. Because ADP is more than 60 years old, they have a significant amount of legacy code that is still running in a production environment.
By isolating services into separate containers one at a time, ADP is able to grow gradually into a microservices architecture with Docker, rather than attempting the transition overnight.
On the flip side of the enterprise coin is Spotify. A digital music service with millions of users, Spotify is running a microservices architecture with as many as 300 servers for every engineer on staff.
According to a Docker case study, the biggest pain point Spotify experienced managing such a large number of microservices was the deployment pipeline. With Docker, Spotify was able to pass the same container all the way through their CI/CD pipeline.
From build to test to production, they were able to ensure that the container that passed the build and test process was the exact same container that was spun up in production.