Creating a Consistent Cross-platform Docker Development Environment


How many times have you read this statement:

“The great thing about Docker is that your developers run the exact same container as what runs in production.”

Docker is all the hype these days, and with statements like that, many are wondering how they can get on board and take advantage of whatever it is that makes Docker so popular.

That was us just six months ago when we started playing with Docker and trying to fit it into our processes. After just a few months we knew we liked it and wanted to run apps this way, but we were struggling with some of the Docker development workflow.

As the manager of a development team, I like using the same processes and technologies through the whole lifecycle of our applications. When we were running apps on AWS OpsWorks using Chef to provision servers and deploy applications, we used Vagrant with Chef to run the same recipes locally to build our development environment.


Challenges with a Docker development environment

It didn’t take long developing with Docker for us to realize that the common statement isn’t as easy to achieve as it sounds.

This article highlights the top six challenges we faced when trying to create a consistent Docker development environment across Windows, Mac, and Linux:

  1. Running Docker on three different platforms
  2. Docker Compose issues on Windows
  3. Running a minimal OS in Vagrant (boot2docker doesn’t support guest additions)
  4. Write access to volumes on Mac and Linux
  5. Running multiple containers on the same host port
  6. Downloading multiple copies of Docker images

Running Docker on multiple operating systems

Docker requires Linux. If everyone runs Linux this really isn’t an issue, but when a team uses multiple OSes, it creates a significant difference in process and technology. The developers on our team happen to use Windows, Mac, and Linux, so we needed a solution that would work consistently across these three platforms.

Docker provides a solution for running on Windows and Mac called boot2docker. boot2docker is a minimal Linux virtual machine with just enough installed to run Docker. It also provides shell initialization scripts that let you use the Docker command line tools from the host OS (Windows or Mac) by pointing them at the Docker daemon running inside the boot2docker VM. Combined with VirtualBox, this provides an easy way to get Docker up and running on Windows or Mac.
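In practice, getting a working Docker CLI this way is only a few commands. This is a sketch of the typical boot2docker flow; the exact output varies by version:

```
boot2docker init                  # create the boot2docker VM in VirtualBox
boot2docker up                    # boot the VM
eval "$(boot2docker shellinit)"   # export DOCKER_HOST etc. into this shell
docker ps                         # the docker CLI now talks to the daemon in the VM
```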

While boot2docker works well for simple use cases, some scenarios are difficult to work with; I’ll get into those in the following challenges. This topic can be hard to understand at first, so here’s a simple illustration of the three main options for running Docker locally:

[Illustration: the three main options for running Docker locally]

Using Docker Compose on Windows

Docker Compose is a fantastic tool for orchestrating multi-container environments, and in many ways it’s what actually makes Docker usable for development. If you had to run all the normal Docker CLI commands and flags to spin up your environment and link the containers properly, it would be more work than many of us are willing to do.

Compose is still relatively new though, like Docker itself, and as a result it does not yet work well on Windows. There are so many issues on Windows, in fact, that the project has an epic just to track them: https://github.com/docker/compose/issues/1085. The good news is that Docker Toolbox claims Compose support for Windows is coming soon. If you want to learn more about working with Docker Compose, check out this free Codeship eBook: Orchestrating Containers for Development with Docker Compose.
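To see what Compose saves you from, here’s roughly what a two-container web app looks like with the raw CLI. The image tags, container names, and mount paths below are illustrative, not taken from our actual project:

```
# Start a database container (the name and password are made up for this sketch)
docker run -d --name myapp-db \
  -e MYSQL_ROOT_PASSWORD=secret \
  mariadb

# Start the web container, linked to the database, with the code mounted in
docker run -d --name myapp-web \
  --link myapp-db:db \
  -p 80:80 \
  -v "$PWD/application":/var/www/html \
  php:5.6-apache
```

With Compose, the same wiring lives in a single docker-compose.yml, and the whole environment comes up with one docker-compose up -d.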

(re)Enter Vagrant

I mentioned earlier that boot2docker works well for creating a Linux VM to run Docker in, but that it doesn’t work well for all conditions.

Vagrant has been a fantastic tool for development teams for the past few years, and when I started working with Docker I was even a little sad to be moving away from it. After a couple months of struggling to get everything working with boot2docker though, we brought Vagrant back into the equation.

We liked how small boot2docker was since we didn’t need a full-featured Docker host, but unfortunately it doesn’t support the VirtualBox guest additions required for synced folders. Thankfully, we found the Vagrant box AlbanMontaigu/boot2docker, a version of boot2docker with guest additions installed that weighs in at a light 28 MB. Compare that with a minimal Ubuntu 14.04 box at 363 MB.
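If you already use Vagrant, pulling that box down is a one-time step (assuming the box is published under that name in the Vagrant box catalog):

```
vagrant init AlbanMontaigu/boot2docker   # writes a Vagrantfile referencing the minimal box
vagrant up --provider virtualbox         # downloads the ~28 MB box and boots it
```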

Write access on volumes

Docker can mount the host filesystem into containers as volumes. This is great when the container only needs to read the files, but if the container needs to write changes to the files there can be a problem.

On Windows, VirtualBox synced folders are world-writable, with Linux permissions of 777, so write access is not an issue there. On Linux and Mac, however, there are file ownership and permissions to work with. For example, when I’m writing code on my Mac, my username is shipley and my uid/gid is 1000. In my container, however, Apache runs as www-data with a uid/gid of 33.

So when I want Apache to generate files that I can access on my host machine to continue development with, Apache is not allowed to write them because it runs as a different user. My options are either to change the ownership/permissions of the files on the Mac filesystem or to change the user and uid/gid Apache runs as in the container to shipley and 1000. Both options are pretty sloppy, and neither works for team development.

With VirtualBox, you can change the user/group and permissions that synced folders are mounted with, but it’s not easy or convenient by default. Vagrant, though, makes it very convenient, and this was one of the biggest motivators for us to go back to it. All we need to add to our Vagrantfile is:

config.vm.synced_folder "./application", "/data", mount_options: ["uid=33","gid=33"]

With the extra mount_options, the /data folder inside the VM is owned by uid/gid 33, which, inside an Apache container based on Ubuntu, maps to user/group www-data.

Funny thing though: as I mentioned earlier, the default filesystem permissions on Windows are 777, so write access isn’t an issue there. However, we found that when using a Docker volume to mount a custom my.cnf file into a mariadb container, mariadb doesn’t like the configuration file being world-writable. Again, Vagrant helps us out by making it simple to also set file permissions in the mount:

config.vm.synced_folder "./application", "/data", mount_options: ["uid=33","gid=33","dmode=755","fmode=644"]

Running multiple containers that expose the same port

My team primarily develops web applications, so each of our projects/applications exposes port 80 for HTTP access during development.

While boot2docker on Windows/Mac and native Docker on Linux make getting started quick and easy, only one container can be bound to a given port on the host. So when we’re developing multiple applications, or multiple components of an application, that expose the same port, it doesn’t work. This isn’t a showstopper, but it is an inconvenience: it forces apps onto non-standard ports, which gets awkward to work with and hard to remember.

Running each app in its own VM via Vagrant, however, solves this problem. Of course, it introduces a couple more issues of its own, like having to access the app via an IP address or map a hostname to it in your hosts file. That really isn’t so bad though, since you should only have to do it once per app.
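The hosts-file step can be sketched like this. Both the IP (which should match the private network address in your Vagrantfile) and the hostname are illustrative values, not from our actual setup:

```shell
# Illustrative values: match APP_IP to the private_network IP in your Vagrantfile.
APP_IP="192.168.33.10"
APP_HOST="myapp.local"

# The hosts-file entry we want (one per app, added once):
HOSTS_LINE="$APP_IP $APP_HOST"
echo "$HOSTS_LINE"

# To apply it for real: echo "$HOSTS_LINE" | sudo tee -a /etc/hosts
```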

Another problem this solution introduces is running multiple VMs requires a lot more memory. It also seems a bit counterproductive since Docker is supposed to remove the burden of running full VMs. Anyway, it’s a small price to pay to have multiple apps running at the same time and accessible on the same ports.

Downloading multiple copies of Docker images

The most annoying problem created by this solution, though, is that with Docker running in multiple VMs, each VM needs to download its own copy of any dependent Docker images. That takes more time and bandwidth, and if we developers hate one thing, it’s waiting.

We were able to get creative though: on bootup, the VM checks the host machine folder defined by the environment variable DOCKER_IMAGEDIR_PATH for any saved Docker images and runs docker load on each one it finds. Then, after docker-compose up -d completes, any newly downloaded images are copied into the DOCKER_IMAGEDIR_PATH folder.

Bingo: now we only need to download each image once.
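A minimal sketch of that bootstrap logic looks like the following. The default cache path, the function names, and the file-naming scheme are assumptions for illustration; only the DOCKER_IMAGEDIR_PATH variable and the docker save/load commands come from our actual setup:

```shell
#!/bin/sh
# Sketch of the image-cache bootstrap described above.
# DOCKER is overridable so the logic can be exercised without a Docker daemon.
DOCKER="${DOCKER:-docker}"
# Default cache path is an assumption; in our setup it comes from DOCKER_IMAGEDIR_PATH.
CACHE_DIR="${DOCKER_IMAGEDIR_PATH:-/vagrant/.docker-images}"

# On VM boot: load every previously saved image tarball into the Docker daemon.
load_cached_images() {
    for tarball in "$CACHE_DIR"/*.tar; do
        [ -e "$tarball" ] || continue   # the glob matched nothing
        "$DOCKER" load -i "$tarball"
    done
}

# After docker-compose up -d: save a newly pulled image so other VMs can reuse it.
save_image() {
    image="$1"
    # Turn e.g. "mariadb:10.0" into a safe file name like "mariadb_10.0.tar".
    tarball="$CACHE_DIR/$(echo "$image" | tr '/:' '__').tar"
    [ -e "$tarball" ] || "$DOCKER" save -o "$tarball" "$image"
}
```

In our setup, something like this runs as a provisioning step: load the cache on boot, then save each image docker-compose pulled.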

Conclusion

After running into all these challenges and finding solutions to them, we now have a simple Vagrantfile we can copy into each of our projects in order to provide a consistent development experience regardless of what operating system the developer is using. We’re using it in multiple projects today and have even gotten to the stage of continuous integration and blue/green deployment to Amazon Elastic Container Service in production (but those are topics for another article).

I expect we’ll face more challenges as time goes by and our projects change, and as we do we’ll continue to evolve our solutions to account for them. Our Vagrantfile is open source, and we welcome suggestions and contributions.

Feel free to post your questions and comments below to make this article more meaningful, and hopefully we can address issues you’re facing too.

PS: If you liked this article, you might also be interested in one of the free eBooks from the Codeship Resources Library: Understanding the Docker Ecosystem.


Join the Discussion

Leave us a comment with what you think about this topic or if you’d like to add something.

  • Alain Sollberger

    Hi Phillip

    Thank you for sharing, I’m about to dig into Docker and this was a great look into it.

    Blessings,
    Alain Sollberger

    • pshipley

      Thanks Alain! As you experiment with it please let me know if we can improve it.

  • James Green

    Yet sharing folders on Windows using Vagrant is well documented as fraught with difficulties (which caused us to give up). We’ve been trying Docker on Windows to overcome this but have hit other problems. Developing on Windows for Linux is not easy.


  • Artur Roszczyk

    Hey,

    Thanks for writing about docker!

    I don’t completely understand why you would need to spin up multiple VMs since, as you admitted, it’s a huge cost.

    As far as I understand, you want to expose multiple applications to the host machine on port 80.

    My first question would be: are all these apps really customer-facing? Do you really need to expose them, or are they just APIs used by other microservices?

    For the rest of my comment I will assume these apps are all facing the end-user.

    Maybe it would be possible to create another docker container with nginx proxy which routes to internally exposed container ports* using vhosts. Then you could use vagrant-dns plugin to point all hostnames to the single VM. Would this solution work for you?


    * the option “expose” in docker-compose.yml file

    • samuelroze

      Or simply have direct access to the containers via a simple route configuration :)

      https://github.com/inviqa/dock-cli/blob/master/src/Installer/DNS/Mac/DockerRouting.php#L82

      • Artur Roszczyk

        As far as I understand, in that scenario you cannot route requests to many apps using different hostnames

        • Artur Roszczyk

          Or you can, but then you have to know the internal IP addresses outside of the Docker VM

        • samuelroze

          You can: using DnsDock, you get a predictable DNS name for every running container.

    • pshipley

      Hi @arturroszczyk:disqus ,

      The setup described in this article is just for development; it is not a production configuration. Sorry I wasn’t clear about the multiple apps scenario. As an IT development team we work on several applications all the time, so we often want more than one running locally at once. That is why the port binding was a challenge for us using native Docker or a single-VM solution like boot2docker. Another potential route to a single VM could be a service like Interlock to proxy requests to the various random ports: https://github.com/ehazlett/interlock.

      DnsDock mentioned by @samuelroze:disqus also looks cool but doesn’t look like it helps with the port issues.

      Docker Machine, now easily available via Docker Toolbox, is also a potentially manageable solution for running multiple Docker host VMs, but it still doesn’t provide synced-folder permissions management, so I’m not convinced it’s a replacement for our usage of Vagrant with a minimal VirtualBox base box. As for running multiple VMs, we’re also adding a memory limit to the VM in our Vagrantfile to keep it lighter weight on the host.

      Great questions and comments; I appreciate the conversation. Keep it going and share your tips and tricks for successful Docker-based development. I’d love to learn about more tools like DnsDock.

      Thanks,
      Phillip


  • Jady LIU

    Hi,

    This is really a good article from the real world.

    I am reviewing Docker for our DEV environment at the moment. From my understanding I considered Docker a replacement for Vagrant, but the reality is not so: a few inconveniences have popped up that require workarounds involving Vagrant.

    We are currently using Ansible+Vagrant to build our DEV environment from scratch, and it works well. I think this is the same scenario as when you were using Vagrant+Chef. I would be more than interested to know the big benefits of changing to Vagrant+Chef+Docker. It would also be a good point for convincing my manager to make the change.

    Thank you,
    Jady

    • pshipley

      Hi @jadyliu:disqus, thank you for the feedback. If you’re already using Ansible, just stick with it; I don’t think Chef has any advantages from a dev environment perspective. Their differences come more into effect when you need to scale significantly. We only used Chef because we used AWS OpsWorks, but we don’t use OpsWorks anymore, so we don’t use Chef anymore either. The Docker world continues to change quite rapidly; I’ve almost written updates to this article a couple of times, but things keep changing so we keep changing too. Most recently we’re testing the new Docker for Mac/Windows to see if it could replace our need for Vagrant and VirtualBox altogether. So far it is working nicely on Mac, but we haven’t had a chance to test the Windows version yet; hopefully in the next few days though. Sign up at https://beta.docker.com and let me know how it works for you.

  • Jeff Kilbride

    Hi Phillip,

    Thank you for that beta link. I’ve heard about the new Docker for Mac / Windows, but didn’t know how to get it. I’m working with a small team and trying to migrate from Vagrant / Puppet to Docker. It seems like the beta is working well for you, so I’ll give it a try.

    Being new to Docker, I would love to find an article on the nuts and bolts of day to day development — best practices for setting up the dev environment, mounting host file systems to reflect local code changes, debugging containers, etc… Maybe there are some on the Docker site I haven’t seen, yet. Any pointers would be great!

    And just a small typo in the article:

    Docker provides a solution for running on Mac and Linux called boot2docker.

    I think that should be “Mac and Windows”, right?

  • nsorrell

    Great article! Very helpful for our team! One thing I ran into: I created our setup on Linux, and when the Windows users checked out the repo we got snagged by CRLFs when calling out to services. If anyone runs into that “No such service” nonsense, check this answer for help: http://stackoverflow.com/questions/37260593/docker-compose-no-such-service-via-vagrant-windows-shells-only/37261875#37261875