Released: Docker 17.05 on Codeship Pro

Codeship News

Reading Time: 4 minutes

At Codeship, it’s important to us that you have access to the latest Docker features during your Codeship Pro builds. Today, we updated our Codeship Pro build machines to Docker 17.05, the latest edge release.

If you were at DockerCon (or if you read my roundup post), you know that 17.05 comes with a couple of new things. To begin with, it’s the first Docker release since the internals of the project moved out of the Docker GitHub organization and into the Moby Project, an open-source collection of building blocks that lets developers build containerized systems.

Docker now depends on components of the Moby Project, but the end product of Docker CE and Docker EE remains unchanged from a user perspective.

Use build arguments in FROM instructions

One of the sacred rules of a Dockerfile used to be that it had to start with the FROM instruction, where you declare a base image. When build arguments were introduced, many Docker users quickly noticed that it would be really convenient to be able to use build arguments in that FROM instruction as well: for example, to pin the SHA of your custom base image during CI/CD, or to quickly test your application against multiple language versions without having to maintain a separate Dockerfile for each tag of the base image.

In Docker 17.05, that’s now possible. In keeping with the usual pattern for build arguments, you must declare the build argument before consuming it:

ARG GOLANG_VERSION

FROM golang:$GOLANG_VERSION
…
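The build then supplies the value on the command line. As a sketch (the image names here are just illustrations):

```shell
# build against a specific Go version without editing the Dockerfile
docker build --build-arg GOLANG_VERSION=1.8 -t my-app:go1.8 .

# test against another version by changing only the argument
docker build --build-arg GOLANG_VERSION=1.7 -t my-app:go1.7 .
```

If the argument is left unset, the FROM instruction sees an empty value, so it’s worth providing a default in the ARG line for local builds.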

Multi-stage builds

Docker 17.05 also comes with one awesome new feature I’m personally excited about: multi-stage builds. This change allows you to “ship artifacts, not build environments” a bit more easily. The builder-image pattern has existed for a long time; multi-stage Dockerfiles standardize it and roll the functionality into Docker itself, instead of leaving it as a workaround.

Multi-stage builds don’t make sense for every language or framework, and generally have the most impact for languages like Go, where all you’re looking to do is build and ship a binary. There’s no need to ship your entire build environment with that binary.

With a multi-stage build, you can separate your Dockerfile into explicit stages:

# first stage does the building
# for UX purposes, I'm naming this stage `build-stage`

FROM golang:1.8 as build-stage
WORKDIR /go/src/github.com/codeship/go-hello-world
COPY hello-world.go .
RUN go build -o hello-world .

# starting second stage
FROM alpine:latest

# copy the binary from the `build-stage`
COPY --from=build-stage /go/src/github.com/codeship/go-hello-world/hello-world /bin

CMD ["/bin/hello-world"]

When building the above image, only the portion after the last FROM instruction is tagged and saved as the final Docker image. The layers from the build stage will stay on the build machine but are untagged and not associated with the final image.
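You can see this on the build machine after a build; the commands and image name below are illustrative:

```shell
# build and tag the final image
docker build -t hello-world:multi-stage .

# the tagged image is the small alpine-based one; the build stage
# is left behind as an untagged <none> image
docker images
```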

If you’re interested in a quick and dirty example, you can check out a simple Hello World program in Go to compare the differences.


Caching tradeoff

When using an ephemeral build machine, be aware that there is a caching tradeoff with multi-stage builds. We expect this feature to continue to evolve on Docker’s side, and the Codeship engineering team is working on improvements to our caching system that will dramatically improve overall caching performance.

The adage of “good, fast, and cheap; pick two” is definitely relevant here. Multi-stage builds give you small images, but that can come at the expense of slightly longer build times. Since the build stage layers are untagged and not associated with the final image, they are not part of the cached image. This means that, for now, your build may take a bit longer if you relied on caching all of the layers to speed it up.

If build speed AND small images are both critical for your application, you may choose to create a separate service in your codeship-services.yml file and cache that service. It’s a bit of an anti-pattern as you’ll need to maintain an extra Dockerfile, but it will let you get the best of both worlds. Keep your eyes peeled for improvements to our caching system; we’ll keep all of our customers informed when those changes are released.
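As a sketch of that workaround, you could split the build stage into its own Dockerfile and cache it as a separate service. The file and service names here are illustrative, and the exact keys depend on your Codeship Pro configuration:

```yaml
# codeship-services.yml
builder:
  build:
    image: my-app-builder
    dockerfile_path: Dockerfile.build   # contains only the build stage
  cached: true

app:
  build:
    image: my-app
    dockerfile_path: Dockerfile         # builds the small production image
```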

Now that Docker has standardized around a monthly release cycle, Codeship Pro customers can expect that Codeship Pro build machines will be updated to the latest edge release once we’ve had time to test and validate that no breaking changes have been introduced.

If you have any questions, feel free to reach out to us on the Community Forum or via Twitter. Always keep shipping!


Join the Discussion

Leave us some comments on what you think about this topic or if you’d like to add something.

  • Jan Hoogeveen

    Hi,

    This is a great article and we’re very anxious to start using this for our projects. However, I can’t figure out if it would be possible to use the intermediary image (the first build step / builder image) as a basis to run my CI tests on.

    Ideally I would run tests on the first image and after the tests pass we move on to create the final, smaller production image.

    Is this possible or would I be better off creating a new service for just the tests?

    • Jorrit

      That would be perfect indeed!