Cloud Native Part 5: Microservices



This article was originally published on Heptio’s blog by Joe Beda. With their kind permission, we’re sharing it here for Codeship readers.

This is the fifth part in a multi-part series that examines multiple angles of how to think about and apply “cloud native” thinking.

“Microservice” is a new name for a concept that has been around for a very long time: breaking a large application into smaller pieces so that they can be developed and managed independently. Let’s look at some of the key aspects:

  • Strong and clear interfaces. Tight coupling between services must be avoided. Documented and versioned interfaces help to solidify that contract and retain a certain degree of freedom for both the consumers and producers of these services.
  • Independently deployed and managed. It should be possible for a single microservice to be updated without synchronizing with all of the other services. It is also desirable to be able to roll back a version of a microservice easily. This means the binaries that are deployed must be forward and backward compatible both in terms of API and any data schemas. This can test the cooperation and communication mechanisms between the appropriate ops and dev teams.
  • Resilience built in. Microservices should be built and tested to be independently resilient. Code that consumes a service should strive to continue working and do something reasonable in the event that the consumed service is down or misbehaving. Similarly, any service that is offered should have some defenses with respect to unanticipated load and bad input.
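The interface, compatibility, and resilience points above can be sketched in a few lines. This is an illustrative example, not from the original article: a hypothetical client of a pricing service that degrades gracefully when the dependency is down, and reads responses tolerantly so the producer can add fields without breaking consumers.

```python
def fetch_price(product_id):
    """Stand-in for a remote call; a real client could raise on an outage."""
    raise TimeoutError("pricing service unavailable")

FALLBACK_PRICE = 0.0  # "do something reasonable" when the dependency is down

def price_for(product_id):
    try:
        return fetch_price(product_id)
    except (TimeoutError, ConnectionError):
        # The consumed service is down or misbehaving: degrade gracefully
        # instead of propagating the failure to our own callers.
        return FALLBACK_PRICE

def parse_response(payload):
    """Tolerant reader: take only the fields we need and ignore the rest,
    so a newer producer version doesn't break an older consumer."""
    return {"price": payload.get("price", FALLBACK_PRICE),
            "currency": payload.get("currency", "USD")}
```

Here `price_for("sku-123")` returns the fallback value rather than failing, and `parse_response` silently ignores fields it doesn't recognize, which is one simple way to get the forward/backward compatibility described above.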

Sizing of microservices can be a tricky thing to get right

I’d say avoid services that are too small (“pico-services”) and instead aim to split services along natural boundaries (languages, async queues, scaling requirements) while keeping team sizes reasonable (i.e., “two-pizza” teams).

The application architecture should be allowed to grow in a practical and organic way.

Instead of starting with 20 services, start with two or three and split services as complexity in that area grows. Oftentimes the architecture of an application isn’t well understood until the application is well under development. This also acknowledges that applications are rarely “finished” but rather always a work in progress.

Are microservices a new concept?

Not really. This is really another type of software componentization. We’ve always split code up into libraries. This is just moving the “linker” from being a build-time concept to a run-time concept. (In fact, Buoyant has an interesting project called linkerd based on the Twitter Finagle system.)
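The build-time versus run-time “linker” idea can be made concrete with a toy sketch (hypothetical names, not from the article): instead of a call target being fixed when the program is built, the consumer resolves a named service at call time, so the implementation can be swapped without relinking or redeploying the caller.

```python
# Toy run-time "linker": service name -> callable.
# In production this would map a name to a network address via a
# service-discovery system, which is roughly what linkerd mediates.
REGISTRY = {}

def register(name, impl):
    REGISTRY[name] = impl

def call(name, *args):
    # The "link step" happens here, at run time, per call.
    return REGISTRY[name](*args)

register("greeter", lambda who: f"hello, {who}")
call("greeter", "world")                        # resolves the v1 binding

register("greeter", lambda who: f"hi, {who}")   # a new version "deploys"
call("greeter", "world")                        # callers pick it up with no rebuild
```

The caller never changed, yet its dependency was upgraded underneath it — which is exactly the freedom (and the operational responsibility) that moving the linker to run time buys you.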

This is also very similar to the SOA push from several years ago but without all of the XML. Viewed from another angle, the database has almost always been a “microservice,” in that it is often implemented and deployed in a way that satisfies the points above.

Constraints can lead to productivity

While it’s tempting to allow each team to pick a different language or framework for each microservice, consider instead standardizing on a few languages and frameworks. Doing so will improve knowledge transfer and mobility within the organization.

However, be open to making exceptions to policy as necessary. This is a key advantage of this world over a more vertically integrated and structured PaaS. In other words, constraints should be a matter of policy rather than capability.

The services spectrum

While most view microservices as an implementation technique for a large application, there are other types of services that form the services spectrum:

  • Service as Implementation Detail. As described above, this is useful for breaking down a large application team into smaller teams that stretch from development to operations.
  • Shared Artifact, Private Instance. In this scenario, the development process is shared across many instances of the service. There may be one dev team and many ops teams or perhaps a unified ops team that works across dedicated instances. Many databases fall into this category where many teams are running private instances of a single MySQL binary.
  • Shared Instance. Here, a single team provides a shared service to many applications and teams inside of an organization. The service may partition data and actions per user (multi-tenant) or provide a single simple service that is used very widely (serving HTML UI for a common branding bar, serving up machine learning models, etc).
  • Big-S Service. Most enterprises won’t produce a service like this but may consume them. This is the typical “hard” multi-tenant service that is built to service a large number of very disparate customers. This type of service requires a level of accounting and hardening that isn’t often necessary inside of an enterprise. Something like SendGrid or Twilio would fall into this category.
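The “Shared Instance” point — one instance, many tenants, data partitioned per tenant — can be sketched as follows (all names are illustrative):

```python
class SharedKVService:
    """A single shared instance serving many tenants; every operation
    is scoped to a tenant key so tenants only see their own data."""

    def __init__(self):
        self._data = {}  # tenant_id -> {key: value}

    def put(self, tenant_id, key, value):
        self._data.setdefault(tenant_id, {})[key] = value

    def get(self, tenant_id, key, default=None):
        # Lookups never cross tenant partitions.
        return self._data.get(tenant_id, {}).get(key, default)

svc = SharedKVService()
svc.put("team-a", "theme", "dark")
svc.put("team-b", "theme", "light")
```

A “Big-S Service” is this same shape hardened further: per-tenant quotas, metering, isolation, and billing on top of the basic partitioning.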

As services shift from being an implementation detail to a common infrastructure offered up within an enterprise, the service network morphs from being a per-application concept to something that can span the entire company. There is an opportunity and a danger in allowing these types of dependencies.

In the next part of this series, we will look at how Cloud Native creates both problems and opportunities in the security domain.
