Microservices – Old dog … new tricks

Software architecture is founded on key principles. Principles (comprising a name, statement, rationale and implications) are then translated or mapped into key architectural patterns and assessed against a number of criteria (robustness, completeness, consistency and stability, to name a few) to justify their quality.

A lot of attention and hype surrounds the architectural use of microservices, and its implementation with cutting-edge technologies is becoming more and more popular in modern software stacks. Why is this appealing now? Have we not come across this before, only to fail to give it full attention because we couldn't realise its benefit?

“Microservices” – yet another new term on the crowded streets of software architecture. Although our natural inclination is to pass such things by with a contemptuous glance, this bit of terminology describes a style of software systems that we are finding more and more appealing.

Did we believe we could ignore what microservices really stood for 15-20 years ago? The frightening thing for the software industry is that it has taken a full cycle of evolution to truly appreciate a pattern that was always there. Some might argue it was before its time, and others will argue that only now are we seeing the benefits. But I think it has more to do with the fact that we can no longer accept mediocre adoption of change, and we can't afford to re-invent the wheel again and again.

Ever heard of the pattern "design by contract" (DbC)? It prescribes that software designers should define formal, precise and verifiable interface specifications for software components, which extend the ordinary definition of abstract data types with preconditions, postconditions and invariants.

DbC is a metaphor for how elements of a software system collaborate with each other on the basis of mutual obligations and benefits.
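As a concrete illustration, here is a minimal sketch of Design by Contract in plain Java (the `Account` class and its operations are hypothetical, not from any particular library): the withdraw operation states a precondition (the caller's obligation), a postcondition (the supplier's promise) and a class invariant that must hold after every public operation.

```java
public class Account {
    private long balance; // in cents

    public Account(long openingBalance) {
        if (openingBalance < 0) // precondition on construction
            throw new IllegalArgumentException("negative opening balance");
        this.balance = openingBalance;
        checkInvariant();
    }

    public void withdraw(long amount) {
        // Precondition: the caller's obligation
        if (amount <= 0 || amount > balance)
            throw new IllegalArgumentException("precondition violated: 0 < amount <= balance");

        long oldBalance = balance;
        balance -= amount;

        // Postcondition: the benefit the caller can rely on
        assert balance == oldBalance - amount : "postcondition violated";
        checkInvariant();
    }

    public long balance() { return balance; }

    // Invariant: must hold after every public operation
    private void checkInvariant() {
        assert balance >= 0 : "invariant violated: negative balance";
    }
}
```

The point is that the contract is part of the interface specification, distinct from the implementation: a client can trust `withdraw` without reading its body.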

The pipe and the painting

In his book on Component Software, Clemens Szyperski rightly points out that software components differ in some intrinsic ways from engineering components in (for example) electronics. These differences are natural since the analogy with electronics is only a metaphor, and not all properties from one field apply to the other. A metaphor is not an identity. But one property of components applies to any field of engineering: we cannot seriously have component-based development without developing, for every component, a specification of the component, distinct from the component itself, and serving as the sole basis for users (‘clients’) of the component.

Why is this rule, so obvious in all other fields, not universally applied in software? The answer, I think, lies in the specific nature of software and the fuzziness that it creates between description and described. The just-stated requirement that the specification should be "distinct from the component itself" is a platitude in electronics or construction engineering: no one would confuse a circuit with a circuit diagram, or a bridge with a map of the bridge. You can fall from the bridge, but not from the map. You won't usually get an electric shock from the circuit diagram. Remember the famous Magritte painting of a pipe, captioned "Ceci n'est pas une pipe" ("This is not a pipe").


But with software the platitude turns into a tricky question. Separating specification from implementation was not necessarily top of mind, because the way software was used and maintained allowed this misgiving to persist. That is no longer the case.

You can only trust components to the extent that you know what they do (without needing to know how they do it). This is where past approaches didn't deliver, and we now find ourselves embracing an age-old pattern with new technology implementations, all because we didn't appreciate the true sense and value of modular componentization irrespective of the technical constraints.

The monolith

Since the earliest days of developing applications for the web, the most widely used enterprise application architecture has been one that packages all the application’s server-side components into a single unit. Many enterprise Java applications consist of a single WAR or EAR file. The same is true of other applications written in other languages such as Ruby and even C++.

Let’s imagine, for example, that you are building an online store that takes orders from customers, verifies inventory and available credit, and ships them. It’s quite likely that you would build an application like the one shown below.


The application consists of several components including the StoreFront UI, which implements the user interface, along with services for managing the product catalog, processing orders and managing the customer’s account. These services share a domain model consisting of entities such as Product, Order, and Customer.

Despite having a logically modular design, the application is deployed as a monolith. For example, if you were using Java then the application would consist of a single WAR file running on a web container such as Tomcat. The Rails version of the application would consist of a single directory hierarchy deployed using, for example, either Phusion Passenger on Apache/Nginx or JRuby on Tomcat.
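To make the "logically modular but physically monolithic" point concrete, here is a hypothetical Java sketch of the store described above (class and method names are illustrative): the StoreFront UI and the services share one codebase and one domain model, and collaborate via in-process method calls, so everything ships as a single deployment unit.

```java
import java.util.Map;

// Shared domain model
class Product {
    final String id;
    final long priceCents;
    Product(String id, long priceCents) { this.id = id; this.priceCents = priceCents; }
}

// Service components -- all deployed inside the same WAR
class ProductCatalogService {
    private final Map<String, Product> products = Map.of("p1", new Product("p1", 1999));
    Product find(String id) { return products.get(id); }
}

class OrderService {
    // A plain object reference, not a network call: the essence of the monolith
    private final ProductCatalogService catalog;
    OrderService(ProductCatalogService catalog) { this.catalog = catalog; }
    long priceOf(String productId) { return catalog.find(productId).priceCents; }
}

class StoreFrontUI {
    private final OrderService orders;
    StoreFrontUI(OrderService orders) { this.orders = orders; }
    String renderPrice(String productId) {
        return "$" + orders.priceOf(productId) / 100.0;
    }
}
```

Changing any one of these classes means rebuilding and redeploying the whole unit, which is exactly the property the next paragraphs examine.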

This so-called monolithic architecture has a number of benefits. Monolithic applications are simple to develop since IDEs and other development tools are oriented around developing a single application. They are easy to test since you just need to launch the one application. Monolithic applications are also simple to deploy since you just have to copy the deployment unit – a file or directory – to a machine running the appropriate kind of server.

This approach works well for relatively small applications. However, the monolithic architecture becomes unwieldy for complex applications. A large monolithic application can be difficult for developers to understand and maintain. It is also an obstacle to frequent deployments. To deploy changes to one application component you have to build and deploy the entire monolith, which can be complex, risky, time consuming, require the coordination of many developers and result in long test cycles.

A monolithic architecture also makes it difficult to trial and adopt new technologies. It’s difficult, for example, to try out a new infrastructure framework without rewriting the entire application, which is risky and impractical. Consequently, you are often stuck with the technology choices that you made at the start of the project. In other words, the monolithic architecture doesn’t scale to support large, long-lived applications.

A new horizon

In a microservice architecture, the patterns of communication between clients and the application, as well as between application components, are different than in a monolithic application. Let’s first look at the issue of how the application’s clients interact with the microservices. After that we will look at communication mechanisms within the application.

API gateway pattern

In a monolithic architecture, clients of the application, such as web browsers and native applications, make HTTP requests via a load balancer to one of N identical instances of the application. But in a microservice architecture, the monolith has been replaced by a collection of services. Consequently, a key question we need to answer is what do the clients interact with?

An application client, such as a native mobile application, could make RESTful HTTP requests to the individual services, as shown below.


On the surface this might seem attractive. However, there is likely to be a significant mismatch in granularity between the APIs of the individual services and the data required by the clients. For example, displaying one web page could potentially require calls to a large number of services; some pages have been described as requiring calls to 100+ services. Making that many requests, even over a high-speed internet connection, let alone a lower-bandwidth, higher-latency mobile network, would be very inefficient and result in a poor user experience.

A much better approach is for clients to make a small number of requests per page, perhaps as few as one, over the Internet to a front-end server known as an API gateway, which is shown below.


The API gateway sits between the application's clients and the microservices. It provides APIs that are tailored to the client: a coarse-grained API for mobile clients and a finer-grained API for desktop clients that use a high-performance network. In this example, the desktop client makes multiple requests to retrieve information about a product, whereas a mobile client makes a single request.

The API gateway handles incoming requests by making requests to some number of microservices over the high-performance LAN. Netflix, for example, describes how each request fans out to on average six backend services. In this example, fine-grained requests from a desktop client are simply proxied to the corresponding service, whereas each coarse-grained request from a mobile client is handled by aggregating the results of calling multiple services.

Not only does the API gateway optimize communication between clients and the application, but it also encapsulates the details of the microservices. This enables the microservices to evolve without impacting the clients. For example, two microservices might be merged, or one microservice might be partitioned into two or more services. Only the API gateway needs to be updated to reflect these changes; the clients are unaffected.
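The fan-out and aggregation behaviour can be sketched in Java as follows (the service interfaces and names are hypothetical stand-ins for real remote calls): a coarse-grained mobile request fans out to several backend services concurrently and returns one combined result, while a fine-grained desktop request is simply proxied to a single service.

```java
import java.util.concurrent.CompletableFuture;

// Stand-ins for remote microservice clients; in practice these would be HTTP calls
interface ProductInfoService    { String basicInfo(String productId); }
interface ReviewService         { String reviews(String productId); }
interface RecommendationService { String recommendations(String productId); }

class ApiGateway {
    private final ProductInfoService info;
    private final ReviewService reviews;
    private final RecommendationService recs;

    ApiGateway(ProductInfoService info, ReviewService reviews, RecommendationService recs) {
        this.info = info; this.reviews = reviews; this.recs = recs;
    }

    // Coarse-grained endpoint for mobile: one client request, three concurrent fan-out calls
    String productPageForMobile(String productId) {
        CompletableFuture<String> i = CompletableFuture.supplyAsync(() -> info.basicInfo(productId));
        CompletableFuture<String> r = CompletableFuture.supplyAsync(() -> reviews.reviews(productId));
        CompletableFuture<String> c = CompletableFuture.supplyAsync(() -> recs.recommendations(productId));
        return String.join("|", i.join(), r.join(), c.join());
    }

    // Fine-grained endpoint for desktop: a straight proxy to one service
    String productInfoForDesktop(String productId) {
        return info.basicInfo(productId);
    }
}
```

If two of the backing services were later merged, only the gateway's wiring would change; both client-facing endpoints would keep their signatures.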

The most interesting characteristics of microservices include:

  • Componentization, the ability to replace parts of a system, much like stereo components where each piece can be replaced independently of the others.
  • Organisation around business capabilities instead of around technology.
  • Smart endpoints and dumb pipes, explicitly avoiding the use of an Enterprise Service Bus (ESB).
  • Decentralised data management with one database for each service instead of one database for a whole company.
  • Infrastructure automation with continuous delivery being mandatory.

Although microservice architecture is fairly new, the basic concept behind it is one that will seem familiar to many software professionals. The guiding principles behind microservices include:

  • Using small, single purpose, service-based applications to create a fully functional application (each single purpose application is a microservice)
  • Creating each microservice using the most appropriate programming language for the task
  • Obtaining application data using the most efficient data management technique for the particular microservice
  • Developing some sort of lightweight communication between each microservice
  • Communicating using protocols such as REST, so that the pipe is dumb but the microservice is smart
  • Employing a decentralized approach to managing the application by monitoring each microservice separately
  • Relying on each microservice as needed to build any number of full-fledged applications (desktop, mobile browsers, native mobile apps, and even APIs)
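The "dumb pipe, smart endpoint" principle can be sketched with nothing more than the JDK's built-in HTTP server: a single-purpose service whose only contract is plain HTTP and a small JSON payload, with all the logic living in the endpoint. The service path and payload here are illustrative assumptions, not from the article.

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class InventoryService {
    // Starts a single-purpose microservice on an ephemeral port
    public static HttpServer start() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/inventory/p1", exchange -> {
            // All intelligence is in the endpoint; the transport is plain HTTP
            byte[] body = "{\"productId\":\"p1\",\"inStock\":42}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        return server;
    }
}
```

Any client that speaks HTTP, from a desktop browser to the API gateway described earlier, can consume this service without sharing code or an ESB with it.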