The release of the Reactive Manifesto in 2013 popularized the term “reactive applications”. The purpose of the manifesto is to describe a new approach to systems architecture and the key characteristics that these systems need to exhibit. Why would we need a new approach to systems? A few years ago a large application would typically consist of tens of servers, response times of a few seconds, gigabytes of data, and downtime that was a generally accepted norm. Today this has changed dramatically. Applications now span hundreds of servers, depend on many other systems, and are expected to deliver sub-second response times, handle terabytes of data and provide 100% up-time. This is further compounded by the fact that organisations are becoming more transparent, allowing customers direct views of internal processes via the web and mobile devices. The organisations that can adapt to this new playing field are the ones that set themselves up for success tomorrow.
The Reactive Manifesto takes many existing patterns and concepts and packages them together to form a cohesive set of terms and concepts. Systems built on these are termed reactive applications. Version one of the manifesto addressed four design characteristics of responsive applications: responsive, scalable, resilient and event-driven.
In late 2014, version two adapted the characteristics to the following: responsive, resilient, elastic and message-driven.
A brief description of the four characteristics (taken directly from the manifesto):
Responsive: The system responds in a timely manner if at all possible. Responsiveness is the cornerstone of usability and utility, but more than that, responsiveness means that problems may be detected quickly and dealt with effectively. Responsive systems focus on providing rapid and consistent response times, establishing reliable upper bounds so they deliver a consistent quality of service. This consistent behaviour in turn simplifies error handling, builds end user confidence, and encourages further interaction.
Resilient: The system stays responsive in the face of failure. This applies not only to highly-available, mission critical systems — any system that is not resilient will be unresponsive after a failure. Resilience is achieved by replication, containment, isolation and delegation. Failures are contained within each component, isolating components from each other and thereby ensuring that parts of the system can fail and recover without compromising the system as a whole. Recovery of each component is delegated to another (external) component and high-availability is ensured by replication where necessary. The client of a component is not burdened with handling its failures.
Elastic: The system stays responsive under varying workload. Reactive Systems can react to changes in the input rate by increasing or decreasing the resources allocated to service these inputs. This implies designs that have no contention points or central bottlenecks, resulting in the ability to shard or replicate components and distribute inputs among them. Reactive Systems support predictive as well as reactive scaling algorithms by providing relevant live performance measures. They achieve elasticity in a cost-effective way on commodity hardware and software platforms.
Message Driven: Reactive Systems rely on asynchronous message-passing to establish a boundary between components that ensures loose coupling, isolation, location transparency, and provides the means to delegate errors as messages. Employing explicit message-passing enables load management, elasticity, and flow control by shaping and monitoring the message queues in the system and applying back-pressure when necessary. Location transparent messaging as a means of communication makes it possible for the management of failure to work with the same constructs and semantics across a cluster or within a single host. Non-blocking communication allows recipients to only consume resources while active, leading to less system overhead.
I will not be touching on the change from scalable to elastic as it is self-explanatory.
So why the switch from Event Driven to Message Driven?
According to the authors of the manifesto, there are two main reasons for the change. The first is that applications might want to communicate more than just events. The second is that they wanted to highlight the importance of an asynchronous binary boundary: a boundary that can decouple language, location, concurrency model and temporal entanglement, and that forms the fundamental level of isolation.
If we step back from what the authors actually say and look at the changes between version one and version two of the manifesto, it becomes clear that the authors are now taking a descriptive approach rather than the initial prescriptive one. Message driven still smells very prescriptive to me. It seems that the authors are really trying to communicate an asynchronous binary boundary that will most probably be distributed. Event-driven architecture can still address all of the characteristics of responsive applications.
What is event-driven architecture?
Event-driven architecture is an architectural style that builds on the fundamental aspects of event notifications to facilitate immediate information dissemination and reactive business process execution.
This definition is courtesy of David Chou. There are many slightly different definitions available, but this is the one I favor.
At a higher level in an organization, an event-driven architecture (EDA) allows information to be propagated in near-real-time throughout a highly distributed environment, and enables the organization to proactively respond to business activities. This promotes a low-latency and highly reactive enterprise, improving on traditional data integration techniques such as batch-oriented data replication. Modeling business processes as discrete state transitions (compared to sequential process workflows) offers higher flexibility in responding to changing conditions, and an appropriate approach to managing the asynchronous parts of an enterprise. This in turn directly affects the way applications and services are implemented.
Unlike in SOA, where services must be aware of each other before interacting, the software components of an EDA do not need to interact directly. Instead, event producers and consumers interact via an intermediary – often a publish/subscribe event broker, enterprise service bus (ESB), or business process orchestration engine – without knowledge of each other. Event producers publish events to the broker. Event consumers subscribe with the broker to receive events of interest, either as soon as they are published or at some other interval. Event-driven architecture by its nature results in very loosely coupled and distributed systems.
EDA consists of the following:
- Event producers
- Event consumers
- Event channels
- Event processing
So what is an event? An event can be defined as “a significant change in state”. At an enterprise level this will represent a change in state of some object in the business domain. At an application level it can be something very fine-grained, such as a notification triggered for a user. Please note that events do not travel; they just occur. However, the term event is generally also used to refer to the notification message, and this can be rather confusing. For clarity I will refer to the notification message as the event instance. The event instance should contain all the required information to reconstruct the event it is detailing.
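To make the distinction concrete, here is a minimal sketch in Java of an event instance as an immutable value object. All the names (OrderShipped, orderId, carrier) are illustrative inventions of mine, not taken from the manifesto or any framework:

```java
import java.time.Instant;

// Hypothetical sketch: an event instance modeled as an immutable value object
// that carries everything a consumer needs to reconstruct the state change.
public class EventInstanceExample {

    static final class OrderShipped {
        final String orderId;     // which domain object changed state
        final String carrier;     // detail needed to act on the change
        final Instant occurredAt; // when the state change happened

        OrderShipped(String orderId, String carrier, Instant occurredAt) {
            this.orderId = orderId;
            this.carrier = carrier;
            this.occurredAt = occurredAt;
        }
    }

    public static void main(String[] args) {
        // The instance is self-contained; consumers never need to call back
        // into the producer to find out what happened.
        OrderShipped evt = new OrderShipped("order-42", "DHL", Instant.now());
        System.out.println("shipped " + evt.orderId + " via " + evt.carrier);
    }
}
```

Because the instance is immutable and complete, it can be serialized, queued and replayed without any reference back to the producer.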
The event channel is responsible for collecting all the event instances from the event producers and publishing the event instances to the event consumers. The event channel must guarantee safekeeping and delivery of all event instances.
When an event occurs, the producer is responsible for creating the event instances and publishing instances to the event channel. The producer’s responsibility ends after publication.
The consumer is responsible for consuming and processing the event instances that it is interested in from the event channel. It will trigger any logic that needs to occur when it identifies an event of interest.
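The producer, channel and consumer responsibilities above can be sketched in a few lines of Java. This is an illustrative in-memory channel only; a real channel would also provide the safekeeping and delivery guarantees described earlier, and the topic names are my own:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal in-memory sketch of an event channel. Illustrative only: no
// durability, ordering or failure handling, which a production channel needs.
public class EventChannel {
    private final Map<String, List<Consumer<Object>>> subscribers = new HashMap<>();

    // A consumer registers interest in a topic with the channel.
    public void subscribe(String topic, Consumer<Object> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    // A producer publishes an event instance and is done; delivery is the
    // channel's responsibility, not the producer's.
    public void publish(String topic, Object eventInstance) {
        for (Consumer<Object> handler : subscribers.getOrDefault(topic, List.of())) {
            handler.accept(eventInstance);
        }
    }

    public static void main(String[] args) {
        EventChannel channel = new EventChannel();
        channel.subscribe("order.shipped", e -> System.out.println("consumer saw: " + e));
        channel.publish("order.shipped", "order-42");  // delivered to the subscriber
        channel.publish("order.cancelled", "order-7"); // no subscriber: dropped in this sketch
    }
}
```

Note how producer and consumer never reference each other — only the topic name — which is exactly the loose coupling the post describes.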
Simple event processing concerns events that are directly related to specific, measurable changes of condition. In simple event processing, an event happens which initiates downstream action(s). Simple event processing is commonly used to drive the real-time flow of work, thereby reducing lag time and cost.
In event stream processing (ESP), events are streamed to information subscribers. Event stream processing is commonly used to drive the real-time flow of information in and around the enterprise, which enables in-time decision making.
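As a sketch of the stream-processing idea, a subscriber might maintain a live aggregate as each event arrives, so the current answer is always available without a batch job. The running-average metric and all names are illustrative choices of mine:

```java
// Sketch of event stream processing: a subscriber consumes a stream of
// measurement events and keeps a live aggregate for in-time decision making.
public class RunningAverage {
    private double sum = 0;
    private long count = 0;

    // Invoked once per event as it arrives on the stream.
    public void onEvent(double value) {
        sum += value;
        count++;
    }

    // The current value is always up to date with the events seen so far.
    public double current() {
        return count == 0 ? 0.0 : sum / count;
    }

    public static void main(String[] args) {
        RunningAverage avg = new RunningAverage();
        for (double reading : new double[] {10.0, 20.0, 30.0}) {
            avg.onEvent(reading);
        }
        System.out.println("running average: " + avg.current()); // prints 20.0
    }
}
```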
Complex event processing (CEP) allows event patterns to be considered to infer that a complex event has occurred. Complex event processing evaluates a confluence of events and then takes action. The events may cross event types and occur over a long period of time. The event correlation may be causal, temporal, or spatial. CEP requires the employment of sophisticated event interpreters, event pattern definition and matching, and correlation techniques. CEP is commonly used to detect and respond to business anomalies, threats, and opportunities.
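A minimal illustration of temporal correlation: a complex event (“possible break-in attempt”) is inferred when three failed-login events land inside a 60-second window. The threshold, window size and scenario are assumptions of mine, not from any CEP product:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// CEP sketch: correlate simple events over a sliding time window to infer a
// complex event. Illustrative only; real CEP engines offer rich pattern languages.
public class FailedLoginDetector {
    private static final int THRESHOLD = 3;
    private static final long WINDOW_MILLIS = 60_000;

    private final Deque<Long> recentFailures = new ArrayDeque<>();

    // Feed in one simple event; returns true when the pattern is matched.
    public boolean onFailedLogin(long timestampMillis) {
        recentFailures.addLast(timestampMillis);
        // Evict simple events that have fallen outside the temporal window.
        while (timestampMillis - recentFailures.peekFirst() > WINDOW_MILLIS) {
            recentFailures.removeFirst();
        }
        return recentFailures.size() >= THRESHOLD;
    }

    public static void main(String[] args) {
        FailedLoginDetector detector = new FailedLoginDetector();
        System.out.println(detector.onFailedLogin(0));      // false: one failure
        System.out.println(detector.onFailedLogin(10_000)); // false: two failures
        System.out.println(detector.onFailedLogin(20_000)); // true: three within 60s
    }
}
```

No single event carries the “break-in” meaning; it emerges only from the confluence of events, which is the essence of CEP.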
EDA and SOA
Event-driven architecture can complement service-oriented architecture (SOA). Over the years EDA has had a definite impact on SOA. Business systems can benefit from the features of both SOA and EDA, since an EDA can trigger event consumers as events happen and loosely coupled services can be quickly accessed and queried from those same consumers.
For systems to be most responsive, they must be able to quickly determine the necessary actions when events are triggered. To this end, events should be published and consumed across all boundaries of the SOA, including the layers of the architectural stack and across physical tiers. With SOA 2.0 there is even a reference to the term “event driven SOA”. The impact on SOA is beyond the scope of this posting but feel free to look it up.
There are quite a few event frameworks available today. Some of the popular ones are:
Over the next few months I intend to evaluate a few of the frameworks available. I will be starting with Vert.x. Why Vert.x? Well, it’s the one I chose to start with. I will be doing an introduction to Vert.x on 22 April 2015 at the Software Arkitex tech forum that will be hosted at the JSE. I hope to see you there.
- Brenda M. Michelson, Event-Driven Architecture Overview, Patricia Seybold Group, February 2, 2006