
Reactive applications and an introduction to event-driven architecture

The release of the Reactive Manifesto in 2013 popularized the term “reactive applications”. The purpose of the manifesto is to describe a new approach to systems architecture and the key characteristics these systems need to implement. Why would we need a new approach to systems? A few years ago a large application would typically run on tens of servers, tolerate response times of a few seconds, manage gigabytes of data, and treat downtime as a generally accepted norm. Today this has changed dramatically: applications span hundreds of servers, depend on many other systems, are expected to respond in under a second, handle terabytes of data and stay up 100% of the time. This is further compounded by the fact that organizations are becoming more transparent, allowing customers direct views of internal processes via the web and mobile devices. The organizations that can adapt to this new playing field are the ones that set themselves up for success tomorrow.

The Reactive Manifesto takes many existing patterns and concepts and packages them together into a cohesive set of terms; together they describe reactive applications. Version one of the manifesto addresses four design characteristics of reactive applications: they are event-driven, scalable, resilient and responsive.
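
To make the event-driven trait a little more concrete, here is a minimal sketch of the idea in Python. It is purely illustrative: the EventBus and OrderPlaced names are hypothetical and not taken from the manifesto, and a real reactive system would typically rely on an asynchronous message broker or actor framework rather than this simple in-process loop.

```python
# Minimal, illustrative event-driven sketch: components communicate by publishing
# and subscribing to events instead of calling each other directly.
# EventBus and OrderPlaced are hypothetical names used only for this example.
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable, DefaultDict, List, Type


@dataclass
class OrderPlaced:
    order_id: str
    amount: float


class EventBus:
    """Routes each published event to the handlers subscribed to its type."""

    def __init__(self) -> None:
        self._handlers: DefaultDict[Type, List[Callable]] = defaultdict(list)

    def subscribe(self, event_type: Type, handler: Callable) -> None:
        self._handlers[event_type].append(handler)

    def publish(self, event: object) -> None:
        # The publisher does not know (or wait on) who consumes the event.
        for handler in self._handlers[type(event)]:
            handler(event)


if __name__ == "__main__":
    bus = EventBus()
    bus.subscribe(OrderPlaced, lambda e: print(f"Billing order {e.order_id}"))
    bus.subscribe(OrderPlaced, lambda e: print(f"Notifying the warehouse about {e.order_id}"))
    bus.publish(OrderPlaced(order_id="42", amount=99.90))
```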


Threat Defence of the Future

As with the rise of virtualization, many organizations find themselves asking the question: “How secure is my data?”

Each time we read a ZDNet article or Gartner report, there are new innovations in technology that expose endpoints in radical ways. Prominent players in the market are bringing forth innovative ways of securing information. But one must ask: “How secure is the evidently secure?”

I believe the evolution of cloud technologies has created a new market for both consumers and delinquents to share information. The real challenge is ensuring we identify, suspend and correct any malicious behaviours.

This has led me to one decision: “Stop all online activity”… No, it’s not that scary.

It does mean that we need to be aware of how we transmit and store our data. Ever take the time to read the end user agreement for that new free application you just have to have? Or scroll down and read the terms and conditions of the new plugin for your favourite web browser? I hadn’t, but today I find myself reading the fine print for every piece of technology that may or may not directly affect my information and data, and ultimately my online presence.

So why go through the hassle of reading it all and reviewing every setting? I once failed to read the end user license agreement of a random plugin put forth by some web app. Suddenly new pop-ups and requests started appearing on my smart device, and my kid called me to say, “Dad, there’s a piece of software popping up on the home laptop.” What I found was that, through my single sign-on and sync-across-all-my-devices app, I had exposed not only my business environment but also my home environment with one click! I quickly rectified access to my online presence and proceeded on a quest to “clear my name”.

It is evident that the evolution of technology requires an equally evolving threat defence capability: one that is able to scale, evolve, convert, elevate and expose any loopholes within my information stack.

There are service providers out there who offer data leak prevention solutions alongside web security techniques, and they are indeed protecting day-to-day services. There are developments in risk awareness and security that are pushing service providers to go beyond threat defence and look into advanced mechanisms to cope with this evolution…

Stay tuned for more on Threat Defence and our views on security, followed by more tips and tools to enable you…

Disaster Recovery as a Service


“Enabling Business Continuity by flipping a switch.”

During the period preceding Y2K, many organizations realized the importance of having a “backup plan”. Most embarked on journeys to outline, articulate and invest in Business Continuity Plans (BCPs). The technically inclined pursued research to uncover simpler, faster and better ways to enable BCP in the face of disaster.
So where are we today? How are we successfully securing our shared interests, both internally and for our stakeholders?
Disaster recovery planning seems to be the answer. And with the rise of WebSocket technology enabling the Internet of Things, “keeping the lights on” has become critical to most.
How do we keep the lights on, continue to grow our business and modernize our existing infrastructure?
We say: by utilizing service providers and technologies that offer as-a-service models. We say Disaster Recovery as a Service.
To understand and enable Disaster Recovery as a Service, let’s step back in time to the legacy techniques that were used…

I have a “server room” with 10 physical servers, and should there be a man-made or natural disruption, I will relocate to a geographically separate site (>150 km) from my production site.
The pros of this approach:
1. All my servers are backed up from production, and restores are performed at the failover / disaster recovery site.
2. There is no performance degradation, as I have 10 servers in my failover site.
The cons of this approach:
1. It is going to take some time to conduct those restores; based on the amount of data I have, probably around 24 hours.
2. The initial purchase of all 10 servers for the failover site would be costly.

Today, with the ever-evolving enhancements in technology, we have simpler solutions. We still have our 10 physical servers, and should there be a man-made or natural disruption, I will relocate my servers to the cloud via my DR-as-a-Service enabling technology.
The pros of this approach:
1. All my servers are replicated from production to the cloud.
2. There is no performance degradation, as I get 10 instances scaled exactly as my servers are.
3. It is only going to take roughly 1 hour to get my servers up and running (my Recovery Time Objective, or RTO), because my data is replicated asynchronously to a recent point in time (my Recovery Point Objective, or RPO); see the sketch below.
4. The initial cost will vary based on the organization’s BCP.
The cons of this approach:
1. You as an organization need to trust that the service provider is able to meet your RPO and RTO objectives.
DR as a Service brings a diverse approach to protection: small to mid-sized companies are no longer without a model to protect their core systems, and larger companies now have a means to balance the cost of physical failover for physically bound systems against other virtualized systems that can be failed over to the cloud.
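
To make the RPO and RTO objectives above a little more tangible, here is a small sketch in Python of how one might check asynchronous replication and failover times against them. The thresholds, timestamps and function names are assumptions for illustration only, not defaults from any particular DR-as-a-Service provider.

```python
# Hypothetical sketch: checking replication and failover against RPO/RTO targets.
# The 15-minute RPO and 1-hour RTO below are illustrative assumptions.
from datetime import datetime, timedelta, timezone
from typing import Optional

RPO = timedelta(minutes=15)  # maximum tolerable data loss (age of last replicated point in time)
RTO = timedelta(hours=1)     # maximum tolerable time to bring services back online


def rpo_met(last_replicated_at: datetime, now: Optional[datetime] = None) -> bool:
    """True if the newest replicated point in time is recent enough to satisfy the RPO."""
    now = now or datetime.now(timezone.utc)
    return (now - last_replicated_at) <= RPO


def rto_met(failover_started_at: datetime, services_online_at: datetime) -> bool:
    """True if the failover completed within the RTO."""
    return (services_online_at - failover_started_at) <= RTO


if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    print("RPO met:", rpo_met(now - timedelta(minutes=5), now))    # 5 minutes of data exposure
    print("RTO met:", rto_met(now - timedelta(minutes=50), now))   # 50-minute failover
```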

So what can I do with this Cloud Computing POWER?

I am now able to recover from disaster faster and more accurately. I am able to save valuable pennies for the small to mid-sized companies I support. I am able to utilize the strengths of cloud computing without the hassle of tiresome admin. I am able to extend my network from the inside out with all the security necessary. I am able to scale up and down as my business grows and shrinks. I am able to rest assured that, within a fixed period of time, my service provider will get my business back online.

I choose Disaster Recovery as a Service for my business; I choose to remain online whatever the event. I choose that, should my dedicated annual “server room” clean-up go pear-shaped, I’ll fail over to the cloud. More importantly, I choose to be part of a trend that is going to drive business continuity UP and revenue losses DOWN.

Watch this space for an outlook on modernization, and how we see the end of Server 2003 amongst other sunsetting technologies…
