Niraj Bhatt – Architect's Blog

Ruminations on .NET, Architecture & Design

Evolution of software architecture

Software architecture has been an evolutionary discipline, moving from monolithic mainframes to today's microservices. It's easier to understand these software architectures from an evolution standpoint than to grasp each of them independently. This post is about that evolution. Let's start with mainframes.

The mainframe era was one of expensive hardware: a powerful server capable of processing large volumes of instructions, with clients connecting to it via dumb terminals. This evolved as hardware became cheaper and dumb terminals paved the way to smart terminals. These smart terminals had reasonable processing power, leading to client-server models. There are many variations of the client-server model around how much processing the client should do versus the server. For instance, the client could do all the processing, with the server acting merely as a centralized data repository. The primary challenge with that approach was maintenance: pushing client-side updates to all the users. This led to browser clients, where the UI is essentially rendered by the server in response to an HTTP request from the browser.

Mainframe Client Server

As the server took on multiple responsibilities in this new world, like serving the UI, processing transactions, and storing data, architects broke down the complexity by grouping these responsibilities into logical layers: UI layer, business layer, data layer, etc. Specific products emerged to support these layers, such as web servers and database servers. Depending on the complexity, these layers were also physically separated into tiers. The word tier indicates a physical separation, where the web server, database server, and business processing components run on their own machines.

3 Tier Architecture
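
As a rough illustration, here is a minimal C# sketch of that layering; the class names are hypothetical, and in a tiered deployment the data access and business pieces would sit on their own machines:

```csharp
using System.Collections.Generic;

// Data layer: talks to the database server.
public class CustomerDataAccess
{
    public List<string> GetCustomerNames() =>
        new List<string> { "Alice", "Bob" }; // stand-in for a real query
}

// Business layer: applies rules on top of the data layer.
public class CustomerBusinessLogic
{
    private readonly CustomerDataAccess _dataAccess = new CustomerDataAccess();

    public List<string> GetActiveCustomers() => _dataAccess.GetCustomerNames();
}

// UI layer: renders whatever the business layer returns.
public class CustomerPage
{
    private readonly CustomerBusinessLogic _logic = new CustomerBusinessLogic();

    public void Render()
    {
        foreach (var name in _logic.GetActiveCustomers())
            System.Console.WriteLine(name);
    }
}
```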

With layers and tiers around, the next big question was how to structure them: what are the ideal dependencies across these layers, so that we can manage change better? Many architecture styles showed up as recommended practice, most notably Hexagonal (ports and adapters) architecture and Onion architecture. These styles were aimed at supporting development approaches like Domain Driven Design (DDD), Test Driven Development (TDD), and Behavior Driven Development (BDD). The theme behind these styles and approaches is to isolate the business logic, the core of your system, from everything else. Not having your business logic depend on the UI, database, web services, etc. allows for more participation from business teams, simplifies change management, minimizes dependencies, and makes the software easily testable.
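
Here is a minimal sketch of the ports-and-adapters idea, using a hypothetical invoicing domain: the core defines the port it needs, and infrastructure code adapts a concrete technology to that port, so the business logic never references the database directly.

```csharp
// Port: defined by the domain core, expressed purely in business terms.
public interface IInvoiceStore
{
    void Save(Invoice invoice);
}

// Core business logic: depends only on the port, not on any database or UI.
public class InvoicingService
{
    private readonly IInvoiceStore _store;
    public InvoicingService(IInvoiceStore store) => _store = store;

    public void Issue(Invoice invoice)
    {
        invoice.MarkIssued();   // business rule lives here
        _store.Save(invoice);
    }
}

public class Invoice
{
    public bool Issued { get; private set; }
    public void MarkIssued() => Issued = true;
}

// Adapter: lives outside the core and plugs a concrete technology into the port.
public class SqlInvoiceStore : IInvoiceStore
{
    public void Save(Invoice invoice)
    {
        /* map the invoice to SQL and persist it; an in-memory fake
           implementing IInvoiceStore makes the core trivially testable */
    }
}
```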


The next challenge was scale. As compute became cheaper, technology became a way of life, causing disruption and challenging the status quo of established players across industries. The problems are different: we are no longer talking of apps that are internal to an organization, or mainframes where users are OK with longer wait times. We are talking of a global user base expecting sub-second response times. The simpler approaches to scale were better hardware (scale up) or more hardware (scale out). Better hardware is simple but expensive; more hardware is affordable but complex. More hardware meant your app would run on multiple machines, and user data would be distributed across those machines. This leads us to the famous CAP (Consistency, Availability and Partition tolerance) theorem. While there are many articles on CAP, it essentially boils down to this: network partitions are unavoidable and we have to accept them, which requires us to choose between availability and consistency. You can choose to be available and return stale data, being eventually consistent, or you can choose to be strongly consistent and give up on availability (i.e. return an error for the missing data, e.g. when you read from a different node than the one where you wrote your data). Traditional database servers are consistent and available (CA) with no tolerance for partition (a single active DB server catering to all requests). Then there are NoSQL databases with master-slave replication, configurable to support strong consistency or eventual consistency.

CAP Theorem
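
To make the trade-off concrete, here is a small hypothetical sketch (not any real product's API) of a replica in a partitioned store: an AP-leaning read returns whatever the local replica has, possibly stale, while a CP-leaning read refuses to answer when the replica cannot confirm it has the latest committed write.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical replica node of a partitioned key-value store.
public class ReplicaNode
{
    private readonly Dictionary<string, string> _data = new Dictionary<string, string>();
    private long _localVersion;               // how far this replica has caught up
    public long ClusterCommittedVersion;      // highest version acknowledged by the cluster

    // Replication applies writes to this node, possibly with a delay.
    public void Apply(string key, string value, long version)
    {
        _data[key] = value;
        _localVersion = version;
    }

    // AP-style read: always answer, even if this replica is behind (stale data).
    public string ReadEventuallyConsistent(string key) =>
        _data.TryGetValue(key, out var value) ? value : null;

    // CP-style read: give up availability rather than return stale data.
    public string ReadStronglyConsistent(string key)
    {
        if (_localVersion < ClusterCommittedVersion)
            throw new InvalidOperationException("Replica is behind; retry on another node.");
        return _data.TryGetValue(key, out var value) ? value : null;
    }
}
```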

Apart from scale challenges, today's technology systems often have to deal with contention, e.g. selecting an airline seat, or a heavily discounted product that everyone wants to buy on Black Friday. As a multitude of users try to access the same piece of data, it leads to contention. Scaling can't solve this contention; it can only make it worse (imagine having multiple records of the same product inventory within your system). This led to specific architecture styles like CQRS (Command Query Responsibility Segregation) and Event Sourcing. CQRS, in simple terms, is about separating writes (commands) from reads (queries). With writes and reads having separate stores and models, both can be optimally designed. Write stores in such scenarios typically use Event Sourcing to capture entity state changes as events, one per transaction. Those events are then played back to the read store, making writes and reads eventually consistent. Being eventually consistent has implications and needs to be worked through with the business to keep the customer experience intact. E.g. Banana Republic recently allowed me to order an item. They took my money, and later, during fulfillment, they realized they were out of stock (that is when things became eventually consistent). They refunded my money, sent me an apology email, and offered me a 10% discount on my next purchase to show they value me as a customer. As you can see, CQRS and Event Sourcing come with their own set of tradeoffs. They should be used wisely for specific scenarios rather than as an overarching style.
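
A stripped-down sketch of the idea, with hypothetical types rather than a real framework: commands append immutable events to a write-side event store, and a projection later replays those events into a read model, which is why the read side is only eventually consistent.

```csharp
using System;
using System.Collections.Generic;

// Write side: each accepted command is recorded as an immutable event.
public record SeatReserved(string FlightId, string SeatNumber, DateTime At);

public class EventStore
{
    private readonly List<SeatReserved> _events = new();
    public void Append(SeatReserved evt) => _events.Add(evt);
    public IReadOnlyList<SeatReserved> All => _events;
}

public class ReservationCommandHandler
{
    private readonly EventStore _store;
    public ReservationCommandHandler(EventStore store) => _store = store;

    public void ReserveSeat(string flightId, string seatNumber)
    {
        // contention is resolved here, against the single write model
        _store.Append(new SeatReserved(flightId, seatNumber, DateTime.UtcNow));
    }
}

// Read side: a projection replays events into a query-optimized model.
public class SeatMapProjection
{
    public Dictionary<string, string> ReservedSeats { get; } = new();

    public void Rebuild(IEnumerable<SeatReserved> events)
    {
        ReservedSeats.Clear();
        foreach (var evt in events)
            ReservedSeats[evt.SeatNumber] = evt.FlightId; // stale until the next rebuild
    }
}
```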


Armed with the above knowledge, you are probably now wondering: can we use these different architecture styles within a single system? For instance, have parts of your system use 2-tier, other parts use CQRS, and other parts use Hexagonal architecture. While this might sound counterproductive, it actually isn't. I remember building a system for healthcare providers where every use case was so different: appointments, patient registration, health monitoring, etc. Using a single architecture style across the system was definitely not helping us. Enter microservices. The microservices architecture style recommends breaking your system into a set of services. Each service can then be architected independently, scaled independently, and deployed independently. In a way, you are now dealing with vertical slices of your layers. Having these slices evolve independently allows you to adopt a style that best fits the slice in context. You might ask: while this makes sense for the architecture, won't you just have more infrastructure to provision, more units to deploy, and more stuff to manage? You are right, and what really makes microservices feasible is the agility ecosystem comprising cloud, DevOps, and continuous delivery, which bring automation and sophistication to your development processes.
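
As one possible illustration, assuming ASP.NET Core minimal APIs (.NET 6+ with the Web SDK's implicit usings), a hypothetical Appointments service could be its own small codebase and deployment; a Patients or Monitoring service would live elsewhere, own its own store, and pick whatever internal style suits it.

```csharp
// Program.cs of a hypothetical, independently deployed Appointments service.
using System.Collections.Concurrent;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// This service owns its data; no other service reads this store directly.
var appointments = new ConcurrentDictionary<Guid, string>();

app.MapPost("/appointments", (string patientName) =>
{
    var id = Guid.NewGuid();
    appointments[id] = patientName;
    return Results.Created($"/appointments/{id}", id);
});

app.MapGet("/appointments/{id:guid}", (Guid id) =>
    appointments.TryGetValue(id, out var name) ? Results.Ok(name) : Results.NotFound());

app.Run();
```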


So does this evolution make sense? Are there any gaps in here which I could fill? As always, I look forward to your comments.

Image Credits: Hexagonal Architecture, Microservices Architecture, 3-Tier architecture, client server architecture, CQRS architecture, Onion architecture, CAP Theorem
