
The 4 Defining Principles of ‘Next Architecture’


Organizations looking to meet heightened user expectations and support dynamic, high volume business processes increasingly look to a combination of cloud, containers, orchestration, and decomposition as their directional architecture—a shift called Next Architecture. The Next Architecture components all work together to support building the distributed systems that provide the flexibility, scale, and resilience required to meet these new demands and expectations.

A primary tenet of Next Architecture is that software architecture should be agile, resilient, and tolerant of change. Think of software upgrades as akin to changing the tires on a moving car—or rather a car designed to have its wheels changed while in motion. Next Architecture approaches software architecture as a locus of continuous transformation.

The core components and practices that define Next Architecture—cloud computing, container virtualization, orchestration, microservices, and serverless or functions-as-a-service (FaaS) computing—have become more widely used over the last few years. One of the foundational principles, decomposition—breaking up large tasks into granular, function-specific units that can be combined, upgraded in place, and integrated to create highly complex, dynamic systems—is well understood, even if it is difficult to implement.

To start reaping the benefits Next Architecture promises, organizations must first acknowledge the systems and practices that make this concept possible. This article explores the four principles of Next Architecture—decomposition, cloud, containers, and orchestration—so enterprises can future-proof their technology stack.

  1. Decomposition

The cornerstone of Next Architecture is the concept of decomposition—the idea of breaking things into small, loosely coupled components. To get an idea of how this works, think of the hundreds or thousands of pieces in a Lego set. Most pieces are generic, a few perform specific functions, but they can all be connected via a common interface. The Lego kit can be used to build what the set specifies or to create any form or object the architect can imagine. The pieces are reusable and reconfigurable.

Decomposition is vital to Next Architecture because it makes it easier to rapidly reconfigure an application or system. For example, adding a new feature to or changing the behavior of an existing application becomes a matter of building one or more new services—or of exploiting already deployed services. This is integral to the flexibility requirement at the heart of Next Architecture. It’s what makes it possible to add new capabilities or services without impacting an entire application. Like Legos, it gives you the freedom to customize components or services that you can join together into a bigger app or platform, whether it be for customers, partners, internal employees, or other consumers.
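As a minimal sketch of the idea (all function names and data shapes here are hypothetical, not from any particular framework), a decomposed application is just small, single-purpose units joined through a common interface, where any one unit can be swapped out without touching the others:

```python
# Minimal sketch of decomposition: small, single-purpose units
# composed behind a common interface. All names are hypothetical.

def validate_order(order: dict) -> dict:
    """Function-specific unit: checks the order has the required fields."""
    if "item" not in order or order.get("quantity", 0) <= 0:
        raise ValueError("invalid order")
    return order

def price_order(order: dict, unit_price: float = 9.99) -> dict:
    """Another unit: attaches a total, knowing nothing about validation."""
    return {**order, "total": round(order["quantity"] * unit_price, 2)}

def process_order(order: dict) -> dict:
    """The 'application' is a composition of the pieces; each one can be
    replaced or upgraded in place without impacting the rest."""
    return price_order(validate_order(order))

result = process_order({"item": "widget", "quantity": 3})
```

In a real microservices deployment each unit would sit behind a network interface (for example, an HTTP endpoint) rather than a direct function call, but the composition principle is the same.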

The most popular implementation of this concept is microservices, although serverless architecture is an even more aggressive take on decomposition. The availability of FaaS-oriented offerings such as AWS Lambda, Google Firebase, IBM Bluemix OpenWhisk, and Azure Functions makes serverless computing a viable complement to microservices.
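A FaaS function is typically just a stateless handler invoked once per event. The sketch below follows the AWS Lambda Python handler shape (an event dict in, a response dict out); the payload field used here is a made-up example:

```python
import json

# Sketch of a serverless function in the AWS Lambda Python handler
# shape: a stateless function invoked per event, running in an
# ephemeral environment. The "name" payload field is hypothetical.

def handler(event, context):
    """Holds no state between invocations; scales by spawning more copies."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Invoking the handler directly, as the platform would per event:
response = handler({"name": "next-arch"}, None)
```

Other FaaS platforms use different handler signatures, but the model is the same: the provider manages the servers, and your unit of deployment is the function itself.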

  2. Cloud

Organizations expect to be able to scale resources and the software that runs on them elastically, so they can grow or terminate instances of applications, systems, or services as needed. This entails the use of three commodity technologies: cloud, containers, and orchestration. Fortunately, the growing traction of components like Docker and Kubernetes eases the transition to cloud-native implementations for open source developers.

To accommodate the scaling required to spin up or down services on demand, the distributed architecture model of the cloud is required. Next Architecture is about building systems that can be more fluid and adaptable, and the cloud enables capacity to scale with demand. By decoupling compute, storage, and networking connectivity from one another, cloud infrastructure permits granular capacity scaling. In the context of Next Architecture, this means you can put things where you want, quickly gain resources and give up those resources when you don’t need them, and analyze what you’re doing. Every single part of this is responsive.

  3. Containers

Containers facilitate the portability required for Next Architecture. They provide a lightweight path to the modularity necessary for decomposition and cloud deployment. Container technology like Docker makes automating the deployment of microservices that much easier: all the necessary development and runtime resources are consolidated into a single package that can be deployed portably to run anywhere, including on any cloud platform. The emergent serverless computing models make use of containers, too. But whereas container virtualization is platform- or service-agnostic, serverless computing is dominated by platform-as-a-service (PaaS) offerings.
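As a hypothetical illustration of that single-package idea (the file names and base image are assumptions, not from the article), a Dockerfile for a small Python microservice might look like this:

```dockerfile
# Hypothetical Dockerfile for a small Python microservice: the code
# and its runtime dependencies travel together in one portable image.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# The same image runs unchanged on a laptop or on any cloud platform.
CMD ["python", "service.py"]
```

Building this once (`docker build`) yields an image that any container runtime can start, which is what makes containers the portability layer beneath both microservices and serverless platforms.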

In the serverless model, code executes in ephemeral containers that are spawned as needed and terminated when they’re no longer needed. In the conventional container model, a management daemon or service (usually Kubernetes) is used to orchestrate the starting and stopping of (as well as the scheduling of dependencies between and among) containers; in a serverless scheme, usually some variant of a cloud-specific service, AWS Step Functions for example, is used to provide similar capabilities.

  4. Orchestration

Deploying Next Architecture systems requires tooling to control, monitor, and manage how the various components and services that make up the system work together. This is where orchestration comes in. Orchestration tools like Kubernetes can manage the thousands or tens of thousands of components deployed in large-scale distributed systems.

Kubernetes is one of the fastest-growing topics on O’Reilly Online Learning. This is not a tool one takes for a casual spin—those engaged with the topic are likely implementing Kubernetes in some context, providing evidence that organizations are building large-scale, distributed, Next Architecture systems.
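To make the orchestration idea concrete, here is a hypothetical Kubernetes Deployment manifest (the service name and image are invented for illustration): you declare the desired state, and the orchestrator continuously works to keep the running system matching it.

```yaml
# Hypothetical Kubernetes Deployment: declares the desired state
# (three replicas of one decomposed service) and lets the
# orchestrator keep reality matching that declaration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
        - name: order-service
          image: registry.example.com/order-service:1.0.0  # hypothetical image
          ports:
            - containerPort: 8080
```

If a replica dies, Kubernetes replaces it; changing `replicas` scales the service up or down. This declarative control loop is what lets one team manage thousands of components.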

The demand for orchestration capabilities in the serverless cloud suggests that people are exploiting FaaS for more than one-off uses. In other words, some users are creating the kinds of complex and interdependent flows that are characteristic of applications. Another related sign is a spike in interest in Knative, a Kubernetes-based platform optimized for serverless computing. Still more evidence comes via the availability of FaaS orchestration services, such as AWS Step Functions, Azure Logic Apps, or Sequences in IBM’s Bluemix OpenWhisk.

Today, many organizations are already thinking about and developing software in consonance with Next Architecture’s foundational priorities and principles, even if they are not consciously doing so. In adopting software development practices and methods that emphasize decomposition, they’re producing software that is highly adaptable to change. In decomposing core infrastructure services into function-specific services, they’re designing flexible, adaptable infrastructure software, too. In designing function-specific microservices that are optimized to address customer-focused priorities, they’re addressing both the essential purpose (supporting and enabling essential business processes) and the overriding goal (customer satisfaction) of software architecture.

While Next Architecture comes with challenges—training/hiring, culture, distributed data integrity, new cost regimes, migration, managing complexity, and keeping up with new tools and technologies, to name a few—the benefits far outweigh the hurdles. The result is a software architecture that is flexible and adaptable, and that costs less to build and maintain. It gives organizations the capacity to respond to dynamic conditions by spawning only the resources required to handle a given load at any given time. Efficiencies such as increased speed of development are another advantage.

Next Architecture doesn’t apply just to new systems, either. Decomposition allows new features to be added to legacy systems as microservices, helping enterprise organizations migrate legacy applications to meet new needs. Over the next year, we can anticipate a new layer of expectations around what software and software architecture can deliver—both from the customer’s perspective and from the perspectives of the people charged with deploying new products and services.
