One of the key lessons learned from COVID-19 is the importance of having a flexible IT environment that can be quickly adapted to changes in business needs. Organizations that were more advanced in their transition to modern digital platforms were better equipped to continue operations amidst lockdowns, social distancing, remote work, and supply chain disruptions.
But let’s not fixate only on pandemics. The same lesson, about the need to remain constantly flexible, has been learned as we’ve progressed from one technological advancement to another. As IT and communications environments have evolved, applications have advanced in ways that could not have been foreseen a generation ago. Can anyone predict which applications (those that take advantage of 5G, for example) will make the world spin in 2030?
Building a resilient and agile IT environment that easily adapts to change requires the flexibility to constantly update and upgrade in step with shifting needs and opportunities. It also requires organizations to rebuild their IT infrastructures around agile development methodologies so they can easily release and update their applications.
At Your (Micro) Service
To support agility, and to exploit the tremendous potential of cloud-native applications, many organizations have adopted a microservice-based architecture. Microservices break an application into discrete functions, each of which can be developed, deployed, and managed independently.
Containers are particularly important as enablers of these flexible microservice-based application architectures. By packaging microservices into containers, organizations can update and scale each microservice independently, with no disruption to the others. If and when one part of the application fails, or needs updating, the other parts are unaffected.
However, things can get complicated pretty fast when you need to deploy and manage an application across multiple machines. Developers must handle scheduling, resource allocation, and other processes for each microservice.
Kubernetes: Complicated, but Manageable
A number of solutions, of varying levels of complexity, were developed to meet the growing need to control these containers. Over time, Kubernetes, originally created by Google, emerged from the pack to become the de facto standard for automating and orchestrating previously manual container processes, such as deployment, management, and scaling. Kubernetes monitors containers 24/7 and ensures that they, and your application, are running optimally.
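To make this desired-state model concrete, here is a minimal sketch of a Kubernetes Deployment manifest (the service name and image are hypothetical). The developer declares how many replicas of a containerized microservice should exist, and Kubernetes continuously reconciles reality against that declaration, restarting or rescheduling containers as needed:

```yaml
# Hypothetical example: declaring the desired state of one microservice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-service          # hypothetical microservice name
spec:
  replicas: 3                     # Kubernetes keeps three copies running 24/7
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
    spec:
      containers:
      - name: checkout
        image: example.com/checkout:1.0   # hypothetical container image
        resources:
          requests:
            cpu: "250m"           # scheduling hints for resource allocation
            memory: "256Mi"
```

Applying an updated manifest, for example with a new image tag, triggers a rolling update of this one microservice without touching the rest of the application.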
All systems go? Not so fast. Although its ability to orchestrate containers may seem like magic to some, Kubernetes introduces its own complexities, and its administration can become a time- and labor-consuming effort. Kubernetes may, and frequently does, determine that physical resources need to be provisioned, configured, and updated. If no one is “listening” to these signals and taking action, your application will get derailed.
The complexity of managing resource-heavy cloud-native applications is therefore driving organizations that want to focus on application development — and not on managing infrastructure — to run their Kubernetes deployments through third-party cloud-based managed Kubernetes services. A managed Kubernetes service is easy to use and ensures that the desired states, as requested by Kubernetes, are automatically implemented on the physical resources.
Salvation is Not Always in the Public Cloud
However, while these managed services ease the burden of managing Kubernetes clusters and their underlying infrastructure, developers still face a challenge that stems from the very nature of the public cloud: its computing resources are located in mega data centers. Services delivered from the public cloud may suffer from latency in the common scenario where data is transmitted from an end device to the cloud to be analyzed and then returned to the end device.
Low latency and high throughput are becoming increasingly critical as we progress to cloud-native applications with strict response-time requirements, such as autonomous vehicles, drones, telemedicine, and robotics. These can’t be effectively supported by a cloud many miles away. Furthermore, many applications consume significant bandwidth, further reducing performance and increasing networking costs.
Data sovereignty is another challenge: Over 100 countries have laws that require data and processing to be resident in-country. As workloads with user data are deployed around the globe, it is becoming increasingly difficult to meet each market’s unique data locality requirements.
Of course, an enterprise can circumvent these problems with a DIY solution: bring the compute workloads to a local data center, in proximity to end-users. Assuming that the enterprise would want to dedicate an entire IT staff to managing that single location, it could work. Some of the time.
At the Edge of Something Great
To address this challenge, a new cloud paradigm has emerged: the distributed cloud. The distributed cloud brings computing resources to the network edge, closer to end-users. Such a cloud can be built upon a federation of thousands of data centers and cloud providers, at the resolution of a region or even a metropolitan area.
The distributed cloud combines the benefits of two worlds: the agility of the public cloud and the high performance of private infrastructure. With it, application developers can deliver modern workloads locally, through managed web services, from a globally distributed data center network. Heterogeneous infrastructure becomes a homogeneous cloud computing platform, which can then be leveraged to support the delivery of cloud services in proximity to end-users.
The distributed cloud enables developers to describe their required resources as they deploy their Kubernetes clusters. It will then automatically move workloads to the local data center(s) that will provide optimal performance. And as a managed Kubernetes service, the distributed platform will adjust workloads by automatically spinning up computing instances wherever needed.
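As an illustration of describing placement requirements, on a Kubernetes cluster whose nodes carry the standard topology labels, a developer could pin a latency-sensitive workload to a particular region with a node affinity rule. The region value below is hypothetical, and a distributed-cloud platform would typically make such placement decisions automatically rather than requiring them to be hand-written:

```yaml
# Hypothetical pod-spec fragment: require scheduling onto nodes
# labeled as belonging to a specific (edge) region.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/region   # standard well-known node label
          operator: In
          values:
          - eu-central-munich                  # hypothetical edge-region name
```

The same declarative mechanism that keeps replicas running also governs where they run, which is what lets a distributed platform steer workloads toward the data centers closest to end-users.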
The developer can then focus all of his or her attention on application development.
From Here to There: Cloud Nativity, Managed Kubernetes & the Distributed Cloud
For developers, the big promise in cloud computing was the abstraction of infrastructure complexities that freed them to focus on writing great code. However, today’s containerized, microservices-based cloud-native applications are so complex that developers often find themselves spending more time dealing with infrastructure configuration than with coding.
As the de facto standard for container orchestration, Kubernetes plays an essential role as an enabler of cloud-native application deployments, offering unprecedented flexibility in moving workloads between environments. However, the full potential of mission-critical cloud-native applications, often with strict latency or throughput requirements, cannot be realized until they can be deployed anywhere to ensure superior performance.
By offering an addition to the public cloud model, the massively distributed cloud model enables developers to seamlessly deploy and infinitely scale their applications anywhere, utilizing a global network of service providers instead of relying on the availability of compute resources in a specific location. Through managed Kubernetes services, even the most complex, resource-intensive applications can become cloud-native.
It’s not just about speed and agility: by enabling the full potential of cloud-native applications, the distributed cloud is changing the way businesses think about growing. For them, cloud-nativity is about leveraging the cloud so that innovation can take place at the speed of thought, without being limited by infrastructure.
Join the cloud native community at KubeCon + CloudNativeCon Europe 2021 – Virtual from May 4-7 to further the education and advancement of cloud native computing.