
Cluster API – Multi-Cloud and Hybrid-Cloud with Kubernetes

This article provides an overview of the Cluster API project and how it simplifies provisioning and maintaining Kubernetes clusters across different infrastructures and cloud providers. It draws on the experience we have accumulated working with our customers, from startups to large enterprises, and operating Kubernetes clusters at scale on Oracle Cloud Infrastructure (OCI).

What is Cluster API?

With the growth of cloud adoption and the spread of distributed environments, the Pets vs. Cattle analogy has become one of the core concepts used to describe how to build and operate cloud native applications. Pets are indispensable computing units that evolve during their lifecycle with patches and updates, while cattle are disposable units that are simply destroyed and recreated whenever they become unresponsive or need to be updated.

Kubernetes, the world’s most widely used container orchestration platform, adopts this principle to provide a declarative way to automate the deployment, scaling and management of containerized applications through APIs. Kubernetes’ abstraction layer lets you focus on higher-level services and applications that are ephemeral and infrastructure-agnostic, and therefore more flexible and portable. Wouldn’t it be cool if we could do the same with Kubernetes clusters as well?
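To make the declarative model concrete, here is a minimal Deployment manifest; the application name and image are illustrative. You declare the desired state (three replicas) and Kubernetes continuously reconciles towards it, replacing failed Pods cattle-style.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web                  # hypothetical application
spec:
  replicas: 3                      # desired state: keep three identical Pods running
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.25          # any failed Pod is simply recreated, not repaired
```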

The Cluster API project applies the same principle of declarative APIs, tooling and lifecycle automation patterns to simplify provisioning, upgrading, and operating multiple Kubernetes clusters, abstracting away the underlying infrastructure and infrastructure providers. Cluster API, as well as the clusters it manages, can be deployed in different environments, both in the cloud and on-premises, making it possible to provide a consistent experience when managing Kubernetes clusters across different cloud providers and on-premises environments.

How does it work?

OK, great! But how does this actually work?

Cluster API handles target clusters as Kubernetes resources and uses controllers to manage their state. These controllers are hosted on a management Kubernetes cluster.

Yes, you’ve got this right: it’s a cluster of clusters. There are two types of cluster:

  • Management Cluster: this is where the Cluster API framework is installed and where resources like Machines and Clusters are stored (see the sketch after this list). The management cluster also hosts the Infrastructure Providers (OCI, vSphere, AWS, and many others) in order to manage Workload Clusters in multi-cloud and hybrid-cloud deployments.
  • Workload Clusters: these are the target Kubernetes clusters where workloads (containerized applications) run. Workload Clusters are controlled by the Management Cluster and can be hosted on different platforms, in the cloud and on-premises.
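As a concrete illustration, each Workload Cluster is represented in the Management Cluster by a Cluster object, alongside companion objects such as Machines. The sketch below uses hypothetical names, and field details vary slightly between Cluster API versions and infrastructure providers:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: dev-workload-1                 # hypothetical workload cluster
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  # Control plane and infrastructure details are delegated to provider-specific objects
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: dev-workload-1-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: OCICluster                   # supplied by an infrastructure provider (CAPOCI here)
    name: dev-workload-1
```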

You can find more detailed information about Cluster API concepts, architecture, and configuration in the official documentation.

The Cluster API infrastructure providers implement the integration with different infrastructures, both in the cloud and on-premises, providing an abstraction layer over all the vendor-specific management functions and operational details. As an example, the CAPOCI provider allows the Cluster API framework to create Kubernetes clusters on Oracle Cloud.
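With CAPOCI installed on the management cluster (typically via `clusterctl init --infrastructure oci`), the Cluster object shown earlier is paired with a provider-specific OCICluster object. A minimal, illustrative sketch; the exact API version and fields depend on the CAPOCI release:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1   # newer CAPOCI releases serve a later version
kind: OCICluster
metadata:
  name: dev-workload-1
  namespace: default
spec:
  compartmentId: "ocid1.compartment.oc1..example"     # hypothetical OCI compartment OCID
```

Everything OCI-specific (networking for the cluster, exposing the API server, and so on) is handled by the provider behind this object, so the rest of the workflow stays the same across clouds.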

Multiple levels of resiliency

Going back to the animal analogy, Cluster API handles target clusters as cattle. They are immutable resources: if the configuration of a resource changes, a new resource with the latest configuration is created to replace it.
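This is easiest to see with worker nodes, which Cluster API groups in a MachineDeployment, the cluster-level analogue of a Deployment for Pods. Change the template (for example, bump the Kubernetes version) and the controller rolls out new Machines and removes the old ones instead of patching them in place. A sketch with hypothetical names; fields vary by version:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: dev-workload-1-md-0
spec:
  clusterName: dev-workload-1
  replicas: 3
  selector:
    matchLabels: null                  # Cluster API fills in the selector labels
  template:
    spec:
      clusterName: dev-workload-1
      version: v1.27.2                 # bumping this triggers a rolling replacement of Machines
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: dev-workload-1-md-0
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: OCIMachineTemplate       # provider-specific machine template (CAPOCI here)
        name: dev-workload-1-md-0
```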

We are already familiar with this approach: it is exactly how containers run on Kubernetes. That’s good news, because it means that with Cluster API we can now add one more level of resiliency for our containerized applications. Kubernetes already offers a few by treating any non-control-plane node as immutable, so each application or process can count on both pods and nodes for high availability.

Clusters managed via Cluster API are immutable as well. They are essentially handled as ephemeral resources, adding a further level of resilience by providing high availability for the Kubernetes clusters themselves, which can now be deployed on any cloud provider or on-premises, as long as an infrastructure provider exists for the target infrastructure.

Simplify Multi-Cloud and Hybrid-Cloud

You may wonder why we need Cluster API when we can already achieve multi-cluster deployments with kubeadm or other Infrastructure as Code tools.

kubeadm was designed as a bootstrapping tool to execute administrative tasks, including creating and upgrading clusters. While it can accomplish basic cluster management activities in simple scenarios, kubeadm is not the right tool for more sophisticated environments where control plane operations have many dependencies and a strict execution order that requires automation.
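In fact, Cluster API does not discard kubeadm; it automates it. The kubeadm bootstrap and control-plane providers run kubeadm on new machines and handle ordering, joining, scaling and upgrades declaratively. A sketch of a KubeadmControlPlane object, with hypothetical names and fields that vary by version:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: dev-workload-1-control-plane
spec:
  replicas: 3                          # desired number of control plane nodes
  version: v1.27.2                     # changing this drives an automated, ordered upgrade
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: OCIMachineTemplate
      name: dev-workload-1-control-plane
  kubeadmConfigSpec:
    clusterConfiguration: {}           # kubeadm settings go here when needed
    initConfiguration: {}
    joinConfiguration: {}
```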

Furthermore, when deploying at scale across multiple infrastructure providers, an additional layer of complexity comes from heterogeneous deployment mechanisms and inconsistent management APIs. Even common lifecycle tasks like creating or destroying clusters can be challenging, making multi-cloud and hybrid-cloud deployments very hard to maintain.

Luckily, Cluster API’s declarative interface and its abstraction layer bring a much simpler way to handle the cluster lifecycle across different providers. API-driven automation tasks don’t need to be specialized; they can seamlessly run everywhere because they interact with a single API set. This allows DevOps teams to define the lifecycle of a multi-cluster hybrid-cloud solution without needing to become experts in each specific environment, such as public cloud, virtual private cloud, and bare-metal environments.

Conclusion

The Cluster API project offers a cloud-native approach to multi-cluster management in multi-cloud and hybrid-cloud scenarios while avoiding vendor lock-in. It hides platform differences and inconsistencies while simplifying the design and deployment of highly available applications.

For more information, read the OCI announcement blog and the documentation.


Author: Andrea Marchesini, Developer Advocate & Director of Product Management, Oracle Cloud Developer Services
Bio: Andrea is a tech geek, passionate about Cloud and bleeding-edge technologies with a strong developer background. He is a former startup co-founder, developer, network specialist, and since 2010 he has been building and managing Cloud solutions helping partners, developer communities and customers in their cloud journey. In his spare time, he loves snowboarding, cooking, coding, and riding his dirtbike.