
Difficult to Orchestrate Multiple Clusters? Try Karmada


Kubernetes and cloud native technologies are gaining popularity, and more and more enterprises deploy multiple clusters to run their applications. However, multi-cluster deployments bring cluster management and O&M challenges, such as repeated cluster configuration and complex maintenance.

Use Karmada for Easy Multi-Cluster Container Orchestration

Karmada is a container orchestration system that supports automated management of multi-cluster applications in multi-cloud and hybrid cloud scenarios. Karmada combines the core practices of HUAWEI CLOUD Multi-Cloud Container Platform (MCP) and Kubernetes Federation, and integrates new technologies, including Kubernetes native APIs, multi-layer high-availability deployment, automatic failover across clusters, auto scaling of multi-cluster applications, and multi-cluster service discovery. With Karmada, you can evolve smoothly from a native, single-cluster Kubernetes architecture to multi-cloud, cloud native deployments.

Karmada packages its capabilities into modules and offers five key features:

  1. Compatible with Kubernetes native APIs

Enterprises can upgrade smoothly from a single-cluster architecture to a multi-cluster (multi-cloud) architecture without reworking their existing application configurations or infrastructure.

  2. Out-of-the-box usability

Karmada provides built-in policies for multiple scenarios, including two-site three-data-center, intra-city active-active, and remote disaster recovery (DR) deployments. It also supports auto scaling, failover, and load balancing across clusters.

  3. Centralized management

Karmada allows you to manage public cloud, private cloud, and edge clusters in all your service regions.

  4. Various multi-cluster scheduling policies

Karmada provides multi-cluster affinity scheduling, cross-cluster application splitting, and resource rebalancing. Applications can be deployed for high availability (HA) across regions, availability zones (AZs), clusters, and cloud vendors.

  5. Openness and neutrality

Karmada was jointly initiated by Internet, finance, manufacturing, and telecom companies, as well as cloud vendors. Its governance follows the same open model as the CNCF, and there are no usage restrictions or vendor lock-in.

In terms of architecture, the Karmada control plane consists of an API server (Karmada API Server), a controller manager (Karmada Controller Manager), and a scheduler (Karmada Scheduler), and it provides controllers for clusters, policies, binding, and execution.
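
To make this concrete, here is a minimal sketch of how an unmodified Kubernetes Deployment could be distributed to two clusters by applying a Karmada PropagationPolicy to the Karmada API Server. The names (nginx, member1, member2) and the weights are placeholders for illustration, not taken from this article:

    # A plain Kubernetes Deployment -- no Karmada-specific fields required.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 4
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
            - name: nginx
              image: nginx:1.21
    ---
    # Karmada PropagationPolicy: declares which clusters receive the Deployment
    # and how its replicas are divided among them.
    apiVersion: policy.karmada.io/v1alpha1
    kind: PropagationPolicy
    metadata:
      name: nginx-propagation
    spec:
      resourceSelectors:
        - apiVersion: apps/v1
          kind: Deployment
          name: nginx
      placement:
        clusterAffinity:
          clusterNames:
            - member1
            - member2
        replicaScheduling:
          replicaSchedulingType: Divided
          replicaDivisionPreference: Weighted
          weightPreference:
            staticWeightList:
              - targetCluster:
                  clusterNames: [member1]
                weight: 1
              - targetCluster:
                  clusterNames: [member2]
                weight: 1

Because the Deployment stays a native Kubernetes object, existing manifests and tooling keep working; only the PropagationPolicy is Karmada-specific.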

Karmada and ICBC: Enhanced Multi-Cluster Banking Services

Banking services are being migrated to the cloud. At Industrial and Commercial Bank of China (ICBC), banking services were once deployed across a financial ecosystem cloud, a branch cloud, an application platform cloud, and an infrastructure cloud. There were many types of clusters, Kubernetes performance was limited, and services were spread across too many sites and fault domains. ICBC tried containerizing its cloud management platform, but challenges remained: for example, cross-cluster auto scaling, scheduling, and failover were not supported, and users had to decide for themselves which cluster should run their applications.

ICBC solved these challenges by using Karmada, a multi-cloud container orchestration and management system, to enhance resource scheduling, DR, cluster management, and resource management.

  • Resource scheduling: Karmada supports custom cross-cluster scheduling, automatic selection of the clusters that run an application, and two types of resource binding scheduling.
  • DR: Karmada supports dynamic binding adjustment, so ICBC can automatically distribute resource objects based on cluster labels or fault domains (see the sketch after this list).
  • Cluster management: ICBC uses Karmada for cluster registration, full lifecycle management, and unified, standard APIs.
  • Resource management: With Karmada, ICBC gets support for native Kubernetes objects, visibility into the deployment status of resources in member clusters, and resource object distribution in both pull and push modes.
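
As a rough sketch of the fault-domain-based distribution mentioned above (the policy and workload names are hypothetical, and the group counts are illustrative assumptions rather than ICBC's actual configuration), a PropagationPolicy can use spread constraints to keep an application spread across regions:

    # Require the selected workload to be spread across at least two regions
    # (fault domains); Karmada groups candidate clusters by their region field.
    apiVersion: policy.karmada.io/v1alpha1
    kind: PropagationPolicy
    metadata:
      name: payment-dr-policy      # hypothetical name
    spec:
      resourceSelectors:
        - apiVersion: apps/v1
          kind: Deployment
          name: payment            # hypothetical workload
      placement:
        spreadConstraints:
          - spreadByField: region
            minGroups: 2
            maxGroups: 2

Distribution by cluster label works similarly, using a label selector in the placement's cluster affinity instead of a built-in spread field.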

ICBC is also experimenting with Karmada for cross-cluster scheduling based on cluster resources and volumes, cross-cluster scaling and fault recovery, and high availability.

Karmada and VIPKid: Smoother Online Education for More Students

VIPKid is an online English learning platform for children and teenagers. As its business grew, VIPKid needed to control costs flexibly, support efficient and fast project releases, and develop technologies such as AI computing, multi-language stacks, and audio/video encoding and decoding.

With Karmada, VIPKid successfully containerized its stateless applications. More than 80% of its services run independently without requiring O&M personnel. AI offline computing costs under the existing management system were cut by 43%, and online service costs by 10% to 70%. In addition, containerized audio and video services can be deployed quickly: the time needed to deploy and roll out the entire system dropped from 3 to 5 days to 3 to 5 hours.

Along the way, VIPKid achieved fast product iteration and support for multiple technology stacks by refactoring into microservices, decoupling storage, and keeping state data in external services. Multiple release modes are supported, such as blue-green deployment and grayscale release, and multi-cluster management flattens the learning curve for R&D and O&M engineers.

For auto scaling, VIPKid leverages cloud native technologies and combines the native HPA with CronHPA to upgrade its AI services, so that computing jobs can be executed on any cloud and costs are effectively reduced. For audio and video services, the scenario-based solution eliminates the poor fit these workloads had on VMs and resolves unbalanced persistent connections.
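
For the metric-driven half of that combination, a minimal sketch of a native HPA is shown below. The Deployment name ai-inference, the replica bounds, and the CPU target are illustrative assumptions, and the cron-based CronHPA manifest is omitted because its schema depends on the controller in use:

    # Native HorizontalPodAutoscaler: scales the Deployment on CPU utilization.
    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: ai-inference-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: ai-inference
      minReplicas: 2
      maxReplicas: 20
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70

In a setup like this, the cron-based scaler typically raises the replica floor ahead of predictable peaks (such as class times), while the HPA absorbs the remaining load variation.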

Cloud-native and multi-cluster services are attracting more and more attention. Combined with big data and AI, these services can be more intelligent and efficient. Karmada is here to help more enterprises go cloud-native more easily.

More info about Karmada:

GitHub

Slack


Author: Kevin Wang, Lead of Cloud Native Open Source Team, HUAWEI CLOUD

Bio: Kevin Wang is a senior maintainer in the Kubernetes community, a co-founder of Karmada, and a CNCF TOC contributor. He has in-depth insights into cloud native technologies and open source ecosystems.

To hear more about cloud native topics, join the Cloud Native Computing Foundation and the cloud native community at KubeCon+CloudNativeCon North America 2021 – October 11-15, 2021.