
Here’s Why Enterprises Should Back Up Their Kubernetes Clusters


Software containers were created to support the development and deployment of cloud-native applications. They are at the heart of cloud-native business transformation initiatives, and they represent a natural evolution from virtual machines toward a more granular and portable application environment in the cloud.

As applications grew to span multiple containers across multiple servers, open source Kubernetes emerged as the leading container orchestration platform – handling the deployment, management and scaling of container clusters. Kubernetes enables these clusters to run at scale while eliminating the infrastructure management complexities typical of cloud applications.

Even though Kubernetes has been extremely valuable in automating cloud application development and deployment lifecycles and in ensuring high availability of application services, business data protection is not part of its scope. Kubernetes lacks the sophisticated backup, recovery and data management features – such as cataloged snapshots, version management and multi-site replication – that enterprises require.
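What Kubernetes does offer out of the box is low level. For example, on clusters with a CSI storage driver installed, a point-in-time snapshot of a single volume can be requested with a VolumeSnapshot object. The sketch below assumes a VolumeSnapshotClass named `csi-snapclass` and a PersistentVolumeClaim named `app-data` already exist; both names are illustrative:

```yaml
# Request a CSI snapshot of one persistent volume claim.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snap
  namespace: production          # assumed namespace
spec:
  volumeSnapshotClassName: csi-snapclass   # assumed snapshot class
  source:
    persistentVolumeClaimName: app-data    # assumed PVC name
```

This captures one volume at one moment. Cataloging, scheduling, application consistency across volumes, version management and multi-site replication are all left to the operator, which is exactly the gap described above.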


Multi-cluster Kubernetes management products like Red Hat OpenShift, SUSE Rancher and others address only part of the monitoring, maintenance and operations tasks: cluster provisioning, developer self-service, container image registry and management, Kubernetes upgrades, and multi-cluster/multi-site high availability and load balancing. Existing Kubernetes services and management software focus on multi-cluster operations and treat data protection as a separate problem. Neither managed Kubernetes services nor third-party Kubernetes management software provides complete data protection for containerized applications and their infrastructure-as-code; both are limited to rudimentary backup of configuration files and do-it-yourself open source tools.

Here are some of the scenarios where data recovery is important for containerized applications running in multiple Kubernetes clusters:

  • Accidental Deletion

Accidental deletion or overwriting of data is a common error, and it can be difficult to recover the data or the environment quickly, which makes backup copies of the utmost importance.

  • Ransomware

Ransomware wreaks all kinds of havoc on application data. It can penetrate a system or network and encrypt data, requiring the payment of a ransom to unlock it. Unlike accidental deletion, changes caused by security breaches can be subtle and difficult to detect, and may linger undetected in a system for some time.

  • Natural Disasters

Natural disasters – tornadoes, wildfires, earthquakes, floods – can weaken or overwhelm the built-in redundancies that protect against hardware or cloud service failures. Backups can bring lost data back and restore an application to its original state. The best defense against natural disasters is to keep copies of data in different regions, or even with different cloud providers.

  • Data Migrations

Whether moving physical equipment or data, make sure there are comprehensive backups in place for the current application and environment. An application could span several Kubernetes clusters with data outside of Kubernetes in cloud databases or other cloud repositories. Even if an old environment is left in place, don’t rely on it being the “golden copy.”

Backups, especially in the form of application-consistent snapshots that include Kubernetes resource data, can also facilitate replication of an existing production environment to development, test or disaster recovery environments.
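As one illustration of how such snapshot-based copies can seed another environment, the open source tool Velero (a common building block for do-it-yourself Kubernetes backup) can back up a namespace, including volume snapshots, and restore it under a different name. This is a minimal sketch, assuming Velero is installed in the cluster and that namespaces named `production` and `staging` are the source and target:

```yaml
# Back up the production namespace, including persistent volume snapshots.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: prod-backup
  namespace: velero
spec:
  includedNamespaces:
    - production
  snapshotVolumes: true
---
# Restore that backup into a separate namespace for test or staging use.
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: prod-to-staging
  namespace: velero
spec:
  backupName: prod-backup
  namespaceMapping:
    production: staging        # remap source namespace to target
```

The `namespaceMapping` field is what makes the restored copy land beside, rather than on top of, the original workload.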

  • Compliance

Maintaining backups of application data and resource configurations is necessary for operational purposes, as well as for satisfying regulatory guidelines and compliance directives (such as HIPAA, SOX and others). Backups should support retention locks that make them immutable for the required retention period.
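A recurring backup with a defined retention period can be expressed declaratively. The following sketch again assumes Velero and a `production` namespace; the schedule and retention values are illustrative:

```yaml
# Nightly backup of the production namespace, retained for 90 days.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nightly-prod-backup
  namespace: velero
spec:
  schedule: "0 2 * * *"        # cron expression: every night at 02:00
  template:
    includedNamespaces:
      - production
    snapshotVolumes: true
    ttl: 2160h                 # retain each backup for 90 days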

Kubernetes and Container Data Protection

Data protection has not always been a concern for containers, as early adopters typically ran stateless web applications or lift-and-shift applications with storage outside the container environment, on systems that were already running backup software. However, cloud-native, container-based applications using persistent storage have become commonplace in Kubernetes environments, and a design with silos of storage and data protection is difficult to manage.

Most data protection products were designed for traditional server- and virtual-machine-based applications and consolidated SAN storage systems, not for container data and Kubernetes clusters. Organizations adopting Kubernetes should employ data protection designed for container-based applications and their cloud infrastructure, including Kubernetes cluster resources.

Regardless of whether an organization uses packaged Kubernetes software for on-premises deployments or cloud Kubernetes services, SaaS data protection services are the best option for data backup: they combine the advantages of as-a-service products with the robustness of SaaS infrastructure that usually runs within one of the big cloud services. Public clouds provide these services with close proximity to an organization’s container deployments and data, regardless of where they are located.

Cloud-based Kubernetes data backup offers several features that allow faster, more complete recovery from ransomware attacks: a virtual air gap, with backups stored on independently operated infrastructure using separate networks and administrative accounts; support for multi-region backups across different cloud environments and service providers; and time-locked recovery points that prevent data deletion for a specified period.

The bottom line is that today’s era of cloud-native applications running in Kubernetes environments requires a reassessment of traditional enterprise data protection systems in favor of a cloud-native Kubernetes data protection service. Such a service allows end users to leverage the strengths of public cloud infrastructure and Kubernetes environments to ensure that all their data and infrastructure-as-code are protected.


Author: Mike Miracle, Chief Strategy Officer, Catalogic Software
Bio: Mike Miracle is the Chief Strategy Officer for Catalogic Software and its new venture CloudCasa.io. Mike is a strategy, marketing, and operations executive with extensive experience in technology infrastructure including storage, virtualization, networking, and cybersecurity.

To hear more about cloud native topics, join the Cloud Native Computing Foundation and cloud native community at KubeCon+CloudNativeCon North America 2021 – October 11-15, 2021