
Conquering the Challenges of Kubernetes Deployment and Management


According to a recent Red Hat survey, organizations are adopting Kubernetes (K8s) en masse: 88% of organizations use the open-source container orchestrator, and 74% deploy it in production. Developers love K8s because it enables them to quickly deploy development environments that meet their self-service needs.

Beyond defining an environment, developers can replicate configurations, security, and privacy controls in Helm charts and use automation tools, such as Ansible and Puppet, to automate deployments. K8s also enables development teams to leverage continuous integration and delivery (CI/CD) practices by spinning up new container instances, building artifacts, and then spinning them down when the job is complete.
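The replicate-by-template idea behind Helm charts boils down to one parameterized manifest plus a set of per-environment values. The snippet below is a minimal Python sketch of that substitution model (the manifest fields and value names are hypothetical), not Helm itself:

```python
from string import Template

# A parameterized manifest in the spirit of a Helm chart template.
# The field names and values here are hypothetical.
MANIFEST = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $name
spec:
  replicas: $replicas
""")

def render(values: dict) -> str:
    """Render the template with per-environment values, as Helm does with values.yaml."""
    return MANIFEST.substitute(values)

# The same template reproduces dev and prod from different value sets.
dev = render({"name": "web-dev", "replicas": 1})
prod = render({"name": "web-prod", "replicas": 5})
```

Helm layers much more on top (releases, dependencies, hooks), but the core idea is the same: the chart stays constant while configuration varies by values file.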

However, while configuring a single K8s cluster is straightforward, deploying and managing hundreds or even thousands of clusters, as is common in large organizations, creates complexity at scale. Here is what you need to know to solve that complexity:

Complexities of Kubernetes at Scale

Deploying a single K8s cluster typically involves installing a container runtime (such as Docker) and K8s, assigning unique hostnames to each server, mapping hostnames to IP addresses, initializing the master node and assigning it an IP address, joining worker nodes to the master, and deploying a pod network.

Managing K8s at scale requires deep domain expertise, even for routine operations like applying patches and upgrades and maintaining cluster health. It also helps to be aware of other challenges that can arise, such as:

  • System constraints: As the number of clusters grows, latency can become an issue. Similarly, managing multiple accounts, configurations, and credentials grows increasingly complex. Scripts often need to be written from scratch, and even shifting workloads within a single cloud provider, such as from Amazon EC2 to EKS, can cause headaches.
  • API surges: Critical components that should be integrated through APIs, such as domain name services, user dashboards, and monitoring, are instead deployed as add-ons; even then, surges in API calls can trigger error messages or crash the API server.
  • Tracking issues: Clusters often span multiple teams, regions, and availability zones, making tracking and troubleshooting difficult. Load balancing can also be a challenge when numerous teams share the same resources. And the variety of options for managing multiple clusters, usually one of Kubernetes’ virtues, often becomes a problem for teams lacking deep technical skills.
  • Security incidents: With such complexity, misconfigurations are all too easy, and they are the leading cause of security incidents. In the Red Hat survey, 55% of respondents said they had delayed the rollout of a K8s-based application because of security concerns, and 94% had experienced at least one security incident in their K8s environment in the last 12 months.

Solving Challenges With Automated Solutions

Kubernetes’ high level of automation allows developers to describe the desired state and lets the software apply whatever automation is needed to achieve it. Automation can remove much of the complexity of scaling to large numbers of clusters, particularly when K8s deployment management is connected to CI/CD, something that Spinnaker provides through its Clouddriver service.
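The describe-the-desired-state model boils down to a reconciliation loop: observe the current state, compare it to the declared state, and compute the actions that close the gap. Here is a toy Python sketch of that pattern (the state shapes and action names are hypothetical, not Kubernetes’ actual controller API):

```python
def reconcile(desired: dict, observed: dict) -> list:
    """Compute the actions needed to drive observed state toward desired state.

    A toy version of the controller pattern Kubernetes automates: the user
    declares what should exist, and the system applies the difference.
    Keys are workload names, values are replica counts (hypothetical shapes).
    """
    actions = []
    for name, want in desired.items():
        have = observed.get(name, 0)
        if have < want:
            actions.append(("scale_up", name, want - have))
        elif have > want:
            actions.append(("scale_down", name, have - want))
    for name, have in observed.items():
        if name not in desired:
            # Anything running that is no longer declared should be removed.
            actions.append(("delete", name, have))
    return actions

# Declare three replicas of "web"; the loop works out what must change.
actions = reconcile({"web": 3}, {"web": 1, "old-job": 2})
```

Developers only ever edit the `desired` side; the system runs this comparison continuously, which is what makes the automation declarative rather than imperative.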

Spinnaker uses the concept of applications, each with one or more pipelines. Each pipeline is a series of stages, and each stage contains tasks that can be stitched together.
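The application → pipeline → stage → task hierarchy can be sketched with plain data types. This is an illustrative Python model (the names are hypothetical, not Spinnaker’s actual pipeline JSON schema):

```python
from dataclasses import dataclass, field

# An illustrative model of the hierarchy, not Spinnaker's actual schema.
@dataclass
class Task:
    name: str

@dataclass
class Stage:
    name: str
    tasks: list = field(default_factory=list)

@dataclass
class Pipeline:
    name: str
    stages: list = field(default_factory=list)

@dataclass
class Application:
    name: str
    pipelines: list = field(default_factory=list)

# A hypothetical application with one deploy pipeline of two stages.
app = Application("storefront", pipelines=[
    Pipeline("deploy-to-prod", stages=[
        Stage("bake", tasks=[Task("render-manifests")]),
        Stage("deploy", tasks=[Task("apply-manifests"), Task("wait-for-ready")]),
    ]),
])
```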

Further, Clouddriver polls an organization’s cloud infrastructure at regular intervals and stores the result in a shared cache, taking advantage of the following automation services:

  • Orca is Spinnaker’s orchestration engine; it works from execution definitions to manage stages and tasks and coordinates the other Spinnaker services.
  • Fiat exposes a RESTful interface for querying access permissions for accounts, applications, and service accounts.
  • Igor provides a single point of integration with continuous integration and source control management services.
  • Rosco produces machine images with HashiCorp Packer and rendered manifests with the templating engines Helm and Kustomize.
  • Front50 acts as the system of record for all Spinnaker metadata, including applications, pipelines, and service account configurations.
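The poll-and-cache pattern described above, polling cloud infrastructure on an interval and serving reads from a shared cache, can be sketched as follows. This is a simplified Python illustration (the provider callable and interval are hypothetical), not Clouddriver’s implementation:

```python
import time

class PollingCache:
    """Poll a provider on a fixed interval and serve reads from a shared cache.

    A simplified sketch of the poll-and-cache pattern, not Clouddriver's
    implementation. The provider callable returns a snapshot of cloud state.
    """

    def __init__(self, provider, interval_seconds: float):
        self.provider = provider
        self.interval = interval_seconds
        self.cache = {}
        self.last_poll = None

    def get(self, key):
        # Refresh the cache only when the polling interval has elapsed,
        # so reads never hit the cloud API directly.
        now = time.monotonic()
        if self.last_poll is None or now - self.last_poll >= self.interval:
            self.cache = self.provider()
            self.last_poll = now
        return self.cache.get(key)

# A fake provider stands in for the cloud API; it counts how often it is polled.
poll_count = []

def fake_provider():
    poll_count.append(1)
    return {"cluster-a": "healthy"}

cache = PollingCache(fake_provider, interval_seconds=60)
first = cache.get("cluster-a")
second = cache.get("cluster-a")  # served from cache; no second poll
```

Decoupling reads from polling is what keeps lookup latency low and shields the cloud APIs from the kind of call surges described earlier.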

The distributed, optimized caching model of Spinnaker and Clouddriver reduces latency issues. At the same time, the decentralized account management design enhances security and empowers individual teams to manage permissions for their specific cluster and onboard clusters on a self-serve basis.

Developers keep complete security control of Kubernetes configurations, which can be rotated or removed as needed. Clouddriver’s logging functionality tracks agents as they register and maintains continuous communication with them.

The combination of Spinnaker and Clouddriver abstracts away the complexities of deployment targets, enabling teams to use a single, flexible pipeline to deliver workloads to any combination of local and cloud target environments. That frees developers and operations engineers from worrying about infrastructure, letting them focus on what they do best: building great code.

Contact Armory to learn more about how you can leverage enterprise-grade Spinnaker with Kubernetes.


Author: Carl Timm, Head of Product Marketing, Armory
Bio: Carl Timm is a dad, husband, science fiction reader and beer drinker who lives in the Pacific Northwest. Oh, he’s also the Head of Product Marketing at Armory, so he occasionally writes about progressive delivery, Kubernetes, developers and marketing stuff.

To hear more about cloud native topics, join the Cloud Native Computing Foundation and cloud native community at KubeCon+CloudNativeCon North America 2021 – October 11-15, 2021