
Your Two Key Hacks To Overcome Cloud Native Migration Challenges


A recent 'State of the Cloud' report by Flexera (formerly RightScale) shows that the top challenges in cloud native transformation are migrating traditional applications to a cloud native architecture and deciding on a container platform.

DevOps theologians may wrestle back and forth over the best way to overcome these challenges, but in my experience the solutions are simple and boil down to two key steps:

1. An automation-first approach, rather than a transformation-first approach, via the Kubernetes Service Broker API.

With the automation-first approach, we first integrate our cloud native and traditional applications under a common automation scheme. This allows us to gradually migrate the applications that are a good fit, while avoiding the migration of applications where it does not make sense. In doing so, we can still adopt cloud native practices such as managing 'everything as code' across all environments without having to wait for a complete transformation.

One of the ways to achieve the automation-first approach is via the Kubernetes Service Broker API.

A service broker is a service, running inside or outside of Kubernetes, that exposes a REST API to Kubernetes as defined by the Open Service Broker API.

The Open Service Broker API defines a generic way to expose a service catalog or marketplace to Kubernetes. The service catalog concept as defined by the API supports describing service capabilities, bindings, and service variations called ‘Plans’. Plans define a particular service configuration and an optional related cost. Once Kubernetes is linked to the broker, services can be requested via the ServiceInstance kind.

The clusterServiceClassExternalName and clusterServicePlanExternalName are provided to Kubernetes via the broker API, and an operator can inspect the available classes and plans with kubectl.
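For illustration, a minimal ServiceInstance manifest might look like the sketch below. It assumes the Kubernetes Service Catalog is installed in the cluster; the class name ("example-db"), plan name ("small"), and parameters are hypothetical placeholders for whatever the broker actually advertises in its catalog:

```yaml
# List the classes and plans the broker has registered with the cluster
# (requires the Kubernetes Service Catalog to be installed):
#   kubectl get clusterserviceclasses
#   kubectl get clusterserviceplans
#
# Request a service from the broker via a ServiceInstance.
# "example-db" and "small" are hypothetical catalog entries.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: example-db-instance
  namespace: default
spec:
  clusterServiceClassExternalName: example-db
  clusterServicePlanExternalName: small
  parameters:
    # Optional, broker-specific configuration
    storageGB: 10
```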

The Service Broker API can thus bridge between traditional applications and cloud native services, avoiding the need to migrate these applications as a first step.
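As a sketch of what that bridging can look like: a ServiceBinding asks the broker for credentials and stores them in an ordinary Kubernetes Secret, which an existing application or an external automation tool can then consume without the application itself being cloud native. All names below are hypothetical and assume the ServiceInstance from the earlier sketch:

```yaml
# Bind to the instance created above; the broker returns credentials that the
# Service Catalog stores in the named Secret (all names are hypothetical).
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: example-db-binding
  namespace: default
spec:
  instanceRef:
    name: example-db-instance
  secretName: example-db-credentials
```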

Products like Cloudify can integrate with Kubernetes to provide an open source 'Service Broker' implementation that achieves this two-point integration as described here.

This is a generic Service Broker implementation that can integrate with many external services from within the same broker; in other words, we no longer have to create a broker per service.

2. Working with many platforms (rather than one) using a top-level multi-cluster Kubernetes broker.

Having top-level orchestration that integrates with all major Kubernetes platforms (GKE, EKS, AKS, OpenShift and KubeSpray) allows organizations to achieve interoperability and portability between these platforms and avoid potential vendor lock-in. In other words, rather than designing around a specific platform, your system will work with many platforms.
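To make the idea concrete at the lowest level, the sketch below shows a single kubeconfig describing several clusters on different platforms; all cluster names, endpoints and users are hypothetical, and credentials are omitted for brevity. A top-level broker or orchestrator builds on this same kind of multi-cluster inventory rather than on any one platform's tooling:

```yaml
# One kubeconfig holding contexts for clusters on different platforms.
# All names and endpoints are hypothetical; credentials omitted for brevity.
apiVersion: v1
kind: Config
current-context: gke-prod
clusters:
- name: gke-prod
  cluster:
    server: https://gke-prod.example.com        # hypothetical managed cluster
- name: eks-staging
  cluster:
    server: https://eks-staging.example.com     # hypothetical managed cluster
- name: kubespray-onprem
  cluster:
    server: https://k8s.onprem.example.com      # hypothetical bare metal cluster
contexts:
- name: gke-prod
  context:
    cluster: gke-prod
    user: gke-user
- name: eks-staging
  context:
    cluster: eks-staging
    user: eks-user
- name: kubespray-onprem
  context:
    cluster: kubespray-onprem
    user: onprem-user
users:
- name: gke-user
  user: {}        # credential details omitted
- name: eks-user
  user: {}
- name: onprem-user
  user: {}
```

With such an inventory in place, the same manifests can be applied to any of the clusters, for example with kubectl --context gke-prod apply -f app/, which is essentially what a multi-cluster broker automates at a higher level.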

A 'Multi Kubernetes Clusters Broker' addresses many other use cases. According to the 2019 CNCF Survey, most organizations are using more than one Kubernetes cluster. This has been driven by the rise of private Kubernetes cluster adoption, the need to separate clusters between applications and teams, and the separation between development and production environments.

The survey also indicates that only 20% of organizations are using a single cluster. The largest share (43%) are using 2-5 clusters. Interestingly enough, 10% are using more than 50 clusters! This seems to be driven by the need for separation between applications and teams.

As previously stated, there are multi-cloud platforms out there, but from my personal experience using Cloudify to provision KubeSpray as the bare metal Kubernetes cluster, it was indeed possible to run 70+ Kubernetes clusters with thousands of Kubernetes services running, each serving a different business unit or development environment within one organization – it CAN be done!

Kubernetes has indeed gained wide adoption, and the fact that it is supported by all major cloud providers presents a real opportunity to minimize the lock-in concern. Yet many enterprises are still struggling with their cloud native transformation journey.

Using the pragmatic approach of the two steps above, cloud native management becomes significantly easier, factoring traditional application management in as code (without having to transform everything into Kubernetes as a first step).

Having a neutral top-level orchestration that can act as a broker between all the Kubernetes clusters provides a simple answer to the second challenge: choosing a Kubernetes platform. Rather than designing around a specific platform, your system should work with many platforms. Another benefit of this approach is that it also provides a way to manage multiple Kubernetes clusters. As the CNCF survey shows, having more than one Kubernetes cluster is becoming a pretty common use case, so in this way we can solve many problems at once.

To learn more about containerized infrastructure and cloud native technologies, consider joining us at KubeCon + CloudNativeCon NA Virtual, November 17-20.