
The Gateway API and the Great Kubernetes Blurring


All hail the Kubernetes Gateway API – finally, a more flexible and expressive template for managing APIs in Kubernetes! But let’s unpack. While the Gateway API is rightly celebrated, take the long view and it might seem like the Gateway API is just the latest in what I call the Great Kubernetes Blurring – a steady stream of evolving definitions, categories, roles, and responsibilities for the Kubernetes ecosystem as seen through the lens of cloud native. Forward movement is a good thing, to be sure, but it does demand a periodic mental reframing if you want to see beyond the hype and take best advantage of new elements in the container orchestration firmament.

First Came Containers Managed Via APIs

The Great Blurring has been going on for some time. It started with the move to VMs and containers, which decoupled applications from hardware. Next came container orchestration and the emergence of Kubernetes as the de facto standard. In that shift, APIs became the prime mover and shaker for service traffic, security, networking, scaling, and more.

APIs were already mature technology at that point in the story (here’s a good history). The descendant of XML, SOAP, and SOA, the RESTful API was firmly established as the common path for connecting applications on the north-south plane long before Kubernetes code hit the screen. Major cloud service providers linked their services via APIs from inception, providing a preview of what Kubernetes would look like. (Remember AWS’s long-held rule that all its services be designed for internal as well as external consumption.) Over time, APIs not only matured but became ever more varied, encompassing additional technologies (gRPC, GraphQL) and structural models (binary versus plaintext, schema versus schemaless).

The Rise of API Gateways

The API gateway product category arose from the need to manage APIs more precisely and effectively – again, before Kubernetes emerged as the most popular container orchestration engine. In the pre-Kubernetes universe, API gateways sat in the load-balancing tier, outside of internal applications. These bespoke gateways managed north-south API traffic and better satisfied the particular needs of APIs with more frequent authentication, more granular and flexible authorization and access controls, rate limiting and retry rules, and more. This was an early acknowledgement that the world was going to become a lot more connected, making the old “IT perimeter” paradigm for security and application design obsolete.

Overlapping Layers and Constructs in Kubernetes

Then came Kubernetes, kicking the cloud-native transition and mental frameshift into overdrive. In Kubernetes, everything rests on an API foundation to a much greater degree than in previous technologies. As they gained experience with Kubernetes, savvy users realized they could use existing cloud-optimized reverse proxy technologies (load balancers, etc) as de facto Kubernetes API gateways. Kubernetes itself created a new construct for managing traffic – the Ingress controller.
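As an illustration of that construct, a minimal Ingress resource might look like the sketch below (the names `web-ingress`, `web-service`, and the `nginx` class are hypothetical, and which `ingressClassName` values exist depends on which Ingress controller is installed in the cluster):

```yaml
# Minimal Ingress sketch (hypothetical names): route external HTTP
# traffic for example.com to an in-cluster Service on port 80.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx    # selects which Ingress controller implements this resource
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```

Note how much is packed into one resource: the routing rule, the backend binding, and the choice of implementation all live in a single object, which is part of why Ingress alone strained under more demanding API management needs.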

This blurring of API management constructs in Kubernetes was pushed further as Layer 4 and Layer 7 traffic management took on increasingly overlapping roles.

Given the new design conventions for Kubernetes and cloud native – apps built of loosely coupled microservices – the distinction between north-south and east-west traffic unsurprisingly began to blur. Specifically, Ingress controllers placed at the edge of, and running inside, Kubernetes clusters were used for observability and to route north-south traffic.

Service meshes later blurred the constructs used for API management and security. Invented to manage and secure microservices at scale, service meshes were, at their core, tasked with managing and securing API traffic in Kubernetes – albeit, at the microservice control level. Ingress controllers, too, started doing double duty as API gateways: because API traffic increasingly resembled traditional application traffic, proxies running inside or just in front of Kubernetes clusters started to be used to serve API gateway functions as well.

Enter the Gateway API

Although conceived to provide capabilities missing from the Ingress controller, the Kubernetes Gateway API seeks to address a broader challenge in the management of shared infrastructure. As explained by the Gateway API SIG, “Whether it’s roads, power, data centers, or Kubernetes clusters, infrastructure is built to be shared. However, shared infrastructure raises a common challenge – how to provide flexibility to users of the infrastructure while maintaining control by owners of the infrastructure?”

In other words, how can we create a more granular unified control plane for access and behavior that rides above and bridges siloed product categories? The Gateway API will accelerate the Great Blurring, and for the good of all. The landscape had become rather confusing. It seemed like load balancers, service meshes, API gateways, and Ingress controllers were converging. Each category bent over backwards to differentiate its role. In reality, most Kubernetes categories cover similar, and even overlapping, capabilities. The Gateway API serves as a needed catalyst to help us focus on “jobs to be done” rather than “infrastructure we need”.

The Kubernetes Gateway API acknowledges this by providing a better way to closely map product functionality to the different jobs to be done and the different roles of the people who do them. It’s also not limiting – you can use the Gateway API for a security product, for a networking product, for an application delivery product, for a monitoring and observability product. Or you can create one giant product that does all of these as integrated sub-modules, each using the Gateway API to do its job. In other words, the Gateway API is truly role- and construct-agnostic, just as good foundational connection technology should be.
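A brief sketch can make that role separation concrete. In the Gateway API model, an infrastructure provider supplies a GatewayClass, a cluster operator owns the Gateway, and an application team owns the HTTPRoute that attaches to it. All resource names below (`shared-gateway`, `example-gateway-class`, `app-route`, `app-service`, and the namespaces) are hypothetical:

```yaml
# The cluster operator owns the shared entry point (names hypothetical).
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: example-gateway-class   # implementation supplied by the infrastructure provider
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All    # operator's policy: teams in other namespaces may attach routes
---
# An application team independently owns its routing rules and
# attaches them to the operator's Gateway via parentRefs.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
  namespace: team-a
spec:
  parentRefs:
  - name: shared-gateway
    namespace: infra
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /app
    backendRefs:
    - name: app-service
      port: 8080
```

The design choice to watch here is the split itself: the operator controls listeners and attachment policy, while the application team controls matching and backends, so each persona manages only the slice of shared infrastructure that is genuinely theirs.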

So don’t ask “Where do I need to think about using the Gateway API?” The right question is “What do I want to achieve with my cluster and how do those jobs translate into different elements that may (or may not) be leveraging the Gateway API to get the jobs done?” With Kubernetes, there are many ways to accomplish the same goal and fewer “wrong ways” to do things. The Gateway API is an expansive canvas for designing more flexible and scalable applications out of many types of services.


Brian Ehlert, Director, Product Management for NGINX (now a part of F5)

