How Kubernetes Players Define Stateful Applications

The vast majority of existing enterprise applications are stateful, and they will remain so.

As these enterprise customers embark on their cloud-native journey with Kubernetes, they can now bring modern, cloud-native tooling to their stateful workloads as well; Kubernetes is no longer restricted to running only stateless workloads. But there are challenges along that path, and where there are challenges there are opportunities. In this three-part article series, we will talk about stateful applications and Kubernetes.

Stateful vs. Stateless

What is a stateful application, and how is it different from a stateless one? In its most basic form, a stateful application is one that stores data, and a stateless application is one that doesn’t. Defining stateful workloads, Bob Quillin, Vice President of Oracle Cloud Developer Relations, said, “Most generally this is running databases / key-value stores in a container.”
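
To make the distinction concrete, here is a minimal Python sketch (the handler names are illustrative, not drawn from any of the vendors quoted below): a stateless service answers each request from its input alone, while a stateful one accumulates data that must survive restarts.

```python
# Minimal sketch of the distinction; all names here are illustrative.

def stateless_handler(request):
    """Stateless: the response depends only on the request itself.
    Any replica can answer, and nothing is lost if the process restarts."""
    return {"greeting": "Hello, " + request["name"] + "!"}


class StatefulCounter:
    """Stateful: it accumulates data between requests. If the process
    restarts without persisting self.count, that state is gone, which
    is why stateful workloads need durable backing storage."""

    def __init__(self):
        self.count = 0

    def handle(self, request):
        self.count += 1
        return {"name": request["name"], "requests_seen": self.count}


counter = StatefulCounter()
print(stateless_handler({"name": "alice"}))  # same answer every time
print(counter.handle({"name": "alice"}))     # depends on prior requests
print(counter.handle({"name": "alice"}))
```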

There are two sides to the stateful equation: technical and business.

From the technical perspective, “stateful applications are services that require backing storage and keeping state is critical to running the service,” said Radhesh Menon, CMO of Robin Systems. “Databases such as Cassandra, MongoDB, Postgres, and MySQL are great examples. They require some form of persistent storage that will survive service restarts.”

From the business perspective, “[Stateful applications] are typically the applications that businesses rely on to generate revenues, interact with their customers and partners, or analyze data that helps them make important decisions for the business,” explained Brian Gracely, Director of Product Strategy for OpenShift at Red Hat.

VMware’s view aligns with the Kubernetes definition of a stateful workload. “Stateful workloads are made of applications where application data exceeds the life cycle of a Pod. If a Pod is created and destroyed, data generated by the application needs to persist and be reused by other Pods,” said Wendy Cartee, Senior Director of Cloud-Native Apps Advocacy at VMware.

“When containers are re-scheduled on a different host, storage should also re-attach and be made available on the new host for the container to start gracefully. Certain applications have a strong sense of identity and startup order; it is important that if a container with a certain ID gets rescheduled, the disk associated with that ID is re-attached to the new container instance,” Cartee added.
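
As a rough illustration of that guarantee, here is a toy Python model (plain dictionaries, not real Kubernetes API objects or client calls) of a replica with a stable identity getting its disk re-attached after being rescheduled:

```python
# Toy model of identity-bound storage re-attachment; not Kubernetes code.

volumes = {}  # persistent disks that outlive any single container, keyed by identity


def start_container(identity):
    """(Re)schedule a container such as "db-0"; reuse its disk if one already exists."""
    return volumes.setdefault(identity, [])


disk = start_container("db-0")   # first start: an empty disk is created
disk.append("row 1")             # the application writes some data

# "db-0" is destroyed and rescheduled on another host: the disk bound to
# that identity is re-attached, so the earlier data is still visible.
assert start_container("db-0") == ["row 1"]
```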

“One of the primary benefits [of stateful applications] is improved response rates – when a client makes a request to a stateful workload, the workload may have all the context in memory already, preventing a round-trip transaction with the datastore, which would delay a response to the client,” said Carmine Rimi, Product Manager for AI at Canonical.
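
A rough Python sketch of that benefit, with a simulated datastore call and hypothetical names, might look like this: the first request for a client pays the round trip, and later requests are answered from the context already held in memory.

```python
import time

# Sketch of the latency benefit described above; the datastore is simulated.

def load_context_from_datastore(client_id):
    """Stand-in for a slow round trip to the backing datastore."""
    time.sleep(0.05)  # simulated network/disk latency
    return {"client_id": client_id, "preferences": {}}


class StatefulWorker:
    def __init__(self):
        self._contexts = {}  # per-client state kept in process memory

    def handle(self, client_id):
        ctx = self._contexts.get(client_id)
        if ctx is None:                          # first request pays the round trip
            ctx = load_context_from_datastore(client_id)
            self._contexts[client_id] = ctx
        return ctx                               # repeat requests are served from memory
```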

Keeping track of state is critical to the proper functioning of many applications, and there are different approaches and models for managing it.

“In a ‘stateful’ model, the state is maintained in a persistent data store that is independent of the services that use state information.  Application state can be shared easily across any services that have access to the data store.  Since the data is ‘persisted’ (written to persistent storage) by the data store, the state is not lost when either the stateful services or the data store itself fails,” said Andreas Jaeger, Product Manager at SUSE.
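
As a small illustration of that model, here is a hedged Python sketch that uses a SQLite file as a stand-in for an independent, persistent data store; the file name and keys are illustrative. Two services (or the same service after a restart) share the same state because it lives outside either process.

```python
import sqlite3

# Sketch of a persistent store that is independent of the services using it.

DB_PATH = "shared_state.db"  # illustrative path for the backing store


def save_state(key, value):
    """Any service instance can write state; it is persisted to disk."""
    with sqlite3.connect(DB_PATH) as db:
        db.execute("CREATE TABLE IF NOT EXISTS state (key TEXT PRIMARY KEY, value TEXT)")
        db.execute("INSERT OR REPLACE INTO state VALUES (?, ?)", (key, value))


def load_state(key):
    """Another service, or the same one after a restart, can read it back."""
    with sqlite3.connect(DB_PATH) as db:
        row = db.execute("SELECT value FROM state WHERE key = ?", (key,)).fetchone()
        return row[0] if row else None


save_state("order-42", "paid")   # written by one service
print(load_state("order-42"))    # read by another, or after a crash: "paid"
```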

Now that we have reached a consensus on what stateful workloads mean from the perspective of the Kubernetes players, the next article in this series will look at the business benefits of running stateful workloads on Kubernetes.