
A Service Mesh Approach to Kubernetes Runtime Security


Author: Haim Helman

Last month I explored why container image scanning is simply not enough when running Kubernetes, and the technical and cultural challenges it introduces. One of the newer approaches beginning to gain traction in the industry is to infuse network communications with security at runtime using a service mesh.

Across the cloud native landscape, the term service mesh has increasingly been used in recent years to describe a technology approach that can help scale Kubernetes and reduce its complexity. A service mesh can also be key to improving cloud native security.

The concept of a service mesh was not created with security in mind.

The original idea behind the service mesh was to add a proxy alongside a running workload. Today, most service meshes in Kubernetes run a dedicated proxy for each workload instance, as an additional container in each pod, in an approach known as the sidecar pattern. This pattern ensures that the proxy does not become a bottleneck and does not introduce a failure domain beyond a single pod. It also allows the proxies to own the identity of the workload and authenticate to one another on its behalf.
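As a concrete illustration, using Istio as the example mesh (the namespace name below is hypothetical), enabling the sidecar pattern is typically just a label on the namespace; every pod scheduled there then gets an Envoy proxy container added alongside the application container:

```yaml
# Hypothetical namespace; with Istio installed, this label tells the mesh to
# inject an Envoy sidecar container into every pod created in this namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    istio-injection: enabled
```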

With the sidecar in place, administrators can offload common network functions such as timeouts, retries and load balancing to the proxy, rather than requiring each container to manage those functions on its own.
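To make that concrete, here is a minimal sketch of what offloading retries and timeouts to the mesh can look like with an Istio VirtualService (the payments service name is hypothetical); the proxy enforces the timeout and retry policy, so the application code carries none of that logic:

```yaml
# Sketch: route traffic to the payments service with a 5s overall timeout and
# up to 3 retries, each capped at 2s. Names are hypothetical.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: payments
spec:
  hosts:
  - payments
  http:
  - route:
    - destination:
        host: payments
    timeout: 5s
    retries:
      attempts: 3
      perTryTimeout: 2s
```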

There are several service mesh technologies available today, one of the most popular being Istio. The Istio project was started by Google and IBM and relies on the Envoy proxy, which was created by the ride-sharing company Lyft. Istio was among the first to add security as a tenet of the service mesh architecture.

Adding Security to Istio

A common, distributed component that routes all of the cluster's traffic is a logical place to implement security features such as encryption, authentication, segmentation, access control and threat detection in a cloud native deployment. The telemetry gathered by all those proxies can also be used to profile the behavior of workloads and implement policy automation and anomaly detection.

Using a service mesh-based approach for security also bypasses the limitations of traditional network security tools such as firewalls and network intrusion detection systems. Among those limitations are:

  • Scalability – Cloud native applications are meant to scale out to many instances, which can generate traffic volumes an order of magnitude higher than in traditional, monolithic applications.
  • Identity – Containerized workloads are ephemeral, and so is their network identity, which poses a challenge for traditional security systems. The only way for a network security tool to understand the identity of both parties is to be involved in the authentication process (see the identity sketch after this list).
  • Visibility – Encrypted traffic cannot easily be inspected by a “man in the middle” approach that is often at the core of a traditional security solution. As such, it’s imperative that security tools are able to see the traffic before it is encrypted or after it is decrypted.
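On the identity point in particular, a mesh such as Istio sidesteps the problem by deriving workload identity from the Kubernetes service account rather than from ephemeral network addresses. A minimal sketch, with hypothetical names:

```yaml
# Hypothetical service account; in Istio, the workload's sidecar is issued a
# certificate whose SPIFFE identity is derived from the pod's service account
# rather than its (ephemeral) IP address, e.g.:
#   spiffe://cluster.local/ns/prod/sa/payments
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payments
  namespace: prod
```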

Enabling Zero Trust Networks

The service mesh provides a substrate on which additional security capabilities can be enabled and delivered. Going a step further, a service mesh approach makes it possible to create a Zero Trust network that helps reduce risk and the overall attack surface.

In modern cloud native deployments the perimeter is porous, if it exists at all—and that means that all traffic should be encrypted and authenticated.

A well-architected, service mesh-enabled approach to security allows all East-West communication between workloads to be encrypted and mutually authenticated, without the need to deploy a separate encryption product.
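In Istio, for example, mesh-wide mutual TLS can be turned on with a single policy object. This is a minimal sketch assuming a default Istio installation, where istio-system is the root configuration namespace:

```yaml
# Minimal sketch: require mutual TLS for all workload-to-workload traffic.
# Applying this in the root namespace makes it a mesh-wide policy.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```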

Beyond the Service Mesh 

With a properly configured and deployed service mesh it’s possible to overlay additional security controls and inspection capabilities.

Among the things that Kubernetes administrators should consider is how to use the service mesh to ensure security compliance. Because the mesh already sits in the path of all traffic for inspection, it is also a natural place to collect the information needed to demonstrate compliance with policy.

A policy engine at the service mesh layer can become a critical security control, reducing overall complexity while providing full coverage.
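As one sketch of what such a control can look like in practice, Istio's built-in policy engine lets administrators express segmentation in terms of workload identity rather than IP addresses. All namespaces, labels and service accounts below are hypothetical:

```yaml
# Sketch: only the frontend workload's identity may call the payments
# workload, and only over GET and POST.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: payments-allow-frontend
  namespace: prod
spec:
  selector:
    matchLabels:
      app: payments
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/prod/sa/frontend"]
    to:
    - operation:
        methods: ["GET", "POST"]
```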

Organizations should also consider adding threat detection capabilities at the service mesh layer, as a way to scale that vital security operation to cloud native requirements.
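Detection has to be fed with data, and the proxies are already collecting it. As a minimal sketch, Istio's Telemetry API can enable mesh-wide access logging that a detection pipeline can then consume; the envoy provider name below refers to Istio's built-in access log provider:

```yaml
# Sketch: emit Envoy access logs for every workload in the mesh so a
# detection or anomaly-analysis pipeline can consume them.
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
  - providers:
    - name: envoy
```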

While the service mesh was not part of the original design of Kubernetes, its introduction has changed the way organizations can scale and manage their cloud native operations. There is no shortage of approaches to cloud native security, and no shortage of complexity either. By using the service mesh as the foundation, it's possible both to scale operations and to reduce complexity, a win-win for cloud native.