
Controlling Kubernetes Workload Traffic to Non-Kubernetes Workloads


Getting traffic into a Kubernetes cluster is one of the primary goals for organizations adopting Kubernetes. This is generally solved with an ingress controller or a load balancer that lets external users reach services running in the cluster. Another area organizations must address is restricting traffic leaving a Kubernetes cluster. Imagine a container with a malicious payload running in an enterprise environment with unfettered access to the rest of the network. The question becomes: how can organizations allow Kubernetes workloads to access the external services they need without it becoming a free-for-all?

Why Traditional Network Isolation Fails

Network isolation or segmentation has traditionally been done at the host, network segment, and perimeter layers, with MAC addresses and IP addresses serving as the identifiers for restricting access. A rule would be defined to allow access only from a certain IP address or range of IP addresses. This works well enough in relatively static environments where workloads retain their IP addresses for the duration of their existence. Kubernetes workloads, by contrast, are intended to be dynamic, and a pod's IP address is essentially a transient identifier. The traditional paradigm becomes even more problematic because Kubernetes clusters typically use a network overlay for pod network communication. The external service or workload never sees the real IP address of the pod, since Source Network Address Translation (SNAT) is used to facilitate the external communication: many workloads in the Kubernetes cluster appear to the external workload to originate from the same IP address. Using traditional ingress traffic control methods therefore forces the external workload to accept traffic from essentially every workload running in the Kubernetes cluster.

Controlling Egress Traffic

The solution in most situations is to restrict traffic before it leaves the Kubernetes workload or the cluster itself. This gives the network rules awareness of the individual Kubernetes workloads. There are several ways to restrict network traffic destined for services external to a Kubernetes cluster, and the continuing maturity of the Kubernetes networking ecosystem has produced a handful of solutions for controlling egress traffic.

Kubernetes Network Policies

Kubernetes network policies provide a native mechanism for controlling network traffic to (ingress) and from (egress) Kubernetes pods. For restricting access to external services, network policies support granting access based on an IP address or range of IP addresses. This allows policies to be created at the pod level that permit access to specific external services and prevent overexposure of those services. Conceptually this is the same filtering used in traditional network segmentation, except applied as egress filtering rather than the ingress filtering used in most cases.
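As an illustration, the following is a minimal sketch of a NetworkPolicy that lets pods carrying a hypothetical app: payments-api label reach a single external database at an example address (10.20.30.40, TCP port 5432); the namespace, labels, address, and port are assumptions for the example, not values from this article.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-external-db   # hypothetical policy name
  namespace: payments                 # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: payments-api               # hypothetical pod label
  policyTypes:
    - Egress                          # once selected, all other egress from these pods is denied
  egress:
    - to:
        - ipBlock:
            cidr: 10.20.30.40/32      # example external database address
      ports:
        - protocol: TCP
          port: 5432

Keep in mind that NetworkPolicy objects are only enforced when the cluster's CNI plugin implements them, and that selecting a pod for egress also blocks DNS unless a separate rule allows traffic to the cluster DNS service.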

CNI Plugin Network Policies

The native network policies provide basic functionality for controlling egress traffic, but several Kubernetes Container Network Interface (CNI) plugins extend this filtering. Calico and Cilium are two examples that offer a richer set of features similar to those found in traditional firewalls, such as cluster-wide policies and DNS-based egress rules.
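As one sketch of what this looks like, assuming Calico is the installed CNI and using hypothetical labels and addresses, a Calico GlobalNetworkPolicy can express an egress rule that applies cluster-wide rather than to a single namespace:

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: billing-egress-to-legacy-db   # hypothetical policy name
spec:
  selector: app == 'billing'          # hypothetical workload label, matched across all namespaces
  types:
    - Egress
  egress:
    - action: Allow
      protocol: TCP
      destination:
        nets:
          - 10.20.30.40/32            # example non-Kubernetes database address
        ports:
          - 5432

Cilium offers a comparable CiliumNetworkPolicy resource, which can additionally match egress destinations by DNS name rather than IP address.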

Service Mesh

Service meshes like Linkerd, Istio, Kuma, Consul and others provide functionality such as end-to-end mTLS, traffic shaping, and traffic control/filtering. Service meshes use proxies such as Envoy and NGINX to control the traffic between workloads. Several of them, including Kuma, Consul, and Istio, support non-Kubernetes workloads; this requires that the service mesh agent and proxy be deployed on the non-Kubernetes workload. Adding the non-Kubernetes workload to the mesh allows traffic to it to be restricted at a granular, service level: instead of defining an egress policy, a policy that restricts service-to-service communication can be defined. This is ideal for environments running virtual machines in a more dynamic or ephemeral fashion, where pinning an IP address in an egress policy is less than ideal.
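As a sketch of the service-level approach, assuming an Istio mesh in which the external virtual machine has already been onboarded (for example via a WorkloadEntry) and is selected by a hypothetical app: legacy-db label, an AuthorizationPolicy can allow only a specific in-cluster service identity to call it, with no IP addresses in the policy:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: legacy-db-allow-billing        # hypothetical policy name
  namespace: legacy                    # hypothetical namespace holding the VM workload
spec:
  selector:
    matchLabels:
      app: legacy-db                   # hypothetical label on the onboarded VM workload
  action: ALLOW
  rules:
    - from:
        - source:
            # hypothetical service account identity of the allowed in-cluster caller
            principals: ["cluster.local/ns/payments/sa/billing"]

Because the caller's identity is carried in its mTLS certificate rather than its source IP, the rule continues to hold as pods are rescheduled and their addresses change.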

Conclusion

While Kubernetes has fundamentally changed how applications are built and run, workloads that don't run in Kubernetes will be around for the foreseeable future. This means organizations need to develop strategies for appropriately segmenting or isolating Kubernetes workloads from external services. Each of the methods detailed in this article can be used as a building block in a broader network micro-segmentation strategy that spans Kubernetes and non-Kubernetes workloads.


Author: Martez Reed, Director of Technical Marketing, Morpheus Data
Bio: Martez is the Director of Technical Marketing for Morpheus Data. He has spent the last 7+ years working with enterprise customers to develop automation solutions that accelerate their businesses. His background is in system administration, and he has fully embraced the DevOps methodology as well as the toolchain associated with the practice.

To hear more about cloud native topics, join the Cloud Native Computing Foundation and the cloud native community at KubeCon+CloudNativeCon North America 2021, October 11-15, 2021.