Overview and comparison of features of various controllers
What is Kubernetes?
Kubernetes is an open-source system designed to manage containerized workloads and services. It increased portability between clouds and on-premises environments, which was difficult if not impossible before Kubernetes. This enabled cloud-native architectures, which sped up deployment cycles and business responsiveness to usability needs and desires.
Kubernetes helps with the management and deployment of containers across varied infrastructures, helping automation and infrastructure as code (IaC) keep up with the rate of technological and business growth.
What is an Ingress Controller?
An ingress for Kubernetes is a resource that gives the outside world a way into (i.e. ingress to) a Kubernetes cluster. The ingress itself is less a thing and more a set of rules that establish routing endpoints for applications, services, functions, et cetera; the ingress controller is the component that watches those rules and configures a proxy to enforce them.
For example, you can create a rule for your website’s http://store.com/cart, connecting it to your existing Kubernetes service named “shoppingCart”. All inbound API requests to http://store.com/cart will then be serviced by the “shoppingCart” microservice.
Why do you need an Ingress Controller for Kubernetes?
Creating routing rules for accessing your applications is the main use case for an ingress controller. However, adding an ingress controller to a Kubernetes cluster also brings some additional features that don’t come ready out-of-the-box.
Additional Ingress-enabled Features
- Virtual Hosts
- Single IP address to many apps/hosts
- Load Balancing
- URL Rewrite
- TLS – bonus: ACME support for Let’s Encrypt *
- Easy Helm Installation *
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: store
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - store.com
    secretName: testsecret-tls
  rules:
  - host: store.com
    http:
      paths:
      - path: /cart
        backend:
          serviceName: shoppingCart
          servicePort: 8080
```
What’s underneath ingress?
The ingress is a set of APIs and resources in Kubernetes that control how a proxy—NGINX, Envoy, ALB, or another—is configured. Each proxy is configured differently and has its own set of features and APIs. These unique configurations are updated by a specific controller, which is either maintained by the Kubernetes project or built by a vendor or community. For example, NGINX and Envoy use different configurations to modify how a URI is routed, so each controller needs to be custom-built for its proxy.
Additionally, not all proxies have feature parity. For example, NGINX OSS doesn’t support sticky sessions, while NGINX Plus does.
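As an illustration of that gap, the NGINX Plus ingress controller exposes sticky sessions through a single annotation. A minimal sketch (the annotation name and syntax follow NGINX Inc.’s controller documentation and should be verified against your controller version):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: store
  annotations:
    # NGINX Plus only: pin each client to one shoppingCart pod via a session cookie
    nginx.com/sticky-cookie-services: "serviceName=shoppingCart srv_id expires=1h path=/cart"
spec:
  rules:
  - host: store.com
    http:
      paths:
      - path: /cart
        backend:
          serviceName: shoppingCart
          servicePort: 8080
```

With NGINX OSS, the annotation is simply ignored; there is no configuration equivalent to fall back on.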
The variation that results is why different ingress controllers may suit some applications better than others, depending on your objectives and preferences. So, how do you choose what the best ingress controller is for you?
Let’s explore the types of ingress controllers.
Types of Ingress Controllers
OpenSource Controllers for Kubernetes
The Kubernetes community maintains an ingress controller (nginx-ingress), built on NGINX and Lua modules that dynamically update the NGINX configuration without a reload.
There are still some cases where an NGINX reload is required. You will want to avoid reloads: during each one, the NGINX master forks to load the new configuration, effectively doubling the number of NGINX workers. The scope for adding extra modules to the instance is limited to what is built into the image, which at this time means the ModSecurity and tracing modules.
NGINX Inc. has its own ingress controllers, both free and paid. While the controllers from the Kubernetes community and from NGINX Inc. do similar things, their philosophies and deployments differ somewhat.
The NGINX Inc. ingress uses 100% pure NGINX configuration, with no third-party modules required to run. NGINX Inc. thereby retains full control of all the moving parts, from NGINX itself to the controller. The big downside of the OSS NGINX Inc. ingress is that it has no support for dynamic configuration: each time a Kubernetes endpoint is added or updated, an NGINX reload is kicked off.
Envoy (Heptio Contour, Gloo, Ambassador, Istio)
Envoy is the hot new tech everyone wants to get their hands on—and start using, like, yesterday. It has a lot of highly desirable features: gRPC and HTTP/2 support, monitoring, dynamic configuration updates, authentication, configuration APIs, etc. Too good to be true? Before you buy in, read the docs. Two major areas to explore first: security, and your own company’s structure and maturity.
Envoy is in its early days. It’s great for experimental applications, but it should be extremely well vetted by both your security and quality assurance teams for edge-case scenarios. Make sure they have the bandwidth, capability, and resources before adopting.
Like any new tech, Envoy requires DevOps cultural maturity. It may also require a level of customization, such as a custom-built add-on, or it may need to rely on endpoints external to Envoy (e.g. Istio Mixer) for security, tracing, etc.
Vendor Paid Controllers for Kubernetes
Like some other OSS ingress controllers that offer paid support plans, NGINX Inc. offers additional features in NGINX Plus that are unavailable in the OSS version of its ingress controller.
The commercial version of NGINX offers exclusive NGINX modules, such as dynamic configuration changes without the need to restart NGINX.
NGINX Plus is well received by pros from both the Kubernetes community ingress camp and NGINX Inc. ingress aficionados. Adding third-party modules doesn’t require modifying the base image; a simple Helm installation can install the modules if provided.
Kong works similarly to NGINX Plus. There is a free version with limited features (you didn’t hear it from us). That said, to really unleash Kong’s full potential you will need the paid version.
Kong is described as an API gateway, but the underlying technology is NGINX. One of the truly great things about Kong is it can be easily customized. Kong has a wonderfully rich ecosystem of third-party modules and a dashboard that makes modules easy to install.
Cloud- and Hardware-Based Controllers
Cloud- and hardware-based ingress controllers lock you into whatever underlying infrastructure or hardware you use. That can compromise your flexibility, particularly if your resources are tight. Taking this route means losing any Kubernetes portability between clouds and on-premises.
Google Load Balancer Controller—aka ingress-gce or GLBC—uses ingress resources and GCP APIs to build, manage, and control GCLBs, Google’s Global Cloud Load Balancers. GLBC is a beta Kubernetes controller intended for use as a managed offering.
GLBC should be used with caution and thoroughly tested by security and quality assurance teams. That said, you do get a lot of great features with GCLB: a single IP address for any Kubernetes cluster within your GCP projects, the ability to enable kubemci (Kubernetes multi-cluster ingress), and access to proprietary add-ons specifically designed for GCLB, such as Cloud Armor.
AWS has a proprietary Layer 7 managed load balancer called Application Load Balancer, or ALB. ALB is simpler than other cloud- or hardware-based controllers. Nonetheless, it can use more of the native ingress APIs and is considered more stable than other cloud controllers.
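A minimal sketch of an Ingress handed to the ALB controller; the annotations below are the ones documented for the aws-alb-ingress-controller project and may differ across versions:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: store
  annotations:
    kubernetes.io/ingress.class: alb              # route this Ingress to the ALB controller
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
  - host: store.com
    http:
      paths:
      - path: /cart
        backend:
          serviceName: shoppingCart
          servicePort: 8080
```

The controller then provisions and configures a real ALB in your AWS account from this resource.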
Citrix also has its own ingress controller. The downside is that it requires additional hardware or a virtual appliance—specifically the Citrix ADC CPX. Since Citrix can deploy on-premises or to cloud (GCP and Azure) via a virtual appliance, it lets you experiment with multi-cloud or hybrid Kubernetes deployments.
Add-ons: features, plugins, and modules for ingress controllers
Not all ingress controllers are created equal: they support widely varying sets of add-ons, plugins, modules, etc.
Wallarm is a cloud-native automated application security platform used to detect and block malicious API requests. It supports NGINX OSS, NGINX Plus, and Kong. Wallarm builds right into an ingress controller and acts as a WAF with an extremely low false-positive rate. The platform also bundles in features like vulnerability scanning, giving security and development teams visibility and peace of mind.
ModSecurity is the resident veteran when it comes to Web Application Firewalls. It comes pre-installed with nginx-ingress and can be installed as a dynamic module on NGINX Plus.
The caveats are that it’s not very good at parsing advanced API protocols, and it is based on regular expressions: each signature needs to be added manually, and any preconfigured rules need to be managed and continuously vetted for false positives.
Supports: nginx-ingress, NGINX Plus
Authentication (AuthN) is supported out-of-the-box by most ingress controllers. Most is the operative word here: some ingress controllers only support JWT, others support only external authentication, and still others rely on external endpoints (i.e. Istio Mixer) for authentication.
Supports: nginx-ingress, NGINX OSS, NGINX Plus, Envoy, GCLB
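With nginx-ingress, for example, basic authentication is a matter of a few annotations pointing at a Secret that holds an htpasswd file (a sketch following the ingress-nginx documentation; the Secret name “basic-auth” is illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: store
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    # Secret "basic-auth" must contain an htpasswd file under the key "auth"
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
  - host: store.com
    http:
      paths:
      - path: /cart
        backend:
          serviceName: shoppingCart
          servicePort: 8080
```

Other controllers express the same idea through their own CRDs or filter configurations rather than annotations.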
Prometheus is a monitoring system and time-series database that relies on a Prometheus exporter for the proxy/ingress you decide to use. Some exporters gather more data than others; for instance, NGINX OSS only exports stub_status metrics, while other proxies export much more.
Supports: nginx-ingress, NGINX OSS*, NGINX Plus, Envoy
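With nginx-ingress, for instance, wiring this up usually just means annotating the controller pod so Prometheus discovers its metrics endpoint (the annotation convention and the port shown are common defaults; check them against your own scrape configuration and controller version):

```yaml
# Fragment of an nginx-ingress controller Deployment's pod template
metadata:
  annotations:
    prometheus.io/scrape: "true"   # convention used by common Prometheus scrape configs
    prometheus.io/port: "10254"    # default metrics port of the nginx-ingress controller
```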
OpenTracing is a vendor-neutral instrumentation standard for distributed tracing. It relies on tracing headers being propagated through each microservice, which a service mesh makes easier. The resulting traces can be easily accessed with platforms such as Istio or the NGINX Controller.
Supports: Envoy (Istio), NGINX Plus (NGINX Controller)
Did you have an interesting experience with any of the Ingress Controllers above or have a specific example for when one is better than another? We’d love to hear from you.