Transcription of the interview
Swapnil Bhartiya: This is Swapnil Bhartiya, and welcome to another episode of TFIR Insights. Today we have with us Nick Chase, Head of Technical Content at Mirantis. Kubernetes 1.18 is being released today, and I wanted to talk to you about the evolution of Kubernetes and where we are today. We'll talk about some of the highlights, some of the core features, and what the plan is for the next release. So, let's start with: what is at the core of 1.18?
Nick Chase: So, at the core of Kubernetes 1.18 are improvements to features that have been around for a little while, including, specifically, the topology manager. As Kubernetes gets more into these hardcore environments, it's becoming more important to have very low-latency operations. In order to do that, you have to deal with things like NUMA, and you've got to be able to access things like SR-IOV, and the topology manager has to be able to manage where to put these workloads. Up until now, the device manager and the CPU manager haven't really been talking to each other, so the topology manager takes care of that, and that's in beta. At the same time, Windows environments have been around for a while, but now we're expanding into CSI support for Kubernetes, so more storage options for Windows environments.
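As a rough illustration of what Nick describes, the topology manager is configured on the kubelet. The policy names below are the real options in 1.18, but the surrounding node setup (and the decision to use the strictest policy) is assumed for the sake of the sketch:

```yaml
# Kubelet configuration fragment (not a full config) enabling the
# topology manager, which aligns CPU and device (e.g. SR-IOV NIC)
# allocations to the same NUMA node.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Options: none, best-effort, restricted, single-numa-node.
# single-numa-node rejects pods that cannot be NUMA-aligned.
topologyManagerPolicy: single-numa-node
```

With this in place, the kubelet consults the hints from the CPU manager and device manager together instead of letting each pin resources independently.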
So, what you've got is that Kubernetes is not new anymore, I mean, it hasn't been new for a while, but now you're seeing tendrils going off into areas where we knew we were going to have to get to eventually, and now you're starting to see that happen. There's also some improvement to Ingress. Of course, with all of the complicated networking going on, we've got service meshes and things like that, but you've got to be able to have requests coming into a workload, and the piece that does that is called Ingress. And so now they've changed things around with what they call IngressClass, so that it's easier to say, all right, we want to take traffic from, say, a public cloud provider instead of what we would normally think of, like maybe an NGINX server or something like that. So, those are a few of the things that are at the core.
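To make the IngressClass change concrete: in 1.18 an Ingress can name its class via a field rather than the older annotation. The controller string and resource names here are hypothetical placeholders:

```yaml
# An IngressClass ties a name to a particular ingress controller.
apiVersion: networking.k8s.io/v1beta1   # the Ingress API group in 1.18
kind: IngressClass
metadata:
  name: external-lb                      # hypothetical name
spec:
  controller: example.com/ingress-controller  # hypothetical controller ID
---
# An Ingress selects that class with the new ingressClassName field,
# replacing the old kubernetes.io/ingress.class annotation.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: external-lb
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: my-app            # hypothetical Service
          servicePort: 80
```

This is what makes it straightforward to run, say, a cloud load balancer class and an NGINX class side by side and route each Ingress to the right one.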
Swapnil Bhartiya: Now, I am also curious about what kind of Windows support is going to be there with this release.
Nick Chase: Windows has always kind of been a stepchild in terms of Kubernetes, and not out of intention, it’s just that it started on Linux and so the control plane still has to be on Linux. But what we’re seeing is, in this release we have the production release of the ability to add Windows worker nodes. So, even though you still have the control plane on Linux, you can create these Windows-based worker nodes which lets you run Windows containers, and if you have a mostly Windows architecture or infrastructure, you can use those machines.
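Since the control plane stays on Linux, a workload that must run on a Windows worker node is typically steered there with a node selector. A minimal sketch (the pod name is hypothetical; the IIS image is a real Microsoft base image):

```yaml
# Pod pinned to Windows worker nodes via the well-known OS label.
apiVersion: v1
kind: Pod
metadata:
  name: iis-example            # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/os: windows  # standard label set on every node
  containers:
  - name: iis
    image: mcr.microsoft.com/windows/servercore/iis
```

Without the selector, the scheduler could place the Windows container image on a Linux node, where it cannot run.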
And at the same time, Kubernetes is also starting to release production versions of more user functions within Windows. So, for example, the runAsUserName function now works with Windows. You have to make sure that you get all the right requirements for the username and the domain and so on, but you can use that within a Windows workload, and the same goes for gMSA workloads. gMSA, or group Managed Service Accounts, is a sort of account aggregation that lets you have one account that spreads over multiple machines, so you can change your password in one place, change access in one place, and you can now use that with Kubernetes as well.
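Both features live under the Windows-specific security context. A sketch, assuming a Windows node and a hypothetical pod name; the gMSA line is commented out because it requires a separately provisioned credential spec:

```yaml
# Running a Windows container under a specific account with runAsUserName.
apiVersion: v1
kind: Pod
metadata:
  name: win-identity-example     # hypothetical name
spec:
  securityContext:
    windowsOptions:
      # Plain username, or "DOMAIN\\user" for a domain account.
      runAsUserName: "ContainerUser"
      # For gMSA, reference a pre-created credential spec instead:
      # gmsaCredentialSpecName: my-gmsa-spec   # hypothetical spec name
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:ltsc2019
```

The username must exist in the container image (or the domain, for domain accounts), which is the "right requirements" caveat Nick mentions.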
And also, Windows now has support for kubeadm, however you'd like to pronounce it, which is the ability to easily add a node to your Kubernetes cluster using that tool. You can now do that on Windows. That's in beta, but it's there, which is great.
Swapnil Bhartiya: What are the other key features or highlights that you are personally interested in from the perspective of Mirantis? Storage is there, security is there, scheduling. I mean, there are so many. So, let's start with storage.
Nick Chase: Storage is, of course… I mean, you can't do anything without storage. On the Windows side, as I mentioned, we now have, I believe it's actually alpha, access to CSI, the Container Storage Interface, for Windows machines. So, you can now hook up a much more diverse range of storage to your Windows nodes. Also, in GA, we have the ability to use raw block storage, so that gives you a bit more in terms of range. And I like the ability to clone a persistent volume claim.
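Raw block storage is requested by setting the claim's volume mode; the claim name and size here are hypothetical:

```yaml
# PVC requesting a raw block device rather than a mounted filesystem.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc        # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block          # default is Filesystem
  resources:
    requests:
      storage: 10Gi
```

A pod then consumes it through `volumeDevices` (exposing a device path) instead of `volumeMounts`, which is useful for databases and other software that wants direct device access.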
So, if you have a volume that you’re using, you can basically use that as a data source for another volume, so you can clone that, which is awesome. Or one thing that is now in alpha is the ability to create your own sort of generic data source. So, if you want to create something that just uses your own specific requirements, you can now do that. So, that’s all great. Also, in GA, you can pass pod information to the CSI. So, this is another feature that’s been around for a little while. It’s now GA, so you can use it in your production environments where you can pass the name of the pod and all kinds of things, the ID and so on, right down into your storage.
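Cloning works by pointing a new claim's `dataSource` at an existing claim. Everything named below (storage class, claim names) is a hypothetical placeholder, and the storage class must be backed by a CSI driver that supports cloning:

```yaml
# New PVC whose contents are cloned from an existing PVC.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc           # hypothetical name
spec:
  storageClassName: csi-sc   # hypothetical CSI-backed class
  dataSource:
    kind: PersistentVolumeClaim
    name: existing-pvc       # hypothetical source claim
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```

The alpha generic data source Nick mentions extends this same `dataSource` field so it can reference custom resource kinds, not just PVCs and snapshots.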
Swapnil Bhartiya: What about security? How seriously is the community taking security? I think there have been a lot of improvements lately, but how much of that concern is making it into the code itself?
Nick Chase: Yes. So, there's always going to be a lot of concern about security, and the coders definitely take it seriously. In this release we're seeing the certificate signing request API. So, this allows you to go in and create a request for the certificates so that you can run secure communications between all of your nodes, which, obviously, is definitely important. We're also seeing advances in terms of federated identity, and also the ability to do these immutable secrets. So, if you're used to using secrets for your passwords, so you're not putting your passwords in plain text and all of that, you can now set them up so that you know that nobody's going to be able to go in and change them and mess everything up for you. So, they're definitely taking security seriously.
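Immutable secrets are just a flag on the Secret object. A sketch with hypothetical names and a placeholder value (in 1.18 this is an alpha capability behind a feature gate):

```yaml
# A Secret that, once created, cannot have its data modified;
# changing it requires deleting and recreating the object.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials   # hypothetical name
type: Opaque
immutable: true
stringData:
  password: changeme     # placeholder value
```

Besides preventing accidental edits, marking Secrets (and ConfigMaps) immutable lets the control plane stop watching them for changes, which helps performance in large clusters.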
Swapnil Bhartiya: What are the other key areas that you are kind of excited about?
Nick Chase: I’ll tell you what excites me personally. I am really excited about scheduling profiles. So, the way that the Kubernetes scheduler works, the scheduler is the piece that decides what node a workload is going to run on. And normally, what it does is it has a two-stage process. First, it filters out all of the nodes that are not applicable. So, this node doesn’t have enough memory, this node doesn’t have enough storage and so on and so forth. And then it sorts what’s left and then it goes ahead and drops it on whatever is the most appropriate node. But there’s now the ability to do what they call scheduling profiles, which allows you to sort of have a lot more control over how those decisions are made. And there’s a ton of what they call plugin endpoints within the scheduler, and you can decide which ones of those matter to you, you can put them all together. And what’s more, you can use different profiles per workload.
So, if this workload is better with this profile and then this workload is better with that profile, you can just specify that in the YAML that defines your workload and have much greater control over what is going to happen when the workload gets scheduled. So, that’s something that is in progress.
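Putting the two pieces together: profiles are defined in the scheduler's configuration file, and a workload opts into one by scheduler name. The profile name and disabled plugin below are illustrative choices, not recommendations:

```yaml
# Scheduler configuration defining two profiles in one scheduler binary.
apiVersion: kubescheduler.config.k8s.io/v1alpha2   # the 1.18 config API
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler   # untouched default behavior
- schedulerName: no-spread           # hypothetical custom profile
  plugins:
    score:
      disabled:
      - name: PodTopologySpread      # example: drop one scoring plugin
```

A pod then selects the custom behavior with `schedulerName: no-spread` in its spec, while every other pod keeps the default profile.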
Swapnil Bhartiya: Nick, thank you once again for taking time out to talk about Kubernetes 1.18, and I look forward to talking to you again when Kubernetes 1.19 is released. Thank you.
Nick Chase: It was my pleasure. Thank you so much for having me.