
Kubernetes Operators, Beginner Tips, KubeCon Fun, Open Source Software and More: Q&A With a Senior Developer

  • Who are you and what is your role at your company?

I’m Adam Wolfe Gordon, a Senior Engineer II working on DigitalOcean Kubernetes.

  • What did your Kubernetes journey look like and how did you get involved with Kubernetes?

I’ve been working on DigitalOcean Kubernetes since 2019; before that, I worked on storage. I didn’t start using or working on Kubernetes until I moved to the DigitalOcean Kubernetes team. I knew what it was and understood the very basic concepts, but I wasn’t deep into Kubernetes until I came over to this team and started working on our managed Kubernetes service. The first projects I worked on had nothing to do with the internals of Kubernetes; they dealt more with how we manage clusters. I’ve worked on many parts of Kubernetes since then, and now I feel very comfortable with it.

  • Out of all the technology you could have worked on, why did you stick with Kubernetes?

When I moved over to this team, I saw Kubernetes as a very exciting technology and a big growth opportunity, both for myself and for the cloud provider I work for. That has certainly been true so far. It’s a product where we’ve had lots of feature growth and a ton of user growth. The ecosystem moves really fast: the community is building new things all the time, and Kubernetes itself keeps growing. There has been incredible excitement in the industry over the last few years, and the growth opportunity I expected has certainly materialized.

  • What’s your favorite Kubecon memory?

KubeCon Barcelona, where we had the big launch of DigitalOcean Kubernetes. It was really fun being at the booth and hearing from users about their views of Kubernetes. Back then we got a lot of questions about high availability, and we ended up re-architecting the control plane. Kubernetes itself has changed from 1.10 to 1.24 during that time, and some of the features people wanted in 2019 they aren’t asking for anymore. We have to be wise about the features we pick. There’s so much going on in Kubernetes and the cloud native ecosystem that we have to pick and choose what we think the lasting things are going to be, rather than chasing down every technology to support, because not everything ends up as popular as the early hype suggests.

  • Any advice you can give anyone new to Kubernetes?

Kubernetes is huge, and it can be overwhelming to ramp up on. It can feel like you have to learn everything and understand it all before you can get anything done, and that can be paralyzing. The best advice I can give is to jump in on something, start working on it, and learn what you need for that specific thing. Don’t worry about knowing every facet; there’s always more to learn, and there are experts in those areas in the community that you can lean on. Kubernetes isn’t something you can master every aspect of, so you have to pick which areas you want to learn.

  • What is the new open source project the do-operator? And how did it come about?

The do-operator is an open source Kubernetes operator (or controller) for managing DigitalOcean resources that you want to use with your workloads, whether you want to manage those resources using manifests or just want an easy way to connect to your database from Kubernetes. At this early stage it can manage MySQL, PostgreSQL, MongoDB, and Redis databases. We’re excited to see how it grows to manage other resources.

We looked at what our users were doing and wanted to make things easier for them. We know our users often have both databases and Kubernetes clusters, and we wanted to make it easier to use them together. We also wanted to remove steps from their workflow and simplify their journey. It’s also an effort to make it easy to do things the right way and follow best practices, helping our users meet their goals in a secure and well-supported manner.

  • Why did you choose to make an operator for Kubernetes and not a helm chart or a packaged app?

We could actually build a Helm chart to install the do-operator, which we might do. Helm charts are very useful for running off-the-shelf software within your cluster, and there are operators that install databases into your cluster. The do-operator is different because it manages, configures, and connects to a managed database.

Operators can handle more complex situations than a Helm chart, but that’s tangential to our decision. The bigger choice for us was between an operator and the Kubernetes API integration Service Catalog. We initially went down the Service Catalog path and eventually decided to go with an operator.

At the end of the day, we went with an operator because operators seem to have won out in the community, and other cloud providers have also chosen them. Operators provide more flexibility to change instead of being locked into an API. That said, we took many lessons from Service Catalog, like the discoverability it affords and what your options are when integrating a service into Kubernetes. Those are things we want to carry forward into our operator, trying to make operations simple that are not always easy in an operator model.

  • Are there any tools that helped your team build an operator?

We used Kubebuilder on top of controller-runtime. It’s an upstream project maintained under the Kubernetes SIGs: a framework for building Kubernetes APIs using custom resource definitions (CRDs). As long as you are doing normal operator things, the tools it provides make it pretty straightforward to get started. In particular, you have to write very little YAML by hand.
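As a rough sketch, the Kubebuilder workflow described above looks something like the following. The domain, repo, and group/kind values here are illustrative placeholders, not the do-operator’s actual project settings:

```shell
# Scaffold a new operator project (domain and repo are hypothetical)
kubebuilder init --domain example.com --repo example.com/my-operator

# Generate a custom resource API and a reconciler skeleton
kubebuilder create api --group databases --version v1alpha1 --kind Database

# Regenerate the CRD manifests from the Go types and their markers,
# so very little YAML is written by hand
make manifests
```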

One of the reasons we picked it is that we have a lot of organizational experience with it. We’ve used it for other operators, and our internal Kubernetes team uses controller-runtime to build their controllers.

  • Why was the do-operator open sourced?

We wanted to open source it for a couple of reasons. The do-operator is built on our public API, so anyone could have built it, and some users might even have their own versions. The other aspect is that we are asking users to trust it, connect it to their Kubernetes API, and run it in their clusters. We want to be transparent so users can see what it is and understand what it does, which builds trust. We also wanted our users to contribute and create a community around it, which is one of the nicest aspects of open source.

  • How do you picture startups using the do-operator? How can it help them?

I’m interested to see what else users build with the do-operator. Philosophically, we don’t want it just to manage a resource; we want the integration piece to be there. For example, we likely don’t want the operator to just manage Droplets (VMs) unless there is a use case for integrating your Droplets with your Kubernetes cluster. That is something we want to learn from our community: what other integrations do they want, and how can we make them easier?

  • What are the types of architecture that the do-operator can support?

We didn’t want the operator to feel forced, or to require users to create a Kubernetes cluster just to create a database. We were thinking about users’ workloads and wanted to provide flexibility in how they think about databases and their lifecycles. The operator supports two architectures for managing and connecting to your databases, depending on your workload.

Controlled Database Architecture

In the controlled database architecture, Kubernetes helps manage the lifecycle and configuration of your database. This architecture is only suitable for databases used by apps running in a single Kubernetes cluster, where the database and cluster lifetimes match. For example, it’s a good choice when using Redis as a cache for a specific application on a single cluster.
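As a sketch, a controlled database might be declared like this. The kind, group, and field names are hypothetical, not the do-operator’s actual schema; check the project’s repository for the real CRDs:

```yaml
# Illustrative only: the operator creates this Redis database and owns
# its lifecycle, so deleting the resource deletes the database.
apiVersion: databases.digitalocean.com/v1alpha1
kind: Database
metadata:
  name: app-cache
spec:
  engine: redis        # e.g. also mysql, pg, mongodb
  region: nyc3
  numNodes: 1
```

An operator like this would typically publish the resulting connection details (host, port, credentials) in a Secret for the workload to consume, though the exact mechanism is the operator’s to define.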

Referenced Database Architecture

The referenced database architecture is preferred when your apps run in several Kubernetes clusters and your database should persist even when a cluster is deleted. You manage your databases apart from your cluster using a tool of your choice, like Terraform. The operator will not manage the lifecycle of your database in this architecture, because you want it to persist beyond your cluster. Instead, the operator manages and simplifies application access by handling separate users for each application.
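A referenced database, by contrast, might be pointed at by ID, so the operator never owns its lifecycle. Again, the kind and field names below are illustrative assumptions, not the operator’s actual schema:

```yaml
# Illustrative only: reference an existing, externally managed database
# by its ID. The operator handles per-application users and connection
# details but never creates or deletes the database itself.
apiVersion: databases.digitalocean.com/v1alpha1
kind: DatabaseReference
metadata:
  name: shared-postgres
spec:
  databaseUUID: "00000000-0000-0000-0000-000000000000"  # placeholder ID
```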


-Devneel Vaidya, Senior Product Manager, Kubernetes

I’m an entrepreneur who loves to make enjoyable products for people. I’ve worked as a Product Manager in startup and corporate settings and launched several of my own products, including Facebook applications that scaled to over 3 million users and were sold.