
Container adoption is accelerating rapidly. Indeed, research and analyst firm Gartner predicts that “By 2023, more than 70% of global organizations will be running more than two containerized applications in production.” Yet, existing IT infrastructure that has been optimized over a period of many years for virtualized business applications may not efficiently support containers.

If your organization is making a transition to containers and Kubernetes, it’s likely that you’ll need infrastructure purpose-built for Kubernetes. As with any purchase decision, you’ll want to consider carefully whether you will buy or build your infrastructure stack. This decision can be daunting, paralyzing even.

Make a List, Check It Twice

Before you decide whether you’d like to purchase your infrastructure stack or build it yourself, it will be helpful to make a checklist of the various factors that might influence your decision-making process. This list isn’t exhaustive, and you should come up with your own list geared toward your company’s specific needs. What follows are common considerations companies weigh when making purchase decisions:

  • Solution components: What components—hardware and software—make up the solution, and how well do they meet your requirements?
  • Deployment time: How long will the solution take to deploy, and how much expertise will that require? Solutions that exceed your team’s skills may need expensive professional services engagements, adding cost and time.
  • Management: How much time and expertise will the various hardware and software components require to manage?
  • Scalability: How difficult is it to scale the solution as your needs grow?
  • Total cost of ownership (TCO): How much will it cost you to own and operate the solution, including staff time?

A Detailed Approach Will Do Wonders

These are some of the higher-level thoughts you’ll need to process before delving into a more detailed roadmap for your decision. Once you have reviewed your checklist and made decisions about the tack you’d like to take, you can start to get into the nitty-gritty.

For example, would you like to use a bare-metal or virtualized environment? When you deploy containers on top of an existing virtual environment, you are layering one form of virtualization on top of another. The people using and managing Kubernetes are not likely to be the same folks managing the virtualized environment. This can lead to communication problems and other delays.

On the other hand, bare-metal containers reduce the number of layers to manage versus VM-based containers, and because bare metal is more efficient, you need less hardware to run the same number of containers, reducing the total amount of equipment under management. These are the types of choices you’ll have to make along the way to your purchase decision. Other factors to consider are:

  • Persistent storage: Applications running inside containers need to be able to save data permanently. The Container Storage Interface (CSI) is now a generally available standard that lets Kubernetes integrate with third-party storage solutions, providing a mechanism for persistent storage even as containers come and go.
  • Networking model: Getting networking right remains one of the most difficult aspects of container environments, and container networking must integrate smoothly with your existing data-center networking.
  • Support: How will you get support for the full infrastructure stack, including both hardware and software? From a single vendor? From multiple vendors? From the open-source community?
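To make the persistent-storage point concrete, here is a minimal sketch of how a CSI driver is typically consumed in Kubernetes: a StorageClass references the vendor’s driver, and workloads request durable storage through a PersistentVolumeClaim. The provisioner name and parameters below are hypothetical placeholders; your storage vendor’s CSI documentation will supply the real values.

```yaml
# Hypothetical StorageClass backed by a vendor CSI driver.
# The provisioner name and parameters are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-block
provisioner: csi.example-vendor.com   # replace with your vendor's CSI driver name
parameters:
  tier: performance                   # driver-specific option (assumed)
reclaimPolicy: Delete
allowVolumeExpansion: true
---
# A claim that workloads mount as a volume; the data it holds
# outlives any individual container or Pod restart.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-block
  resources:
    requests:
      storage: 20Gi
```

A Pod then mounts `app-data` as a volume; if the container is rescheduled to another node, the same volume is reattached, which is what makes the storage persistent from the application’s point of view.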

To DIY or Not?

As you drill further down into the requirements for your Kubernetes environment, the question of flexibility will inevitably arise. This is the major trade-off between infrastructures you build yourself and vendor-provided solutions. If you build something yourself, you get to decide which features and functions your infrastructure will include. If you opt to purchase a solution, you will have less say in the types of features and provisions the infrastructure provides, but you’ll enjoy a lower operating cost and a shorter time to deployment.

If your team is new to Kubernetes—or just needs to get a project off the ground quickly—there can be a lot to wrestle with. If you choose to build out infrastructure yourself, the deployment will take longer and may require professional services to complete. Ongoing management complexity will be higher than it would be with a more integrated solution since you’ll have to keep up with the open-source community and integrate patches and enhancements as needed.

When it comes to networking, you’re completely on your own. To scale, you add servers until the storage array runs out of performance or capacity. Then you either add a second storage system or replace the original system with a more powerful one. Either way, this can be disruptive and can result in a big, and often unanticipated, incremental expense.

Ultimately, the biggest advantage of a do-it-yourself Kubernetes platform (the complete freedom to choose all the hardware and software components) may also be its biggest disadvantage.

How to Select an Off-the-Shelf Solution

If you do decide that the freedom to choose all your components isn’t worth the upkeep, then you’ll have to decide on a vendor solution. Today, there are two major types of infrastructure that enable you to run containers efficiently.

Among the more modern offerings are converged and hyperconverged infrastructure. Converged infrastructure (CI) pre-packages several servers with a separate storage array. Hyperconverged infrastructure (HCI) combines servers with internal storage, software to virtualize that storage, and virtualization software. These solutions can be deployed for use with Kubernetes. However, as a class, they are designed for virtualization rather than containers, making bare-metal container deployment impossible in almost all cases.

There are also solutions based on virtualization. To deploy Kubernetes on existing virtualized infrastructure, you have a number of possible options. Deploying the software stack and ongoing management for these solutions is time-consuming but enables relatively straightforward scaling.

Still, other solutions exist that manage and monitor the container environment for you once everything is installed. However, many leave you beholden to the vendor’s networking capabilities once you implement the infrastructure. Finally, there are solutions that are purpose-built to deploy Kubernetes applications. These often involve a higher capital expenditure investment than other solutions but are built to provide the right mix of ease of use and performance.

A Wise Decision Now Means an Easier Future Ahead

The need to provide new digital services is forcing enterprises to pivot to container infrastructure, similar to the pivot to virtualization a decade ago. As you make this transition, you have to consider your requirements and make careful infrastructure choices to avoid being saddled with a solution that is overly complex, difficult to manage, lacking in performance, or locked into a specific vendor environment.

To learn more about containerized infrastructure and cloud native technologies, consider coming to KubeCon + CloudNativeCon EU, in Amsterdam.
