Pepperdata Helps Organizations Reduce Cloud Costs With Autonomous Optimization 

Pepperdata helps organizations with big data workloads optimize those workloads as they move to the cloud. Many organizations face runaway costs, and there can be a gap between the expectations and the reality of moving to the cloud. By applying autonomous optimization, Pepperdata reduces the amount of resources consumed and therefore the cost. The autonomous optimization is further augmented with tools for enterprise-class monitoring.

A Pepperdata survey showed that 77% of enterprises were in the process of moving to Kubernetes or had already done so, and that roughly 50% of their workloads had been migrated in 2021.

In this episode of TFiR Let’s Talk, Maneesh Dhir, CEO of Pepperdata, tells us about the company, how it is helping to tackle the challenges of monitoring big data workloads, and how autonomous optimization helps reduce resource consumption. He also discusses what sets Pepperdata apart from other toolkits like Prometheus and the key trends he is seeing in the space.

Key highlights from this video interview are:

  • Many organizations are struggling to manage costs in the cloud. Pepperdata helps them with autonomous optimization, which reduces the amount of resources being consumed and therefore the cost. Dhir explains what the company is trying to achieve and how it is tackling runaway costs.
  • Dhir explains the findings from a Pepperdata survey on Kubernetes adoption. He goes into further detail on the trends he is seeing.
  • Dhir discusses the role of monitoring in DevOps: making sure the environment is performant. Moving to the cloud does not always go as planned, however, and costs can be higher than expected. He explains how monitoring helps teams keep the app performant and better understand the resources being consumed.
  • Observability is a big challenge for Pepperdata’s large enterprise clients, who want to be able to see the key metrics, examine the behavior of their infrastructure, and adjust accordingly. Pepperdata’s autonomous optimization was created to adjust resources dynamically and automatically, resulting in a 30-40% reduction in resource consumption.
  • When getting an application ready for the cloud, a lift-and-shift approach is often used; however, that does not mean the application is optimized, and consumption can be higher than expected. Dhir tells us how Pepperdata looks at resources in real time and adjusts the workload being deployed to match the resources being commissioned.
  • Dhir discusses the three main areas where Prometheus falls short for enterprises: the long-term monitoring they want, the ability to combine complex metrics for the best view of an application’s behavior, and an enterprise-class dashboard to make sense of everything and take action.
  • Dhir shares his insights into what the future may hold for cloud migration. He believes cost will continue to be a critical consideration for organizations, and that the industry needs monitoring as a foundation for getting the best use of the resources deployed for big data workloads and for optimizing those workloads accordingly.

Connect with Maneesh Dhir (LinkedIn)
Learn more about Pepperdata (Twitter)

The summary of the show is written by Emily Nicholls.

Here is the automated and unedited transcript of the recording. Please note that the transcript has not been edited or reviewed. 

Swapnil Bhartiya: Hi, this is your host Swapnil Bhartiya, and welcome to another episode of TFiR Let’s Talk and today we have with us Maneesh Dhir, CEO of Pepperdata. Maneesh, it’s great to have you on the show.

Maneesh Dhir: Thank you, Swapnil. I’m so glad to be here.

Swapnil Bhartiya: Yeah. And since this is the first time we are talking to each other, and you are also the CEO of the company, I would love to know a bit about the company. Tell us, what is Pepperdata all about?

Maneesh Dhir: Sure. So, we’re based in the Bay Area, like many companies of our kind and we’ve been around for a while, but really what we’re all about in a nutshell, is if you’re a company that has either moved to the cloud or is moving to the cloud and has big data workloads of any kind, and you are faced with the challenge of runaway costs, we help you provide what we call autonomous optimization, the ability to optimize your workload.

So, it brings down the amount of resources you consume and therefore the cost you expend, and we augment that autonomous optimization with some very robust tools for comprehensive enterprise-class monitoring, which incidentally are a companion to, or built on, Prometheus, which is a very popular monitoring tool, especially in the Kubernetes open source community.

Swapnil Bhartiya: Excellent. Thanks for sharing an overview of the company. You talked about monitoring, and I also want to understand a bit about the Kubernetes environment, because these words are becoming interchangeable: cloud native and Kubernetes are becoming synonymous. Can you talk about the importance of monitoring? And when we talk about monitoring, let’s set the terms aside for a moment, because we also talk about observability and tracing. Let’s look at the bigger picture: the ultimate goal is to have visibility into what is going on in the system, so you can optimize it for performance or security. So, talk about it from that perspective.

Maneesh Dhir: Sure, sure. And actually maybe, if you don’t mind, I’ll take a step back also, because first let’s talk a little bit about Kubernetes. The world is moving to Kubernetes. In fact, we did a survey about this time last year, of about 600 organizations. And some of the data that … and this survey is publicly available, is that about 77% of the companies we surveyed, enterprises, said they are in the process of, or have moved to Kubernetes, and roughly 50% of their workloads in 2021 had been migrated. So, Kubernetes is clearly a destination for the vast majority of businesses that are moving to the cloud. That’s premise one.

Premise two is that along with it comes a culture where DevOps is the name of the game, and CI/CD is the key process most organizations follow. So what is DevOps? For your users, even though I suspect this is a term they’re quite familiar with, it means the boundary between development and operations is blurry. The developers are involved in the deployment of apps and in their real-time modification, to make them performant and cost effective.

And so in this process, the role of monitoring for both the developer and the operations person is two or threefold. One, to make sure that the app is performant in an environment which could be, and this is one of the benefits of Kubernetes, on-prem, hybrid, or multi-cloud. Two, that the app is behaving in line with the business case that was made to move things to the cloud, because often a digital transformation brings with it an expectation of a certain set of benefits, one of which is the flexibility and the cost structure that the cloud allows us.

But often when companies move to the cloud, they find that their experience can be a little bit different. Initially the costs are higher and more importantly, month on month, they often see them going up and there’s what we call bill shock.

And so, monitoring has two objectives. One is the basic objective of monitoring, which is to make sure things are running okay and the app is performant and so on and so forth. But increasingly, the role of monitoring is to understand what resources are being consumed and whether, as nodes are commissioned and decommissioned, the behavior is cost effective. So, that’s the lens through which monitoring, or observability as we now call it, is increasingly being used by organizations, especially the larger enterprises.

Swapnil Bhartiya: Excellent. Thanks for sharing that overview. One more thing, I want to just go a step further when we talk about observability: knowing what’s going on is fine, but without actionability, it’s just logs; you just know. So, can you also talk a bit about that? It could be from Pepperdata’s perspective as well, to actually help DevOps teams be able to do something about it.

Maneesh Dhir: Good question. So our journey with this started with, as I said, observability, because of our customers. Our customers, to give you just a flavor, generally tend to be Fortune 1000. A few of them are actually Fortune 10. Some are Fortune 100. So, relatively large enterprises for whom this problem was at some significant scale, and they wanted to be able to observe key metrics. They wanted to be able to retain them for long periods of time so that they could examine the behavior of their infrastructure and be able to adjust it. And that journey then eventually took us to where we are today, which is you start with observability, but eventually what you realize is that you’re just monitoring a lot of statistics, because in this world of the cloud, there’s a profusion of metrics. You can instrument just about anything, at any frequency.

So, it’s very easy to end up with thousands or even tens of thousands of readings. Trying to make sense of that becomes the next challenge. And most IT organizations, especially DevOps organizations today, have many challenges, but having sufficient manpower, having sufficiently trained manpower, and then having just sheer hours in the day are some real ones. So, we started giving out recommendations from our monitoring and observability platform, and then we realized that actually, people don’t have the time to look at those recommendations. And that’s when we moved to where we are today, which is we built something called autonomous optimization, where those recommendations, instead of giving them to you, the IT ops person, or the DevOps person, or the architect, call it what you will, we ingest them. We see which resources are being commissioned but underutilized, and we then automatically, behind the scenes with no intervention from you, adjust those resources dynamically. So, they go up and down based on the real-life usage of those resources. And our customers tend to save about 30%, 40% of their resource consumption, which is then directly reflected in their cloud bill, if they’re on the cloud.
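
To make the mechanics concrete, here is a minimal sketch in Python of the kind of right-sizing decision Dhir describes: compare what a workload actually uses against what was reserved, and shrink the reservation when there is a persistent gap. This is an editorial illustration, not Pepperdata’s implementation; the function, thresholds, and numbers are all hypothetical.

```python
# A hypothetical right-sizing rule: shrink a reservation when observed
# usage stays well below it. Names, thresholds, and numbers are made up
# for illustration only.

def recommend_request(usage_samples, current_request, headroom=1.2):
    """Suggest a new resource reservation from observed usage.

    usage_samples: recent measurements, e.g. container memory in GiB
    current_request: the amount currently reserved for the workload
    headroom: safety margin kept above the observed peak
    """
    if not usage_samples:
        return current_request  # no data yet, leave the reservation alone
    suggested = max(usage_samples) * headroom
    # Only act on a clear gap, to avoid constant churn.
    if suggested < current_request * 0.8:
        return suggested
    return current_request

# A job that reserves 5 GiB of memory but peaks around 2.3 GiB would be
# brought down to roughly 2.8 GiB.
print(recommend_request([1.8, 2.1, 2.0, 2.3, 1.9], current_request=5.0))
```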

Swapnil Bhartiya: Excellent. I want to go back to the point of DevOps a bit. If you look at the traditional IT landscape or private cloud, there used to be silos where there were teams or professionals who knew about a particular thing. They were responsible for it, whether it was networking, security, whatever it was; they knew it and they would take care of it. With the cloud, as you also said, what happens is you just go to any big provider, put in your credit card and get their service. Now, how much attention can developers pay to a lot of things which are really important? Security could be one, optimization could be one. That’s number one.

Number two is that because of the DevOps movement, earlier, as a developer, your job was just to write the application, which adds business value, and your job was done. Someone else would run it. Someone else would secure it. Now everything is in your pipeline, so you have to manage so many things, which also means the complexity is growing and the area of [inaudible 00:09:28] that you need may be lacking. Plus there is a shortage of skills as well.

So, first of all, how do you, and when I say you, I mean Pepperdata, look at this problem? And once again, you talked about autonomous optimization. So, let’s talk about what the problem is here, which you see in general from a broader perspective, and how you folks actually help teams, whether the company is big, as you mentioned, or small, so they are efficient.

Maneesh Dhir: Again, I think this is very emblematic of a real-life situation. Most organizations, big or small, are going through this. Let’s start from the beginning. Why do companies move to the cloud? Because the cloud is flexible; the premise is you can have more dynamicity in your resource allocation and hence you can save cost in the long run. So, the mandate often comes from an executive in the company. Sometimes they could have a role as chief digital transformation officer or something like that. When the mandate to move to the cloud comes, the first thing is to move the application to be cloud ready or cloud enabled.

Often, that could be lift and shift, as we call it in the industry, which is you take it, you make minor modifications, but you don’t necessarily optimize it. If you have the luxury to do so, some organizations will actually rebuild applications fresh on the stack, which is why, going back to where we started, the rise of Kubernetes, it’s what we call the modern data stack. That’s where companies want to be because it’s resilient, it’s performant, and it allows you that flexibility to pick your environment or a mix of environments.

But once a developer has built the application, usually the reality is that most projects tend to run over budget and over time. That’s just the way things are, and there isn’t enough time to optimize the application. And even if you are able to optimize the application, when you are running it, you often find that there’s a big gap between what the application is consuming and what you thought it should be consuming.

And that’s where we come in. We do not necessarily count on the developer to have gone through all these steps. We look at the dynamic behavior of the application. How is it actually running with the workload that you’ve chosen? Let’s say it’s a big data workload. Let’s say it’s running as a batch process on Spark on a platform like Kubernetes. You may have allocated a certain amount of resources; let’s call that resource memory. You might have allocated, say, five gigabytes of memory, but most of the time you are actually using maybe two gigabytes of memory. Well, if I knew that I had that extra memory available, I could actually run another job, but the scheduler doesn’t know that. The scheduler doesn’t have the smarts, if you will, to figure that out.

So, what Pepperdata does is to actually look at … it goes beyond what the developer can do. It looks at what is happening in real time and based on that, it adjusts the workload that is being deployed for the resources that are being commissioned. If you have more resources commissioned than are actually needed, because there’s no backlog of pending jobs, it’ll actually throttle back the resource usage. That’s how you get these savings that I spoke of.
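
As a concrete illustration of the allocation side of that example, here is how a fixed memory reservation is typically declared for a Spark batch job in PySpark. The properties shown are standard Spark settings; the application name and values are illustrative, and the resizing step in the comment is a hypothetical follow-up, not a description of Pepperdata’s mechanism.

```python
from pyspark.sql import SparkSession

# What the developer asks for up front: a fixed reservation per executor.
spark = (
    SparkSession.builder
    .appName("nightly-batch")                  # illustrative job name
    .config("spark.executor.memory", "5g")     # 5 GB reserved per executor
    .config("spark.executor.instances", "10")
    .getOrCreate()
)

# If monitoring later shows executors peaking around 2 GB, the job could be
# resubmitted with a smaller reservation (e.g. spark.executor.memory=3g),
# freeing capacity on the cluster for other jobs.
```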

Swapnil Bhartiya: Since you brought it up, and Prometheus is already there, what are the limitations? I mean, these are two very, very powerful open source tools. But once again, it’s a question I feel is strange to ask, because there is so much complexity that you do need commercial folks like Pepperdata to come in and help. So, let me [inaudible 00:13:44]. I mean, if you look at Kubernetes, it has Prometheus there, which helps with monitoring, knowing what’s going on. So, talk about the limitations of Prometheus in that sense, but also, as we were talking a little about developers and DevOps, their own limitations that come from the complexity. So, talk about that: since Prometheus is already there, where does Pepperdata actually jump in to help folks make things easier and simplify them, so once again they can focus on business value and not get intimidated by all the metrics and data?

Maneesh Dhir: Absolutely. And at the outset, I want to just recognize that Prometheus out of the gate is a great way to get started. It has several advantages and, time permitting, I’d go into them. But just for brevity, let’s state that at the outset and focus on what you really asked, which is: if I’m on Prometheus, where do the gaps lie? The gaps lie in, one, making it enterprise grade. If you’re an enterprise, you want to look at patterns and metrics over long periods of time. The genesis of Prometheus is more in real-time monitoring, and real-time monitoring is very valuable, but it doesn’t take away the requirement for long-term monitoring, because workloads change over time and consumption changes over periods of time. Enterprise organizations want to retain that data and examine it over longer periods of time. So, that’s one aspect; we can do that, and it doesn’t come built in.

The second is, in this environment, Prometheus has a fairly rich query capability. It has a real-time database, so it allows you to instrument and pull in a lot of metrics in real time fairly quickly and efficiently. But what it doesn’t have is the ability to do complex combinations of metrics: business metrics and system metrics. Also, in a Kubernetes kind of environment, resources are being commissioned and decommissioned, so a lot of ephemeral services, if you will, or requests or node addresses are coming and going. The ability to grab all of that, so you get the best view of your application’s behavior, is a bit of a shortcoming of Prometheus, if you will.
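
To illustrate the kind of combined view being described, here is a small sketch that asks Prometheus, through its standard HTTP query API, for memory actually used versus memory requested, per pod. The server URL is a placeholder, and the metric names assume cAdvisor and kube-state-metrics are among the scrape targets.

```python
import requests

PROMETHEUS = "http://prometheus.example.internal:9090"  # placeholder URL

# Working-set memory divided by requested memory, per pod.
promql = (
    "sum by (pod) (container_memory_working_set_bytes) / "
    'sum by (pod) (kube_pod_container_resource_requests{resource="memory"})'
)

resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": promql})
resp.raise_for_status()
for result in resp.json()["data"]["result"]:
    pod = result["metric"].get("pod", "<unknown>")
    ratio = float(result["value"][1])
    print(f"{pod}: using {ratio:.0%} of requested memory")
```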

And thirdly, it is about putting all of that together in an enterprise-class dashboard, where you can look at all of this, make sense of the trends and then take action. A lot of people use tools like Grafana, for example, to supplement Prometheus. Again, that may be a good solution for you, but if you’re a larger organization, what we call enterprise, you want things like role-based access control. You want to have more complicated combinations of metrics. And by the way, if your job is a batch job, you want the capability to look at not just the basic metrics but, for Spark specifically, the more complex, multistage executor metrics, which is where Prometheus often doesn’t give you what you need. Those are the areas where companies like Pepperdata, with a strong foundation in observability, come in.
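
For the Spark-specific executor detail mentioned here, one place such per-executor figures are exposed is the monitoring REST API served by the Spark driver UI, typically on port 4040. A minimal sketch of reading it, with the driver host as a placeholder:

```python
import requests

DRIVER_UI = "http://spark-driver.example.internal:4040"  # placeholder host

# List running applications, then pull per-executor figures for each one.
apps = requests.get(f"{DRIVER_UI}/api/v1/applications").json()
for app in apps:
    executors = requests.get(
        f"{DRIVER_UI}/api/v1/applications/{app['id']}/executors"
    ).json()
    for ex in executors:
        # Per-executor detail that node- or pod-level metrics alone miss.
        print(app["id"], ex["id"], ex["memoryUsed"], ex["totalTasks"])
```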

Swapnil Bhartiya: Excellent, excellent. Thanks for explaining that. I will go back to the very beginning, when you were talking about the survey that was done. I’m pretty sure that by now the numbers are much higher than what folks were saying about adoption. I sometimes compare Kubernetes with the Linux kernel, because when the [inaudible 00:17:38] came out, he did not expect where it would be used. Now everything is running on the Linux kernel. Something similar is happening with Kubernetes as well. You can also call it the Linux of the cloud, in a way.

As the use cases grow, and not only the use cases but also where it is being used, edge is also there. So can you also talk a bit about where you see things heading and what kind of challenges that will create for monitoring? As you said, cost is becoming a big factor. There was recently an open source project called OpenCost, which is specifically for Kubernetes cost as well. So, talk about it from that perspective too: where you see things heading and how the ecosystem, in terms of companies like Pepperdata, will evolve to tackle some of those problems.

Maneesh Dhir: Sure. We think about this a lot because we’re one of the early companies to be in this autonomous optimization space. I think you’re absolutely right. One, cost sensitivity is the order of the day. We’re now in an economic environment where, frankly, that’s being mandated. If you’re the CFO of an organization and you look at a cloud bill, often there is a mandate to say, “Well, let’s compare the business case to where we are today.” And often, there may be a big gap there. On the other hand, I think the move to the cloud isn’t going away. The fundamental benefits, the premise behind the move to the cloud, stay strong, as you mentioned.

There are different forms of the computing paradigms emerging, edge computing being one of them. But I think fundamentally, what the industry needs as a foundation is the ability to look and monitor. In the traditional era where you had a Linux kind of environment, you had tools which did sort of system management, where you could instrument everything and look at that.

That level of enterprise-grade monitoring, if you will, has to be prevalent, not just to make apps performant, but also to make sure that you’re getting the best use of the resources that you’re deploying, not just in the cloud, not just at the edge, and with workloads like … typically it’s not just big data workloads.

There are so many different workloads now moving to the cloud, the ability to manage all those workloads and understand their characteristics and be able to then address any sort of gap between expectations and reality, that really is where I think the industry is. So, I see monitoring certainly as a baseline capability, but the ability to optimize that consumption of resources, that waste that is being generated, I think that’s the need of the hour. That’s what our customers are telling us.

Swapnil Bhartiya: Maneesh, thank you so much for taking time out today and joining me to not only talk about the company, but also share the larger picture in terms of observability and monitoring, where things are moving, and how folks like Pepperdata are helping users. Thanks for sharing those insights, and I would love to have you back on the show. Thank you.

Maneesh Dhir: Swapnil, pleasure was mine. Thank you too.
