Chip Childers, the co-founder of the Cloud Foundry Foundation, recently joined Puppet as Chief Architect. When he announced the move, Childers said that he is excited to go back to a role that’s about shipping products. “I fundamentally have that itch to make sure that I’m involved in getting products out the door,” said Childers.
While there were many companies that would have loved to have Childers, he chose Puppet for many reasons. One reason was that in the last 12 years, Puppet has managed to build a significant business. “It is underpinning some of the most critical infrastructure in the world,” said Childers.
Childers also noted that the Puppet team played a big role in this decision: an “incredible group of people” that includes both new hires and those who go back to the early days of the company. Of that team, Childers says, “They’re kind, they are empathetic, but they’re also incredibly smart, smart about infrastructure management, smart about configuration management. And they’re going to be a great team to work with.”
Childers’ role as Chief Architect will involve getting back to helping ship products, which boils down to helping the organization balance “keeping the existing customers really happy with the products that they’ve been shipping for quite some time and some of the newer products that have really amazing add-on capabilities to the core Puppet Enterprise promise.” He’ll also help accelerate the expansion of Puppet’s offerings to address things like the complexity of configuring cloud-based services.
Puppet’s Role in the Cloud-Native World
So what role will Puppet play in the cloud-native world? On that front, Childers says it’s important to “think about the difference between the more volatile ephemeral infrastructure and heritage infrastructure, classic infrastructure, it really has to do with the amount of time that each thing that you’re managing, stays alive.” He then uses the pets vs. cattle analogy to say, “Is every operating system a pet that you care for as an individual and you nurture? Or, is it a herd of cattle where you actually care about the health of the herd more so than the health of any individual animal? It’s an imperfect analogy, but it’s one that was used during the early days of cloud.”
According to Childers, we’ve witnessed a slow, and even nascent, shift to a world where configuration management isn’t just a way to optimize your experience; it’s fundamental to the ability to consume, and have any faith in, the systems being deployed. Because of this, Childers says, “Configuration management’s impact and importance rises dramatically as you get into these newer styles of compute.”
Importance of Bare Metal and On-Prem
And with the rise of bare-metal, on-prem, and edge computing, Childers says Puppet’s role is more critical than ever. He says, “It’s almost as if that trend of moving to clouds and moving higher levels of abstraction above the server has similarly played out, but in a slightly different direction when it comes to on-premise infrastructure.” Childers adds to this, “So serverless computing does essentially hide everything underneath it. You don’t need to worry about its configuration management. So you’re just dealing at this higher level of abstraction.”
With regard to edge computing, Childers says it’s just computing distributed a bit more to physical locations, something companies have been doing for many years. He gives the example of Point of Sale (POS) systems. If a business is a multi-store retailer, it likely has a distributed edge computing model. Those long-running infrastructure environments need configuration management. And although, according to Childers, we don’t have the same cloud-like dynamic for agile infrastructure on the edge, we do have the same business demand for software to be able to go through rapid change.
But why are companies still leaning toward on-premise data centers? To that, Childers says, “If you’re in the Fortune 500, the Global 2000, Global 4,000, you’ve got data centers, you’ve got probably multiple data centers, you’ve got edge devices that you have to deal with, whether you’re in manufacturing, you’re in automotive, whether you’re in retail, whether you’re in telecom, you have a distributed computing environment that you’re responsible for, that the lifeblood of your business runs through.” None of this depends on the public cloud providers; the public clouds, the cloud-like on-prem environments, and the more classic on-prem infrastructure environments are all part of the modern IT architecture. According to Childers, “The challenge for the CIO and the CEO is that this is just with the added benefit of more opportunity to get access to infrastructure quickly and all of the potential agility that provides your company. The cost is added complexity, and that added complexity can result in a harder time achieving resiliency or reliability.”
That’s where Puppet comes in, with a set of solutions that are being built out to allow for a simplification of that problem. Childers says that it all comes back to the core premise of configuration management being a superpower. He says, “If a company is very good at configuration management, infrastructure automation naturally rolls from that. You then begin to experience what it’s like to have self-healing systems. You can then start to experience what it’s like to have automated compliance problems beginning to get resolved before they become evident through failure or through security breaches.”
And on-prem still matters, especially when it comes to data gravity. In that respect, Childers says, “Once you establish the location of your data, it’s hard to move it. It’s not impossible, but it’s hard and there’s a big cost to moving it.”
Lowering the Barrier to Entry: Puppet and Low-Code
Childers also addresses the topic of low-code when he says, “Automation is what frees humans from dealing with toil. It’s something that fundamentally did not exist for a long period of time. And then Puppet was a pioneer in bringing the concepts of DevOps into the market.” But Childers hasn’t reached any type of hypothesis about how configuration management will help developers who use low-code systems to build software. He adds, “The real question comes when you take a look at the opportunity to ensure that the low-code experience is actually available on top of infrastructure automation. So how do you take the goodness, that is something like Puppet, and find a way to help an organization scale the use of infrastructure automation in a way that doesn’t require knowledge of how to author a Puppet module? That’s an area that I think is very attractive. It’s an area that I think that we may have some things to share on.”
The summary of the show was written by Jack Wallen.
Here is the rough, unedited transcript of the show…
Swapnil Bhartiya: Hi, this is your host, Swapnil Bhartiya, and welcome to TFiR Let’s Talk. And today we have with us, once again, Chip Childers, who is now a Chief Architect at Puppet. It’s good to see you again, Chip. Thanks for joining me today.
Chip Childers: It’s great to be with you as always, Swapnil. Thanks for having me on.
Swapnil Bhartiya: Now you’re Chief Architect at Puppet, there’s so much to talk about today. But I want to start with: you have been in this cloud space for so long, you have so much experience. Why did you choose Puppet? Why did you join the company?
Chip Childers: I think that Puppet was a really interesting choice for me for a few reasons. So one is, think about its 12-year history. It’s managed to build a really significant business. It is underpinning some of the most critical infrastructure in the world. And so that’s something that was incredibly attractive to me. The second is that it ships commercial products, and that was really something that I personally needed to get back to, having spent a lot of time in the Open Source foundation or trade association world. I fundamentally have that itch to make sure that I’m involved in getting products out the door. And I think the other thing about Puppet that made it particularly attractive, frankly, was the team. The team they’ve got, both the folks that are relatively new spanning all the way to the very early employees that are still there, they’re just an incredible group of people. They’re kind, they are empathetic, but they’re also incredibly smart, smart about infrastructure management, smart about configuration management. And they’re going to be a great team to work with.
Swapnil Bhartiya: As a chief architect, what is your actual job? You did mention that you missed shipping the product and everything else. So I want to understand what you are doing here now.
Chip Childers: Yeah. Well, I mean, I think number one, it’s going to involve getting back to helping to ship product, but it kind of boils down to a few things. So the first thing is to help the organization do that tricky balance between keeping the existing customers really happy with the products that they’ve been shipping for quite some time and some of the newer products that have really amazing add-on capabilities to the core Puppet Enterprise promise. But it’s also to help accelerate some of the journey as Puppet expands its offerings. So an example of what I mean by that is, the complexity of infrastructure hasn’t gone away now that cloud-based services are available. And in fact, what’s actually occurred is an exponential increase in the complexity of needs around configuration management, right?
So most companies have not abandoned all that infrastructure that Puppet is really good at, which is those long running infrastructure systems, whether they’re virtual, or whether they’re physical servers, or they’re network devices, all of that infrastructure is largely still around and still relied on. But increasingly, we’re seeing those same organizations starting to use infrastructure in public cloud environments, maybe multiple public cloud environments at once. And when they’re in the public cloud, there’s a whole slew of other, let’s say, levels of how you would define infrastructure. So the problem of configuration management is no longer relegated to the confines of the operating system, and configuring what packages are installed, and what security configuration is in place, but it’s also things like how are identity and access management policies configured in AWS, IAM policies. How are the object storage services configured? How is the network configured? Because it’s all now API addressable.
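The expanded scope Childers describes, where configuration management covers API-addressable cloud resources (IAM policies, object storage, networking) rather than just the operating system, can be sketched as a desired-state diff. This is an illustrative sketch, not Puppet code; the bucket attributes and field names are hypothetical stand-ins for what a provider API would return.

```python
# Minimal sketch: detecting configuration drift in an API-addressable
# cloud resource. A real tool would fetch "actual" state from the
# provider's API; here both sides are hypothetical dictionaries.

def diff_config(desired, actual):
    """Return the keys whose actual value differs from the desired value."""
    drift = {}
    for key, want in desired.items():
        have = actual.get(key)
        if have != want:
            drift[key] = {"desired": want, "actual": have}
    return drift

# Desired state for an object-storage bucket (hypothetical attributes).
desired_bucket = {
    "versioning": True,
    "public_access": False,
    "encryption": "kms",
}

# State as reported by the provider API (hypothetical response):
# someone opened the bucket up manually, so it has drifted.
actual_bucket = {
    "versioning": True,
    "public_access": True,
    "encryption": "kms",
}

drift = diff_config(desired_bucket, actual_bucket)
print(drift)  # {'public_access': {'desired': False, 'actual': True}}
```

The same diff applies whether the resource is a package on a server, an IAM policy, or a network rule; once everything is API-addressable, it all reduces to desired state versus observed state.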
And then if you layer on, even on top of that, all of the Cloud Native application patterns and Open Source tools and other capabilities, we’ve seen this exponential rise in the amount of configuration, which is really kind of a core to the Puppet promise. So there’s a lot of opportunity in that space, and I’m looking forward to helping to explore it.
Swapnil Bhartiya: The world where Puppet operates in is more about, as you also mentioned, long running infrastructure, whereas the Cloud Native world is more about incremental infrastructure. Things come and go very quickly. So what role does Puppet play in this Cloud Native world?
Chip Childers: Yeah. I mean, I think there’s ways that you can use Puppet’s current products today in, let’s call it the ephemeral infrastructure world, right? It’s a much more volatile environment, but some of the core technologies and products that Puppet has on the market and are super successful in organizations already, work fine in that environment. But the way to think about the difference between the more volatile ephemeral infrastructure and heritage infrastructure, classic infrastructure, it really has to do with the amount of time that each individual thing that you’re managing, stays alive.
So in a virtual machine-based world, there may have been a decrease in the longevity of a particular operating system because when we went from physical servers to virtual servers, they were software configurable, they were able to be created in software. They were able to be destroyed in software. And you started to see infrastructure consumption patterns start to shift to become a bit more flexible in how we thought of… I guess, we’ll use that pets versus cattle analogy, right? Is every operating system a pet that you care for as an individual and you nurture? Or, is it a herd of cattle where you actually care about the health of the herd more so than the health of any individual animal? It’s an imperfect analogy, but it’s one that was used during the early days of cloud.
And if we look at that shift as one that I think underpins a lot of the new ways that we consume cloud infrastructure, it’s that we don’t expect most of the things that we deploy to be super long lived, right? Because, for example, if a function [inaudible 00:07:14] service system is able to scale to zero, we’re no longer managing each instance of that function. What we’re actually managing is the configuration that says, “Here’s the code that needs to be the function.” And then here’s the configuration that says, “What API endpoint do I expose in a gateway that will then fire that function, make it come alive? What are the scaling parameters that I’m going to look for? How is it going to tie to any persistent storage or other backing systems?”
And so we’ve seen this slow, and even nascent, shift into a world where configuration management isn’t just an option that helps you optimize your experience, but it’s fundamental to your ability to consume and use and have any sort of faith in the systems that you’re deploying and taking advantage of. So configuration management’s impact and importance rises dramatically as you get into these newer styles of compute.
Swapnil Bhartiya: Nowadays, we hear a lot about bare metal and edge data centers and other emerging space, which we cover a lot here at TFiR. There are a lot of workloads that still run on-prem, there can be many different reasons. Compliance is a very good reason where you don’t want to move your data on public cloud. So looking at these trends, I see that Puppet’s role is even more critical. What do you think?
Chip Childers: Very much so, it’s almost as if that trend where… The trend of moving to clouds and moving higher levels of abstraction above the server has similarly played out, but in a slightly different direction when it comes to on-premise infrastructure, which in fact is going to be long lived. So serverless computing, if you’re really using a cloud provider, does essentially hide everything underneath it. You don’t need to worry about its configuration management. So you’re just dealing at this higher level of abstraction in kind of that story that I explained.
But when we look at edge computing, which arguably in some ways you could say, listen, what that really means is it’s just computing that’s a bit more distributed to physical locations, in companies who’ve been doing this for many years. Think about the point-of-sale systems that every retailer has, and if it’s a multi-store retailer, they likely have a distributed edge computing model. We don’t always think of point-of-sale as being edge computing, but it’s a really good example of a long-time industry that’s used technology in a very distributed way. And yeah, these devices, when they go out there, they’re out there for a very long time. And you could almost argue that the long running infrastructure, the length of time that that infrastructure needs to be in the field, is even longer than it is in perhaps a data center where you have easier access to the physical device, and you’re able to actually replace it on a semi-regular basis if that’s valuable to your organization.
But if you think about compute that’s been shoved out to cell towers, or if you think about compute that’s been shoved into a factory setting, these are ultra long running infrastructure environments. And so the need for configuration management there is immense because the same pressure that we might see in a Cloud Native world to accelerate the time to make a change in the software running in that infrastructure exists when you’re talking about the edge devices now. And so we don’t have the same cloud like dynamic for agile infrastructure on the edge, but we do have the same business demand for software to be able to go through rapid change, for new services to be provisioned very easily. And that’s where, again, a company like Puppet, they’re solving that problem and solving it at scale and in environments just like that, really well today. And there’s still even more work that we could do in the future.
Swapnil Bhartiya: Let’s just pan away from edge and talk about traditional data centers. Can you talk about some of the use cases where on-premise still matters? One thing that I do want to bring to this discussion is that most users care more about their data. Apps can run anywhere, they come and go, but data is what matters. But public cloud tends to have data gravity, there’s a risk of vendor lock-in there. And that is why we talk about multicloud, hybrid cloud. And also, when we look at Cloud Native, it’s more about how we do things, less about a thing.
Chip Childers: Yeah, well, so let’s start with that. The difference between Cloud Native and running in a public cloud provider, they’re miles apart. They’re completely unrelated, almost. Cloud Native is an architecture pattern, and there’s a bunch of Open Source tools that you could use to build platforms underneath it. And there are tons of benefits to that pattern, but it’s one of many patterns that the enterprises have used over time. It’s one that more new applications are being built using than otherwise. But there’ll be something that comes after. My bet is a bit more on the kind of the Low Code side of things, enabling citizen developers, that sort of thing. But this is just like a tried and true… It’s a truism that as we move forward through time, we have new patterns and practices around how we develop software. So Cloud Native is that.
Cloud, specifically when I talk about cloud, I’m talking about the hyperscale providers, which arguably is three to four, maybe a couple of others that are truly… They’re global scale. They have enough of an infrastructure footprint that even the largest of enterprises can arrive in that cloud and still feel like it’s completely elastic from their perspective, that’s a true cloud infrastructure environment and it’s a place that you could choose to run your compute. It’s a place where you could choose to run those applications or store your data. So it’s almost location oriented. And then is that heritage infrastructure ever going to go away? Probably not. I mean, there’s just so much that unless you’re starting the company today, you likely have some type of on-prem infrastructure that makes up your IT systems.
And if you’re in the Fortune 500, the Global 2000, Global 4,000, you’ve got data centers, you’ve got probably multiple data centers, you’ve got edge devices that you have to deal with, whether you’re in manufacturing, you’re in automotive, whether you’re in retail, whether you’re in telecom, you have a distributed computing environment that you’re responsible for, that the lifeblood of your business runs through. And none of that has to do with the public cloud providers. The public clouds, the on-premise environments that kind of mimic the cloud experience, the on-prem environments that are the more classic infrastructure environments, they’re all part of the modern IT architecture. And so the challenge for the CIO, challenge for the CEO, is that this is just with the added benefit of more opportunity to get access to infrastructure quickly and all of the potential agility that that provides your company. The cost is added complexity, and that added complexity can result in a little bit of a harder time achieving resiliency or reliability.
And so companies like Puppet, or Puppet specifically, have a set of solutions that are being built out, and most that are already on the market that are going to allow for a simplification of that whole problem. And it really comes back to that core premise of configuration management is a bit of a superpower. If a company is very good at configuration management, infrastructure automation naturally rolls from that. You then begin to experience what it’s like to have self-healing systems. You can then start to experience what it’s like to have automated compliance problems start to get resolved before they become evident through failure or through security breaches.
Swapnil Bhartiya: Can you talk about some of the use cases where on-prem still matters?
Chip Childers: Yeah. So with regards to data and the question of data gravity, once you establish the location of your data, it’s hard to move it. It’s not impossible, but it’s hard and there’s a big cost to moving it. And so if you’re an organization that already exists, that’s been around for a while… Think about the typical bank, most of their core data sits in a mainframe. The lift to not just move the data, but the wrapping software that sits around it, that is at the heart of the business, that’s a huge lift.
Some cloud migration projects do involve actually picking up the heart of the business, that core data system, and moving it into a public cloud. And I have seen quite a few successful big ERP implementations that have been cloud-based, whether that’s going to Oracle cloud, or whether that’s partnering with SAP and using their product, or their product on top of a hyperscaler environment. Data gravity is not insurmountable, but it is a huge source of infrastructure inertia, which is why that question of, will the on-premise software or infrastructure go away. Generally, the realistic answer is to say, “No, it’s not going to go away. Heritage infrastructure is here to stay for a very long time.” And there’s good reasons for it.
Swapnil Bhartiya: You brought up the point of Low Code, when we look at Cloud Native, just look at CNCF landscape, there are so many logos. Kubernetes has too many knobs to turn. Things become very complicated, very quickly, and that’s where automation becomes very important is such a world. At the same time, when you look at Low Code, it allows companies to kind of dance around these complexities of Cloud Native and take their products to market very quickly. So can you talk about first, the importance of automation in Cloud Native world? And second is that, if we just look at Low Code, how is Puppet making its products more accessible to users, or making the whole configuration more accessible?
Chip Childers: Yeah, so that’s a good question. I think the first part is, what’s the importance of automation in that world? Well, automation is what frees humans from dealing with toil. It’s something that fundamentally did not exist for a long period of time. And then Puppet was a pioneer in bringing the concepts of DevOps into the market. And one of the premises behind this DevOps concept of fast iterations and fast loops for IT service changes was that you can only really achieve that if you have a huge amount of that change set where the process of delivering change is automated. And so that’s where infrastructure automation comes into play, right? Configuration management gives you the model, so you understand what you’re about to change. You can then describe the change you’d like. Automation takes care of delivering that change. And so you’re able to loop very quickly.
Now, as that concept took off, platforms like Kubernetes were created and they embodied some of those core concepts. So that configuration is in fact, essentially code. The configuration of a Kubernetes cluster, if you look at the whole GitOps model of operating a cluster, that’s actually using source control systems to simply manage every change in your infrastructure environment. And the point of the Kubernetes software is to be able to take a declaration of this is the current or the new state that I’d like to see and simply run convergence loops to deliver against it. That’s great and it solves a particular problem for that Kubernetes platform.
Now, if you layer a service mesh on top of it or you layer then some sort of application delivery system on top of that, and then maybe you have a CI/CD system on top of that. Each one of those has lots and lots more configuration, which is why there’s a need for companies like Puppet to figure out how can we help with configuration management across that whole stack from top to bottom?
Low Code is a different beast, and I haven’t personally reached any type of hypothesis about how configuration management is going to relate to helping the developer who uses a Low Code system to build software. And I use the term developer there when I really should say citizen developer, or business analyst, or just a business user. I think, in my view, that configuration management is not going to be consumable in really the same way as it is today for that audience. Number one, it probably needs to be embedded in the Low Code systems themselves. But the real question comes when you take a look at the opportunity to ensure that the Low Code experience is actually available on top of infrastructure automation. So how do you take the goodness, that is something like Puppet, and find a way to help an organization scale the use of infrastructure automation in a way that doesn’t require knowledge of how to author a Puppet module? That’s an area that I think is very attractive. It’s an area that I think that we may have some things to share on.
And maybe the last thing I’d point to is the API-driven configuration… Puppet has a newer product, in beta right now, called Relay. And Relay by Puppet, fundamentally, is a Low Code model for configuring cloud-based infrastructure or API-addressable infrastructure. And certainly the product’s very early on, but that concept, I think, has a lot of promise, because scaling to a lot of different individuals across a lot of different infrastructure types is going to require an abstraction that keeps it as simple as possible for them.
Swapnil Bhartiya: Chip, thank you so much for taking time out today to talk about the next chapter of your career, of your journey. We also talked about automation, data gravity, and the importance of Puppet in this Cloud Native world. And as usual, I look forward to our next discussion. Thank you for your time today.
Chip Childers: Thank you a ton, Swapnil, appreciate it.