
Graft Aims To Make Modern AI Accessible For Everyone | Adam Oliner


Building AI infrastructure can be an expensive ordeal, accessible only to the top companies that can afford it. While there are vendors that offer pieces of the AI puzzle, they still require you to invest lots of time and money to build the infrastructure yourself. Adam Oliner, Founder and CEO of Graft, believes his company is the first to offer a platform that packages modern AI infrastructure and workflows all in one.

The company hopes that its platform will lower the barrier to entry for companies wanting to leverage AI, while providing a solution to the shortage of engineers who specialize in modern AI technology.

“Not only does Graft obviate the need to build this infrastructure yourself in the first place, but if you want to build a use case inside of Graft, you also don’t need to have data science or machine learning expertise. We are building Graft so that you don’t need to have that talent on your team in order to use it,” says Oliner, in this episode of TFiR Let’s Talk.

Key highlights of this video interview are:

  • One of the barriers to entry for many organizations to do production AI is a lack of proper infrastructure, and bringing in a team to build it is expensive. Oliner describes the problems around organizations adopting production AI and why he set out to found Graft and work to democratize AI.
  • Oliner explains how top AI companies build and maintain their AI infrastructure as a central focal point rather than building end-to-end, use-case-specific pipelines, and the benefits of this approach.
  • Whereas previously organizations had to build everything from scratch, Oliner explains why Graft is building representations versatile enough to be leveraged by a wide range of customers, not just for common use cases.
  • Oliner goes into detail about the infrastructure Graft helps organizations build and why its offering is unique. He discusses how Graft takes on monitoring, observability, robustness, and reliability so that the customer doesn’t have to.
  • There is a talent shortage within the subset of engineers who specialize in modern AI technology. Oliner explains how Graft helps bridge that gap by providing an alternative to hiring such a team and building the infrastructure internally.
  • Graft’s offering means organizations do not need to have that talent on their team in order to use it; however, Oliner believes that organizations that do have a machine learning team will also reap the benefits. He explains how Graft lowers the barrier to entry for organizations and the benefits of doing so.

Connect with Adam Oliner (LinkedIn, Twitter)

The summary of the show is written by Emily Nicholls.


Here is the automated and unedited transcript of the recording. Please note that the transcript has not been edited or reviewed. 

Swapnil Bhartiya: Hi, this is your host Swapnil Bhartiya and welcome to another episode of TFiR Let’s Talk. Today we have with us Adam Oliner, Founder and CEO at Graft. Adam, it’s great to have you on the show.

Adam Oliner: It’s great to be here Swapnil, so thanks for having me.

Swapnil Bhartiya: Perfect. Today’s discussion is going to focus, as we were talking before this, on how you started Graft to make AI more accessible, to kind of democratize it. One of the statements I was reading was about your aim to make the AI of the 1% accessible to the 99%, which made me curious. When we look at AI in today’s world, we feel that we live in a data-driven, cloud-centric world where AI/ML is de facto, no matter if you look at a small IoT device or at the Tesla supercomputer on wheels. So, I was curious what problem you saw in this space, where people assume that AI has already been democratized, that made you say, “No, there are still a lot of problems to be solved. Let’s create Graft.” So, tell me the story of Graft based on what I just said.

Adam Oliner: Yeah. Yeah. So, I think the real barrier to entry for a lot of organizations to do production AI is a lack of the production AI infrastructure. So, in order to get that infrastructure, you either need to hire a really expensive team to then go and build that infrastructure, or you need to somehow get it from a vendor. But as far as we know, there is no vendor offering production AI infrastructure for doing modern AI. And so that’s the gap that we’re looking to fill. When you see the conversations around AI talking about cloud vendors or organizations like OpenAI and Hugging Face, all of which you’re doing great work, they’re very focused on the model part of it. And there are famous papers related to ML going back a decade or more now that say that is a very small piece of the entire puzzle if you want to solve production AI use cases.

So where is the data coming from? How is it getting transformed into a form that is amenable to applying those models to it? What do you do with the results once you have them? How do you monitor performance once it’s in production and so on and so forth? And so if you actually want to do this in production at scale in a way that’s reliable, there’s a lot of infrastructure that you still need to build or at least assemble, and to do that is often a sufficient barrier to entry for organizations. So, what we’re trying to do at Graft is be a kind of sufficient set of infrastructure for doing modern AI without having any other prerequisites or skill requirements.

Swapnil Bhartiya: Excellent. Thanks for explaining that; it also leads to some questions. Number one: when you talk about use cases, what are the specific use cases that you are looking at, and what are the roadblocks or issues that organizations run into when they try to consume or leverage AI/ML? We’ll also talk about infrastructure, but let’s simplify things and talk about the use cases first.

Adam Oliner: Yeah, absolutely. So, this is also partly a shift in perspective that I’d like to encourage everyone to take, which is to think less in terms of use cases and more in terms of building the right representations. Because once you have the right representation of your data, or of your entities of interest like customers or products, a lot of the use cases become kind of trivial. They become really easy.

So, for example, when I was at Slack as head of machine learning there, one of the first projects that we undertook was to convert all of the raw text of Slack messages into a representation that made it really easy to do semantic search and recommendations on top of them. And that representation is something that we’re carrying over at Graft and helping users build so that they can get search and recommendations really easily. But it also happens that those representations are good for a long list of other use cases like content moderation or personalization or any sort of predictive analytics where you want to predict a property of the underlying data or entities like customer churn prediction for example.

This was something that we had observed the top AI companies doing, largely to the exclusion of other sorts of end-to-end model-building workflows: they would build infrastructure that could serve personalization, content moderation, search, recommendations, and so on, and maintain that infrastructure as the beating heart of their AI, as opposed to having 50 different people building small, fragile, end-to-end, use-case-specific pipelines that would largely get used once and then thrown away. So, that reuse is one of the hallmarks of modern AI when we talk about it.
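The representation-first workflow Oliner describes, where messages are embedded once and that representation is reused for search, recommendations, and other downstream tasks, can be sketched in miniature. This is an illustrative toy, not Graft’s or Slack’s actual implementation: the count-based `embed` function stands in for a real pretrained model such as a transformer, and the `VOCAB` list, messages, and `search` helper are all hypothetical names.

```python
import math

# Toy "embedding": each message becomes a vector of word counts over a
# fixed vocabulary. A production system would use a pretrained model
# (e.g. a transformer) instead, but the reuse pattern is the same.
VOCAB = ["refund", "invoice", "bug", "crash", "meeting", "schedule"]

def embed(text):
    words = text.lower().split()
    return [words.count(w) for w in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Embed every message once; the stored vectors can then back semantic
# search, recommendations, or any downstream predictor.
messages = [
    "customer wants a refund on the last invoice",
    "the app will crash with this bug on startup",
    "can we schedule a meeting for tomorrow",
]
index = [(m, embed(m)) for m in messages]

def search(query):
    q = embed(query)
    return max(index, key=lambda pair: cosine(q, pair[1]))[0]

print(search("billing refund question"))
# prints: customer wants a refund on the last invoice
```

The same `index` of vectors could feed a recommender or a churn predictor without reprocessing the raw text, which is exactly the reuse Oliner highlights.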

Swapnil Bhartiya: Are there any particular use cases that Graft caters to specifically? Or is it like a lot of foundational technologies, such as Kubernetes, where the use case on the top layer doesn’t really matter?

Adam Oliner: Yeah. I think you’re seeing a convergence in the industry, specifically in the ML research community, around these multipurpose kinds of models. There used to be this notion that every use case is a totally new problem, that we’re going to start tabula rasa and build it from scratch. Increasingly, they’re saying, “Hey, it looks like transformers with attention are really good at a lot of problems. Or if we compute embeddings first, then we can apply that to a bunch of different use cases downstream.” And what we’re doing is finding the narrow waist of that technology space where you get maximal leverage for all the use cases you might want. And that’s what we’re focused on.

I think you’re right that it certainly used to be the case that you would have to build everything from scratch. More recently, and this is what we’re jumping on as a technology, there is the ability to build these representations that are very versatile and then help customers leverage them, not just for the common use cases that Graft does cater to, like search and recommendations, content moderation, and so on, but also the really long tail of use cases where no vendor is ever going to build them a vertical solution just for that use case. And I think that’s where Graft is really going to shine: the ability to help customers get those representations and the infrastructure to productionize them, so that they can then build their use cases in a matter of minutes, as opposed to months or years.

Swapnil Bhartiya: When we look at Graft, what is your offering? And when you do talk about infrastructure, what exactly do you mean by that?

Adam Oliner: Yeah. So, if you think about an end-to-end production system for doing AI, there are certain pieces it always has in common. You need data pipelines to marshal the data from however many different places it might reside. You need to apply models, perhaps pre-trained ones, to that data at scale, and if you’re talking about, for example, images or video, that production infrastructure alone can be really expensive. Once you have this new representation, you need to store it somewhere, and then you need the ability to enrich it by applying potentially other downstream models, and then query it. So, if you take each of those pieces, that’s already a lot, but then you have to think about the systemic properties of the infrastructure that lie on top of it.

So, to take a simple example of why this is not a piecemeal solution, you can think about compliance like GDPR. If a user says, “I want to delete my data from your system,” and that data was used to train one of the models that built the representations, you now have a dependency cascade through your entire pipeline that you have to handle, because there’s no vendor that is going to sell you this whole thing except for Graft. You would have to wire all this up yourself and wonder, “Did I get it right or not? Because if I didn’t, I’ll be out of compliance with GDPR.” And that’s just one example of a systemic property.
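The dependency cascade Oliner mentions can be made concrete with a small lineage registry. This is a hypothetical sketch, not Graft’s implementation: the `Lineage` class and the artifact names are invented for illustration. The point is that a single deletion request must propagate to every artifact derived from the deleted data.

```python
from collections import defaultdict

class Lineage:
    """Track which artifacts were derived from which sources, so a
    GDPR-style deletion request can cascade through the pipeline."""

    def __init__(self):
        # source artifact -> set of artifacts built directly from it
        self.downstream = defaultdict(set)

    def derive(self, source, artifact):
        self.downstream[source].add(artifact)

    def invalidate(self, source):
        """Return every artifact that must be rebuilt or deleted when
        `source` is removed (the transitive closure of `downstream`)."""
        stale, stack = [], [source]
        while stack:
            for child in self.downstream[stack.pop()]:
                if child not in stale:
                    stale.append(child)
                    stack.append(child)
        return stale

lineage = Lineage()
lineage.derive("user_42_messages", "embedding_model_v3")
lineage.derive("embedding_model_v3", "search_index")
lineage.derive("embedding_model_v3", "churn_predictor")

# Deleting user 42's data invalidates the model trained on it and
# everything downstream of that model.
print(sorted(lineage.invalidate("user_42_messages")))
# prints: ['churn_predictor', 'embedding_model_v3', 'search_index']
```

If you assemble the pipeline piecemeal from multiple vendors, no single component holds this lineage, which is why the cascade is hard to get right by hand.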

To give a couple more quick examples, you can think about observability and monitoring of these use cases in production. You build this pipeline and you start serving it in production. How do you make sure that it stays healthy? Usually that is something you need to look at the entire system for, rather than some individual piece. So, you have to build that. And especially if you’ve cobbled together that underlying infrastructure from a variety of different open source tools or vendors, the question of how you get that end-to-end functionality is much more complicated, because no vendor will have built a solution that looks exactly like the infrastructure you built.

And at Graft, we’re trying to take all of those problems onto our side, so you don’t have to worry about monitoring, observability, robustness, and reliability. That should be something that Graft handles so that the customer can focus on, again, the business-specific parts of the use case, like the application of it in their product, for their customers, or the data itself.
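One concrete form of the monitoring Oliner describes is checking that production data still looks like the data the system was deployed on. The sketch below is our own simplified assumption, not Graft’s method: it watches a single summary statistic (the mean embedding norm) and flags when it drifts past a relative tolerance; real systems would track many statistics and distributions.

```python
import math

def mean_norm(vectors):
    # Average Euclidean length of a batch of embedding vectors.
    return sum(math.sqrt(sum(x * x for x in v)) for v in vectors) / len(vectors)

def check_drift(baseline_vectors, live_vectors, tolerance=0.25):
    """Return True if the live data's mean embedding norm deviates
    from the deployment-time baseline by more than `tolerance`
    (relative). A True result would trigger an alert or retraining."""
    base = mean_norm(baseline_vectors)
    live = mean_norm(live_vectors)
    return abs(live - base) / base > tolerance

baseline = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # seen at deployment
healthy  = [[0.9, 0.1], [0.1, 1.1]]              # similar distribution
shifted  = [[5.0, 5.0], [4.0, 6.0]]              # something changed upstream

print(check_drift(baseline, healthy))   # prints: False
print(check_drift(baseline, shifted))   # prints: True
```

The key property, as Oliner notes, is that this check is systemic: it only makes sense end to end, watching what the whole pipeline emits, not any one component.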

Swapnil Bhartiya: Excellent. Once again, thanks for explaining that. I also want to quickly talk about the cultural and people aspect. First of all, there is already a shortage of talent when it comes to technology, and AI/ML is even trickier. Data scientists are very expensive as well, and you cannot find them; the big companies, we don’t need to name them, have their own requirements, so they suck up all the engineers that are available in the market. So, how do you also help organizations deal with this? Because you are not dealing just with the technological challenges; you’re also dealing with the cultural as well as the people challenges there.

Adam Oliner: Yeah. The talent market is challenging, especially when you look at the subset of engineers that specialize in modern AI technology like transformers and embeddings. Every time Graft tries to hire someone with that skillset and has a little bit of trouble doing so, it’s sort of gratifying, in that it validates part of Graft’s thesis: these talents are difficult to hire for. So, what we’re building at Graft is an alternative to hiring that team and building it out internally. You don’t need to hire an ML engineer who knows embeddings; you simply need someone who understands your data and your use case. So, the hiring market is certainly challenging, but we’ve been very lucky to bring a number of amazing people onto the team. And I could eat up the rest of our time just bragging about the team, because they’re fantastic.

Swapnil Bhartiya: Once again, thanks for explaining that. Now, your customers are going through the same challenge that you are. They cannot hire, and things are already so complicated; they cannot afford to hire for whatever their data science needs may be. So, how do you lower the barrier to entry for them so that they can still benefit from all of those things without having to worry about the talent challenge that is there?

Adam Oliner: That’s right. So, we are building Graft so that you don’t need to have that talent on your team in order to use it. Not only does Graft obviate the need to build this infrastructure yourself in the first place, but if you want to build a use case inside of Graft, you also don’t need to have data science or machine learning expertise. So, we are making that persona unnecessary in an organization. However, if you do already have that talent, if you have a machine learning team at your company, this makes them much more efficient, because it focuses their energies on activities that are business-specific: data and use cases, rather than, again, all of the plumbing that sits in between, which is frankly not the fun part of doing machine learning anyway. And so, those engineers will probably be overjoyed that they don’t have to build some complex data pipeline to handle a real-time use case with large unstructured data pieces. That’s probably a good thing, but it also means, again, that they can focus on things that are differentiating for the business.

Swapnil Bhartiya: Adam, thank you so much for taking the time out today to talk about this topic. Initially, I was not sure what you meant by making the AI of the 1% accessible to the 99%. Now I have a very good understanding of that, but I have more questions than I had before I started talking to you, which also means I would love to have you back on the show to get all those questions answered and have, again, a very interesting discussion. But I really appreciate your time today. Thank you.

Adam Oliner: Thank you so much for having me, Swapnil, and giving me a chance to talk about it.

