Intel recently announced a new version of its open-source OpenVINO toolkit to boost AI inferencing performance for developers. OpenVINO 2022.1 adds natural language processing and audio inferencing capabilities, in response to the company seeing voice recognition, along with various other types of audio processing, playing a bigger role at the edge.
Automatic optimization capabilities have also been built in, taking some of the work off developers' hands. OpenVINO automatically looks across the hardware system at all the accelerators you may have, all the compute that is available, and how much memory is in the system, and distributes the inferencing job across that compute.
Matthew Formica, Senior Director, AI Inference (OpenVINO) Product Marketing & Developer Ecosystem at Intel, believes that companies will continue to adopt AI as it becomes increasingly integrated with the edge and 5G. Formica comments that he saw a lot of AI pilots in the second half of last year, which he expects to progress to broader-scale production in the near future. He feels that companies need to be thinking about how they can apply AI to their domain in order to increase efficiency and productivity.
Although OpenVINO started as an edge project, it is increasingly being used in PC client applications as game developers and media content creation developers continue to find benefits in integrating AI into their applications. Developers in the enterprise cloud are also using OpenVINO to improve the performance of their deep learning.
So how does OpenVINO fit into the edge and the other components? According to Formica, software developers will typically start with a deep learning framework like TensorFlow or PyTorch, which have acceleration built into them. However, when you get the model out of those deep learning frameworks, you need to accelerate it with something. OpenVINO takes the output from those off-the-shelf frameworks and accelerates it for deep learning and inferencing. Developers can then connect into other layers of the stack above OpenVINO, like EdgeX, that enable them to start providing a visual interpretation of the data. Many software developers are building business decision-making dashboards on top of OpenVINO output, taking the deep learning content and making it accessible to store managers, factory floor managers, and hospital administrators to help run their businesses.
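The dashboard layer Formica describes sits above OpenVINO rather than inside it. As a purely hypothetical sketch of the idea (not OpenVINO API; the detection format, labels, and confidence threshold below are all illustrative assumptions), converting raw inference output into manager-facing alerts might look like this:

```python
# Hypothetical sketch: turn raw inference detections into human-readable
# alerts of the kind a store or factory dashboard might show. The detection
# dict format and the label set are illustrative assumptions.
def alerts_from_detections(detections, min_confidence=0.8):
    """Return alert strings for detections above a confidence threshold."""
    messages = {
        "defect": "Defect detected on assembly line",
        "long_queue": "Drive-through queue is getting long",
        "shopper_help": "Shopper may need assistance",
    }
    alerts = []
    for det in detections:
        label, confidence = det["label"], det["confidence"]
        if confidence >= min_confidence and label in messages:
            alerts.append(f"{messages[label]} (confidence {confidence:.0%})")
    return alerts

print(alerts_from_detections([
    {"label": "defect", "confidence": 0.93},
    {"label": "long_queue", "confidence": 0.55},  # below threshold, ignored
]))
# → ['Defect detected on assembly line (confidence 93%)']
```

In a real deployment, the `detections` list would come from the inference layer, and the alerts would feed whatever dashboard or notification system the ISV has built on top.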
About Matthew Formica: Matthew Formica is the Senior Director running Intel’s OpenVINO AI Software Product Marketing organization. AI, especially computer vision, is increasingly a part of use cases across the spectrum: defect detection on assembly lines, disease detection by medical imaging equipment, and “shopper help needed” alerts in retail stores, to name just a few. Getting the best performance and ROI requires the right tools. With OpenVINO™, developers can easily accelerate their inferencing solution’s performance by several times using the general-purpose Intel compute already built into their solutions, and using OpenVINO APIs they can write once and deploy across a wide range of Intel processors.
About Intel: Intel is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore’s Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers’ greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better.
The summary of the show is written by Emily Nicholls.
Here is the full unedited transcript of the show:
- Swapnil Bhartiya: Hi, this is your host Swapnil Bhartiya and welcome to TFiR Let’s Talk. And today we have with us Matthew Formica, Senior Director of the OpenVINO ecosystem at Intel. Matthew, it’s great to have you on the show.
Matthew Formica: Great to be here. Thanks for having me today.
- Swapnil Bhartiya: Today we are going to talk about, of course, the OpenVINO 2022.1 release, but before we go there, I would like to know a bit more about the project. What is it all about, and what problem are you trying to solve with it? When was it created? So let’s just talk about the project.
Matthew Formica: This is a project that is really about using the power of AI to do amazing work out in the real world at the edge. That’s where this project got started, back four years ago. We needed to do a lot of deep analytics and analysis in factories, in retail stores, in hospitals, and particularly on computer vision data. And there really wasn’t anything available at the time, especially nothing that was really optimized for Intel hardware. So we built our own, and now hundreds of thousands of ISVs and developers all over the world are using it.
- Swapnil Bhartiya: How do you define the edge, when you mention edge? And can you also share some use cases where OpenVINO is being used?
Matthew Formica: For sure. So the edge, the way Intel thinks about it, is really about the device edge and the network edge. When you’ve got large volumes of data, in this case video data from cameras, and all that data cannot be sent back up to the cloud for processing economically, you really have to do the analytics right there. You can do it in the device; sometimes that happens. For example, x-ray machines and MRIs in hospitals are common devices where Intel processors and this OpenVINO technology are used to analyze the medical images coming off those devices.
So it can be in the thing, or you could have an edge server, sitting, for example, on the factory floor or in some central location in a city, or it could be in the network edge, which is a little bit further upstream, typically situated near a tower that would also perhaps have 5G equipment. All of those enable you to have more analytics closer to where the data is actually being generated, which gives you less latency; you can be more responsive in real time. It also saves cost, because again, you’re not trying to send all that data up to the cloud and then send results back down.
- Swapnil Bhartiya: Can you also talk about how you have seen the evolution of this project? Because four years ago, the kind of edge we used to look at, or what we were trying to define, was a bit different than what it is today.
Matthew Formica: Yeah. The fact that we even needed to have this definition discussion around what the edge is shows how it’s an evolving landscape, and the definition does keep changing. If we go back four years, probably the most mature domain where this was used was smart cities, where traffic management was a very popular solution, right? Governments wanted to check for wrong-way drivers or make sure people were stopping before turning right on red, and they can use AI with OpenVINO to analyze the data off the street cameras to do that analysis. Now we see this technology being deployed in retail stores and in quick-serve restaurants. Restaurants, for example, want to keep track of how long the queue for the drive-through is, and see whether they’re making enough food or have enough staff on hand at different times of the day. It’s being deployed in factories as well, commonly with 5G, for Industry 4.0-type solutions.
Companies want robotics to be more responsive to where people are, for worker safety in those environments. They’re also building the technology into defect detection systems on assembly lines to more rapidly detect whether there is a problem with anything being manufactured. All of these are use cases where you want to analyze some image data and quickly reach a conclusion. You want to do pattern matching, which is essentially what the analytics of deep learning is all about. We have found, as OpenVINO has grown, that it’s become more than just an edge project. Increasingly it’s being used in PC client applications as well; game developers and media content creation developers are finding that the ability to integrate AI into their applications is powerful.
And of course there’s the enterprise in the cloud, and developers in that space are finding that they too want the best performance for their deep learning, and OpenVINO is perfect for that. So that’s another thing that’s surprised us over the last few years: although it was originally designed for the edge, it’s actually being used across the computing landscape now.
- Swapnil Bhartiya: If you look at the Open Source community, there are a lot of projects: EdgeX Foundry is there, Zephyr is there, and a lot of others that try to solve similar problems to the ones you’re talking about. Of course, they often target different architectures altogether as well. You did give us a good history of why it was created, but when you look at these projects, where are things heading? Do these other projects not fulfill the niche, so that you needed this one? Because at Intel there’s a lot of Open Source. So-
Matthew Formica: We do a ton of Open Source, and actually I would answer your question in a different way. I would say all of those projects serve a purpose, and a great many of them actually have OpenVINO built in under the hood. They’re leveraging OpenVINO and then building additional capabilities up the stack on top. With EdgeX, for example, there are many cases of EdgeX using OpenVINO, because what OpenVINO does is get the acceleration out of that AI. But you still need a software stack on top of OpenVINO to do something useful with the information once you’ve accelerated it. And so typically it’s OpenVINO plus additional Open Source software working in tandem that provides more of a complete solution for developers.
- Swapnil Bhartiya: And that creates a very good segue to my next question, which is: can you talk about, first of all, what the whole stack looks like? What are the core components of OpenVINO? And when you talk about the edge, where does it fit, and what are the other components?
Matthew Formica: If you’re a software developer or an ISV wanting to get started, typically you’ll start with a deep learning framework. The most common ones today are TensorFlow and PyTorch. Those frameworks in turn have acceleration built into them. One underlying layer from Intel is the oneAPI software layer, which is kind of a foundation capability across everything. But when you get the model out of those deep learning frameworks, you then have to accelerate it with something. And that’s where OpenVINO comes in. It takes the output from those off-the-shelf frameworks and accelerates it for deep learning and inferencing.
At that point, you’ll want to connect into other layers of the stack above OpenVINO, things like EdgeX and other software environments that will basically enable you to start providing a visual interpretation of that data, and an interface to that data for whoever its ultimate consumer will be. So many ISVs are building business decision-making dashboards on top of OpenVINO output, because they want to take the deep learning content and then serve it up to store managers, factory floor managers, and hospital administrators to help them run their businesses.
- Swapnil Bhartiya: What kind of community have you built around it, or do you want to build around it? Because for the kind of project it is, when you talk about community, it could be vendors, it could be users, it could be customers, it could be just developers, and it could be Intel employees.
Matthew Formica: At Intel we care a lot about the ecosystem; we’re an ecosystem-centric company. And so we care about developers who are just getting started, for example, who are trying to learn AI for the first time. They need a community, and for them we have things like our edge AI certification program, which enables them, along with other developers, to carefully go through a learning course that gives them instruction, with coaches to help them. We also have our online forums, of course, where people can get questions answered, but then we also integrate closely with other large Open Source projects and their developers. OpenCV is a common one. ROS, the Robot Operating System, is another, where developers like to work together on projects in the robotics space, and robots often need to be able to use computer vision to inform decision-making. That’s a perfect use case for OpenVINO.
And so we are working to write sample code that shows how to use OpenVINO with these other ecosystem Open Source projects, and that is something that developers love. We’re also working with hardware vendors and software companies to enable them with this technology. They can then go to market being confident, and being able to communicate confidence to end customers, that this technology has been proven out and validated end to end. And so whether you’re just getting started in AI, there’s a community for you here, as well as if you’re an ISV or are running a business that needs AI. You can be confident that we’ve pipe-cleaned it all the way through.
- Swapnil Bhartiya: How do you see the further evolution of the edge in 2022, which might also affect your own roadmap?
Matthew Formica: What we have seen is that the trajectory of the edge over the past few years has largely matched the trajectory of the overall economy. When everyone was pulling in and staying home and businesses were shutting down, Intel’s edge business was hurt as well, but as the economy has opened back up, our business has boomed. And so what we saw, especially in the second half of last year, is that there were a lot of AI pilots: a lot of projects just getting started, deployed in one or two restaurants or one or two factories, where the technology was being proven out. What I expect to see as we go through this year is those projects shifting from initial pilot to broader-scale production. And so we’re going to see this increasingly become part of the technology that is integrated into everything that happens at the edge. Ultimately, along with the cloud, AI is just becoming pervasive everywhere, and I think it’s something that is just going to keep growing as those pilots move forward into production.
- Swapnil Bhartiya: A few years ago we used to say, “Hey, you know what? If you’re a company, you have to be a software company, otherwise you will not survive.” Then we said, “You have to have a cloud strategy, otherwise you will not survive.” So should we now say you need to have an AI/ML strategy too, that it must be a core part of your business? What are your thoughts about that? How integral will AI/ML be?
Matthew Formica: Well, Intel’s view is that AI is a workload that all of our processors need to have accelerators to handle. It’s something that’s going to be a part of what all of our customers do. And we do agree that every customer ought to be thinking through how they can apply AI to their domain, and the companies and customers that do are finding growth in efficiency and in productivity. Just as an example of what we’ve seen: we worked with one company at one of their factories doing car welding, where there were over 1,000 welds per car. Because of that number, they could not quality-check every single weld; they had to do sampling in order to check. But with AI and OpenVINO, they were able to shift to actually checking every weld, and to do so in less time than it took to do the sampling before. So not only does the factory become more efficient, but quality goes up. That’s just an example of why this is ultimately going to be critical for every company to think through.
- Swapnil Bhartiya: It happens with almost all emerging technologies that you come up with a use case in mind, but folks start using it in use cases that you never dreamt of; look at the Linux kernel, look at Kubernetes. So are there any use cases where you’re either worried, “Hey, this is exciting, but this is beyond our scope,” and yet you see people using it anyway? How prepared are you? Are there use cases where you look at them and say, yes, this is a potential use case people should be using it for, or no, you should not be using it for this, or we should prepare ourselves so people can use it? Does it make sense?
Matthew Formica: Absolutely. This is a fast-evolving space. This is a technology space that hardly existed prior to 2015, at least from a deep learning capability perspective. And if you just look at the history of OpenVINO, before it was an edge computer vision deep learning toolkit, it was actually for printer vendors. It was something originally designed to help printer vendors do OCR on images being scanned within a printer system.
And so it evolved into what it is today. And it’s actually in the process of taking another leap forward with our new 2022.1 version to support natural language processing and audio. OpenVINO was originally designed specifically for computer vision, but increasingly we see voice recognition playing a bigger role at the edge, and even all kinds of audio processing. So in that welding example I told you about, you also need the ability to listen, to hear if it was a good weld. It can be more than just a visual inspection; it can be an audio inspection as well. And so we’re augmenting OpenVINO with natural language processing and audio inferencing capabilities in this new version, something we didn’t really conceive of four years ago.
- Swapnil Bhartiya: Now let’s talk about this project specifically. The latest release was, if I’m not wrong, 2022.1. Is that correct?
Matthew Formica: It is. We come out with a major version every year, and we increment the major version number to match the year. And then we have a roughly quarterly release update cycle within the year.
- Swapnil Bhartiya: So do you have only one release, or do you have different branches? Like, is this the release or an LTS release that companies can adopt? And then do you also do a rolling release, which is a different channel altogether? If you can explain, yeah.
Matthew Formica: Yeah, we do it very much the way some common Linux distributions do. We have an LTS release that we do late each year, and we support it for two years, so you’ve got something very stable. And then we’ve got the latest release that has all the latest innovations in it. So right now we’re about to come out with 2022.1 over the next several days, and we also have our 2021.4 version available as an LTS, and our 2020.3 version as an LTS. So you’ve got that stability if you need something very stable, or you can catch all the cutting-edge innovation that we’ve been adding.
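The support window described here, yearly LTS releases with the version number matching the year, supported for two years, can be sketched in a few lines of Python. This is an illustrative sketch based on Formica's description, not official Intel policy logic; the exact cutoff is an assumption.

```python
# Hypothetical sketch of the LTS support window: a release in year Y,
# named "Y.N" per the project's versioning scheme, is assumed supported
# through year Y + 2 (two years after release).
def lts_supported(lts_version: str, current_year: int) -> bool:
    """Return True if the given YEAR.RELEASE LTS is still in support."""
    release_year = int(lts_version.split(".")[0])
    return current_year <= release_year + 2

print(lts_supported("2021.4", 2022))  # still inside the two-year window
print(lts_supported("2020.3", 2023))  # window has passed
```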
- Swapnil Bhartiya: What are the new features, something that you are really excited about, in the latest, which is the 2022.1 release?
Matthew Formica: Well, we already spoke about the new natural language processing capabilities; that’s one of the things we’re most excited about. We’ve really gone back in this version and taken the last four years of learnings and rolled them into a bunch of new APIs to make it easier to adopt. But probably the feature I’m most excited about is the new automatic optimization capability built in. This takes some of the work out of software developers’ and application developers’ hands, and instead has OpenVINO handle it for them.
And so we will automatically look across your hardware system at all of the accelerators you may have, all the compute that’s available, and how much memory is in the system, and automatically load-balance and distribute the inferencing AI job across all of that compute for you, dynamically measuring your system to make sure it’s running at best performance. And so as a software company or a software developer, if you want to take advantage of Intel’s large market of hundreds of millions of devices worldwide, you can ship one copy of your product and know that under the hood OpenVINO is automatically going to optimize for whatever hardware it’s running on.
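Conceptually, the device selection part of what Formica describes can be sketched as picking the best available accelerator from a priority list. This is a hypothetical illustration of the idea, not OpenVINO's actual implementation; the device names and priority order below are assumptions.

```python
# Hypothetical sketch: choose the highest-priority inference device present
# on this system, falling back to the CPU that is always available.
PRIORITY = ["GPU", "VPU", "CPU"]  # illustrative preference order

def pick_device(available):
    """Return the first device in PRIORITY that the system actually has."""
    for device in PRIORITY:
        if device in available:
            return device
    raise RuntimeError("no supported inference device found")

print(pick_device({"CPU", "GPU"}))  # a system with a GPU uses it
print(pick_device({"CPU"}))         # a CPU-only system falls back to CPU
```

The real capability goes further, load-balancing one job across several devices at once and adapting to memory and compute at runtime, which is exactly the work this feature takes off the developer's plate.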
- Swapnil Bhartiya: What kind of roadmap do you have for the next release? What are the things that you folks are working on that you can share with us today?
Matthew Formica: Sure. So we’re already working on 2022.2, and you’re right, it’s on GitHub. You can also find it at www.openvino.ai. We’re working, of course, on expanding support for upcoming new hardware from Intel that we’ve already disclosed to the market. We’ve got Sapphire Rapids on its way, a big new Xeon from Intel, and we are working hard to co-optimize our software with that hardware, so that when that hardware comes out, there’ll be a version of OpenVINO that goes with it that’s already ready.
Additionally, we always have customers submitting new models to us, right? As I said, this is a cutting-edge field that changes dramatically on almost a 12-month cadence, so there are always new models coming in that we have to work to support and optimize, and we’ve got a bunch of those queued up in the pipeline. Additionally, we’ve got some more tweaks and improvements to make to this auto-optimization capability. There’s more dynamism we can add in, more system understanding; we can make it even smarter about automatically handling the compute on your system and getting the most performance out of it. So we’ve got things we’re excited to do over the next few versions. We actually would welcome anyone who wants to come and contribute code and help out. We’ve got a variety of partners and developers around the world who are already doing so, and if anyone else wants to join in, we welcome the help.
- Swapnil Bhartiya: Matthew, thank you so much for taking time out today, and of course for talking about not only OpenVINO; you also gave very good insight into how the edge is evolving and what role AI/ML is playing in today’s world. And I think it’s going to be almost everywhere, as you rightly said. So thanks for sharing those insights, and I would love to have you back on the show. Thank you.
Matthew Formica: Great. Thank you. Appreciate it.