Intel defines the edge in terms of the thin edge and the network edge. Matthew Formica, Senior Director, AI Inference (OpenVINO) Product Marketing & Developer Ecosystem at Intel, explains that when large volumes of data cannot be sent to the cloud in their entirety for processing, the analytics need to happen closer to where the data is generated, whether on an edge server or in the network edge. This reduces latency and improves cost efficiency, since not all of the data travels to the cloud for analysis. However, Formica notes that the landscape is evolving, and with it the definition of edge. To find out more, watch this video.
About Matthew Formica: Matthew Formica is the Senior Director running Intel’s OpenVINO AI Software Product Marketing organization. AI, especially computer vision, is increasingly part of use cases across the spectrum: defect detection on assembly lines, disease detection in medical imaging equipment, and shopper-assistance alerts in retail stores, to name just a few. Getting the best performance and ROI requires the right tools. With OpenVINO™, developers can easily accelerate their inference performance severalfold using the general-purpose Intel compute already built into their solutions, and with OpenVINO APIs they can write once and deploy across a wide range of Intel processors.
About Intel: Intel is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore’s Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers’ greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better.
This summary of the show was written by Emily Nicholls.