
Intel, Google Enable oneDNN Library As Default Backend CPU Optimization For TensorFlow 2.9


Intel announced that the performance improvements delivered by its oneAPI Deep Neural Network Library (oneDNN) are turned on by default in TensorFlow 2.9. The change applies to all Linux x86 packages and to CPUs with neural-network-focused hardware features, such as the AVX-512 VNNI, AVX-512 BF16, and AMX vector and matrix extensions found on 2nd Gen Intel Xeon Scalable processors and newer CPUs. These extensions maximize AI performance through efficient use of compute resources, improved cache utilization, and efficient numeric formats.
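Because the optimizations are now on by default, developers who want to compare behavior or performance can opt out with the `TF_ENABLE_ONEDNN_OPTS` environment variable. A minimal sketch, assuming the standard TensorFlow 2.9 behavior where the variable is read at import time (the small helper function here is illustrative, not part of any API):

```python
# Sketch: toggling TensorFlow's oneDNN optimizations via the
# TF_ENABLE_ONEDNN_OPTS environment variable. In TensorFlow 2.9
# Linux x86 builds the optimizations default to ON; setting the
# variable to "0" opts out. The flag must be set BEFORE TensorFlow
# is imported, since it is read during module initialization.
import os

def set_onednn_opts(enabled: bool) -> None:
    """Illustrative helper: set TF_ENABLE_ONEDNN_OPTS before importing TensorFlow."""
    os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1" if enabled else "0"

# Example: opt out of oneDNN optimizations for an A/B performance comparison.
set_onednn_opts(False)
# import tensorflow as tf  # import only after the flag is set
```

Running a workload once with the variable set to `0` and once with it set to `1` (or unset) gives a direct before/after comparison of the oneDNN acceleration on a given CPU.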

oneDNN is an open source, cross-platform performance library of basic deep learning building blocks, intended for developers of deep learning applications and frameworks. According to the company, the optimizations enabled by oneDNN accelerate key performance-intensive operations such as convolution, matrix multiplication, and batch normalization, delivering performance improvements of up to 3x compared with versions without oneDNN acceleration.

The oneDNN-driven accelerations to TensorFlow deliver performance gains that benefit applications spanning natural language processing, image and object recognition, autonomous vehicles, fraud detection, and medical diagnosis and treatment, among others.

Intel-optimized TensorFlow is available both as a standalone component and through the Intel oneAPI AI Analytics Toolkit, and is already in use across a broad range of industry applications, including the Google Health project, animation filmmaking at Laika Studios, language translation at Lilt, and natural language processing at IBM Watson.
