
Intel oneAPI 2022 Toolkits Expand Cross-Architecture Features


Intel has announced the release of its oneAPI 2022 toolkits. The newly enhanced toolkits expand cross-architecture features, giving developers greater utility and architectural choice for accelerated computing.

New capabilities include the world’s first unified compiler implementing C++, SYCL and Fortran, data parallel Python for CPUs and GPUs, advanced accelerator performance modeling and tuning, and performance acceleration for AI and ray tracing visualization workloads. The oneAPI cross-architecture programming model provides developers with tools that aim to improve the productivity and velocity of code development when building cross-architecture applications.

The 2022 Intel oneAPI toolkits deliver performance and productivity through a complete set of advanced tools, including compilers, libraries, pre-optimized frameworks, analyzers and debuggers. More than 900 new and enhanced features have been added over the past year, strengthening every tool in the foundational and domain-specific toolkits. The toolkits are available to download for free or to use in the Intel DevCloud.

Cross-architecture programming

  • Intel created what it claims to be the world’s first unified compiler implementing C++, SYCL and Fortran for CPUs and GPUs, built on a common LLVM backend.
  • Data Parallel Python delivers accelerated compute on CPUs and GPUs for Python, one of today’s most popular programming languages.
  • The Intel DPC++ Compatibility Tool was improved to automatically migrate 90% to 95% of CUDA code to SYCL/DPC++.

Intel oneAPI toolkits are optimized to enable advanced features of the latest and upcoming hardware, including 12th Gen Intel Core processors with AVX-VNNI, next-generation Intel Xeon Scalable processors (codenamed Sapphire Rapids) with Intel Advanced Matrix Extensions (Intel AMX), and upcoming Xe client and data center GPUs.

AI performance optimizations

  • The latest Intel Optimization for TensorFlow and Intel Optimization for PyTorch accelerate deep learning framework performance by up to 10 times over earlier versions.
  • The new Intel Extension for Scikit-learn speeds up machine learning algorithms by more than 100 times on Intel CPUs over the stock open-source version.
  • The new Intel Neural Compressor delivers increased inference performance through post-training optimization techniques across multiple deep learning frameworks.
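As a usage sketch for the scikit-learn extension, the snippet below follows Intel's documented two-line patching pattern; it assumes the scikit-learn-intelex package is installed, and the dataset here is synthetic, chosen only for illustration.

```python
# Sketch assuming the scikit-learn-intelex package is installed.
# patch_sklearn() swaps Intel-optimized implementations in for the
# stock scikit-learn estimators; existing code then runs unchanged.
from sklearnex import patch_sklearn
patch_sklearn()

# Regular scikit-learn code from here on -- now accelerated.
from sklearn.cluster import KMeans
import numpy as np

X = np.random.default_rng(0).random((10_000, 8), dtype=np.float32)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_.shape)  # → (4, 8)
```

The patch is deliberately non-invasive: calling `patch_sklearn()` before the scikit-learn imports is all that is required, so the claimed speedups apply to existing pipelines without code changes.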
