Nvidia AI Enterprise enables virtual workloads to run at near bare-metal performance on vSphere, with support for NVIDIA A100 GPUs for AI and data science, Justin Boitano, general manager of Nvidia’s enterprise and edge computing unit, said in a blog post.
AI workloads can now scale across multiple nodes, allowing even the largest deep learning training models to run on VMware Cloud Foundation.
This combination enables multi-node performance and compatibility for a vast set of accelerated CUDA applications, AI frameworks, models and SDKs for the hundreds of thousands of enterprises that use vSphere for server virtualization.
“Through the collaboration between NVIDIA and VMware, vSphere is the only server virtualization software to provide hypervisor support for live migration with NVIDIA Multi-Instance GPU technology,” Boitano added.
With MIG, each A100 GPU can be partitioned into up to seven instances at the hardware level, maximizing efficiency for workloads of all sizes.
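As an illustration of what that partitioning looks like in practice, MIG instances are typically created with the `nvidia-smi` command-line tool. The sketch below is a hypothetical admin session, assuming an A100-40GB at GPU index 0 (profile IDs vary by card model), not a step from the announcement itself:

```shell
# Enable MIG mode on GPU 0 (takes effect after a GPU reset)
nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this card supports
nvidia-smi mig -lgip

# Create seven 1g.5gb GPU instances (profile 19 on an A100-40GB),
# each with its own compute instance (-C)
nvidia-smi mig -i 0 -cgi 19,19,19,19,19,19,19 -C

# Verify the resulting GPU instances
nvidia-smi mig -lgi
```

Each of the seven instances then appears to workloads as a separate GPU with its own dedicated memory and compute slices, which is what allows differently sized jobs to share one physical A100 without interfering with one another.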