AI/ML News

New Diamanti Platform For AI/ML Workloads Supports Containerized Workloads On Kubernetes


Diamanti has made available what it claims to be the first enterprise platform with GPU support for running containerized workloads under Kubernetes. In conjunction with its recent announcement of Diamanti Spektra, customers can now offer their end users GPU capacity in cloud clusters, scaling AI/ML workloads from on-premises infrastructure out to public clouds to accelerate model development and training.

The new Diamanti platform for AI/ML workloads fully supports Nvidia’s NVLink GPU interconnect technology for higher-performance workloads, as well as Kubeflow, a machine learning toolkit for Kubernetes that provides highly available Jupyter notebooks and ML pipelines.
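For context, requesting GPU capacity from a Kubernetes cluster typically looks like the sketch below, which uses the official Kubernetes Python client to submit a pod that asks the scheduler for one GPU. This is a generic Kubernetes pattern, not Diamanti-specific: it assumes the NVIDIA device plugin is installed (exposing the "nvidia.com/gpu" resource), and the image name, pod name, and namespace are placeholders.

```python
# Minimal sketch: submit a GPU-enabled pod via the Kubernetes Python client.
# Assumes a cluster with the NVIDIA device plugin installed; names and image
# are illustrative only, not tied to Diamanti's platform.
from kubernetes import client, config


def run_gpu_pod():
    # Reads ~/.kube/config; use config.load_incluster_config() when running inside a cluster.
    config.load_kube_config()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="cuda",
                    image="nvidia/cuda:12.2.0-base-ubuntu22.04",  # placeholder image
                    command=["nvidia-smi"],  # print visible GPUs and exit
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": "1"}  # ask the scheduler for one GPU
                    ),
                )
            ],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)


if __name__ == "__main__":
    run_gpu_pod()
```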

According to the company, early-access customers are already benefiting from the new platform’s GPU support in industries as varied as financial services, energy, and travel.

For AI/ML applications that require GPUs, the new Diamanti Spektra solution can now also manage the full lifecycle of containerized workloads across on-premises and public clouds, moving applications and data between Kubernetes clusters as necessary.

Diamanti Spektra is in technology preview.