
Alluxio 2.6 Brings Ease-of-Use Improvements To GPU-Centric AI/ML Workloads


Alluxio has announced the immediate availability of version 2.6 of its Data Orchestration Platform. The new release features an enhanced system architecture that enables AI/ML platform teams using GPUs to accelerate their data pipelines for business intelligence, applied machine learning, and model training.

In the latest release, Alluxio refines its system architecture to better support AI/ML applications that access data through the POSIX interface. Removing inter-process latency overhead maximizes I/O performance, which is critical for keeping compute resources fully utilized. Beyond raw I/O performance, Alluxio's data management capabilities support the end-to-end workflow of data preprocessing, loading, training, and result writing.
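Because Alluxio exposes data through a POSIX (FUSE) mount, training code can read it with ordinary file operations rather than an Alluxio-specific client API. The sketch below is a minimal, hypothetical illustration: the mount point `/mnt/alluxio-fuse/datasets/train` and the flat file layout are assumptions for the example, not details from the release.

```python
import os

# Hypothetical FUSE mount point where Alluxio exposes the training data.
# The actual path depends on how alluxio-fuse is mounted in your environment.
ALLUXIO_MOUNT = "/mnt/alluxio-fuse/datasets/train"

def iter_training_files(root=ALLUXIO_MOUNT):
    """Yield (filename, bytes) pairs using plain POSIX reads.

    No Alluxio-specific client code is needed here; the FUSE layer
    translates these standard file operations into Alluxio reads.
    """
    for name in sorted(os.listdir(root)):
        path = os.path.join(root, name)
        if os.path.isfile(path):
            with open(path, "rb") as f:
                yield name, f.read()

if __name__ == "__main__":
    for name, payload in iter_training_files():
        print(f"{name}: {len(payload)} bytes")
```

In practice a framework-level loader (for example, a PyTorch Dataset) would wrap the same file reads; the point is that the training side sees nothing but a local filesystem.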

Alluxio 2.6 unifies the Alluxio worker and FUSE processes. Coupling the two reduces inter-process communication and yields significant performance gains, most noticeably in AI/ML workloads where files are small and RPC overhead accounts for a large share of total I/O time.

Alluxio 2.6 also enhances the mechanism for loading data into Alluxio-managed storage and adds traceability and metrics for easier operation. This distributed load operation is a key step in the AI/ML workflow, and its internals have been tuned for the common case of loading prepared data ahead of model training.
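As a rough sketch of how such preloading might be scripted, the example below shells out to the `alluxio fs distributedLoad` command from the Alluxio 2.x CLI; the dataset path and the assumption that the CLI is installed on the node are illustrative, not taken from the announcement.

```python
import subprocess

def preload_dataset(alluxio_path, alluxio_bin="alluxio"):
    """Warm Alluxio-managed storage before training starts.

    Invokes the Alluxio CLI's distributedLoad command (part of the
    Alluxio 2.x 'fs' command group). Both the binary name and the
    dataset path passed in are placeholders for a real deployment.
    """
    cmd = [alluxio_bin, "fs", "distributedLoad", alluxio_path]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"distributedLoad failed: {result.stderr.strip()}")
    return result.stdout

if __name__ == "__main__":
    # Hypothetical dataset path inside the Alluxio namespace.
    print(preload_dataset("/datasets/imagenet/train"))
```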

Free downloads of the Alluxio 2.6 open source Community Edition and trials of the Alluxio Enterprise Edition are now generally available.
