Alluxio, together with Intel, has launched a go-to-market solution offering an in-memory acceleration layer built on 2nd Gen Intel Xeon Scalable processors and Intel Optane persistent memory (PMem). The solution is said to eliminate the performance degradation of analytics clusters that are increasingly built on disaggregated compute and storage architectures.
- Benchmarking results are said to show 2.1x faster completion for decision support queries when adding Alluxio and PMem compared to only using disaggregated S3 object storage.
- Using Storage over App Direct, a feature of PMem App Direct mode, Alluxio can access high-performance block storage without the latency of moving data across the I/O bus, providing the data acceleration and the reduction in query runtime.
- With Alluxio and Intel Xeon Scalable processors, an I/O-intensive benchmark is said to deliver a 3.4x speedup over disaggregated S3 object storage and a 1.3x speedup over a co-located compute and storage architecture.
- In addition, Alluxio and Intel have partnered to improve their joint customers’ experience with managing and processing data, including optimizations for Intel Deep Learning Boost, the AI acceleration technology built into Intel Xeon Scalable processors.
- Together, Alluxio and Intel plan to bring solutions to market to help fuel next-generation data, analytics and AI applications and use cases.
“Today’s disaggregated cloud storage lacks efficient file system semantics support like ‘rename’. Additionally, disaggregated cloud storage typically can’t leverage compute-side storage media such as DRAM and SSD for use as buffers and page caches,” said Haoyuan Li, founder and CEO, Alluxio. “Adding Alluxio Data Orchestration System and Intel’s Optane persistent memory solves both issues… This is particularly helpful for hybrid cloud environments when data is remote.”
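For context on the ‘rename’ limitation Li describes: S3-style object stores have no atomic rename, so renaming an object is typically emulated as a full copy followed by a delete, with cost proportional to object size rather than the constant-time metadata update a POSIX file system performs. A minimal Python sketch of that emulation, using a plain dict as a stand-in for a bucket (the function name and data are illustrative, not part of any Alluxio or S3 API):

```python
# Illustration: object stores lack a native rename, so "rename" is
# emulated as copy-then-delete. The cost scales with object size,
# unlike a POSIX rename, which only updates metadata.
# A plain dict stands in for the bucket; keys map to object bytes.

def object_store_rename(bucket: dict, src: str, dst: str) -> None:
    """Emulate rename on an object store: copy the full payload, then delete."""
    bucket[dst] = bucket[src]   # copy the entire object under the new key
    del bucket[src]             # delete the original key

bucket = {"staging/part-0000.parquet": b"<columnar data>"}
object_store_rename(bucket, "staging/part-0000.parquet",
                    "final/part-0000.parquet")
print(sorted(bucket))  # ['final/part-0000.parquet']
```

Analytics engines commit results by renaming staging output into place, which is why this copy-then-delete pattern hurts query workloads on disaggregated storage and why a caching layer with real file system semantics helps.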