
Hazelcast: High Time For Real-Time Stream Processing

Guest: Dennis Duckworth (LinkedIn)
Company: Hazelcast (Twitter)

Real-time data is invaluable. Companies need it to glean insights and act on them immediately, and that ability is essential to succeeding, not just surviving, in today's market and economic conditions.

In this episode of Let’s Talk recorded at KubeCon, Swapnil Bhartiya sits down with Dennis Duckworth, Director of Product Marketing at Hazelcast, to talk about the latest iteration of their platform and how it is helping companies build applications with real-time capabilities.

Key highlights of this video interview:

  • Hazelcast started out as an in-memory data grid that accelerates data access, offloads legacy systems, and delivers near-real-time performance for analytics and machine learning.
  • An added Hazelcast capability (and differentiator) is real-time stream processing: the platform not only stores data and serves it up quickly, but also reacts to that data and processes it in real time.
  • Duckworth explains the evolution of data access: in legacy mainframe systems, offloads ran overnight or on weekends, so insights were already stale by the time analysts got to them. Faster in-memory databases then sped up access to the data, but a bottleneck remained because prepping the data for analytics takes time. Hazelcast speeds up the analytics that run on that data as well, so companies can get insights in real time and act on them accordingly.
  • Some use cases for real-time data: 1) In financial services, credit card issuers need real-time data to decide, with every swipe, whether to authorize or decline a transaction. 2) In retail, recommendation engines need real-time data to engage customers in the store or on the mobile app and recommend what they are likely to purchase. 3) In IoT, real-time data from machines and equipment can prevent catastrophic failure: if readings fall outside safe values, machines can be shut down for predictive maintenance.
  • Machine learning algorithms take historical data and build a model from the patterns found there. Today, Duckworth says, there is a need to intersect those models with incoming real-time data streams so that contextual information and enrichment can be added. To support that, infrastructure engineers and DevOps teams typically cobble together separate components for stream processing, data storage, and orchestration; suddenly there are three different clusters that have to be maintained and kept running to preserve real-time capability.
  • Hazelcast has bundled all of that capability into one platform, which eases operations, simplifies the tech stack dramatically, and integrates the components automatically (see the first sketch after this list).
  • At KubeCon, Hazelcast announced the release of version 5.2, which features tiered storage (the ability to move data between memory and disk), stream-to-stream joins (the ability to take multiple streams of data, treat them as tables, and do joins across them), and the ability to connect to and query any database that supports the JDBC interface (see the second sketch after this list).
  • The Hazelcast platform is available as open source, and an enterprise version adds advanced security and WAN replication.
  • Duckworth believes the adoption of real-time streaming data platforms relies on buy-in from engineers, both development engineers and DevOps, because they are the ones who get called in the middle of the night. That means real-time streaming data platforms have to be reliable, robust, and able to keep up with the development engine.
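
To make the one-platform point concrete, here is a minimal, hypothetical Java sketch of the pattern Duckworth describes, using Hazelcast's Pipeline API in a single embedded member. The map names (transactions, customers) and the data are invented for illustration; the point is that the same cluster holds the reference data and runs the streaming enrichment job, so there are no separate stream-processing, storage, and orchestration clusters to operate.

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.jet.pipeline.JournalInitialPosition;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;

import java.util.Map;

public class OnePlatformSketch {
    public static void main(String[] args) {
        // One embedded member provides both storage (IMap) and the stream
        // engine (Jet). Jet is off by default in Hazelcast 5.x, so enable
        // it, along with the event journal that feeds the streaming source.
        Config config = new Config();
        config.getJetConfig().setEnabled(true);
        config.getMapConfig("transactions").getEventJournalConfig().setEnabled(true);
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);

        Pipeline p = Pipeline.create();
        p.readFrom(Sources.<String, Double>mapJournal(
                        "transactions", JournalInitialPosition.START_FROM_CURRENT))
         .withIngestionTimestamps()
         // Enrich each event with reference data held in another IMap on the
         // same cluster ("customers" and its contents are made up here).
         .mapUsingIMap("customers",
                 Map.Entry::getKey,
                 (txn, name) -> name + " spent " + txn.getValue())
         .writeTo(Sinks.logger());
        hz.getJet().newJob(p);

        // Feed one event so the running job has something to log.
        hz.getMap("customers").put("c42", "Alice");
        hz.getMap("transactions").put("c42", 19.99);
    }
}
```

In production the source would more likely be Kafka or a change-data-capture feed, but the single-cluster shape is the same.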
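The 5.2 SQL additions can be sketched similarly. The hypothetical example below treats two unbounded streams as tables and joins across them. To stay self-contained it uses the built-in GENERATE_STREAM source rather than real event streams, and the exact join syntax may vary by version, so treat it as a sketch rather than canonical 5.2 SQL; connecting an external database through the JDBC connector would be configured separately.

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.sql.SqlResult;
import com.hazelcast.sql.SqlRow;

public class StreamJoinSketch {
    public static void main(String[] args) {
        Config config = new Config();
        config.getJetConfig().setEnabled(true); // streaming SQL runs on Jet
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);

        // GENERATE_STREAM(n) emits a BIGINT column `v` n times per second.
        // IMPOSE_ORDER declares the ordering column and allowed lag so the
        // engine can treat each unbounded stream as a joinable table, and
        // the ON clause bounds how far the two streams may drift apart,
        // which stream-to-stream joins require.
        String query =
              "SELECT f.v AS fast, s.v AS slow "
            + "FROM TABLE(IMPOSE_ORDER(TABLE(GENERATE_STREAM(10)), DESCRIPTOR(v), 1)) AS f "
            + "JOIN TABLE(IMPOSE_ORDER(TABLE(GENERATE_STREAM(5)), DESCRIPTOR(v), 1)) AS s "
            + "  ON s.v BETWEEN f.v - 2 AND f.v + 2";

        try (SqlResult result = hz.getSql().execute(query)) {
            for (SqlRow row : result) {
                System.out.println(row.getObject("fast") + " ~ " + row.getObject("slow"));
            }
        }
    }
}
```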

This summary was written by Camille Gregory.