
Lessons from the Pandemic: Overcoming the Barriers to Speedy Digital Transformation


The global pandemic has accelerated the timeline for digital transformation at most organizations, underscoring the need to pivot in response to a changed world.

Companies have had to adapt quickly to unprecedented work-from-home mandates, sudden changes in customer demands and worldwide disruptions in supply chains. The struggle for businesses to adapt continues as the pandemic lingers. Based on my experience helping companies address these digital-transformation requirements, it’s apparent that an integrated approach for data and applications across the enterprise holds the key to success.

The flexibility and speed of an enterprise’s response depend on data-driven business processes that work at scale and at pace, and that can be executed simply and efficiently. Together with the data, AI and machine learning models and the business applications they serve complete the IT triumvirate required to deliver a data-driven business.

For CTOs and IT groups, the challenge is delivering affordable, agile systems that power this transformation across on-premises data centers, the cloud and edge environments, while maintaining the resiliency, reliability and security essential to core business processes. These requirements are amplified by the extreme changes of the pandemic, but they are fundamental: even in more normal times, businesses must respond to change in order to be successful.

I’ve observed four primary barriers that must be overcome in order to achieve these goals.

Breaking Through the Barriers to Digital Transformation

First is the challenge of managing data simply and affordably, with flexibility of access wherever applications need it. That’s easier said than done, as organizations continue to be inundated with a tsunami of data. Without an integrated system, there’s a danger that siloed data and fragmented insights will result.

And while the vast majority of today’s applications and data remain in the data center, increasing amounts are being generated at the edge. Often, intelligence needs to be gathered there, too. Applications may need to be delivered to the edge, just as data or partially processed data needs to be moved back to the core. If developers and data engineers have to handle this movement by hand-coding it at the application level, the result can be a costly, confusing and risky nightmare. Organizations can overcome this barrier by implementing a comprehensive data strategy, supported by an integrated data infrastructure that handles much of the data management and mobility at the platform level.

Furthermore, a system that also provides containerization of applications gives you the flexibility to run those applications where and when you need to, in the customized environment each one requires.
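
As a minimal sketch of that flexibility, assuming a hypothetical image name, registry and settings, and using the Docker SDK for Python, the same packaged application can be launched in the data center core and at the edge with nothing but a change of runtime configuration:

```python
# A sketch of launching one container image with per-environment settings.
# The registry, image name and variables below are hypothetical.
import docker

client = docker.from_env()

# Same application artifact, customized for each environment at launch
# time; in practice the core and the edge would each run their own
# container platform rather than sharing one Docker host.
environments = {
    "core": {"REGION": "us-east-dc1", "BATCH_SIZE": "1024"},
    "edge": {"REGION": "edge-store-42", "BATCH_SIZE": "32"},
}

for env_name, settings in environments.items():
    client.containers.run(
        "registry.example.com/demand-forecast:1.4",  # hypothetical image
        environment=settings,
        name=f"forecast-{env_name}",
        detach=True,
    )
```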

Tied to the first challenge is this second one: The need to quickly build AI/ML models trained on current data that reflects changes in the world. Models that performed well before COVID-19 suddenly were not relevant; they needed retraining, retuning or entirely new development. The logistics of data preparation, feature extraction and model management have always taken the majority of effort in AI/ML projects. For this reason, the urgency of adjusting to the COVID-19 situation underlined the need for efficient data access, data mobility and application deployment to avoid long delays.
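
To make the retraining step concrete, here is a minimal Python sketch; the file name, columns and model choice are hypothetical placeholders, and a real pipeline would also include the data preparation and feature-extraction work described above:

```python
# Retraining a demand model on data that includes post-pandemic behavior.
# The file name and column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("demand_recent.csv").sort_values("date")
features = ["price", "promotion", "day_of_week", "inventory"]

# Time-based split: validate on the most recent period, not a random
# sample, so the evaluation reflects how the model handles new conditions.
cutoff = int(len(df) * 0.8)
train, test = df.iloc[:cutoff], df.iloc[cutoff:]

model = GradientBoostingRegressor(random_state=0)
model.fit(train[features], train["units_sold"])

mae = mean_absolute_error(test["units_sold"], model.predict(test[features]))
print(f"MAE on the most recent data: {mae:.2f}")
```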

These challenges are not limited to the response to COVID. Historically, more than 60% of models never make it from piloting to operationalization. Thus, our third challenge: how to reliably operationalize AI, ML and data science in order to reap the benefits of digital innovation. This is an ongoing challenge, given that AI/ML projects are dynamic and iterative, continuously undergoing evaluation, monitoring, adjustment and retraining. Teams with MLOps experience, supported by the appropriate tools, can shorten the time, and improve the reliability, of bringing these systems into production.
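
As one small, illustrative sketch of what that continuous monitoring can look like, the code below applies a two-sample Kolmogorov-Smirnov test to a single feature to decide whether live data has drifted away from the training data; the feature, threshold and retraining hook are assumptions for illustration:

```python
# Drift check: has a feature's live distribution moved away from training?
# The feature chosen, alpha, and the retraining hook are placeholders.
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(training_values, live_values, alpha=0.01):
    """Two-sample Kolmogorov-Smirnov test; a small p-value means the live
    data no longer looks like the data the model was trained on."""
    result = ks_2samp(training_values, live_values)
    return result.pvalue < alpha

rng = np.random.default_rng(0)
training_prices = rng.normal(10.0, 2.0, size=5_000)  # stand-in for history
live_prices = rng.normal(13.5, 2.5, size=1_000)      # post-shock behavior

if drift_detected(training_prices, live_prices):
    print("Feature drift detected: trigger the retraining pipeline")
    # retrain_and_redeploy()  # e.g., rerun the training job sketched above
```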

Beyond these considerations is a fourth barrier: many legacy applications are decades old, and re-architecting or rewriting them for the cloud can take months or years at exorbitant expense. This is preventing many companies from meeting digital transformation timelines, because many non-cloud-native apps are deeply embedded in operations functions. With the cloud being the preferred environment for as much as 85% of applications, the inability to run non-cloud-native apps there is a significant detriment for many enterprises.

Thus, IT teams face a conundrum. Without the necessary human and technology resources, projects are slower, clumsier and more expensive to pilot and, subsequently, to operationalize. And because not all apps are microservices-based, they often cannot be delivered without expensive refactoring, reengineering or replacement.

Fortunately, there are data infrastructure technologies that permit data access by legacy applications running alongside modern applications, as well as technologies to help containerize non-cloud-native applications. Innovations in the development and deployment of consumption-based container platforms are helping solve cost- and time-based problems and, most importantly, providing flexibility, agility and elasticity. Indeed, if all applications could be built using a microservices approach, they could be augmented and updated more accurately and rapidly. For these and other reasons, Gartner predicts the number of enterprises that run containerized platforms will more than triple, from 20% today to 75% by 2022.
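
As a hedged sketch of that "containerize without refactoring" path, the example below uses the Docker SDK for Python with a hypothetical legacy image and paths to run a non-cloud-native application unchanged, mounting its existing configuration rather than rewriting the app:

```python
# Lift-and-shift sketch: run a legacy, non-cloud-native service in a
# container without changing its code. The image name, paths and port
# below are hypothetical placeholders.
import docker

client = docker.from_env()

legacy = client.containers.run(
    "registry.example.com/legacy-erp:9.2",  # hypothetical legacy image
    volumes={
        # Reuse the app's existing on-disk configuration as-is.
        "/srv/erp/config": {"bind": "/etc/erp", "mode": "ro"},
    },
    ports={"8080/tcp": 8080},
    restart_policy={"Name": "on-failure", "MaximumRetryCount": 3},
    detach=True,
)
print(f"Legacy app running in container {legacy.short_id}")
```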

A Single, End-to-End Framework Hastens Innovative Research

While the challenges to a speedy, efficient digital transformation are many, technology advancements exist to overcome them, especially when implemented in a strategic, integrated fashion. Resources from innovative IT vendors can drive data-centric insights and value across multiple datasets, and they are becoming invaluable for business, government and academic institutions.

One illustrative approach is that of the Edinburgh International Data Facility (EIDF) in Scotland, Europe’s first regional data innovation center. EIDF enables researchers to address global issues such as food production and climate change by providing an end-to-end infrastructure that seamlessly combines advanced HPC, AI, container and software technologies into a single framework. By applying analytics to modeling and simulation, EIDF gives broad groups of users a collaborative environment and greater accuracy and speed in their quest for discovery.

The global pandemic has turned the world upside down, but it has also reinforced the need for a more industrialized, “assembly line”-like approach to creating digital-first data landscapes. Today’s tools and technologies are different, but they can ensure a repeatable process, from start to finish, that enables organizations to thrive in today’s edge-to-cloud world.

It’s a challenging journey. But once the journey is complete, the rewards can be business-defining for years to come.

To learn more about containerized infrastructure and cloud native technologies, consider joining us at KubeCon + CloudNativeCon NA Virtual, November 17-20.