Best Practices for Digital Transformation and Why In-Memory Computing Matters

GridGain recently started publishing the Best Practices for Digital Transformation with In-Memory Computing (IMC) eBook series. The series captures some of the best practices for putting the right people, processes, and technology in place that helped early adopters succeed with their digital transformations.

This blog post summarizes the first eBook in the series and outlines the best practices for getting started. But it starts with the conclusion that early adopters of digital business reached midway through their digital transformations: digital business requires in-memory computing.

The Right Definition of Digital Business and Digital Transformation

First, it is important to get the definition of digital business right. One could define a digital business simply as a business that is digital. Amazon, eBay, Expedia, and PayPal are all great examples of digital businesses that disrupted their respective industries. But that is a bad definition, in part because it ignores why digital business is so important.

A digital business is a business that delivers greater value to customers by providing a better, more convenient, integrated “omnichannel” experience across traditional and newer digital channels based on the end customer’s needs. Digital transformation is the process of becoming more digital in order to deliver a better customer experience. Digital transformation best practices must be centered on this goal.

Achieving a great customer experience is how existing industry leaders keep customers as they integrate digital and traditional channels. It is also why the newer entrants still add traditional channels. Think about it: we use Amazon for a better, more convenient experience for certain, but not all, purchases. We buy fruit and fish at a grocery store. Amazon bought Whole Foods for exactly this reason.

How to Prepare for Digital Transformation the Right Way

Over my career I have talked with more than 100 companies in various stages of their own digital transformations about API management, integration, real-time analytics, in-memory computing, and a host of other technologies. When organized properly, and when employing digital transformation best practices, these companies focused on measuring and improving the customer experience. Within IT, the digital business initiative typically started with three main goals:

  1. Provide new capabilities as public APIs that can be consumed by all developers.
  2. Deliver these new capabilities in days instead of months (or years).
  3. Streamline, improve, and react more quickly to each step of the customer experience by adding new technologies that ingest massive amounts of data in real-time.

But as adoption grew, these digital business teams suddenly added a few more requirements as they encountered new challenges and had their “uh oh” moments. The first was around security. Some groups mistakenly thought they could secure public APIs the same way as existing applications and data. Part three of this eBook series provides some good examples of how to build with security in mind.

The second most common “sudden” requirement was even more fundamental. They realized they needed a horizontally scalable architecture with sub-second response times, every time, on top of their existing systems. This required in-memory computing.

How In-Memory Computing Solved the Digital Business Challenge with Speed and Scale

Existing IT systems that support the APIs cannot deliver the speed and scale needed for new digital web, mobile, or other services like connected devices and the Internet of Things (IoT).

The increase in scale has gone far beyond what existing systems were designed to support. Queries and transactions have increased 10- to 1,000-fold over the last decade. The amount of data used to improve the customer experience has grown 50-fold or more. Many existing systems were not designed to be real-time either, yet to meet customer expectations, every data interaction must be sub-second. Since most companies are still early in their digital transformations, and most of their customers have not yet adopted all these new services, you can expect this growth to continue to explode for years to come.

If you are with a digital business group and are far along in your transformation, or if you are in another part of IT supporting an existing application, you may already have figured out that scaling horizontally using an in-memory data grid (IMDG) is much more cost-effective than the alternatives:

  • Most of the databases that support these applications can only scale vertically, which becomes too expensive, or simply impossible, over time. (If you want more details on adding speed and scale to different databases, you can read about the options for MySQL, Oracle, PostgreSQL, and SQL Server.)
  • Any new APIs or applications must access existing data or applications, which means they are also limited by vertical-only database scalability.
  • It is impossible to deliver sub-second response times across so many network hops and layers of software. Just think about Web or mobile clients on existing systems: they call APIs, which call middleware to access multiple applications, which in turn access databases over several network hops, before the data is merged and a result returned.
  • Even if you could fix all the challenges above, it is impossible to change existing applications (or even get permission to access existing data) in days. Most existing applications take months or years to change.

Part two of the Best Practices eBook series covers adding speed and scale to existing applications.

Several early adopters found a solution: trick the existing systems into thinking that nothing has changed and build a new real-time layer over time that can support all these requirements. These early adopters met all their goals, including delivering the speed and scale, and were able to take a more incremental approach by using in-memory computing (IMC) technology.

The first step for most of these early adopters was to add speed and scale to existing applications or APIs. They accomplished this by sliding an IMDG between each application and its underlying database, as sketched below. The IMDG acts as a read/write-through proxy that writes both to memory and to the database, and offloads all reads onto a cluster with linear horizontal scalability on commodity infrastructure. For SQL-based applications, the IMDG must support SQL and ACID transactions.
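Here is a minimal sketch of that read/write-through pattern, assuming an Apache Ignite-style IMDG API (Ignite is the open-source project at the core of GridGain). The JDBC URL, the customer table, and the CustomerStore class are hypothetical placeholders for illustration, not something prescribed by the eBook:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import javax.cache.integration.CacheLoaderException;
import javax.cache.integration.CacheWriterException;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class ReadWriteThroughSketch {

    /** Proxies cache misses and writes through to the underlying database. */
    public static class CustomerStore extends CacheStoreAdapter<Long, String> {
        // Hypothetical connection string; in practice this points at the
        // existing database the application already uses.
        private static final String URL = "jdbc:postgresql://db-host/app";

        @Override public String load(Long key) {
            // Called on a cache miss when read-through is enabled.
            try (Connection c = DriverManager.getConnection(URL);
                 PreparedStatement ps = c.prepareStatement(
                     "SELECT name FROM customer WHERE id = ?")) {
                ps.setLong(1, key);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            } catch (Exception e) { throw new CacheLoaderException(e); }
        }

        @Override public void write(Cache.Entry<? extends Long, ? extends String> entry) {
            // Called on every cache put; the upsert syntax here is
            // PostgreSQL-specific and also just a placeholder.
            try (Connection c = DriverManager.getConnection(URL);
                 PreparedStatement ps = c.prepareStatement(
                     "INSERT INTO customer (id, name) VALUES (?, ?) "
                   + "ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name")) {
                ps.setLong(1, entry.getKey());
                ps.setString(2, entry.getValue());
                ps.executeUpdate();
            } catch (Exception e) { throw new CacheWriterException(e); }
        }

        @Override public void delete(Object key) {
            try (Connection c = DriverManager.getConnection(URL);
                 PreparedStatement ps = c.prepareStatement(
                     "DELETE FROM customer WHERE id = ?")) {
                ps.setLong(1, (Long) key);
                ps.executeUpdate();
            } catch (Exception e) { throw new CacheWriterException(e); }
        }
    }

    public static void main(String[] args) {
        CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("customers");
        cfg.setReadThrough(true);   // cache misses load from the database
        cfg.setWriteThrough(true);  // puts go to memory and the database together
        cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(CustomerStore.class));

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, String> customers = ignite.getOrCreateCache(cfg);
            customers.put(1L, "Alice");      // written to memory and the database
            String name = customers.get(1L); // served from memory on subsequent reads
        }
    }
}
```

The key point of the pattern is that the application keeps its existing read/write semantics: the database stays the system of record, while the IMDG cluster absorbs the read load and scales out horizontally.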

Beyond speed and scale, implementing an IMDG for existing applications also helped fulfill the three original goals of digital business:

  1. It unlocked the data for use in public APIs. Some teams developed APIs that called the IMDG as if it were the database (see the SQL sketch after this list). Others embedded IMDG nodes directly in their APIs, within Docker containers for example, to scale horizontally and lower latency at the same time. In both cases it helped open the existing systems for use in public APIs.
  2. It allowed new functionality to be delivered in days because developers could now directly access existing data without having to go through the application or worry about degrading the performance of the existing applications.
  3. It made it easy to adopt new technologies such as real-time analytics and machine/deep learning because they were already supported by the IMC platform.
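As a hedged illustration of the first point, here is what an API service querying the IMDG with standard SQL, as if it were the database, might look like, again assuming an Apache Ignite-style API. The cache name, the Person table, and the query are hypothetical:

```java
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ApiQuerySketch {
    public static void main(String[] args) {
        // Client mode embeds a lightweight node in the API process itself,
        // removing an extra network tier between the API and the data.
        IgniteConfiguration cfg = new IgniteConfiguration().setClientMode(true);

        try (Ignite ignite = Ignition.start(cfg)) {
            SqlFieldsQuery qry = new SqlFieldsQuery(
                "SELECT name FROM Person WHERE city = ?").setArgs("London");

            // Assumes the "personCache" cache already exists on the cluster.
            List<List<?>> rows = ignite.cache("personCache").query(qry).getAll();
            rows.forEach(row -> System.out.println(row.get(0)));
        }
    }
}
```

Because the IMDG speaks SQL, developers write ordinary queries against in-memory data; whether the node is embedded in the API container or reached as a separate cluster is a deployment choice, not a code change.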

Learn More About the Best Practices for Digital Transformation with In-Memory Computing

Start learning some of the best practices now by downloading Part 1 of Best Practices for Digital Transformation with In-Memory Computing. It will help you get started with building your own foundation for digital business.

There is also a webinar on the same topic.

In future blogs we will cover the different types of projects you will tackle, including:

  • Adding speed and scale to existing applications
  • Adding new applications and APIs
  • Adding real-time analytics and machine or deep learning