In-Memory Computing Best Practices Part 1: Changing the Data Foundation

How to Add Speed and Scalability to Existing Applications with In-Memory Data Grids.

If you want to add a basement to a house, or fix its foundation for future construction, you jack the house up. It's much cheaper, faster, and less disruptive than building a new house.

The same is true for applications. If you want to add speed, scalability, and flexibility to your existing applications, slide an in-memory data grid (IMDG) in between the application and the database. Don't rewrite the application or replace the database.

This approach not only solves the short-term problem of adding speed and scalability. It also builds out the new data and computing layer that companies need for future projects, in a much less disruptive way, and it pays for itself. If you want to learn how this works, watch Denis Magda's presentation on adding speed and scalability to existing applications with in-memory data grids.

Applications are being sandwiched between two major challenges, both created by what is arguably the biggest opportunity, and the biggest threat, for most companies: improving the customer experience. Amazon, eBay, Expedia, PayPal, Uber, and many other new companies have proven that customers want a more digital, personalized, and responsive experience, and that they are willing to switch companies to get it.

The first, longer-term challenge is how to transform existing IT systems to enable this digital transformation. How do you open up existing assets and turn them into consumable APIs? How do you add more intelligence behind those APIs to improve decisions and business outcomes? And how do you do all of this much faster, given that the new "digital" players deliver much faster than traditional companies?

This has created the second, shorter-term challenge. As companies started this journey, they added new Web, mobile, and other self-service channels, along with new personalization and other automation. They also started using much more data to support these decisions. All of this is pushing existing IT infrastructure and applications far beyond their performance and scalability limits.

Over the last decade, the new anytime, anywhere, personalized experience has driven query and transaction volumes up 10-1,000x. It has created 50x more data about customers, products, and interactions. And it has shrunk the response times customers expect from days or hours to seconds or less.

Many companies have addressed the performance and scalability challenges of their existing systems by scaling vertically in the short term with much more expensive hardware. But scaling each system vertically is neither cost-effective nor feasible in the long run. The growth in queries, transactions, data ingestion, and analytics is outpacing the annual performance and cost improvements in hardware, and these growth rates show no signs of slowing down.

The good news is that many companies have used in-memory data grids (IMDGs) to solve both challenges. It's a proven approach. These companies did not have to rip out and replace their existing applications and databases. Instead, they were able to take a more evolutionary path.

The first step they took was to add speed and scalability by sliding an IMDG in between their existing applications and databases. The best way to improve speed, or lower latency, is to move all of your data and computing into memory, together. The best way to scale quickly and cost-effectively is horizontally, with a shared-nothing architecture. That's exactly what IMDGs do. GridGain is the only IMDG that slides in between existing SQL-based applications and RDBMSs without requiring you to replace the SQL in the existing application or replace the database. That's because it's the only IMDG that supports SQL and ACID transactions, and the only one that provides JDBC/ODBC drivers, so an existing application can treat GridGain like a database.
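As a minimal sketch of what that looks like in practice: GridGain is built on Apache Ignite, which ships a standard JDBC thin driver. Assuming a node running locally on the default thin-client port, and an illustrative Person table, the existing application's JDBC code needs little more than a new connection URL:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class GridJdbcSketch {
    public static void main(String[] args) throws Exception {
        // Register Ignite's JDBC thin driver (GridGain is built on Apache Ignite).
        Class.forName("org.apache.ignite.IgniteJdbcThinDriver");

        // The key change to the existing application: the connection URL
        // now points at the grid instead of the RDBMS.
        try (Connection conn =
                 DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/")) {
            // Existing SQL keeps working as-is. "Person" is an illustrative table.
            try (PreparedStatement stmt =
                     conn.prepareStatement("SELECT id, name FROM Person WHERE id = ?")) {
                stmt.setLong(1, 1L);
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next())
                        System.out.println(rs.getLong("id") + ": " + rs.getString("name"));
                }
            }
        }
    }
}
```

Because the grid speaks SQL over JDBC, the code above is indistinguishable from code written against the original RDBMS; only the connection string changes.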

Now, just scaling existing systems doesn't by itself deliver a better customer experience. The experience itself, and the underlying systems that support it, need to change. Adding an IMDG not only provided a more cost-effective way to handle the load on existing systems. By moving their existing data into memory, these companies also built a new in-memory foundation for accessing and processing both existing and new data.
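On the data side, that foundation is typically wired up with read-through and write-through caching, so the grid sits in front of the database and keeps it as the system of record. Here is a minimal sketch using Ignite's CacheStoreAdapter; the PersonStore class and its stubbed-out JDBC calls are illustrative, not a real mapping:

```java
import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class ReadWriteThroughSketch {
    /** Illustrative store; a real one would issue JDBC calls to the RDBMS. */
    public static class PersonStore extends CacheStoreAdapter<Long, String> {
        @Override public String load(Long key) {
            // e.g. SELECT name FROM Person WHERE id = ?
            return "person-" + key;
        }
        @Override public void write(Cache.Entry<? extends Long, ? extends String> e) {
            // e.g. INSERT or UPDATE the row for e.getKey()
        }
        @Override public void delete(Object key) {
            // e.g. DELETE FROM Person WHERE id = ?
        }
    }

    public static void main(String[] args) {
        CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("personCache");

        // The database stays the system of record: cache misses load from it,
        // and updates are written back down through the store.
        cfg.setReadThrough(true);
        cfg.setWriteThrough(true);
        cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(PersonStore.class));

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, String> cache = ignite.getOrCreateCache(cfg);
            // Reads hit memory first; a miss falls through to the store.
            System.out.println(cache.get(1L));
        }
    }
}
```

The design choice here is that the application and the grid stay consistent with the database without the application managing two data stores itself.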

Many companies that successfully delivered a more personalized, responsive, real-time experience did it by using in-memory computing to perform analytics during their transactions and interactions with customers. They automated decisions to personalize the experience, to promote products, or to proactively address a customer issue as it occurred, before it impacted the customer's satisfaction, purchasing decision, or loyalty. They implemented what many experts call hybrid transactional/analytical processing (HTAP), or hybrid operational/analytical processing (HOAP). The IMDG became the foundation that allowed them to combine transactions and services with real-time analytics, and to deliver new HTAP applications for their digital transformation and customer experience initiatives.
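To make the HTAP pattern concrete, here is a sketch of an analytical aggregate running directly against grid data with Ignite's SqlFieldsQuery. The orderCache cache and OrderRecord type are assumptions for illustration, presumed to already exist with SQL mappings configured:

```java
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class HtapQuerySketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Assumed to exist with OrderRecord mapped to SQL.
            IgniteCache<?, ?> orders = ignite.cache("orderCache");

            // An analytical aggregate runs in memory, distributed across the
            // cluster, against the same data that transactions are updating.
            // ts is assumed to be stored as epoch milliseconds.
            SqlFieldsQuery qry = new SqlFieldsQuery(
                "SELECT customerId, SUM(total) FROM OrderRecord " +
                "WHERE ts > ? GROUP BY customerId")
                .setArgs(System.currentTimeMillis() - 3_600_000); // last hour

            List<List<?>> rows = orders.query(qry).getAll();
            rows.forEach(r -> System.out.println(r.get(0) + " -> " + r.get(1)));
        }
    }
}
```

The same cluster that serves transactional reads and writes executes this aggregation in memory, on the nodes holding the data, which is what lets analytics run during the interaction instead of hours later in a separate warehouse.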

If you want to learn more about how an IMDG works, how companies use IMDGs, and their best practices, then watch Denis Magda explain how to add speed and scalability to existing applications with IMDGs.