A few months ago, I spoke at a conference where I explained the difference between caching and an in-memory data grid. Today, having realized that many people are also looking to better understand the difference between the two major categories of in-memory computing – the In-Memory Database and the In-Memory Data Grid – I am sharing the succinct version of my thinking on this topic, thanks to a recent analyst call that helped put everything in place :)

TL;DR

Skip to conclusion to get the bottom line.

Nomenclature

Let’s clarify the naming and buzzwords first. In-Memory Database (IMDB) is a well-established category name and it is typically used unambiguously.

It is important to note that there is a new crop of traditional databases with serious In-Memory “options”. That includes MS SQL Server 2014, Oracle’s Exalytics and Exadata, and IBM DB2 with BLU. The line is blurry between these and the new pure In-Memory Databases, and for simplicity I’ll continue to call them all In-Memory Databases.

In-Memory Data Grids (IMDGs) are sometimes (but not very frequently) called In-Memory NoSQL/NewSQL Databases. Although the latter can be more accurate in some cases, I am going to use the term In-Memory Data Grid in this article, as it tends to be the more widely used one.

Note that there are also In-Memory Compute Grids and In-Memory Computing Platforms that include or augment many of the features of In-Memory Data Grids and In-Memory Databases.

Confusing, eh? It is… and for consistency – going forward we’ll just use these terms for the two main categories:

  • In-Memory Database
  • In-Memory Data Grid

Tiered Storage

It is also important to nail down what we mean by “In-Memory”. Surprisingly, there’s a lot of confusion here as well, as some vendors refer to SSDs, Flash-on-PCI, Memory Channel Storage, and, of course, DRAM as “In-Memory”.

In reality, most vendors support a Tiered Storage Model where some portion of the data is stored in DRAM (the fastest storage, but with limited capacity) and then overflows to a variety of flash or disk devices (slower, but with more capacity) – so it is rarely a DRAM-only or Flash-only product. However, it’s important to note that most products in both categories are biased towards either mostly-DRAM or mostly-flash/disk storage in their architecture.

Bottom line is that products vary greatly in what they mean by “In-Memory” but in the end they all have a significant “In-Memory” component.

Technical Differences

It’s easy to start with technical differences between the two categories.

Most In-Memory Databases are your father’s RDBMS that store data “in memory” instead of on disk. That’s practically all there is to it. They provide good SQL support with only a modest list of unsupported SQL features, ship with ODBC/JDBC drivers, and can be used in place of an existing RDBMS, often without significant changes.

In-Memory Data Grids typically lack full ANSI SQL support but instead provide MPP-based (Massively Parallel Processing) capabilities where data is spread across a large cluster of commodity servers and processed in an explicitly parallel fashion. The main access patterns are key/value access, MapReduce, various forms of HPC-like processing, and limited distributed SQL querying and indexing capabilities.
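The MPP access pattern above can be illustrated with a toy sketch in plain Java (no particular product’s API, just the general shape of the technique): the dataset is split into per-node partitions by key hash, a map task runs on every partition in parallel, and the partial results are reduced into one answer.

```java
import java.util.*;
import java.util.concurrent.*;

// Toy illustration of the MPP pattern: partition by key hash, map in
// parallel over each partition, then reduce the partial results.
public class PartitionedSum {
    // Partition key/value pairs across `nodes` partitions by key hash.
    static List<Map<String, Integer>> partition(Map<String, Integer> data, int nodes) {
        List<Map<String, Integer>> parts = new ArrayList<>();
        for (int i = 0; i < nodes; i++) parts.add(new HashMap<>());
        data.forEach((k, v) -> parts.get(Math.abs(k.hashCode()) % nodes).put(k, v));
        return parts;
    }

    // Map step runs on each partition in parallel; the reduce step sums partials.
    static int parallelSum(Map<String, Integer> data, int nodes) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(nodes);
        List<Future<Integer>> partials = new ArrayList<>();
        for (Map<String, Integer> part : partition(data, nodes)) {
            partials.add(pool.submit(() ->
                part.values().stream().mapToInt(Integer::intValue).sum()));
        }
        int total = 0;
        for (Future<Integer> f : partials) total += f.get();
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        Map<String, Integer> data = Map.of("a", 1, "b", 2, "c", 3, "d", 4);
        System.out.println(parallelSum(data, 2)); // prints 10
    }
}
```

In a real data grid the “partitions” live in the RAM of separate servers and the map step is shipped to where the data resides, but the partition-then-process shape is the same.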

It is important to note that there is a significant crossover from In-Memory Data Grids to In-Memory Databases in terms of SQL support. GridGain, for example, provides pretty serious and constantly growing support for SQL including pluggable indexing, distributed joins optimization, custom SQL functions, etc.

Speed Only vs. Speed + Scalability

One of the crucial differences between In-Memory Data Grids and In-Memory Databases lies in the ability to scale to hundreds and thousands of servers. In-Memory Data Grids are inherently capable of such scale thanks to their MPP architecture, while In-Memory Databases are explicitly unable to scale due to the fact that SQL joins, in general, cannot be performed efficiently in a distributed context.

It’s one of the dirty secrets of In-Memory Databases: one of their most useful features, SQL joins, is also their Achilles’ heel when it comes to scalability. This is the fundamental reason why most existing SQL databases (disk or memory based) are built on a vertically scalable SMP (Symmetric Multiprocessing) architecture, unlike In-Memory Data Grids, which utilize the much more horizontally scalable MPP approach.

It’s important to note that both In-Memory Data Grids and In-Memory Databases can achieve similar speed in a local, non-distributed context. In the end, they both do all processing in memory.

But only In-Memory Data Grids can natively scale to hundreds and thousands of nodes providing unprecedented scalability and unrivaled throughput.

Replace Database vs. Change Application

Apart from scalability, there is another difference that is important for use cases where In-Memory Data Grids or In-Memory Databases are tasked with speeding up existing systems or applications.

An In-Memory Data Grid always works with an existing database providing a layer of massively distributed in-memory storage and processing between the database and the application. Applications then rely on this layer for super-fast data access and processing. Most In-Memory Data Grids can seamlessly read-through and write-through from and to databases, when necessary, and generally are highly integrated with existing databases.

In exchange – developers need to make some changes to the application to take advantage of these new capabilities. The application no longer “talks” SQL only, but needs to learn how to use MPP, MapReduce or other techniques of data processing.
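The read-through/write-through integration described above can be sketched in a few lines of plain Java. This is a generic illustration of the pattern, not any product’s API; the `dbLoad`/`dbWrite` hooks stand in for real database calls.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiConsumer;
import java.util.function.Function;

// Sketch of read-through/write-through caching: the in-memory layer sits
// between the application and the database. A cache miss loads the value
// from the backing store; writes go to both the cache and the store.
public class ReadThroughCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> dbLoad;    // read-through hook
    private final BiConsumer<String, String> dbWrite; // write-through hook

    ReadThroughCache(Function<String, String> dbLoad, BiConsumer<String, String> dbWrite) {
        this.dbLoad = dbLoad;
        this.dbWrite = dbWrite;
    }

    String get(String key) {
        // On a miss, load from the database and keep the value in memory.
        return cache.computeIfAbsent(key, dbLoad);
    }

    void put(String key, String value) {
        cache.put(key, value);      // update the in-memory copy...
        dbWrite.accept(key, value); // ...and propagate to the database
    }

    public static void main(String[] args) {
        // A map stands in for the existing RDBMS in this sketch.
        Map<String, String> fakeDb = new ConcurrentHashMap<>(Map.of("k1", "v1"));
        ReadThroughCache c = new ReadThroughCache(fakeDb::get, fakeDb::put);
        System.out.println(c.get("k1")); // miss -> loaded from the "database"
        c.put("k2", "v2");               // written through to the "database"
    }
}
```

Real data grid implementations add batching, write-behind queues and transactional guarantees on top of this basic shape.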

In-Memory Databases present almost a mirror-opposite picture: they often require replacing your existing database (unless you use one of those In-Memory “options” to temporarily boost your database performance) – but will demand significantly fewer changes to the application itself, as it will continue to rely on SQL (albeit a modified dialect of it).

In the end, both approaches have their advantages and disadvantages, and the choice between them may often depend as much on organizational policies and politics as on technical merits.

Conclusion

The bottom line should be pretty clear by now.

If you are developing a green-field, brand-new system or application, the choice is pretty clear in favor of In-Memory Data Grids. You get the best of both worlds: you get to work with the existing databases in your organization where necessary, while enjoying the tremendous performance and scalability benefits of In-Memory Data Grids – which are highly integrated with those databases.

If you are, however, modernizing your existing enterprise system or application the choice comes down to this:

You will want to use an In-Memory Database if the following applies to you:

  • You can replace or upgrade your existing disk-based RDBMS
  • You cannot make changes to your applications
  • You care about speed, but don’t care as much about scalability

In other words – you boost your application’s speed by replacing or upgrading RDBMS without significantly touching the application itself.

On the other hand, you want to use an In-Memory Data Grid if the following applies to you:

  • You cannot replace your existing disk-based RDBMS
  • You can make changes to (the data access subsystem of) your application
  • You care about speed and especially about scalability, and don’t want to trade one for the other

In other words – with an In-Memory Data Grid you can boost your application’s speed and provide massive scale by tweaking the application, but without making changes to your existing database.

This can be summarized in the following table:

                         In-Memory Data Grid    In-Memory Database
  Existing Application   Changed                Unchanged
  Existing RDBMS         Unchanged              Changed or Replaced
  Speed                  Yes                    Yes
  Max. Scalability       Yes                    No


Today GridGain™ Systems (GridGain.com), provider of the leading open source In-Memory Computing (IMC) Platform, announced that Konstantin Boudnik has joined its Advisory Board. Boudnik brings 20 years of expertise in enterprise IT infrastructure management and development, and is a recognized thought leader in the open source community through his role as Vice President at the Apache Software Foundation.

“As data grows even bigger and the performance expectation becomes more demanding, the notion of scalability for technology providers is no longer a question of ‘if’ but ‘when,’” said Boudnik. “GridGain is opening the door for unprecedented innovation by removing the constraints in computing that limit its ability to evolve and keep pace with business demands.”

GridGain’s In-Memory Computing Platform is the most accessible and comprehensive of its kind, drastically accelerating computing speed and data processing scale beyond that of traditional disk-based infrastructures. In March, GridGain released its end-to-end stack under the Apache 2.0 open source license, making these performance enhancements widely available to developers of both small and large projects, whether for evaluation or full production.

“Konstantin Boudnik adds to GridGain’s leadership and vision for the future of computing, particularly as it applies to Big Data and its accelerating demand for increased computing performance and scale,” said Abe Kleinfeld, GridGain’s CEO.

Konstantin’s fluency across multiple programming languages, operating systems, and databases showcases his prowess in enterprise technology. In addition to his role as Vice President at the Apache Software Foundation, Konstantin is also a director of advanced technologies at WANdisco. He was one of the original developers of Hadoop and a co-founder of Apache Bigtop, the open source project that focuses on building community around the creation of software stacks of Hadoop-related projects. Prior to these roles, Boudnik amassed twenty years of engineering and systems architecture experience at leading organizations like Sun Microsystems, Yahoo!, Cloudera and Karmasphere.

Boudnik holds a master’s degree in Mathematics and a PhD in Computer Science from Saint Petersburg University in Russia. He has also published and maintains United States patents for several distributed systems, computer farms, and software.

We are pleased to announce that GridGain 6.1.0 has been released today. This is the first main upgrade since GridGain 6.0.0 was released in February and contains some cool new functionality and performance improvements:

Support for JDK8

With GridGain 6.1.0 you can execute JDK8 closures and functions in a distributed fashion on the grid:

try (Grid grid = GridGain.start()) {
    grid.compute().broadcast((GridRunnable)() -> 
        System.out.println("Hello World")).get();
}

Geospatial Indexes

GridGain allows you to easily query in-memory data with SQL using in-memory indexes. Now you can extend SQL to geospatial queries. For example, the query below will find all points on the map within a certain square region:

// JTS (Java Topology Suite) geometry factory used to build the query argument
GeometryFactory factory = new GeometryFactory();

Polygon square = factory.createPolygon(new Coordinate[] {
    new Coordinate(0, 0),
    new Coordinate(0, 100),
    new Coordinate(100, 100),
    new Coordinate(100, 0),
    new Coordinate(0, 0)
});

cache.queries().
    createSqlQuery(MapPoint.class, "select * from MapPoint where location && ?").
        queryArguments(square).
        execute().get();

Near Cache in Atomic Mode

Prior to 6.1.0, GridGain supported near caches only in transactional mode. Starting with 6.1.0, near cache support was added to atomic mode as well.

A near cache allows for client-side caching (vs. traditional server-side caching) and yields significant performance improvements in some cases.
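The near-cache idea can be sketched as follows (a generic illustration, not GridGain’s actual API): a small client-side map sits in front of the server-side cache, so repeated reads of the same key avoid a network hop. A map stands in for the remote grid here.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a near cache: a local, client-side map in front of a remote
// server-side cache. Reads are served locally when possible.
public class NearCache {
    private final Map<String, String> near = new ConcurrentHashMap<>(); // client side
    private final Map<String, String> server;                           // stands in for the grid

    NearCache(Map<String, String> server) { this.server = server; }

    String get(String key) {
        // Serve from the near cache when possible; fall back to the server.
        return near.computeIfAbsent(key, server::get);
    }

    void put(String key, String value) {
        server.put(key, value);
        // A real near cache would also invalidate copies held by other clients.
        near.put(key, value);
    }

    public static void main(String[] args) {
        Map<String, String> grid = new ConcurrentHashMap<>(Map.of("k", "v"));
        NearCache c = new NearCache(grid);
        System.out.println(c.get("k")); // first read fetches from the "server"
        System.out.println(c.get("k")); // second read is served locally
    }
}
```

The hard part a product has to solve, and this sketch does not, is keeping many clients’ near caches consistent when the server-side data changes.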

Fair Affinity Functions

Many know that Consistent Hashing provides a consistent distribution of data within a cluster that is resilient to server failures, but not many know that consistent hashing is not very fair. The discrepancies in distribution can be up to 20%, which means that some servers will end up with 20% more data than others. This may create uneven load distribution when running cluster-enabled computations or queries.

GridGain 6.1 added two more affinity functions in addition to consistent hashing: Rendezvous and Fair.

Rendezvous affinity function works faster than consistent hashing and for smaller topologies (under 10 servers) provides a pretty fair distribution. One of the nice features here is that cache key affinity survives full cluster restarts. This means that you can back up data to disk and then reload it on restart knowing that all keys are still mapped to the same node.

The Fair affinity function provides absolutely fair cache key distribution, with all grid nodes holding an equal number of keys at all times. However, the Fair affinity function may change key-to-node assignments upon full cluster restarts.
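The general idea behind a rendezvous affinity function – rendezvous or highest-random-weight (HRW) hashing – can be sketched in plain Java. This is an illustrative toy, not GridGain’s implementation: each key is assigned to the node with the highest combined hash(key, node) score, so the mapping depends only on the set of node ids, not on their order, which is why it survives full cluster restarts.

```java
import java.util.List;
import java.util.Objects;

// Sketch of rendezvous (highest-random-weight) hashing: a key maps to the
// node whose hash(key, node) score is largest. The assignment is stable
// for a fixed node set, regardless of node ordering.
public class RendezvousHash {
    static String nodeFor(String key, List<String> nodes) {
        String best = null;
        long bestScore = Long.MIN_VALUE;
        for (String node : nodes) {
            // A real implementation would use a stronger hash than hashCode().
            long score = mix(Objects.hash(key, node));
            if (score > bestScore) { bestScore = score; best = node; }
        }
        return best;
    }

    // Cheap 64-bit mixer to spread the combined hash values out.
    static long mix(long h) {
        h ^= h >>> 33;
        h *= 0xff51afd7ed558ccdL;
        h ^= h >>> 33;
        return h;
    }

    public static void main(String[] args) {
        List<String> nodes = List.of("node-a", "node-b", "node-c");
        // Same key, same node set -> same node, in any iteration order.
        System.out.println(nodeFor("order:42", nodes));
    }
}
```

When a node leaves, only the keys that were mapped to it move; all other keys keep their assignment, which is the failure-resilience property shared with consistent hashing.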

Other Enhancements

Other fixes and enhancements involve improvements to multicast protocol for discovery and significant performance improvements for distributed cache queues.

You can download GridGain 6.1 here.

GridGain™ Systems (GridGain.com), provider of the leading open source In-Memory Computing (IMC) Platform, today announced that Gartner has recognized it in its “Cool Vendors in In-Memory Computing Technologies, 2014” report.

“We believe that the increased attention given to In-Memory-Computing signifies the growing role that IMC is playing across all industries,” said Abe Kleinfeld, CEO, GridGain. “Companies today need to consider an in-memory computing architecture to address the hyper-scale demands of Big Data, Internet of Things (IoT) and Cloud Computing.”


GridGain Systems (GridGain.com), provider of the leading open source In-Memory Computing Platform, today announced the appointment of Max Herrmann as Executive Vice President of Marketing. Herrmann comes to GridGain from Microsoft’s cloud and enterprise marketing team and will lend his expertise to growing awareness and adoption of in-memory computing across the enterprise, cloud computing providers and the developer community.

“GridGain is poised to deliver the next generation of computing infrastructure that addresses the unprecedented demand for performance and scalability driven by big data, cloud, mobility and social networks – and as all of these technologies become more entwined with the Internet of Things,” said Herrmann. “I’m thrilled to be on the cutting-edge of this movement, which I believe is set to change computing as we know it today”.

Last month, GridGain released its In-Memory Computing Platform to open source through an Apache 2.0 license, making it the most comprehensive and accessible solution of its kind. The company has gained tremendous momentum within the last year, securing $10 million in series B funding in July, and appointing Abe Kleinfeld as its CEO in December.

“We are foremost impressed by Max’s ability to develop, educate and grow markets for emerging technologies,” said GridGain’s CEO, Abe Kleinfeld. “Through his expertise, we are confident that GridGain will accelerate adoption of in-memory computing and speed innovation across all sectors of business”.

Herrmann spent several years at Microsoft where he guided product marketing efforts for Windows Server datacenter and cloud infrastructure software. Prior to Microsoft, Herrmann was the Vice President of Marketing at Calista Technologies, where he created the company’s product and go-to-market strategy through their eventual acquisition by Microsoft. In addition, he provided marketing leadership to New Moon Systems through two acquisitions by Tarantella and Sun Microsystems in 2005, where he drove product management and product marketing for multiple desktop virtualization offerings.

Herrmann holds a master’s degree in Aerospace Engineering from the University of Stuttgart, and an MBA from the Technical University of Munich.

After five days (and eleven meetings) with new customers in Europe, Russia, and the Middle East, I think the time is right for another refinement of in-memory computing’s definition. To me, it is clear that our industry is lagging when it comes to explaining in-memory computing to potential customers and defining what in-memory computing is really about. We struggle to come up with a simple, understandable definition of what in-memory computing is all about, what problems it solves, and what uses are a good fit for the technology.

In-Memory Computing: What Is It?

In-memory computing means using a type of middleware software that allows one to store data in RAM, across a cluster of computers, and process it in parallel. Consider operational datasets typically stored in a centralized database which you can now store in “connected” RAM across multiple computers. RAM, roughly, is 5,000 times faster than traditional spinning disk. Add to the mix native support for parallel processing, and things get very fast. Really, really, fast.

RAM storage and parallel distributed processing are two fundamental pillars of in-memory computing. While in-memory data storage is expected of in-memory technology, the parallelization and distribution of data processing, which is an integral part of in-memory computing, calls for an explanation.

Parallel distributed processing capabilities of in-memory computing are… a technical necessity. Consider this: a single modern computer can hardly have enough RAM to hold a significant dataset. In fact, a typical x86 server today (mid-2014) would have somewhere between 32GB and 256GB of RAM. Although this could be a significant amount of memory for a single computer, it’s not enough to store many of today’s operational datasets, which easily measure in terabytes.

To overcome this problem, in-memory computing software is designed from the ground up to store data in a distributed fashion, where the entire dataset is divided across individual computers’ memory, each storing only a portion of the overall dataset. Once data is partitioned, parallel distributed processing becomes a technical necessity simply because of how the data is stored.
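A back-of-the-envelope calculation makes the point concrete. The numbers below are illustrative assumptions (usable RAM per node, replica count), not vendor figures:

```java
// Why partitioning is a technical necessity: a terabyte-scale dataset
// does not fit in one server's RAM, so it must be spread across a cluster.
public class ClusterSizing {
    // Nodes needed to hold `datasetGb` of data with `ramGb` usable RAM per
    // node, keeping `replicas` copies of every partition for fault tolerance.
    static int nodesNeeded(double datasetGb, double ramGb, int replicas) {
        return (int) Math.ceil(datasetGb * replicas / ramGb);
    }

    public static void main(String[] args) {
        // A 2 TB dataset, 128 GB usable per server, 2 copies of the data:
        System.out.println(nodesNeeded(2048, 128, 2)); // prints 32
    }
}
```

Even under generous assumptions the answer is a cluster, not a single box – and once the data lives on 32 machines, processing it in parallel on those machines is the only option that avoids shipping terabytes over the network.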

Developing technology that enables in-memory computing and parallel processing is highly challenging, which is why there are fewer than 10 companies in the world that have mastered the ability to produce commercially available in-memory computing middleware. End users of in-memory computing, however, can now enjoy dramatic performance benefits from this “technical necessity”.

In-Memory Computing: What Is It Good For?

Let’s get this out of the way first: if one wants a 2-3x performance or scalability improvement, flash storage (SSD, Flash on PCI-E, Memory Channel Storage, etc.) can do the job. It is relatively cheap and can provide that kind of modest performance boost.

To see, however, what a difference in-memory computing can make, consider this real-life example…

Last year GridGain won an open tender for one of the largest banks in the world. The tender was for a risk analytics system to provide real-time analysis of risk for the bank’s trading desk (a common use case for in-memory computing in the financial industry). In this tender, GridGain software demonstrated one billion (!) business transactions per second on 10 commodity servers with a total of 1TB of RAM. The total cost of these 10 commodity servers? Less than $25K.

Now, read the previous paragraph again: one billion financial transactions per second on $25K worth of hardware. That is the in-memory computing difference – not just 2-3x faster; more than 100x faster than theoretically possible even with the most expensive flash-based storage available on today’s market (forget about spinning disks). And 1TB of flash-based storage alone would cost 10x the entire hardware setup mentioned.

Importantly, that performance translates directly into clear business value:

  • you can use less hardware to support the required performance and throughput SLAs, get better data center consolidation, and significantly reduce capital costs, as well as operational and infrastructure overhead, and
  • you can also significantly extend the lifetime of your existing hardware and software, improving its ROI by making what you already have run faster for longer.

And that’s what makes in-memory computing such a hot topic these days: the demand to process ever growing datasets in real-time can now be fulfilled with the extraordinary performance and scale of in-memory computing, with economics so compelling that the business case becomes clear and obvious.

In-Memory Computing: What Are The Best Use Cases?

I can only speak for GridGain here but our user base is big enough to be statistically significant. GridGain has production customers in a wide variety of industries:

  • Investment banking
  • Insurance claim processing & modeling
  • Real-time ad platforms
  • Real-time sentiment analysis
  • Merchant platform for online games
  • Hyper-local advertising
  • Geospatial/GIS processing
  • Medical imaging processing
  • Natural language processing & cognitive computing
  • Real-time machine learning
  • Complex event processing of streaming sensor data

And we’re also seeing our solutions deployed for more mundane use cases, like speeding the response time of a student registration system from 45 seconds to under a half-second.

Looking at this list, it becomes pretty obvious that the best use cases are defined not by a specific industry but by the underlying technical need, i.e. the need for the best possible, uncompromised performance and scalability for a given task.

In many of these real-life deployments in-memory computing was an enabling technology, the technology that made these particular systems possible to consider and ultimately possible to implement.

The bottom line is that in-memory computing is beginning to unleash a wave of innovation that’s not built on Big Data per se, but on Big Ideas, ideas that are suddenly attainable. It’s blowing up the costly economics of traditional computing that frankly can’t keep up with either the growth of information or the scale of demand.

As the Internet expands from connecting people to connecting things, devices like refrigerators, thermostats, light bulbs, jet engines and even heart rate monitors are producing streams of information that will not just inform us, but also protect us, make us healthier and help us live richer lives. We’ll begin to enjoy conveniences and experiences that only existed in science fiction novels. The technology to support this transformation exists today – and it’s called in-memory computing.

Gordon E. Moore’s famously predicted tech explosion was prophetic, but it may have hit a snag. While the number of transistors on integrated circuits has doubled approximately every two years since his 1965 paper, the ability to process and transact on data hasn’t. We’re now ingesting data faster than we can make sense of it, leaving computing at an impasse. Without a new approach, the innovation promised by the combination of big data and internet scale may be like the flying cars we thought we’d see by 2014. Fortunately, this is not the case, as in-memory computing offers a way to bridge this impasse.

Keeping up with Moore’s law requires computing orders of magnitude faster than traditional methods allow, and at a reasonable cost. In-memory computing achieves just this. It’s already well established that in-memory computing is much, much faster and more scalable than traditional methods. Furthermore, the dropping cost of memory has made it economical.

Despite this, there’s a lingering misperception that in-memory computing resides in the realm of supercomputers. Most people don’t realize just how fast and affordable it really is. To offer some perspective, GridGain recently demonstrated one billion transactions per second using our In-Memory Data Grid on just $25K worth of commodity hardware. In short, it’s now economical for organizations of all sizes.

Opening the doors to mass adoption through open source in-memory technology

In-memory computing is definitely entering the mainstream (http://www.gartner.com/newsroom). However, achieving mass innovation with any technology requires mass adoption. One of the best ways to accomplish this is by offering technology through an open source license, enabling users to begin working with it without necessarily committing to it. This gives developers the flexibility to use the technology in new and interesting ways, and to address very specific challenges.

With GridGain offering a complete In-Memory Computing Platform under an Apache 2.0 license, the barriers to adoption are removed. The high performance computing capabilities of in-memory technology are now freely available, meaning that developers have full freedom to experiment with it, test its capabilities and try out new ideas.

Unifying the cloud, big data and real-time analytics to accelerate innovation

Now that developers have access to computing power commensurate with their creativity it’ll be exciting to see what they come up with. While we can’t predict the future, one thing is for certain — the new level of computing power afforded by in-memory technology will enable developers to create a new class of applications that combine the cloud, Big Data and real-time analytics. Once you can do that, the genie is out of the bottle.

World’s fastest, most scalable In-Memory Computing Platform now available under Apache 2.0 license

FOSTER CITY, Calif., March 3, 2014 /PRNewswire/ — Today GridGain (www.gridgain.org) officially released its industry leading In-Memory Computing Platform through an Apache 2.0 open source license, offering the world access to its technology for a broad range of real-time data processing applications. GridGain’s open source software provides immediate, unhindered freedom to develop with the most mature, complete and tested in-memory computing platform on the market, enabling computation and transactions orders of magnitude faster than traditional technologies allow.

“The promised advances of big data combined with internet scale simply can’t happen without a transformative change in computing performance,” said Abe Kleinfeld, CEO of GridGain. “In-memory computing is enabling this change by offering a major leap forward in computing power, opening doors to a new era of innovation.”

In-memory computing on commodity hardware is no longer a dream. The falling cost of RAM has made in-memory computing broadly accessible to organizations of all sizes. In a recent customer engagement, GridGain demonstrated one billion financial transactions per second using its In-Memory Data Grid software on just $25,000 of commodity hardware. According to GridGain, a performance increase of this magnitude will allow organizations to achieve goals they would not have previously considered pursuing.

“Organizations that do not consider adopting in-memory application infrastructure technologies risk being out-innovated by competitors that are early mainstream users of these capabilities,” said Massimo Pezzini, Gartner Press Release, Gartner Says In-Memory Computing Is Racing Towards Mainstream Adoption, April 3, 2013, http://www.gartner.com/newsroom/id/2405315.

GridGain’s In-Memory Computing Platform is used in a broad range of applications around the world including:

  • Financial trading systems
  • Online gaming
  • Bioinformatics
  • Hyperlocal advertising
  • Cognitive computing
  • Geospatial analysis

Developers can download GridGain’s In-Memory Computing Platform at www.gridgain.org.

About GridGain™

GridGain’s complete In-Memory Computing Platform enables organizations to conquer challenges that traditional technology can’t approach. While most organizations now ingest infinitely more data than they can possibly make sense of, GridGain’s customers leverage a new level of real-time computing power that allows them to easily innovate ahead of the accelerating pace of business. Built from the ground up, GridGain’s product line delivers all the high performance benefits of in-memory computing in a simple, intuitive package. From high performance computing, streaming and data grid to an industry first in-memory Hadoop accelerator, GridGain provides a complete end-to-end stack for low-latency, high performance computing for each and every category of payload and data processing requirements. Fortune 500 companies, top government agencies and innovative mobile and web companies use GridGain to achieve unprecedented performance and business insight. GridGain is headquartered in Foster City, California. Learn more at http://www.gridgain.com.
