A few months ago, I spoke at a conference where I explained the difference between caching and an in-memory data grid. Today, having realized that many people are also looking to better understand the difference between two major categories in in-memory computing: In-Memory Database and In-Memory Data Grid, I am sharing the succinct version of my thinking on this topic – thanks to a recent analyst call that helped put everything in place :)
Skip to conclusion to get the bottom line.
Let’s clarify the naming and buzzwords first. In-Memory Database (IMDB) is a well-established category name and it is typically used unambiguously.
It is important to note that there is a new crop of traditional databases with serious In-Memory “options”. That includes MS SQL 2014, Oracle’s Exalytics and Exadata, and IBM DB2 with BLU offerings. The line is blurry between these and the new pure In-Memory Databases, and for simplicity I’ll continue to call them In-Memory Databases.
In-Memory Data Grids (IMDGs) are sometimes (but not very frequently) called In-Memory NoSQL/NewSQL Databases. Although the latter can be more accurate in some cases, I am going to use the In-Memory Data Grid term in this article, as it tends to be the more widely used term.
Note that there are also In-Memory Compute Grids and In-Memory Computing Platforms that include or augment many of the features of In-Memory Data Grids and In-Memory Databases.
Confusing, eh? It is… and for consistency – going forward we’ll just use these terms for the two main categories:
- In-Memory Database
- In-Memory Data Grid
It is also important to nail down what we mean by “In-Memory”. Surprisingly, there’s a lot of confusion here as well, as some vendors refer to SSDs, Flash-on-PCI, Memory Channel Storage, and, of course, DRAM as “In-Memory”.
In reality, most vendors support a Tiered Storage Model where some portion of the data is stored in DRAM (the fastest storage but with limited capacity) and then overflows to a variety of flash or disk devices (slower but with more capacity) – so it is rarely a DRAM-only or Flash-only product. However, it’s important to note that most products in both categories are architecturally biased towards either mostly-DRAM or mostly-flash/disk storage.
Bottom line is that products vary greatly in what they mean by “In-Memory” but in the end they all have a significant “In-Memory” component.
It’s easy to start with technical differences between the two categories.
Most In-Memory Databases are your father’s RDBMS that store data “in memory” instead of on disk. That’s practically all there is to it. They provide good SQL support with only a modest list of unsupported SQL features, ship with ODBC/JDBC drivers, and can be used in place of existing RDBMSs, often without significant changes.
In-Memory Data Grids typically lack full ANSI SQL support but instead provide MPP-based (Massively Parallel Processing) capabilities where data is spread across a large cluster of commodity servers and processed in explicitly parallel fashion. The main access patterns are key/value access, MapReduce, various forms of HPC-like processing, and limited distributed SQL querying and indexing capabilities.
It is important to note that there is a significant crossover from In-Memory Data Grids to In-Memory Databases in terms of SQL support. GridGain, for example, provides pretty serious and constantly growing support for SQL, including pluggable indexing, distributed join optimization, custom SQL functions, etc.
Speed Only vs. Speed + Scalability
One of the crucial differences between In-Memory Data Grids and In-Memory Databases lies in the ability to scale to hundreds and thousands of servers. In-Memory Data Grids have an inherent capability for such scale thanks to their MPP architecture, while In-Memory Databases are explicitly unable to scale because SQL joins, in general, cannot be performed efficiently in a distributed context.
It’s one of the dirty secrets of In-Memory Databases: one of their most useful features, SQL joins, is also their Achilles’ heel when it comes to scalability. This is the fundamental reason why most existing SQL databases (disk- or memory-based) are built on a vertically scalable SMP (Symmetric Multiprocessing) architecture, unlike In-Memory Data Grids, which utilize the much more horizontally scalable MPP approach.
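To make the join problem concrete, here is a toy Python sketch (not any vendor’s implementation) of why a distributed join forces data movement: both tables must be re-shuffled on the join key over the network before any node can join locally, and that shuffle is what erodes scalability.

```python
# Illustrative sketch: two tables are hash-partitioned across nodes, so
# matching rows for a join may live on different nodes and must be
# re-shuffled on the join key before any node can join locally.

NODES = 4

def partition_of(key, nodes=NODES):
    """Hash-partitioning: each key deterministically maps to one node."""
    return hash(key) % nodes

def shuffle_on(rows, join_key, nodes=NODES):
    """Re-distribute rows so rows sharing a join key land on one node."""
    buckets = [[] for _ in range(nodes)]
    for row in rows:
        buckets[partition_of(row[join_key], nodes)].append(row)
    return buckets

orders = [{"order_id": i, "cust_id": i % 3} for i in range(6)]
customers = [{"cust_id": c, "name": f"c{c}"} for c in range(3)]

# Both sides must be shuffled on cust_id; the network traffic this causes
# is what limits the scalability of distributed SQL joins.
order_buckets = shuffle_on(orders, "cust_id")
customer_buckets = shuffle_on(customers, "cust_id")

joined = []
for node in range(NODES):  # each node joins only its local buckets
    names = {c["cust_id"]: c["name"] for c in customer_buckets[node]}
    for o in order_buckets[node]:
        joined.append((o["order_id"], names[o["cust_id"]]))
```

Key/value lookups, by contrast, touch exactly one node each; it is only the join that forces this all-to-all exchange.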
It’s important to note that both In-Memory Data Grids and In-Memory Databases can achieve similar speed in a local, non-distributed context. In the end, they both do all processing in memory.
But only In-Memory Data Grids can natively scale to hundreds and thousands of nodes providing unprecedented scalability and unrivaled throughput.
Replace Database vs. Change Application
Apart from scalability, there is another difference that is important for use cases where In-Memory Data Grids or In-Memory Databases are tasked with speeding up existing systems or applications.
An In-Memory Data Grid always works with an existing database providing a layer of massively distributed in-memory storage and processing between the database and the application. Applications then rely on this layer for super-fast data access and processing. Most In-Memory Data Grids can seamlessly read-through and write-through from and to databases, when necessary, and generally are highly integrated with existing databases.
In exchange – developers need to make some changes to the application to take advantage of these new capabilities. The application no longer “talks” SQL only, but needs to learn how to use MPP, MapReduce or other techniques of data processing.
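As a hedged illustration of what such an application change can look like (all names here are hypothetical, not a real data-grid API), here is the MapReduce-style equivalent of a simple SQL aggregate:

```python
# Hypothetical sketch: instead of sending one SQL aggregate to a single
# server, the application submits a map step that runs against each node's
# local partition and a reduce step that combines the partial results.

partitions = [  # trades as they might be spread across three grid nodes
    [{"sym": "A", "amount": 10}, {"sym": "B", "amount": 20}],
    [{"sym": "A", "amount": 5}],
    [{"sym": "B", "amount": 7}, {"sym": "A", "amount": 3}],
]

def map_partial_sums(partition):
    """Runs on each node against local data only -- no data movement."""
    sums = {}
    for t in partition:
        sums[t["sym"]] = sums.get(t["sym"], 0) + t["amount"]
    return sums

def reduce_sums(partials):
    """Combines the small per-node results on the caller."""
    total = {}
    for p in partials:
        for sym, amt in p.items():
            total[sym] = total.get(sym, 0) + amt
    return total

# Equivalent of: SELECT sym, SUM(amount) FROM trades GROUP BY sym
totals = reduce_sums(map(map_partial_sums, partitions))
```

Only the small per-partition dictionaries travel over the network; the trades themselves never leave their nodes.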
In-Memory Databases present almost a mirror-opposite picture: they often require replacing your existing database (unless you use one of those In-Memory “options” to temporarily boost your database performance) – but they demand significantly fewer changes to the application itself, as it will continue to rely on SQL (albeit a modified dialect of it).
In the end, both approaches have their advantages and disadvantages, and the choice often depends as much on organizational policies and politics as on technical merits.
The bottom line should be pretty clear by now.
If you are developing a green-field, brand-new system or application, the choice is pretty clear in favor of In-Memory Data Grids. You get the best of both worlds: you get to work with the existing databases in your organization where necessary, and you enjoy the tremendous performance and scalability benefits of In-Memory Data Grids – which are highly integrated with those databases.
If you are, however, modernizing your existing enterprise system or application the choice comes down to this:
You will want to use an In-Memory Database if the following applies to you:
- You can replace or upgrade your existing disk-based RDBMS
- You cannot make changes to your applications
- You care about speed, but don’t care as much about scalability
In other words – you boost your application’s speed by replacing or upgrading RDBMS without significantly touching the application itself.
On the other hand, you want to use an In-Memory Data Grid if the following applies to you:
- You cannot replace your existing disk-based RDBMS
- You can make changes to (the data access subsystem of) your application
- You care about speed and especially about scalability, and don’t want to trade one for the other
In other words – with an In-Memory Data Grid you can boost your application’s speed and provide massive scale by tweaking the application, but without making changes to your existing database.
This can be summarized in the following table:
| | In-Memory Data Grid | In-Memory Database |
| --- | --- | --- |
| Existing RDBMS | Unchanged | Changed or Replaced |
After five days (and eleven meetings) with new customers in Europe, Russia, and the Middle East, I think the time is right for another refinement of in-memory computing’s definition. To me, it is clear that our industry is lagging when it comes to explaining in-memory computing to potential customers and defining what in-memory computing is really about. We struggle to come up with a simple, understandable definition of what in-memory computing is all about, what problems it solves, and what uses are a good fit for the technology.
In-Memory Computing: What Is It?
In-memory computing means using a type of middleware software that allows one to store data in RAM, across a cluster of computers, and process it in parallel. Consider operational datasets typically stored in a centralized database which you can now store in “connected” RAM across multiple computers. RAM, roughly, is 5,000 times faster than traditional spinning disk. Add to the mix native support for parallel processing, and things get very fast. Really, really, fast.
RAM storage and parallel distributed processing are two fundamental pillars of in-memory computing. While in-memory data storage is expected of in-memory technology, the parallelization and distribution of data processing, which is an integral part of in-memory computing, calls for an explanation.
Parallel distributed processing capabilities of in-memory computing are… a technical necessity. Consider this: a single modern computer can hardly have enough RAM to hold a significant dataset. In fact, a typical x86 server today (mid-2014) would have somewhere between 32GB and 256GB of RAM. Although this could be a significant amount of memory for a single computer, that’s not enough to store many of today’s operational datasets that easily measure in terabytes.
To overcome this problem in-memory computing software is designed from the ground up to store data in a distributed fashion, where the entire dataset is divided into individual computers’ memory, each storing only a portion of the overall dataset. Once data is partitioned – parallel distributed processing becomes a technical necessity simply because data is stored this way.
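The partitioning idea described above can be sketched in a few lines of Python; the class and method names are illustrative only, not any product’s API:

```python
# A minimal sketch of data partitioning: the dataset is divided across
# nodes by hashing each key, so every node holds (and later processes)
# only its own slice of the overall dataset.

class PartitionedStore:
    def __init__(self, node_count):
        self.nodes = [{} for _ in range(node_count)]  # each dict = one node's RAM

    def _node_for(self, key):
        return hash(key) % len(self.nodes)

    def put(self, key, value):
        self.nodes[self._node_for(key)][key] = value

    def get(self, key):
        return self.nodes[self._node_for(key)].get(key)

store = PartitionedStore(node_count=4)
for i in range(1000):
    store.put(f"key-{i}", i)

# No single node holds the whole dataset; each holds roughly a quarter.
sizes = [len(n) for n in store.nodes]
```

Once data lives in slices like this, any whole-dataset operation must by construction run on all nodes in parallel, which is exactly the “technical necessity” described above.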
Developing technology that enables in-memory computing and parallel processing is highly challenging, which is why there are literally fewer than 10 companies in the world that have mastered the ability to produce commercially available in-memory computing middleware. End users of in-memory computing, however, now enjoy dramatic performance benefits from this “technical necessity”.
In-Memory Computing: What Is It Good For?
Let’s get this out of the way first: if one wants a 2-3x performance or scalability improvement, flash storage (SSD, Flash on PCI-E, Memory Channel Storage, etc.) can do the job. It is relatively cheap and can provide that kind of modest performance boost.
To see, however, what a difference in-memory computing can make, consider this real-life example…
Last year GridGain won an open tender for one of the largest banks in the world. The tender was for a risk analytics system to provide real-time analysis of risk for the bank’s trading desk (a common use case for in-memory computing in the financial industry). In this tender GridGain software demonstrated one billion (!) business transactions per second on 10 commodity servers with a total of 1TB of RAM. The total cost of these 10 commodity servers? Less than $25K.
Now, read the previous paragraph again: one billion financial transactions per second on $25K worth of hardware. That is the in-memory computing difference — not just 2-3x faster; more than 100x faster than theoretically possible even with the most expensive flash-based storage available on today’s market (forget about spinning disks). And 1TB of flash-based storage alone would cost 10x the price of the entire hardware setup mentioned.
Importantly, that performance translates directly into clear business value:
- you can use less hardware to support the required performance and throughput SLAs, get better data center consolidation, and significantly reduce capital costs, as well as operational and infrastructure overhead, and
- you can also significantly extend the lifetime of your existing hardware and software by getting increased performance out of it, improving its ROI by using what you already have for longer and making it go faster.
And that’s what makes in-memory computing such a hot topic these days: the demand to process ever growing datasets in real-time can now be fulfilled with the extraordinary performance and scale of in-memory computing, with economics so compelling that the business case becomes clear and obvious.
In-Memory Computing: What Are The Best Use Cases?
I can only speak for GridGain here but our user base is big enough to be statistically significant. GridGain has production customers in a wide variety of industries:
- Investment banking
- Insurance claim processing & modeling
- Real-time ad platforms
- Real-time sentiment analysis
- Merchant platform for online games
- Hyper-local advertising
- Geospatial/GIS processing
- Medical imaging processing
- Natural language processing & cognitive computing
- Real-time machine learning
- Complex event processing of streaming sensor data
And we’re also seeing our solutions deployed for more mundane use cases, like speeding the response time of a student registration system from 45 seconds to under a half-second.
Looking at this list, it becomes pretty obvious that the best use cases are defined not by a specific industry but by an underlying technical need: the need for uncompromised performance and scalability for a given task.
In many of these real-life deployments in-memory computing was an enabling technology, the technology that made these particular systems possible to consider and ultimately possible to implement.
The bottom line is that in-memory computing is beginning to unleash a wave of innovation that’s not built on Big Data per se, but on Big Ideas, ideas that are suddenly attainable. It’s blowing up the costly economics of traditional computing that frankly can’t keep up with either the growth of information or the scale of demand.
As the Internet expands from connecting people to connecting things, devices like refrigerators, thermostats, light bulbs, jet engines and even heart rate monitors are producing streams of information that will not just inform us, but also protect us, make us healthier and help us live richer lives. We’ll begin to enjoy conveniences and experiences that only existed in science fiction novels. The technology to support this transformation exists today – and it’s called in-memory computing.
World’s fastest, most scalable In-Memory Computing Platform now available under Apache 2.0 license
FOSTER CITY, Calif., March 3, 2014 /PRNewswire/ — Today GridGain (www.gridgain.org) officially released its industry leading In-Memory Computing Platform through an Apache 2.0 open source license, offering the world access to its technology for a broad range of real-time data processing applications. GridGain’s open source software provides immediate, unhindered freedom to develop with the most mature, complete and tested in-memory computing platform on the market, enabling computation and transactions orders of magnitude faster than traditional technologies allow.
“The promised advances of big data combined with internet scale simply can’t happen without a transformative change in computing performance,” said Abe Kleinfeld, CEO of GridGain. “In-memory computing is enabling this change by offering a major leap forward in computing power, opening doors to a new era of innovation.”
In-memory computing on commodity hardware is no longer a dream. The falling cost of RAM has made in-memory computing broadly accessible to organizations of all sizes. In a recent customer engagement, GridGain demonstrated one billion financial transactions per second using its In-Memory Data Grid software on just $25,000 of commodity hardware. According to GridGain, a performance increase of this magnitude will allow organizations to achieve goals they would not have previously considered pursuing.
“Organizations that do not consider adopting in-memory application infrastructure technologies risk being out-innovated by competitors that are early mainstream users of these capabilities,” said Massimo Pezzini in the Gartner press release “Gartner Says In-Memory Computing Is Racing Towards Mainstream Adoption,” April 3, 2013, http://www.gartner.com/newsroom/id/2405315.
GridGain’s In-Memory Computing Platform is used in a broad range of applications around the world including:
- Financial trading systems
- Online gaming
- Hyperlocal advertising
- Cognitive computing
- Geospatial analysis
Developers can download GridGain’s In-Memory Computing Platform at www.gridgain.org.
GridGain’s complete In-Memory Computing Platform enables organizations to conquer challenges that traditional technology can’t approach. While most organizations now ingest infinitely more data than they can possibly make sense of, GridGain’s customers leverage a new level of real-time computing power that allows them to easily innovate ahead of the accelerating pace of business. Built from the ground up, GridGain’s product line delivers all the high performance benefits of in-memory computing in a simple, intuitive package. From high performance computing, streaming and data grid to an industry first in-memory Hadoop accelerator, GridGain provides a complete end-to-end stack for low-latency, high performance computing for each and every category of payload and data processing requirements. Fortune 500 companies, top government agencies and innovative mobile and web companies use GridGain to achieve unprecedented performance and business insight. GridGain is headquartered in Foster City, California. Learn more at http://www.gridgain.com.
I would like to clarify definitions for the following technologies:
- In-Memory Distributed Cache
- In-Memory Data Grid
These terms are, surprisingly, often used interchangeably and yet technically and historically they represent very different products and serve different, sometimes very different, use cases.
It’s also important to note that there are no specifications or industry standards for what a cache or data grid should be (unlike Java application servers and JEE, for example). There was, and still is, an attempt to standardize caching via JSR 107 and JSR 347, but they have been years in the making (almost a decade for JSR 107) and both are hopelessly outdated by now (I’m on the expert group for JSR 107).
Tricycle vs. Motorcycle
First of all, let me clarify that I am discussing caches and data grids in the context of in-memory, distributed architectures. Traditional disk-based databases and non-distributed in-memory caches or databases are out of scope for this article.
Chronologically, caches and data grids were developed in that order from simple caching to more complex data grids. The first distributed caches appeared in the late 1990s, and data grids emerged around 2003-2005.
Both of these technologies have enjoyed a significant boost in interest in the last couple of years thanks to the explosive growth of in-memory computing in general, fueled by a 30% YoY price reduction for DRAM and cheaper flash storage.
Despite the fact that I believe that distributed caching is rapidly going away, I still think it’s important to place it in its proper historical and technical context along with data grids and databases.
In-Memory Distributed Caching
The primary use case for caching is to keep frequently accessed data in process memory to avoid constantly fetching this data from disk, which leads to the High Availability (HA) of that data to the application running in that process space (hence, “in-memory” caching).
Most of the caches were built as distributed in-memory key/value stores that supported a simple set of ‘put’ and ‘get’ operations and optionally some sort of read-through and write-through behavior for writing and reading values to and from underlying disk-based storage such as an RDBMS. Depending on the product, additional features like ACID transactions, eviction policies, replication vs. partitioning, active backups, etc. also became available as the products matured.
These fundamental data management capabilities of distributed caches formed the foundation for the technologies that came later and were built on top of them such as In-Memory Data Grids.
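As a minimal sketch of the put/get plus read-through/write-through behavior described above (assumed names, not any product’s API):

```python
# An illustrative read-through/write-through cache in miniature:
# 'get' falls through to the backing store on a miss,
# 'put' writes both the in-memory tier and the store.

class ReadWriteThroughCache:
    def __init__(self, backing_store):
        self.memory = {}            # the in-memory tier
        self.store = backing_store  # stands in for an underlying RDBMS

    def get(self, key):
        if key not in self.memory:              # miss: read-through
            self.memory[key] = self.store.get(key)
        return self.memory[key]

    def put(self, key, value):
        self.memory[key] = value                # write-through
        self.store[key] = value

db = {"user:1": "alice"}          # pretend disk-based database
cache = ReadWriteThroughCache(db)

v = cache.get("user:1")           # first read comes from the "database"
cache.put("user:2", "bob")        # lands in both the cache and the database
```

Real products layer eviction policies, replication and transactions on top of exactly this core contract.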
In-Memory Data Grid
The feature that distinguishes data grids from distributed caches the most is their ability to support co-location of computations with data in a distributed context and, consequently, the ability to move computation to data. This capability was the key innovation that addressed the demands of rapidly growing data sets, which made moving data to the application layer increasingly impractical. Most data grids provide at least basic capabilities for moving computations to the data.
Another uniquely new characteristic of in-memory data grids is the addition of distributed MPP processing based on standard SQL and/or MapReduce, which allows effective computation over data stored in memory across the cluster.
Just as distributed caches were developed in response to a growing need for data HA, in-memory data grids were developed to respond to the growing complexities of data processing. It was no longer enough to have simple key-based access. Distributed SQL, complex indexing and MapReduce-based processing across TBs of data in-memory are necessary tools for today’s demanding data processing.
Adding distributed SQL and/or MapReduce-type processing required a complete rethinking of distributed caches, as the focus shifted from pure data management to hybrid data and compute management.
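The “move computation to data” idea can be rendered as a toy sketch; everything here is hypothetical and simulated in-process, with each dictionary standing in for one node’s partition:

```python
# A toy rendition of moving computation to the data: instead of pulling
# every value to the caller, a function is shipped to each node and runs
# against that node's local partition; only small results travel back.

nodes = [  # each dict simulates one node's in-memory partition
    {"a": 4, "b": 9},
    {"c": 16},
    {"d": 25, "e": 36},
]

def run_colocated(nodes, fn):
    """Execute fn against each node's local data, collect the results."""
    return [fn(partition) for partition in nodes]

# Compute a global maximum: each node returns one number, not its data.
local_maxima = run_colocated(nodes, lambda part: max(part.values()))
global_max = max(local_maxima)
```

The design point is that the per-node result is tiny regardless of partition size, which is what makes this pattern scale where “fetch everything to the client” does not.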
This new and very disruptive capability of in-memory data grids also marked the start of the in-memory computing revolution.
What is common about Oracle and SAP when it comes to In-Memory Computing? Both see this technology as merely a high performance addition to SQL-based database products. This is shortsighted and misses a significant point.
SQL Is Not Enough For New Payloads
It is interesting to note that as the NoSQL movement sails through the “trough of disillusionment,” traditional SQL and transactional datastores are re-gaining some of the attention. But, importantly, the return to SQL, even based on in-memory technology, is limiting for many newer payload types. In-Memory Computing will play a role which is much more significant than that of a mere SQL database accelerator.
Let’s take high-performance computations as an example. Use cases abound: anything from traditional Monte Carlo simulations and video and audio processing to NLP and image-processing software. All can benefit greatly from in-memory processing and gain critical performance improvements – yet for systems like these a SQL database is of little, if any, help at all. In fact, SQL has absolutely nothing to do with these use cases – they require traditional HPC processing along the lines of MPI, MapReduce or MPP – and none of these are features of either Oracle or SAP HANA databases.
Or take streaming and CEP as another example. Tremendous growth in sensor, machine-to-machine and social data, generated in real time, makes streaming and CEP one of the fastest-growing use cases for big data processing. The ability to ingest hundreds of thousands of events per second and process them in real time has practically nothing to do with traditional SQL databases – but everything to do with in-memory computing. In fact, these systems require a completely different approach: sliding-window processing, streaming indexing and complex distributed workflow management – none of which are capabilities of either Oracle or SAP HANA.
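For illustration, here is a bare-bones sliding-window aggregate of the kind such streaming systems provide (a simplified sketch, not a CEP engine):

```python
# A minimal sliding window: keep the last N events in memory and
# evaluate an aggregate as each new event arrives.

from collections import deque

class SlidingWindow:
    def __init__(self, size):
        self.events = deque(maxlen=size)  # old events fall off automatically

    def on_event(self, value):
        self.events.append(value)
        return sum(self.events) / len(self.events)  # running window average

window = SlidingWindow(size=3)
averages = [window.on_event(v) for v in [10, 20, 30, 40]]
# windows seen: [10], [10, 20], [10, 20, 30], [20, 30, 40]
```

A real system would partition many such windows across a cluster and layer indexing and workflow management on top, but the in-memory window is the primitive everything else builds on.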
Nonetheless, SQL processing was, is, and always will be with us. Ironically, it is now getting back on some of the pundits’ radars. For example, in data warehousing, where Hadoop can be used as a massive data store of record, SQL can play well. In-Memory Computing, however, plays a greater role than just a cache for a large datastore. New payload types require different processing approaches – and all benefit from the dramatic performance improvements brought by in-memory computing.
At GridGain, we are keenly aware of the self-evident point: In-Memory Computing is much more significant than just getting a slow SQL database to go faster. Our end-to-end product suite delivers many additional benefits of in-memory computing and handles use cases that are impossible to address in the traditional database world. And there’s so much more to come.
We are happy to announce the general availability release for GridGain 5.2 which includes updates to all products in the platform:
- In-Memory HPC 5.2
- In-Memory Database 5.2
- In-Memory Streaming 2.0
- In-Memory Accelerator for Hadoop 2.0
We anticipate this being the last mid-point release in the platform before we roll out the 6.0 line in Q1 or Q2 2014 (we still plan to have bi-weekly service releases going forward, as usual).
During the past months we’ve been working very diligently to improve the general usability of our products: from first impressions, to POCs, to production use. Despite the fact that GridGain has enjoyed a stellar record on this front for years, the platform’s size is growing rapidly (we are now at almost 4x the size of the entire Hadoop codebase, for example) – and we need to make sure that size and complexity don’t overshadow the simplicity and usability our products have enjoyed so far.
We’ve added many features and enhancements: better error messages, automatic configuration conflict detection, automatic backward compatibility checks, and better documentation.
Work in this direction will continue. We listen to our customers and pay attention to how they use our products. We make improvements every sprint.
One of the biggest improvements in the last 6 months is performance for non-transactional use cases. GridGain has been winning every benchmark when it comes to distributed ACID transactions – but we haven’t had the same winning margins when it came to simpler, non-transactional payloads.
It’s fixed now.
We are currently running over 50 benchmarks against every competitive database and data grid product (all seven of them) and are winning over 95% of them, some by as much as 3-4x. That includes 100% of the distributed ACID transactional use cases and most of the non-transactional use cases (eventual consistency, simple atomicity, local-only transactions, etc.)
GridGain still holds the record of achieving 1 billion TPS on 10 commodity Dell R610 blades. The record was achieved in an open tender and is verifiable. No other product has yet achieved this level of performance.
There’s plenty of exciting stuff that we’ve been working on for the past 6-9 months that will be made public early next year when the GridGain 6.0 platform rolls out. Some features have trickled out to the public – but most have been kept under wraps for the next release.
As always, grab your free download of GridGain at http://www.gridgain.com/download and check out our constantly growing documentation center for all your screencasts, videos, white papers, and technical documentation: http://www.gridgain.com/documentation
In the last 12 months we have observed a growing trend: use cases for distributed caching are rapidly going away as customers move up the stack… in droves.
Let me elaborate by highlighting three points that when combined provide a clear reason behind this observation.
Databases Caught Up With Distributed Caching
In the last 3-5 years traditional RDBMSs and a new crop of simpler NewSQL/NoSQL databases have mastered in-memory caching and now provide comprehensive caching and even general in-memory capabilities. MongoDB and CouchDB, for example, can be configured to run mostly in-memory (with plenty of caveats, but nonetheless). And when Oracle 12 and SAP HANA are in the game (with even more caveats) – you know it’s mainstream already.
There are simply fewer reasons today to merely cache intermediate DB results in memory, as the data sources themselves do a pretty decent job at that; 10GbE networks are often fast enough, and much faster InfiniBand interconnects are getting cheaper. Put another way, the performance benefits of distributed caching relative to its cost are simply not as big as they were 3-5 years ago.
The emerging “Caching The Cache” anti-pattern is a clear manifestation of this conundrum. And this applies not only to historically Java-based caching products but also to products like Memcached. It’s no wonder that Java’s JSR 107 has been such a slow endeavor as well.
Customers Demand More Sophisticated Products
At the same time, as customers move more and more payloads to in-memory processing, they naturally start to have bigger expectations than simple key/value access or full-scan processing. As MPP-style processing on large in-memory data sets becomes the new “norm”, these customers are rightly looking for advanced clustering, ACID distributed transactions, complex SQL optimizations, and various forms of MapReduce – all with deep sub-second SLAs – as well as many other features.
Distributed caching simply doesn’t cut it: it’s one thing to rely on a distributed hash map for your web sessions – it’s a completely different story to approach mission-critical enterprise data processing without transactional data center replication, comprehensive computational and data load balancing, SQL support or complex secondary indexes for MPP processing.
Apples and oranges…
Focus Shifting to Complex Data Processing
Not only are customers moving more and more data to in-memory processing, but the computational complexity of their workloads is growing as well. In fact, just storing data in memory produces no tangible business value. It is the processing of that data, i.e. computing over the stored data, that delivers net new business value – and based on our daily conversations with prospects, companies across the globe are getting more sophisticated about it.
Distributed caches, and to a certain degree data grids, missed that transition completely. While concentrating on in-memory data storage, they barely, if at all, provide any serious capabilities for MPP, MPI-based, MapReduce or SQL-based processing of the data – leaving customers scrambling for this additional functionality. We also find that just SQL or just MapReduce, for instance, is often not enough, as customers increasingly expect to combine the benefits of both (for different payloads within their systems).
Moreover, tight integration between computations and data is axiomatic for enabling the “move computations to the data” paradigm, and this is something that simply cannot be bolted onto an existing distributed cache or data grid. You almost have to start from scratch – and this is often very hard for existing vendors.
And unlike the previous two points this one hits below the belt: there’s simply no easy way to solve it or mitigate it.
So, what’s next? I don’t really know what the category name will be. Maybe it will be Data Platforms that encapsulate all these new requirements – maybe not. Time will tell.
At GridGain we often call our software an end-to-end in-memory computing platform. Instead of one do-everything product we provide several individual but highly integrated products that address every major type of in-memory computing payload: from HPC, to streaming, to database, to Hadoop acceleration.
It is an interesting time for in-memory computing. As a community of vendors and early customers we are going through our first serious transition: from a stage where simplicity and ease of use were dominant for the early adoption of a disruptive technology, to a stage where growing adoption brings more sophisticated requirements and higher customer expectations.
As vendors – we have our work cut out for us.
As with any fast-growing technology, In-Memory Computing has attracted a lot of interest and writing in the last couple of years. It’s bound to happen that some of the information gets stale pretty quickly – while some of it was simply not very accurate to begin with. And thus myths are starting to grow and take hold.
I want to talk about some of the misconceptions that we hear almost on a daily basis here at GridGain and provide the necessary clarification (at least from our point of view). As one of the oldest companies in the in-memory computing space, having worked in it for the last 7 years, we’ve heard and seen all of it by now – and earned a certain amount of perspective on what in-memory computing is and, most importantly, what it isn’t.
Let’s start at… the beginning. What is in-memory computing? Kirill Sheynkman from RTP Ventures gave the following crisp definition which I like very much:
“In-Memory Computing is based on a memory-first principle utilizing high-performance, integrated, distributed main memory systems to compute and transact on large-scale data sets in real-time – orders of magnitude faster than traditional disk-based systems.”
The most important part of this definition is “memory-first principle”. Let me explain…
Memory-first principle (or architecture) refers to a fundamental set of algorithmic optimizations one can take advantage of when data is stored mainly in Random Access Memory (RAM) vs. in block-level devices like HDD or SSD.
RAM has dramatically different characteristics than block-level devices, including disks, SSDs and Flash-on-PCI-E arrays. Not only is RAM ~1,000x faster as a physical medium, it completely eliminates the traditional overhead of block-level devices, including marshaling, paging, buffering, memory-mapping, possible networking, OS I/O, and the I/O controller.
Let’s look at an example: say you need to read a single record in your program.
In an in-memory context your code will be compiled to interact with the memory controller and read the record directly from local RAM in the exact format you need (i.e. the object representation in your particular programming language) – in most cases that results in simple pointer arithmetic. If you use a proper vectorized execution technique you’ll often read it straight from the L2 cache of your CPU. All in all, we are talking about nanoseconds, and this performance is guaranteed in all cases.
If you read the same record from a block-level device – you are in for a very different ride… Your code will have to deal with OS I/O, buffered reads, the I/O controller, the seek time of the device, and de-marshaling the byte stream you get back into the object representation you actually need. In the worst-case scenario we’re talking about a dozen milliseconds. Note that SSDs and Flash-on-PCI-E only improve the portion of the overhead related to the seek time of the device (and only marginally).
Taking advantage of these differences and optimizing your software accordingly is what the memory-first principle is all about.
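To make the contrast concrete, here is a small, intentionally naive Python sketch of the two read paths described above (the record layout and values are made up for illustration):

```python
import json
import os
import tempfile
import timeit

# Illustrative record -- the symbol and fields are made up.
record = {"symbol": "ACME", "price": 101.5}

# In-memory path: a plain dict lookup, effectively pointer arithmetic.
store = {"ACME": record}

# Block-device path: the record lives in a file and must be read and
# de-marshaled back into an object on every access.
path = os.path.join(tempfile.mkdtemp(), "record.json")
with open(path, "w") as f:
    json.dump(record, f)

def read_from_memory():
    return store["ACME"]["price"]

def read_from_disk():
    with open(path) as f:
        return json.load(f)["price"]

mem_t = timeit.timeit(read_from_memory, number=10_000)
disk_t = timeit.timeit(read_from_disk, number=10_000)
assert read_from_memory() == read_from_disk()
assert disk_t > mem_t  # even with a warm OS page cache, the extra layers add up
```

This deliberately understates the gap – the file here is almost certainly served from the OS page cache, yet the open/parse/de-marshal layers alone already dominate the dict lookup.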
Now, let’s get to the myths.
Myth #1: It’s Too Expensive
This is one of the most enduring myths of in-memory computing. Today – it’s simply not true. Five or ten years ago, however, it was indeed true. Look at the historical chart of USD/MB storage pricing to see why:
The interesting trend is that the price of RAM drops about 30% every 12 months, and it is solidly on the same trajectory as the price of HDDs, which for all practical purposes is almost zero (enterprises today care more about heat, energy and space than about the raw price of the device).
The price of a 1TB RAM cluster today is anywhere between $20K and $40K – and that includes all the CPUs, over a petabyte of disk-based storage, networking, etc. Cisco UCS, for example, offers very competitive white-label blades in the $30K range for a 1TB RAM setup: http://buildprice.cisco.com/catalog/ucs/blade-server Smart shoppers on eBay can easily beat even the $20K price barrier (as we did at GridGain for our own recent testing/CI cluster).
A few years from now the same 1TB RAM cluster setup will be available for $10K-15K – which makes it all but a commodity at that level.
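The $10K-15K projection follows directly from compounding the ~30% yearly drop. A quick back-of-the-envelope check (the $30K starting point is the ballpark from above, not a quote):

```python
# Projecting the ~30% yearly RAM price drop mentioned above.
def projected_price(today, years, yearly_drop=0.30):
    """Price after compounding the yearly drop for `years` years."""
    return today * (1.0 - yearly_drop) ** years

# A $30K 1TB-RAM setup today lands near the $10K mark in about 3 years.
print(round(projected_price(30_000, 3)))  # -> 10290
```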
And don’t forget about Memory Channel Storage (MCS), which aims to revolutionize storage by providing a Flash-in-DIMM form factor – I blogged about it a few weeks ago.
Myth #2: It’s Not Durable
This myth is based on a deep-rooted misunderstanding about in-memory computing. Blame us, as well as other in-memory computing vendors, as we evidently did a pretty poor job on this subject.
The fact of the matter is that almost all in-memory computing middleware (apart from the very simplistic kind) offers one or more strategies for in-memory backups, durable storage backups, disk-based swap-space overflow, etc.
More sophisticated vendors provide a comprehensive tiered storage approach where users can decide what portion of the overall data set is stored in RAM, local disk swap space or RDBMS/HDFS – where each tier can store progressively more data but with progressively longer latencies.
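A minimal sketch of such a tiered read path, in Python – the tier names and keys here are made up for illustration, not any vendor’s API:

```python
# Tier 1: DRAM (fastest, smallest). Tier 2: stands in for local disk swap.
# Tier 3: stands in for the durable backing store (RDBMS/HDFS).
class TieredStore:
    def __init__(self, backing):
        self.ram = {}
        self.swap = {}
        self.backing = backing

    def get(self, key):
        if key in self.ram:
            return self.ram[key]            # fast path: already in DRAM
        if key in self.swap:
            value = self.swap.pop(key)      # slower: local swap space
        else:
            value = self.backing[key]       # slowest: durable backing store
        self.ram[key] = value               # promote on access
        return value

backing = {"order:1": "shipped"}
store = TieredStore(backing)
assert store.get("order:1") == "shipped"    # read-through from tier 3
assert "order:1" in store.ram               # now cached in the DRAM tier
```

A real product would also bound the size of each tier and demote cold entries downward; the point here is only that durability lives in the lower tiers while reads are served from RAM whenever possible.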
Yet another source of confusion is the difference between operational and historical datasets. In-memory computing is not aimed at replacing the enterprise data warehouse (EDW), backup or offline storage services – like Hadoop, for example. In-memory computing is aimed at improving operational datasets that require mixed OLTP and OLAP processing and in most cases are less than 10TB in size. In other words, in-memory computing doesn’t suffer from an all-or-nothing syndrome and never requires you to keep all data in memory.
If you consider the totality of the data stored by any one enterprise – disk still has a clear place as a medium for offline, backup or traditional EDW use cases – and thus the durability is there where it always has been.
Myth #3: Flash Is Fast Enough
The variations of this myth include the following:
- Our business doesn’t need this super-fast processing (likely shortsighted)
- We can mount RAM disk and effectively get in-memory processing (wrong)
- We can replace HDDs with SSDs to get the performance (depends)
Mounting a RAM disk is a very poor way of utilizing memory, from every technical angle (see above).
As for SSDs – for some use cases the marginal performance gain you can extract from flash storage over spinning disk could be enough. In fact, if you are absolutely certain that this marginal improvement is all you will ever need for a particular application, then flash storage is the best bet today.
However, for a rapidly growing number of use cases – speed matters. And it matters more and for more businesses every day. In-memory computing is not about marginal 2-3x improvement – it is about giving you 10-100x improvements enabling new businesses and services that simply weren’t feasible before.
There’s one story that I’ve been telling for quite some time now, and it is a very telling example of how in-memory computing relates to speed…
Around 6 years ago GridGain had a financial customer with a small application (~1500 LOC in Java) that took 30 seconds to prepare a chart and a table of historical statistical results for a given basket of stocks (all stored in an Oracle RDBMS). They wanted to put it online on their website. Naturally, users won’t wait half a minute after pressing a button – so the task was to bring it down to around 5-6 seconds. Now – how do you make something 5 times faster?
We initially looked at every possible angle: faster disks (even SSDs, which were very expensive then), RAID systems, faster CPUs, rewriting everything in C/C++, running on a different OS, Oracle RAC – or any combination thereof. But nothing would make the application run 5x faster – not even close… Only when we brought the dataset into memory and parallelized the processing over 5 machines using in-memory MapReduce were we able to get results in less than 4 seconds!
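The shape of that solution – per-stock work fanned out over workers against an in-memory dataset, partial results combined at the end – can be sketched in a few lines. This is a toy reconstruction with fabricated data, not the customer’s actual code:

```python
from concurrent.futures import ThreadPoolExecutor

# Fabricated in-memory dataset: closing prices per symbol.
history = {
    "AAA": [9.0, 10.0, 11.0],
    "BBB": [19.0, 20.0, 21.0],
    "CCC": [5.0, 6.0, 7.0],
}

def average_price(symbol):
    """Map step: a partial result for one stock, computed entirely in RAM."""
    prices = history[symbol]
    return sum(prices) / len(prices)

def basket_average(basket, workers=5):
    """Reduce step: combine the per-stock partials into one number."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(average_price, basket))
    return sum(partials) / len(partials)

print(basket_average(["AAA", "BBB", "CCC"]))  # -> 12.0
```

In the real deployment the map step ran on the node holding each data partition – “moving computations to the data” – rather than pulling prices across the network as a thread pool on one machine would.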
The moral of the story is that you don’t have to have a NASA-size problem to utilize in-memory computing. In fact, every day thousands of businesses solve performance problems that look trivial at first but in the end can only be solved with in-memory computing speed.
Speed also matters in the raw sense. Look at this diagram from Stanford on the relative performance of disks, flash and RAM:
As DRAM closes its pricing gap with flash, this dramatic difference in raw performance will become more and more pronounced and tangible for businesses of all sizes.
Myth #4: It’s About In-Memory Databases
This is one of those misconceptions that you hear mostly from analysts. Most analysts look at SAP HANA, Oracle Exalytics or something like QlikView – and they conclude that this is all in-memory computing is about, i.e. a database or in-memory caching for faster analytics.
There’s a logic behind it, of course, but I think this is rather a shortsighted view.
First of all, in-memory computing is not a product – it is a technology. The technology is used to build products. In fact, nobody sells just “in-memory computing” but rather products that are built with in-memory computing.
I also think that in-memory databases are an important use case… for today. They solve a specific problem that everyone readily understands, i.e. a faster system of record. It’s the low-hanging fruit of in-memory computing, and it helps popularize the technology.
I do, however, think that the long term growth for in-memory computing will come from streaming use cases. Let me explain.
Stream processing is typically characterized by a massive rate at which events come into a system. A number of potential customers we’ve talked to indicated that they need to process a sustained stream of up to 100,000 events per second without a single event loss. For a typical 30-second sliding processing window we are dealing with 3,000,000 events, shifting by 100,000 every second, which have to be individually indexed, continuously processed in real time and eventually stored.
This downpour will choke any disk I/O (spinning or flash). The only feasible way to sustain this load and the corresponding business processing is to use in-memory computing technology. There’s simply no other storage technology today that supports this level of requirements.
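The sliding-window mechanics described above can be sketched in a few lines – keep the last N seconds of events in memory and evict on each append. Timestamps are plain integers here; a real system would also index and process each event:

```python
from collections import deque

class SlidingWindow:
    def __init__(self, window_seconds=30):
        self.window = window_seconds
        self.events = deque()  # (timestamp, payload) pairs, oldest first

    def add(self, ts, payload):
        self.events.append((ts, payload))
        # Evict everything older than the window -- O(1) amortized per event.
        while self.events and self.events[0][0] <= ts - self.window:
            self.events.popleft()

    def size(self):
        return len(self.events)

w = SlidingWindow(window_seconds=30)
for second in range(60):          # one event per second for a minute
    w.add(second, {"n": second})
assert w.size() == 30             # only the last 30 seconds remain
```

At 100,000 events per second the same structure holds 3,000,000 live entries, churning 100,000 in and out every second – exactly the kind of working set that only makes sense in RAM.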
So we strongly believe that in-memory computing will reign supreme in stream processing.
GridGain just posted service point releases for the In-Memory HPC and In-Memory Database products, version 5.1.6. If you are currently running either of these two products we recommend updating. This point release includes performance improvements and a number of bug fixes:
- New CLIENT_ONLY mode for partitioned cache.
- New ATOMIC atomicity mode for better performance in non-transactional use.
- New optional GridOptimizedMarshallable interface to improve the optimized marshaller.
- New one-phase commit in TRANSACTIONAL mode for basic …
- New automatic back-pressure control for async operations.
- Multiple fixes/enhancements to Visor Management Console.
Release notes are available.
What does the relatively new acronym MCI have to do with the accelerated adoption of in-memory computing? I’d say everything.
MCI stands for Memory Channel Interface storage (a.k.a. MCS – Memory Channel Storage), and it essentially allows you to put NAND flash storage into a DIMM form factor and have it interface with a CPU via a standard memory controller. Put another way, MCI provides a drop-in replacement for DDR3 RDIMMs with 10x the memory capacity and a 10x reduction in price.
Historically, one of the major inhibitors of in-memory computing adoption was the high cost of DRAM relative to disks and flash storage. While advantages such as 100x performance, lower power consumption and higher reliability have been clearly known for years, the price delta was, and still is, relatively high:
| Storage | ~ Performance | ~ Price |
| --- | --- | --- |
| 1TB DDR3 RDIMM (32 DIMMs) | 1,000-10,000x | $20,000 |
While spinning HDDs are essentially cost-free for enterprise consumption, and flash storage is enjoying mass adoption, DRAM storage still lags behind simply due to its higher cost.
MCI-based storage is about to change this once and for all as it aims to bring the price of flash-based DRAM to the same level as today’s SSD and PCI-E flash storage.
MCI vs. PCI-E Flash
If prices are relatively similar between MCI and PCI-E storage, what makes MCI so much more important? The answer is direct memory access vs. block-based device.
All of the PCI-E flash storage today (FusionIO, Violin, basic SSDs, etc.) is recognized by the OS as block devices, i.e. essentially fast hard drives. Applications access these devices via the usual file interface, involving all the usual marshaling, buffering, OS context switching, networking and I/O overhead.
MCI provides an option to view its flash storage simply as main system memory, eliminating all the OS/IO/network overhead, while working directly via a highly optimized memory controller – the same controller that handles massive CPU-DDR3 data exchange – and enabling software like GridGain’s to access the flash storage as normal memory. This is a game changer and potentially the final frontier in storage placement technology. In fact, you can’t place application data any closer to the CPU than main memory, and that is precisely what MCI enables us to do at terabyte and petabyte scale.
Moreover, MCI provides direct improvements over PCI-E storage. Diablo Technology, the pioneer behind MCI technology, claims that MCI is more performant (lower latencies and higher bandwidth) than typical PCI-E and SATA SSDs, while providing the ever-elusive constant latency that is unachievable with standard PCI-E or SSD technologies.
Another important characteristic of MCI storage is the plug-and-play fashion in which it can be used – no custom hardware, no custom software required. Imagine, for example, an array of 100 micro-servers (ARM-based servers in a micro form factor), each with 256GB of MCI-based system memory, drawing less than 10 watts of power and costing less than $1,000 each.
You now have a cluster with 25TB of in-memory storage and 200 cores of processing power, running standard Linux and drawing around 1,000 watts – for about the same cost as a fully loaded Tesla Model S. Put GridGain’s In-Memory Computing Stack on it and you have an eco-friendly, cost-effective, powerful real-time big data cluster ready for any task.
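For the skeptical reader, the cluster arithmetic checks out (the per-server figures are the illustrative estimates from the paragraph above, not a quote for real hardware):

```python
# Back-of-the-envelope totals for the hypothetical micro-server array.
servers = 100
ram_gb_per_server = 256
watts_per_server = 10
cost_per_server = 1_000

total_ram_tb = servers * ram_gb_per_server / 1024   # -> 25.0 TB
total_watts = servers * watts_per_server            # -> 1000 W
total_cost = servers * cost_per_server              # -> $100,000

print(total_ram_tb, total_watts, total_cost)
```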