The GridGain In-Memory Computing Performance Blog

Information and Insights on In-Memory Computing

Dmitriy Setrakyan
Tuesday, May 19, 2015
In my previous post I demonstrated benchmarks for atomic JCache (JSR 107) operations and optimistic transactions, comparing the Apache Ignite™ data grid with Hazelcast. In this blog I will focus on benchmarking pessimistic transactions.
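To make the distinction concrete: in pessimistic mode, locks are acquired on first access and held until the transaction completes, so concurrent transactions cannot touch the same entries in the meantime. The following is a minimal single-node, plain-Java sketch of that semantic (illustrative names and a `ReentrantLock`-per-key scheme, not Ignite's or Hazelcast's actual implementation):

```java
import java.util.Map;
import java.util.HashMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: a pessimistic transaction locks each key on first
// access and holds the lock until commit, so no other transaction can
// read or modify the entry in the meantime.
public class PessimisticTxSketch {
    static final Map<String, Integer> store = new HashMap<>();
    static final Map<String, ReentrantLock> locks = new HashMap<>();

    static ReentrantLock lockFor(String key) {
        synchronized (locks) {
            return locks.computeIfAbsent(key, k -> new ReentrantLock());
        }
    }

    public static void main(String[] args) {
        store.put("account", 100);

        ReentrantLock lock = lockFor("account"); // acquired on first access...
        lock.lock();
        try {
            int balance = store.get("account");  // read under the lock
            store.put("account", balance - 40);  // write under the same lock
        } finally {
            lock.unlock();                       // ...released only at commit
        }
        System.out.println(store.get("account")); // prints 60
    }
}
```

The benchmark-relevant point is the cost model: every first access pays for lock acquisition up front, which is exactly what distinguishes pessimistic from optimistic runs.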
Dmitriy Setrakyan
Monday, March 9, 2015
Ever seen a product with mirrored, duplicated APIs for synchronous and asynchronous processing? I never liked such APIs, as they introduce extra noise into what could otherwise be a clean design. There is really no point in having both myMethod() and myMethodAsync() when all you are trying to do is change the mode of method execution from synchronous to asynchronous.
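One way to avoid the duplication is to make the execution mode a property of the API handle rather than of each method. The sketch below illustrates the idea with a single run() entry point and a mode switch; the names (withAsync, run) are illustrative, not GridGain's actual API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

// Sketch of the design: instead of duplicating myMethod()/myMethodAsync(),
// one entry point changes execution mode without changing the API surface.
public class AsyncModeSketch {
    private boolean async;

    public AsyncModeSketch withAsync() {            // switch mode, same API
        AsyncModeSketch copy = new AsyncModeSketch();
        copy.async = true;
        return copy;
    }

    public CompletableFuture<String> run(Supplier<String> task) {
        return async
            ? CompletableFuture.supplyAsync(task)            // asynchronous
            : CompletableFuture.completedFuture(task.get()); // synchronous
    }

    public static void main(String[] args) {
        String s = new AsyncModeSketch().run(() -> "done").join();
        String a = new AsyncModeSketch().withAsync().run(() -> "done").join();
        System.out.println(s + " " + a); // prints "done done"
    }
}
```

The caller decides once, up front, how calls execute; the method set itself stays clean.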
Nikita Ivanov
Monday, November 3, 2014
Much of what human beings experience as commonplace today — social networking, on-line gaming, mobile and wearable computing — was impossible a decade ago. One thing is certain: we're going to see even more impressive advances in the next few years. However, this will be the result of a fundamental change in computing, as current methods have reached their limit in terms of speed and volume. Traditional disk-based storage infrastructure is far too slow to meet today’s data demands for speed at volume, which are growing exponentially.
Dane Christensen
Thursday, September 25, 2014
Hi, this is Max Herrmann from GridGain Systems, and today is a big day, as in-memory computing as you know it is about to be redefined. Sure, in-memory computing technologies have been around for many years in one form or another. First there was caching, which over time graduated to distributed caching by adopting a scale-out architecture. Then came in-memory databases which, it turns out, often don't scale well and/or don't support ACID-compliant transactions.
Dmitriy Setrakyan
Tuesday, September 16, 2014
In this blog we cover an important optimization for in-memory caches, specifically for cases where data is partitioned across the network.
Dmitriy Setrakyan
Tuesday, September 9, 2014
In this blog we will cover the case when an in-memory cache serves as a layer on top of a persistent database. Here the database serves as the primary system of record, and a distributed in-memory cache is added for performance and scalability reasons to accelerate reads and (sometimes) writes to the data.
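This layering is usually implemented as read-through (load into the cache on a miss) and write-through (propagate writes back to the store). A minimal plain-Java sketch of the pattern, with the "database" reduced to a map (illustrative only; Ignite wires this up via its CacheStore interface):

```java
import java.util.Map;
import java.util.HashMap;

// Minimal read-through/write-through sketch: the cache loads entries from
// the backing store on a miss and pushes every write back to it.
public class WriteThroughCacheSketch {
    static final Map<String, String> database = new HashMap<>(); // system of record
    static final Map<String, String> cache = new HashMap<>();    // in-memory layer

    static String get(String key) {
        String v = cache.get(key);
        if (v == null) {                 // cache miss:
            v = database.get(key);       //   read through to the database
            if (v != null) cache.put(key, v);
        }
        return v;
    }

    static void put(String key, String value) {
        cache.put(key, value);           // keep the cache hot...
        database.put(key, value);        // ...and write through to the store
    }

    public static void main(String[] args) {
        database.put("user:1", "alice");            // pre-existing database row
        System.out.println(get("user:1"));          // loaded through the cache
        put("user:2", "bob");
        System.out.println(database.get("user:2")); // persisted write
    }
}
```

The same structure accommodates write-behind (batching the database write asynchronously) when write latency matters more than immediate durability.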
Dmitriy Setrakyan
Tuesday, September 2, 2014
2-Phase-Commit is probably one of the oldest consensus protocols, and it is known for its deficiencies when it comes to handling failures, as it may indefinitely block servers waiting in the prepared state. To mitigate this, the 3-Phase-Commit protocol was introduced, which adds better fault tolerance at the expense of an extra network round trip and higher latency.
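The two phases are easy to see in code. Below is a bare-bones, single-process sketch of the coordinator logic (illustrative interfaces, not any product's API): all participants must vote "yes" in the prepare phase before commit is sent; if the coordinator dies between the phases, participants are stuck holding locks in the prepared state, which is exactly the blocking weakness mentioned above.

```java
import java.util.List;
import java.util.ArrayList;

// Bare-bones 2-Phase-Commit coordinator sketch.
public class TwoPhaseCommitSketch {
    interface Participant {
        boolean prepare();   // phase 1: vote yes/no and hold locks
        void commit();       // phase 2a: make the change durable
        void rollback();     // phase 2b: undo and release locks
    }

    static boolean runTransaction(List<Participant> participants) {
        for (Participant p : participants)       // phase 1: prepare round trip
            if (!p.prepare()) {                  // any "no" vote aborts all
                for (Participant q : participants) q.rollback();
                return false;
            }
        for (Participant p : participants)       // phase 2: commit round trip
            p.commit();
        return true;
    }

    public static void main(String[] args) {
        List<Participant> ok = new ArrayList<>();
        ok.add(new Participant() {
            public boolean prepare() { return true; }
            public void commit() { System.out.println("committed"); }
            public void rollback() { System.out.println("rolled back"); }
        });
        System.out.println(runTransaction(ok)); // all voted yes -> true
    }
}
```

3-Phase-Commit inserts a pre-commit round between these two phases so participants can time out and decide without the coordinator, at the cost of that extra message exchange.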
Dmitriy Setrakyan
Wednesday, August 27, 2014
We are pleased to announce the release of GridGain Open Source In-Memory Computing Platform 6.2.0. The main components of the platform are: compute grid, data grid (or in-memory distributed cache), and CEP streaming. This release revolves primarily around Portable Object functionality as well as Distributed (or Guaranteed) Services.
Dmitriy Setrakyan
Thursday, July 3, 2014
With release 6.1.9, GridGain significantly simplified its installation and deployment. GridGain now allows for:
One Click Installation: The product simply has to be downloaded and unzipped; after that it is ready to be used.
One Jar Dependency: GridGain now has only one mandatory dependency, gridgain-6.1.9.jar. All other jars are optional.
Dmitriy Setrakyan
Monday, June 16, 2014
At GridGain, having worked on a distributed caching (data grid) product for many years, we constantly benchmark with various Garbage Collectors to find the optimal configuration for larger heap sizes. From conducting numerous tests, we have concluded that unless you are utilizing some off-heap technology (e.g. GridGain OffHeap), no Garbage Collector provided with the JDK will deliver any kind of stable GC performance with heap sizes larger than 16GB.
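In practice this means capping the heap and keeping bulk data off-heap. The launch line below shows illustrative, standard HotSpot flags consistent with that finding (the jar name and exact values are placeholders to tune for your workload, not GridGain's recommended settings):

```shell
# Fixed heap at the ~16GB ceiling, G1 collector, GC logging enabled
# so benchmark runs can be correlated with pause times.
java -server \
  -Xms16g -Xmx16g \
  -XX:+UseG1GC \
  -XX:MaxGCPauseMillis=200 \
  -XX:+PrintGCDetails -Xloggc:gc.log \
  -jar my-cache-node.jar
```

Setting -Xms equal to -Xmx avoids heap-resizing pauses; the pause-time target is a goal the collector aims for, not a guarantee.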
Dmitriy Setrakyan
Tuesday, June 10, 2014
If you prefer a video demo with coding examples, visit the original blog post. Distributed in-memory caching generally allows you to replicate or partition your data in memory across your cluster. Memory provides much faster access to the data, and by utilizing multiple cluster nodes, the performance and scalability of the application increase significantly.
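The two distribution modes trade redundancy for capacity: replication copies every entry to every node (fast reads anywhere, memory cost multiplied by cluster size), while partitioning assigns each key to exactly one owner (total capacity grows with the cluster). A plain-Java sketch with "nodes" modeled as in-process maps (illustrative, not Ignite's cache-mode API):

```java
import java.util.Map;
import java.util.HashMap;
import java.util.List;
import java.util.ArrayList;

// Replicated vs. partitioned distribution, with nodes as in-process maps.
public class CacheModesSketch {
    static List<Map<String, String>> nodes(int n) {
        List<Map<String, String>> list = new ArrayList<>();
        for (int i = 0; i < n; i++) list.add(new HashMap<>());
        return list;
    }

    static void putReplicated(List<Map<String, String>> nodes, String k, String v) {
        for (Map<String, String> node : nodes) node.put(k, v); // full copy everywhere
    }

    static void putPartitioned(List<Map<String, String>> nodes, String k, String v) {
        int owner = Math.abs(k.hashCode() % nodes.size());     // one owner per key
        nodes.get(owner).put(k, v);
    }

    public static void main(String[] args) {
        List<Map<String, String>> rep = nodes(3), part = nodes(3);
        putReplicated(rep, "k", "v");
        putPartitioned(part, "k", "v");
        int repCopies = 0, partCopies = 0;
        for (Map<String, String> n : rep) if (n.containsKey("k")) repCopies++;
        for (Map<String, String> n : part) if (n.containsKey("k")) partCopies++;
        System.out.println(repCopies + " " + partCopies); // prints "3 1"
    }
}
```

Real data grids blend the two by keeping a configurable number of backup copies per partition.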
Nikita Ivanov
Monday, June 9, 2014
A few months ago, I spoke at a conference where I explained the difference between caching and an in-memory data grid. Today, having realized that many people are also looking to better understand the difference between the two major categories of in-memory computing, the In-Memory Database and the In-Memory Data Grid, I am sharing the succinct version of my thinking on this topic - thanks to a recent analyst call that helped put everything in place :)