
And, yes, it wasn't a typo in the title: 1,000,000,000 distributed, fully transactional updates per second on a 10-node cluster costing less than $50K, using GridGain's In-Memory Data Platform.
Most of the time we at GridGain are not at liberty to discuss customers' benchmarks and POCs - but I want to share some numbers we recently demonstrated to one of the largest financial institutions in the world (under strict open-tender rules). The task was rather simple and isolated - yet one that makes the target performance numbers a real challenge to achieve.
Use Case
Imagine you are building a hypothetical real-time risk analytics system. You have 500 events per second coming into your system, and for each event you need to update approximately 10,000,000 positions based on some predefined formula. For obvious performance reasons, all data must reside and be processed in memory, with overflow to disk when necessary. The system should scale linearly to 100+ nodes and run on any type of commodity hardware.
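To make the per-event workload concrete, here is a minimal single-node sketch of the kind of formula-driven position update described above. This is purely illustrative - the names `RiskUpdateSketch`, `positions`, and `applyEvent` are hypothetical, and a real deployment would partition this work across the cluster rather than loop over one in-memory map:

```java
import java.util.HashMap;
import java.util.Map;

public class RiskUpdateSketch {
    // Hypothetical position store: current exposure value keyed by position id.
    static final Map<Long, Double> positions = new HashMap<>();

    // Stand-in for the "predefined formula": scale every position by the
    // event's risk factor. In the distributed case each node would apply
    // this only to the partitions it owns.
    static void applyEvent(double riskFactor) {
        positions.replaceAll((id, value) -> value * riskFactor);
    }

    public static void main(String[] args) {
        // Tiny sample dataset (the real use case has ~10,000,000 positions).
        for (long id = 0; id < 1_000; id++)
            positions.put(id, 100.0);

        applyEvent(1.01);                      // one of the 500 events/sec
        System.out.println(positions.get(0L)); // 101.0
    }
}
```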
What I really like about these requirements is that almost any serious financial organization will have projects with requirements similar to these - if not exactly like these. We are all moving towards same-day processing, and more and more into the realm of real-time processing, regardless of how big the book of business is. And when it comes to risk analytics, fraud protection, or any type of trading - we see these requirements almost on a weekly basis...
Results
Back to this POC. One of our top engineers spent 10 days building the pilot and, after a few configuration and algorithmic improvements, was able to achieve 1 billion ACID updates per second on the target dataset using the GridGain 4.3 "Big Data" edition running on a 10-node cluster of commodity Dell PowerEdge R410 servers with 96GB of RAM each.
GridGain 4.3 provides several key features that were necessary in this POC to achieve the performance numbers:
- World's fastest marshaling algorithm, up to 5x faster than Google Kryo
- Highly optimized co-located cache mode
- Pluggable, user-customizable affinity distribution function
- Affinity-aware group locking
- Pluggable cache store with pre-loading
- Compute and data loaders with back-pressure control
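To give a flavor of what an affinity distribution function does - deciding which node owns which keys, so computation can be shipped to the data and related keys can be locked as a group - here is a generic sketch. This is not GridGain's actual interface; `SimpleAffinity` and its methods are hypothetical:

```java
// Generic sketch of an affinity (data-distribution) function. A deterministic
// key-to-partition mapping lets every node agree on data ownership without
// coordination, which is what makes co-located processing and affinity-aware
// group locking possible.
public class SimpleAffinity {
    private final int partitions;

    public SimpleAffinity(int partitions) {
        this.partitions = partitions;
    }

    // Map a key to a partition via its hash: deterministic and evenly spread.
    public int partition(Object key) {
        return Math.abs(key.hashCode() % partitions);
    }

    // Map a key to a node index given the current cluster size. A customizable
    // function could instead route related keys (e.g. all positions of one
    // account) to the same node deliberately.
    public int nodeFor(Object key, int nodeCount) {
        return partition(key) % nodeCount;
    }
}
```

Because the mapping depends only on the key and the topology, any node can compute where a key lives and send the update there instead of pulling the data across the network.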
There are very few technologies on the market today - if any - that can deliver 1,000,000,000 transactions per second on $50K worth of hardware. If you need it today, GridGain 4.3 delivers this performance - 100%.