Information and Insights on In-Memory Computing
Thursday, January 21, 2016
Join us on Wednesday, February 3, 2016 at 11:00 AM PDT / 2:00 PM EDT for another deep-dive webinar to learn how to easily share state in-memory across multiple Spark jobs, either within the same application or between different Spark applications, using an implementation of the Spark RDD abstraction provided in Apache Ignite™.
Friday, January 15, 2016
In recent benchmark testing performed on GridGain Community Edition 1.5.0 and Hazelcast 3.6-EA2, GridGain consistently outperformed Hazelcast on various atomic and transactional cache operations and SQL-based cache queries, as measured on both AWS EC2 and Yardstick configurations.

Results summary:
- Deadlock-free transactions: 65% – 115% higher operations/sec throughput
- Transactional operations: 24% – 84% higher operations/sec throughput
- Atomic operations: 15% – 38% higher operations/sec throughput
Thursday, January 14, 2016
The call for speaking proposals for the In-Memory Computing Summit 2016 closes February 1st. The conference is May 23-24 at the Grand Hyatt San Francisco in Union Square. This event was created for developers, decision makers and visionaries to network and learn about technologies, solutions and real-world use cases for in-memory computing.
In-Memory Computing Summit 2016 Dates and Call for Speakers Announced after Overwhelmingly Successful Inaugural Summit Wins Industry Raves
Monday, December 7, 2015
Attendees Urged to Buy Tickets Early based on Last Year’s Oversubscribed Waiting List; Visionaries Invited to Submit Proposals for IMC Summit in San Francisco May 23-24
Thursday, July 9, 2015
In this blog I will describe how a large bank was able to scale a multi-geographical deployment on top of the Apache Ignite™ (incubating) In-Memory Data Grid.

Problem Definition

Imagine a bank offering a variety of services to its customers. The customers of the bank are located in different geo-zones (regions), and most of the operations performed by a customer are zone-local, like ATM withdrawals or bill payments. Zone-local operations are very frequent and need to be processed very quickly.
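The zone-local pattern above can be sketched with a minimal router: each customer is pinned to a home geo-zone, and zone-local operations are dispatched to that zone's cluster so they never cross regions. The class and method names here are purely illustrative, not the bank's actual deployment or an Ignite API.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of zone-local routing: a customer's frequent
// operations (ATM withdrawals, bill payments) are served by the
// cluster in the customer's own geo-zone.
class ZoneRouter {
    private final Map<String, String> customerZone = new HashMap<>();

    // Pin a customer to a home zone (region) at registration time.
    void register(String customerId, String zone) {
        customerZone.put(customerId, zone);
    }

    // A zone-local operation is routed to the customer's home zone,
    // avoiding a cross-region round trip.
    String routeWithdrawal(String customerId) {
        return customerZone.get(customerId);
    }
}
```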
Tuesday, May 19, 2015
In my previous post I demonstrated benchmarks for atomic JCache (JSR 107) operations and optimistic transactions between the Apache Ignite™ data grid and Hazelcast. In this blog I will focus on benchmarking pessimistic transactions.
Tuesday, April 14, 2015
Wednesday, April 8, 2015
Sunday, April 5, 2015
Monday, March 9, 2015
Ever seen a product that has duplicated, mirrored APIs for synchronous and asynchronous processing? I never liked such APIs, as they introduce extra noise into what could otherwise be a clean design. There is really no point in having both myMethod() and myMethodAsync() when all you are trying to do is change the mode of method execution from synchronous to asynchronous.
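One way to avoid the duplicated-API problem is to expose a single asynchronous method and let callers who want synchronous behavior simply block on the result. This sketch uses the JDK's CompletableFuture; the AsyncCache class is a hypothetical example, not a specific product's API.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// One method per operation: every call returns a future. Callers who
// want synchronous semantics just join() it, so no get()/getAsync()
// mirror pair is needed.
class AsyncCache<K, V> {
    private final ConcurrentHashMap<K, V> store = new ConcurrentHashMap<>();

    CompletableFuture<V> get(K key) {
        return CompletableFuture.supplyAsync(() -> store.get(key));
    }

    CompletableFuture<V> put(K key, V val) {
        return CompletableFuture.supplyAsync(() -> store.put(key, val));
    }
}
```

Usage: `cache.put("a", 1).join();` blocks like a synchronous call, while `cache.get("a").thenAccept(...)` composes asynchronously, all through the same single API surface.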
Why the Fast Data world needs a proven and mature In-Memory Data Fabric backed by the Apache Software Foundation
Monday, November 3, 2014
Much of what human beings experience as commonplace today — social networking, online gaming, mobile and wearable computing — was impossible a decade ago. One thing is certain: we're going to see even more impressive advances in the next few years. However, this will be the result of a fundamental change in computing, as current methods have reached their limit in terms of speed and volume. Traditional disk-based storage infrastructure is far too slow to meet today's data demands for speed at volume, which are growing exponentially.
Thursday, September 25, 2014
Hi, this is Max Herrmann from GridGain Systems, and today is a big day: in-memory computing as you know it is about to be redefined. Sure, in-memory computing technologies have been around for many years in one form or another. First there was caching, which graduated to distributed caching over time by adopting a scale-out architecture. Then came in-memory databases which, it turns out, often don't scale so well, and/or don't support ACID-compliant transactions.
Tuesday, September 16, 2014
In this blog we cover a very important optimization for in-memory caches, specifically for cases where data is partitioned across the network.
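A common optimization of this kind is colocating computation with partitioned data: instead of pulling an entry across the network, the work is routed to the node that owns the entry's partition. The sketch below illustrates the idea with plain JDK classes; the names (PartitionedGrid, affinityRun) are illustrative, not a product API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Minimal sketch of data partitioning with collocated computation:
// keys hash into partitions, each partition lives on one "node", and
// a closure is routed to the owning node instead of moving the data.
class PartitionedGrid {
    static final int PARTS = 4;
    // node index -> that node's local slice of the data
    final List<Map<String, Integer>> nodes = new ArrayList<>();

    PartitionedGrid() {
        for (int i = 0; i < PARTS; i++) nodes.add(new HashMap<>());
    }

    int partition(String key) {
        return Math.abs(key.hashCode() % PARTS);
    }

    void put(String key, int val) {
        nodes.get(partition(key)).put(key, val);
    }

    // Collocated computation: run the job where the data lives.
    <R> R affinityRun(String key, Function<Map<String, Integer>, R> job) {
        return job.apply(nodes.get(partition(key)));
    }
}
```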
Tuesday, September 9, 2014
In this blog we will cover a case where an in-memory cache serves as a layer on top of a persistent database. Here the database serves as the primary system of record, and a distributed in-memory cache is added for performance and scalability reasons to accelerate reads and (sometimes) writes to the data.
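This caching layer is often described as read-through/write-through: reads fall back to the database on a cache miss, and writes go to the database first so it stays authoritative. A minimal sketch, with a plain map standing in for the database and illustrative class names:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.Map;

// Sketch of a read-through / write-through cache in front of a
// primary store. A Map stands in for the database here; in a real
// deployment this would be a JDBC-backed loader/writer.
class ReadThroughCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Map<K, V> database;   // the system of record

    ReadThroughCache(Map<K, V> database) {
        this.database = database;
    }

    V get(K key) {
        // Read-through: on a miss, load from the database and keep
        // the value in memory for subsequent reads.
        return cache.computeIfAbsent(key, database::get);
    }

    void put(K key, V val) {
        database.put(key, val);  // write-through: DB stays authoritative
        cache.put(key, val);
    }
}
```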
Tuesday, September 2, 2014
2-Phase-Commit is probably one of the oldest consensus protocols and is known for its deficiencies when it comes to handling failures, as it may block servers indefinitely while they wait in the prepare state. To mitigate this, the 3-Phase-Commit protocol was introduced, which adds better fault tolerance at the expense of an extra network round trip and higher latency.
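The two phases can be sketched as follows: the coordinator first collects prepare votes from every participant, and only if all vote yes does it issue commit; any no vote aborts the transaction. The blocking weakness shows up when a coordinator fails after prepare, leaving prepared participants stuck. The interfaces below are a toy illustration, not a real protocol implementation.

```java
import java.util.List;

// A participant in a distributed transaction.
interface Participant {
    boolean prepare();   // phase 1 vote: can this server commit?
    void commit();       // phase 2: make the changes durable
    void rollback();     // phase 2 alternative: undo the changes
}

// Toy 2-Phase-Commit coordinator. If the coordinator crashed between
// the two phases, participants that voted yes would remain blocked in
// the prepare state -- the deficiency described above.
class Coordinator {
    boolean run(List<Participant> participants) {
        // Phase 1: collect votes from all participants.
        for (Participant p : participants) {
            if (!p.prepare()) {
                // A single "no" vote aborts the whole transaction.
                participants.forEach(Participant::rollback);
                return false;
            }
        }
        // Phase 2: every participant voted yes, so commit everywhere.
        participants.forEach(Participant::commit);
        return true;
    }
}
```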