In-Memory Speed and Massive Scalability

In-memory computing solutions deliver real-time application performance and massive scalability by keeping a copy of the data from your disk-based databases in RAM. With data in RAM, applications can run up to 1,000x faster because data does not have to be retrieved from disk and moved into memory before processing. Performance increases further through massively parallel processing, which distributes workloads across the nodes of the in-memory computing cluster and leverages the computing power of every node. In-memory computing platforms scale out by adding nodes to the cluster, expanding the available pool of RAM and CPUs, and scale up by upgrading existing nodes to servers with more cores, more RAM, or faster processors.
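The caching pattern described above can be sketched in a few lines of plain Java. This is a toy illustration of read-through caching, not GridGain's actual API: all class and method names here are invented, and the "disk store" is simulated by a function.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy sketch of the read-through idea behind in-memory data grids:
// reads are served from RAM when possible and fall back to the
// (simulated) disk-based store only on a cache miss.
class ReadThroughCache {
    private final Map<String, String> ram = new HashMap<>();
    private final Function<String, String> diskStore; // stands in for the RDBMS
    private int diskReads = 0;

    ReadThroughCache(Function<String, String> diskStore) {
        this.diskStore = diskStore;
    }

    // Return the cached value, loading it from "disk" only on a miss.
    String get(String key) {
        return ram.computeIfAbsent(key, k -> {
            diskReads++; // count how often we had to touch the slow store
            return diskStore.apply(k);
        });
    }

    int diskReads() {
        return diskReads;
    }

    public static void main(String[] args) {
        ReadThroughCache cache = new ReadThroughCache(k -> "row-for-" + k);
        cache.get("users:42"); // miss: one simulated disk read
        cache.get("users:42"); // hit: served from RAM
        cache.get("users:42"); // hit: served from RAM
        System.out.println(cache.diskReads()); // prints 1
    }
}
```

Repeated reads of the same key never touch the backing store again, which is the source of the speedup the paragraph describes; a real data grid adds distribution, eviction, and write-through on top of this basic pattern.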

The GridGain® in-memory computing platform sits between the application and data layers to provide in-memory speed and massive scalability to applications built on disk-based databases. GridGain works seamlessly with existing application and data layers, including all popular RDBMS, NoSQL and Hadoop databases. The Unified API makes it easy to integrate with your existing applications through a variety of common interfaces, including SQL, Java, C++, .NET, and many more. Advanced ANSI-99 SQL support, including DDL and DML, allows you to interact with the system using standard SQL commands. GridGain provides ACID transaction guarantees and is a powerful platform for OLTP, OLAP and hybrid transactional/analytical processing (HTAP) use cases. The GridGain in-memory computing platform is built on Apache® Ignite, the leading open source in-memory computing platform. The GridGain Community, Enterprise and Ultimate Editions add capabilities to the extensive Apache Ignite feature set detailed below.
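To make the SQL support concrete, here is a short, hedged sketch of the kind of DDL and DML a client might issue through the SQL interface. The table, column names, and data are invented for the example; the `WITH "template=..."` clause follows Apache Ignite's CREATE TABLE syntax for choosing how the table's data is distributed across the cluster.

```sql
-- Hypothetical table; names and values are illustrative only.
CREATE TABLE city (
  id   LONG PRIMARY KEY,
  name VARCHAR
) WITH "template=replicated";

INSERT INTO city (id, name) VALUES (1, 'Chicago');

SELECT name FROM city WHERE id = 1;
```

Because the dialect is standard SQL, existing tools and drivers that speak SQL can issue these statements without application rewrites.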

GridGain includes an in-memory data grid, an in-memory database, streaming analytics, and a continuous learning framework for machine and deep learning, all built on a next-generation, memory-centric architecture.