Developers Use In-Memory Computing to Accelerate Application Performance

Developers must find ways to accelerate new and existing applications, achieve massive scalability, and use technologies like machine and deep learning to power real-time decision making. Developers must help their companies meet competitive demands to accelerate digital transformations, build out their applications quickly, and get it right the first time.

GridGain In-Memory Computing Solutions

GridGain offers developers the in-memory computing platform software, support, and professional services needed to meet the technical challenges of real-time digital transformation. The GridGain in-memory computing platform integrates easily with your new or existing applications and delivers real-time performance and massive scalability.

Built on the open source Apache Ignite project, GridGain is a cost-effective solution for accelerating and massively scaling out new or existing applications, with users spanning a wide variety of use cases and industries.

Why Developers Need the GridGain In-Memory Computing Platform

For existing applications, GridGain is typically deployed as an in-memory data grid between the application and data layers, with no rip-and-replace of the underlying database. For new applications, GridGain can serve as an in-memory data grid or an in-memory database. A unified API, including ANSI SQL-99 and ACID transaction support, provides easy integration with your new or existing code, enabling you to create modern, flexible applications built on an in-memory computing platform that will grow with your business needs. Thin and thick clients are available for a wide variety of languages and protocols, including SQL, Java, C++, .NET, PHP, Scala, Groovy, and Node.js.
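As a sketch of what that SQL surface looks like in practice, a table can be created and queried through Ignite's ANSI-99 SQL DDL/DML (the table name and template parameters below are illustrative, not from the text above):

```sql
-- Create a partitioned table with one backup copy per partition
CREATE TABLE accounts (
  id BIGINT PRIMARY KEY,
  owner VARCHAR,
  balance DECIMAL
) WITH "template=partitioned, backups=1";

-- Standard SQL queries run against the in-memory data
SELECT owner, balance FROM accounts WHERE balance > 1000;
```

The same statements can be issued from any of the supported clients, for example over the JDBC thin driver.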

How GridGain In-Memory Computing Solutions Help Developers Accelerate Applications

The white papers, webinars, application notes, product comparisons, and videos below provide developers with a variety of technical in-memory computing development examples.

Resources

This white paper takes a detailed look at the challenges faced by companies that have either used Redis and run into its limitations, or are considering Redis and finding it insufficient for their needs. It also discusses how the GridGain in-memory computing platform has helped companies overcome the limitations of Redis for existing and new applications, and how GridGain has helped improve the customer experience.
This white paper discusses how incorporating Apache Ignite into your architecture can enable dramatically faster online analytical processing (OLAP) and online transaction processing (OLTP) when augmenting your current MySQL infrastructure. Read this white paper to learn more about how Apache Ignite can eliminate the pain points of MySQL.

Spread betting offers some compelling advantages, including low entry and transaction costs, preferential tax treatment, and a diverse array of products and options. Traders can bet on any type of event for which there is a measurable outcome that might go in either of two directions – for example, housing prices, the value of a stock-market index, or the difference in the scores of two teams in a sporting event.

This white paper provides an overview of in-memory computing technology with a focus on in-memory data grids. It discusses the advantages and uses of an IMDG and its role in digital transformation and improving the customer experience. It also introduces the GridGain® in-memory computing platform, and explains GridGain’s IMDG and other capabilities that have helped companies add speed and scalability to their existing applications.
This white paper covers the architecture, key capabilities, and features of GridGain®, as well as its key integrations for leading RDBMSs, Apache Spark™, Apache Cassandra™, MongoDB® and Apache Hadoop™. It describes how GridGain adds speed and unlimited horizontal scalability to existing or new OLTP or OLAP applications, HTAP applications, streaming analytics, and continuous learning use cases for machine or deep learning.
This white paper discusses the architecture, key capabilities and features of the Apache® Ignite™ in-memory computing platform project. Learn how it adds speed and scalability to existing and new applications.

The Apache Ignite transactional engine can execute distributed ACID transactions which span multiple nodes, data partitions, and caches/tables. This key-value API differs slightly from traditional SQL-based transactions, but its reliability and flexibility let you achieve an optimal balance between consistency and performance at scale by following several guidelines.

Apache Ignite can function in a strong consistency mode which keeps application records in sync across all primary and backup replicas. It also supports distributed ACID transactions that allow you to update multiple entries stored on different cluster nodes and in various caches/tables. In addition, consistency and transactional guarantees are put in effect for memory and disk tiers on every cluster node.
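As an illustrative sketch, the transactional and consistency behavior described above is driven by cache configuration. A Spring XML fragment along these lines (the cache name and backup count are placeholders) enables ACID transactions with fully synchronous primary/backup replication:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
  <property name="name" value="accounts"/>
  <!-- Enable ACID transactions on this cache -->
  <property name="atomicityMode" value="TRANSACTIONAL"/>
  <!-- Keep one backup copy of every partition -->
  <property name="backups" value="1"/>
  <!-- A write completes only after primary and backup replicas are updated -->
  <property name="writeSynchronizationMode" value="FULL_SYNC"/>
</bean>
```

FULL_SYNC trades some write latency for the strong consistency mode described above; the default PRIMARY_SYNC acknowledges writes after only the primary replica is updated.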

Apache Ignite 2.8 includes over 1,900 upgrades and fixes that enhance almost all components of the platform. The release notes include hundreds of line items cataloging the improvements. In this webinar, Ignite community members demonstrate and dissect new capabilities related to production maintenance, monitoring, and machine learning, including:

Attendees will be introduced to the fundamental capabilities of in-memory computing platforms (IMCPs) in this Apache Ignite tutorial. IMCPs boost application performance and solve scalability problems by storing and processing unlimited data sets distributed across a cluster of interconnected machines.

Apache Ignite® and GridGain® allow you to perform fast calculations and run highly efficient queries over distributed data. Both Ignite and GridGain provide a flexible configuration that can help you make cluster operations more secure. In this webinar, we will cover the following security topics:

  • The secure connection between nodes (SSL/TLS)
  • User authentication
  • User authorization

Using live examples, we will go through the configurations for:
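As a hedged illustration of the first topic, node-to-node SSL/TLS is typically enabled through an SslContextFactory in the node configuration (the keystore paths and passwords below are placeholders):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <!-- Encrypt traffic between cluster nodes -->
  <property name="sslContextFactory">
    <bean class="org.apache.ignite.ssl.SslContextFactory">
      <property name="keyStoreFilePath" value="keystore/node.jks"/>
      <property name="keyStorePassword" value="changeit"/>
      <property name="trustStoreFilePath" value="keystore/trust.jks"/>
      <property name="trustStorePassword" value="changeit"/>
    </bean>
  </property>
</bean>
```

Every node in the cluster needs a matching configuration; the trust store determines which node certificates are accepted.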

Change Data Capture (CDC) has become a very efficient way to automate and simplify the ETL process for data synchronization between disparate databases. It is also a useful tool for efficient replication schemes. We will cover the fundamental principles and restrictions of CDC and review examples of how change data capture is implemented in real-life use cases. By the end of this session you will understand:

With most machine learning (ML) and deep learning (DL) frameworks, it can take hours to move data and to train models. It can also be hard to scale with data sets that are increasingly larger than the capacity of any single server. The size of the data can also make it hard to incrementally test and retrain models in near real-time to improve business results.

Deployment models for Apache Ignite® and applications connected to it vary depending on the target production environment. A bare metal environment provides the most flexibility and fewer restrictions on configuration options. When using Docker and Kubernetes environments, you need to decide how Ignite and its associated applications will interact before writing the first line of code.
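For instance, in a Kubernetes environment Ignite nodes usually discover each other through the Kubernetes IP finder rather than a static address list; a minimal sketch of that discovery configuration looks like this (by default the IP finder resolves a service named "ignite" in the "default" namespace):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
      <property name="ipFinder">
        <!-- Resolves node addresses via a Kubernetes service lookup -->
        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder"/>
      </property>
    </bean>
  </property>
</bean>
```

Decisions like this one are exactly what needs to be settled per target environment before the first line of application code is written.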

To take full advantage of an in-memory platform, it’s often not enough to upload your data into a cluster and start querying it with key-value or SQL APIs. You need to distribute the data efficiently and tap into distributed computations that minimize data movement over the network.
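One common technique for minimizing data movement is affinity colocation: storing related records on the same node so that joins and computations stay local. In Ignite SQL this can be declared when a table is created (the table and column names below are illustrative):

```sql
-- Cities are partitioned across the cluster
CREATE TABLE city (
  id BIGINT PRIMARY KEY,
  name VARCHAR
) WITH "template=partitioned";

-- Each person row is stored on the same node as its city,
-- so city/person joins never cross the network
CREATE TABLE person (
  id BIGINT,
  city_id BIGINT,
  name VARCHAR,
  PRIMARY KEY (id, city_id)
) WITH "template=partitioned, affinity_key=city_id";
```

Distributed computations sent to the node that owns a given affinity key can then operate on all of that key's related data without any network transfer.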

In this webinar, you’ll see how to design and execute distributed computations considering all the pros and cons. In particular, the following will be covered:

Apache Ignite is a powerful in-memory computing platform. The Apache IgniteSink streaming connector enables users to inject Flink data into the Ignite cache. Join Saikat Maitra to learn how to build a simple data streaming application using Apache Flink and Apache Ignite. This stream processing topology will allow data streaming in a distributed, scalable, and fault-tolerant manner, which can process data sets consisting of virtually unlimited streams of events.

If your company is one of the tens of thousands of organizations that use Apache® Ignite™ or GridGain® Community Edition in a production environment, GridGain Basic Support can provide you with peace of mind that you have a trusted partner to help keep your environment running flawlessly. The service includes:

This data sheet provides the key features and benefits of the GridGain in-memory computing platform.
This data sheet provides the key features and benefits of the GridGain In-Memory Accelerator for Hadoop and Spark.

This in-depth feature comparison shows how the most current versions of GridGain Professional Edition, Enterprise Edition, Ultimate Edition and Redis Enterprise (and their respective open source projects where relevant) compare in 25 categories.

Compares GridGain and Pivotal GemFire features in 25 areas: in-memory data grid functionality, caching, data querying, transactions, security and more.

This in-depth feature comparison shows how the most current versions of GridGain Professional Edition, Enterprise Edition, Ultimate Edition and Hazelcast (and their respective open source projects where relevant) compare in 25 different categories.

Compares GridGain and GigaSpaces features in 22 key areas: in-memory data grid functionality, caching, data querying, transactions, security and more.
Compares GridGain and Terracotta features in 22 key areas: in-memory data grid functionality, caching, data querying, transactions, security and more.

This in-depth feature comparison shows how the most current versions of GridGain Professional Edition, Enterprise Edition, Ultimate Edition and Oracle Coherence (and their respective open source projects where relevant) compare in 25 different categories.

Attendees were introduced to the fundamental capabilities of in-memory computing platforms (IMCPs). IMCPs boost application performance and solve scalability problems by storing and processing unlimited data sets distributed across a cluster of interconnected machines.

With most machine learning (ML) and deep learning (DL) frameworks, it can take hours to move data and to train models. It can also be hard to scale with data sets that are increasingly larger than the capacity of any single server. The size of the data can also make it hard to incrementally test and retrain models in near real-time to improve business results.

Apache Ignite® and GridGain® allow users to perform fast calculations and run highly efficient queries over distributed data. Both Ignite and GridGain provide a flexible configuration that can help you make cluster operations more secure. This webinar covered the following security topics:

  • The secure connection between nodes (SSL/TLS)
  • User authentication
  • User authorization

It included examples of configurations for:

Deployment models for Apache Ignite® and applications connected to it vary depending on the target production environment. A bare metal environment provides the most flexibility and fewer restrictions on configuration options. When using Docker and Kubernetes environments, you need to decide how Ignite and its associated applications will interact before writing the first line of code.

Change Data Capture (CDC) has become a very efficient way to automate and simplify the ETL process for data synchronization between disparate databases. It is also a useful tool for efficient replication schemes. These webinar slides cover the fundamental principles and restrictions of CDC and review examples of how change data capture is implemented in real-life use cases. Topics covered include:

To take full advantage of an in-memory platform, it’s often not enough to upload your data into a cluster and start querying it with key-value or SQL APIs. You need to distribute the data efficiently and tap into distributed computations that minimize data movement over the network.

In this webinar, you’ll see how to design and execute distributed computations considering all the pros and cons. In particular, the following will be covered:

Apache Ignite is a powerful in-memory computing platform. The Apache IgniteSink streaming connector enables users to inject Flink data into the Ignite cache. Join Saikat Maitra to learn how to build a simple data streaming application using Apache Flink and Apache Ignite. This stream processing topology will allow data streaming in a distributed, scalable, and fault-tolerant manner, which can process data sets consisting of virtually unlimited streams of events.

If you experience limitations with the size, scale, or performance of your relational database, it may be time to migrate to a distributed system. This webinar discussed how the distributed Apache Ignite platform can function as a database, providing both SQL and JCache APIs to work with your data. It considered real-world examples and discussed the pros and cons of each approach, including:

Attendees of this webinar learned how to use the service grid capabilities of the Apache Ignite distributed in-memory computing platform. Simple code examples helped attendees review possible architectural solutions and learn how to build fault-tolerant, scalable and flexible systems. Throughout the webinar we also discussed service grid basic principles and internal implementation details that helped attendees better understand the product capabilities and build successful applications on top of Ignite.

Most enterprises have PostgreSQL deployments that they will be using for years to come for transactional, big data, mobile, and IoT use cases. How can Postgres continue to support the current and emerging use cases which demand ever higher performance and more scalability into the future?

In this video from the Bay Area In-Memory Computing Meetup on Wednesday, July 17, 2019, GridGain's Director of Product Management Greg Stachnick discusses best practices for deploying in-memory data grids (IMDGs) and in-memory databases (IMDBs) in the cloud.
This IMCS Europe 2019 talk discusses the various components of Apache Ignite and GridGain, including the memory storage, networking layer, and compute grid, to help explain in-memory computing best practices for DevOps, high availability, proper testing, fault tolerance, and more.
This IMCS Europe 2019 video discusses some best practices for monitoring distributed in-memory computing systems, including how to monitor applications, cluster logs, cluster metrics, operating systems, and networks. It provides guidance on tools like Elasticsearch, Grafana, and GridGain Web Console.
This IMCS Europe 2019 talk discusses migrating an in-memory computing platform to the cloud. It covers best practices, special considerations, tools, and differences between public and private clouds.
This IMCS Europe 2019 keynote is a panel discussion of current and emerging trends in in-memory computing for enterprises looking to enable digital transformation.
This talk demonstrates how to integrate Apache Kafka with Apache Ignite in practice, explains the architectural reasoning behind and benefits of real-time integration, and shares common usage patterns. The presenters build a streaming data pipeline using nothing but their bare hands, Apache Ignite, Kafka Connect, and KSQL.
GridGain Meetups provide the in-memory computing community with a venue to discuss in-memory computing issues, solutions, and examples. Our summertime-themed edition Meetup on June 26, 2019, featured three talks on analytics from GridGain, Confluent, Oracle, and Alluxio.
In this IMCS Europe 2019 session, Denis Magda describes how Apache Ignite and GridGain as an in-memory computing platform can modernize existing data lake architectures, enabling real-time analytics that spans operational, historical, and streaming data sets.

Over the last decade, the 10x growth of transaction volumes, the 50x growth in data volumes, and the drive for real-time response and analytics have pushed relational databases beyond their limits. Scaling an existing RDBMS vertically with hardware is expensive and limited. Moving to NoSQL requires new skills and major changes to applications. Ripping out the existing RDBMS and replacing it with another RDBMS with a lower TCO is still risky.