GridGain Blog

Imagine that we need to build a monitoring infrastructure for a distributed database such as Apache Ignite. Let’s put metrics into Prometheus and draw charts in Grafana. Let’s not forget about notifications; we’ll set up Zabbix for that. Let’s use Jaeger for trace analysis. For state management, the CLI will do. As for SQL queries, let’s use a free JDBC client such as DBeaver…
Publisher's Note: the article describes a custom data loading technique that worked best for a specific user scenario. It's neither a best practice nor a generic approach for data loading in Ignite. Explore standard loading techniques first, such as IgniteDataStreamer or CacheStore.loadCache, which can also be optimized for loading large data sets. Now, in-memory cache technology is becoming…
Using the initial-query, listener, and remote-filter features of Ignite continuous queries to detect, filter, process, and dispatch real-time events (Note that this is Part 3 of a three-part series on Event Stream Processing. Here are the links for Part 1 and Part 2.) Real-time handling of streams of business events is a critical part of modern information-management systems, including online…
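To give a feel for how those three pieces fit together, here is a minimal sketch of an Ignite continuous query that combines an initial query, a remote filter, and a local listener; the cache name, key and value types, and the 1,000 threshold are made up for illustration:

```java
import javax.cache.Cache;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class ContinuousQuerySketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, Double> payments = ignite.getOrCreateCache("payments");

            ContinuousQuery<Integer, Double> qry = new ContinuousQuery<>();

            // Initial query: picks up matching entries that are already in the cache.
            qry.setInitialQuery(new ScanQuery<Integer, Double>((id, amount) -> amount > 1_000));

            // Remote filter: evaluated on server nodes, so only relevant updates travel over the network.
            // (The filter class must be on the server classpath or peer class loading must be enabled.)
            qry.setRemoteFilterFactory(() -> event -> event.getValue() > 1_000);

            // Local listener: receives the filtered update events on the application node.
            qry.setLocalListener(events -> events.forEach(e ->
                System.out.println("Large payment: " + e.getKey() + " -> " + e.getValue())));

            try (QueryCursor<Cache.Entry<Integer, Double>> cur = payments.query(qry)) {
                // Results of the initial query.
                cur.forEach(e -> System.out.println("Existing large payment: " + e.getKey()));

                // This update passes the remote filter and reaches the local listener.
                payments.put(1, 5_000.0);
            }
        }
    }
}
```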
In this third article of the three-part series “Getting Started with Ignite Data Loading,” we continue our review of data loading into Ignite tables and caches, but now we focus on using the Ignite Data Streamer facility to load data in large volumes at the highest speed. In the first article of this series, we discussed the facilities that are available to…
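As a small preview of what the article covers, here is a minimal data-streamer sketch; the cache name, key range, and tuning values are illustrative only:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerLoadSketch {
    public static void main(String[] args) {
        // A real loader would normally start a client node pointed at the cluster;
        // Ignition.start() with no arguments boots a node with default settings.
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("trades");

            try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("trades")) {
                streamer.allowOverwrite(false);   // default: skip existing keys for maximum speed
                streamer.perNodeBufferSize(1024); // batch size sent to each node

                for (long i = 0; i < 1_000_000; i++)
                    streamer.addData(i, "trade-" + i);

                streamer.flush(); // push any entries still sitting in the buffers
            }
        }
    }
}
```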
In this second article of the three-part “Getting Started with Ignite Data Loading” series, we continue our review of data loading into Ignite tables and caches. However, we now focus on the Ignite CacheStore. Let’s review what was discussed about CacheStore in “Article 1: Loading Facilities.” The CacheStore interface of Ignite is the primary vehicle used in…
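As a preview, here is a bare-bones sketch of a CacheStore-based load; the PersonStore class, the cache name, and the fake rows are placeholders standing in for a real JDBC-backed store:

```java
import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.lang.IgniteBiInClosure;

public class CacheStoreLoadSketch {
    /** Minimal store: only loadCache() is fleshed out; a real store would go to JDBC here. */
    public static class PersonStore extends CacheStoreAdapter<Long, String> {
        @Override public void loadCache(IgniteBiInClosure<Long, String> clo, Object... args) {
            for (long id = 1; id <= 3; id++)   // stands in for iterating a JDBC ResultSet
                clo.apply(id, "person-" + id);
        }

        @Override public String load(Long key) { return null; }                              // read-through hook
        @Override public void write(Cache.Entry<? extends Long, ? extends String> entry) { } // write-through hook
        @Override public void delete(Object key) { }
    }

    public static void main(String[] args) {
        CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("persons");
        cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(PersonStore.class));
        cfg.setReadThrough(true);
        cfg.setWriteThrough(true);

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, String> persons = ignite.getOrCreateCache(cfg);

            persons.loadCache(null); // invokes PersonStore.loadCache() on every node holding the cache

            System.out.println("Loaded entries: " + persons.size());
        }
    }
}
```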
In this first part of the “Getting Started with Ignite Data Loading” series, we review the facilities available to developers, analysts, and administrators for data loading with Apache Ignite. The subsequent two parts will walk through the two core Apache Ignite data-loading techniques: the CacheStore and the Ignite Data Streamer. We are going to review these facilities in relation to specific…
Apache Ignite Deployment Patterns The Apache Ignite® in-memory computing platform comprises high-performance distributed, multi-tiered storage and computing facilities, plus a comprehensive set of APIs, libraries, and frameworks for consumption and solution delivery (all with a “memory first” paradigm). This rich set of capabilities enables one to configure and deploy Ignite in many diverse…
Kafka with Debezium and GridGain connectors allows you to synchronize data between third-party databases and a GridGain cluster. This change-data-capture-based synchronization can be done without any coding; all it requires is preparing a configuration file for each of the endpoints. Developers and architects who can’t yet fully move off a legacy system can deploy this solution to give a performance…
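As a rough illustration of the configuration-only idea, the Debezium source side of such a pipeline might look something like the properties file below; the hostnames, credentials, and table list are placeholders, and the GridGain sink connector needs its own configuration file whose exact properties depend on the connector version:

```properties
# debezium-source.properties: capture changes from a legacy MySQL database into Kafka topics
name=legacy-db-source
connector.class=io.debezium.connector.mysql.MySqlConnector
database.hostname=legacy-db.example.com
database.port=3306
database.user=cdc_user
database.password=********
database.server.id=184054
database.server.name=legacydb
table.include.list=inventory.customers,inventory.orders
database.history.kafka.bootstrap.servers=kafka:9092
database.history.kafka.topic=schema-changes.inventory
```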
My acquaintance with PostgreSQL started back in 2009, when many companies were trying to board the social networking train by following in Facebook's footsteps. An employer I used to work for was no exception. Our team was building a social networking platform for a specific audience and faced various architectural challenges. For instance, soon after launching the product and…
When the Ignite project emerged in the Apache Software Foundation, it was thought of as a pure in-memory solution: a distributed cache that put data into memory to speed up access. But then, in 2017, came Apache® Ignite™ 2.1, which saw the debut of Ignite’s Native Persistence module that allowed Ignite to be treated as a full-blown distributed database. Since then, Ignite has not depended on an…
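Enabling that persistence layer is a matter of configuration; the sketch below, with an illustrative cache name and sample entry, shows one way to turn it on for the default data region and activate the cluster:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterState;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class NativePersistenceSketch {
    public static void main(String[] args) {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        // Turn on disk persistence for the default data region.
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg);

        try (Ignite ignite = Ignition.start(cfg)) {
            // With persistence enabled, the cluster starts inactive and must be activated.
            ignite.cluster().state(ClusterState.ACTIVE);

            ignite.getOrCreateCache("durable-cache").put(1, "survives restarts");
        }
    }
}
```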