
Managing and Monitoring Replication

Managing Replication Process

The replication process can be managed on a per-cache basis.

When you start the master cluster, the replication process will start automatically if there are sender nodes available. If there are no sender nodes available, replication will be paused and you’ll need to start it manually.

Full State Transfer

Before the replication process is started, the caches in the master and remote clusters must be synchronized. If you start with empty caches, you do not need to do anything. However, if the caches in the master cluster already have data, you need to perform a full state transfer. Full state transfer is a process of sending the entire cache content to the remote cluster.

There are several ways to perform a full state transfer; one of them is the transferTo operation of the cache replication JMX bean, described later in this section.

Starting Replication

To start the replication process for a specific cache, perform the startReplication(cache) operation. This can be done using one of the JMX beans described below. After the start operation is performed, GridGain will start processing data updates that occur in the cache.
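As a sketch, the start operation can be invoked through standard JMX from code running in the same JVM as the node. The cache name myCache is a placeholder, and the bean is looked up by the group/name properties documented below; the full ObjectName carries additional version-specific properties, hence the wildcard pattern:

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class StartCacheReplication {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // Look up the cache's replication MBean by its group/name properties.
        // "myCache" is a placeholder for the name of your replicated cache.
        ObjectName pattern =
            new ObjectName("*:group=myCache,name=\"Cache data replication\",*");
        Set<ObjectName> names = server.queryNames(pattern, null);

        if (names.isEmpty()) {
            System.out.println("Replication MBean not found - is the cache started on this node?");
        } else {
            // Invoke the parameterless 'start' operation on the bean.
            server.invoke(names.iterator().next(), "start", new Object[0], new String[0]);
            System.out.println("Replication started for myCache");
        }
    }
}
```

The same lookup works over a remote JMX connection (via JMXConnectorFactory) if the node exposes a JMX port.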

Stopping Replication

The replication process for a particular cache can be stopped either manually or automatically. Replication is stopped automatically when GridGain detects that the data cannot be replicated for some reason. Possible reasons include:

  • The sender nodes that are configured to replicate the data are not available;

  • The sender storage is full or corrupted.

A manual stop can be performed through the JMX bean.

Pausing/Resuming Replication

You can pause the replication process between the master cluster and a specific remote cluster by suspending the process on the sender nodes. The sender nodes will stop sending data to the remote cluster; all data updates that happen during the pause will be accumulated in the sender storage. When you resume the replication process, the sender nodes will send the accumulated data to the remote cluster.

The pause and resume operations have to be performed on all sender nodes. The Using JMX Beans to Monitor and Manage Replication section explains how to pause/resume replication using JMX beans.
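Since the operation must run on every sender node, a small driver that connects to each node's JMX endpoint can help. This is a sketch, not a definitive implementation: the JMX service URLs are placeholders for your deployment, and the Sender Hub bean is located by the group/name properties documented in the next section.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class PauseAllSenders {
    // Invokes the parameterless 'pause' operation on every Sender Hub MBean
    // reachable through the given connection; returns how many were paused.
    static int pauseSenderHub(MBeanServerConnection conn) throws Exception {
        ObjectName pattern =
            new ObjectName("*:group=\"Data Center Replication\",name=\"Sender Hub\",*");
        int paused = 0;
        for (ObjectName name : conn.queryNames(pattern, null)) {
            conn.invoke(name, "pause", new Object[0], new String[0]);
            paused++;
        }
        return paused;
    }

    public static void main(String[] args) throws Exception {
        // JMX service URLs of all sender nodes - placeholders for your deployment.
        String[] senderJmxUrls = {
            "service:jmx:rmi:///jndi/rmi://sender1:49112/jmxrmi",
            "service:jmx:rmi:///jndi/rmi://sender2:49112/jmxrmi"
        };

        for (String url : senderJmxUrls) {
            try (JMXConnector c = JMXConnectorFactory.connect(new JMXServiceURL(url))) {
                int n = pauseSenderHub(c.getMBeanServerConnection());
                System.out.println(url + ": paused " + n + " sender hub bean(s)");
            }
        }
    }
}
```

A resume driver is identical except that it invokes the resume operation.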

Using JMX Beans to Monitor and Manage Replication

The following JMX Bean provides information about the replication process of a specific cache. The bean can be obtained on any node that hosts the cache.

MBean’s ObjectName:

group=[cache name],name="Cache data replication"

Attributes:

| Attribute | Type | Description | Scope |
|---|---|---|---|
| DrSenderGroup | String | The name of the sender group. | Global |
| DrStatus | String | The status of the replication process for this cache. | Global |
| DrQueuedKeysCount | int | The number of entries that are waiting to be sent to the sender node. | Node |
| DrSenderHubsCount | int | The number of sender nodes available for the cache. | Global |

Operations:

| Operation | Description |
|---|---|
| start | Start the replication process for this cache. |
| stop | Stop the replication process for this cache. |
| transferTo (id) | Perform a full state transfer of the content of this cache to the given remote cluster ID. |
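For example, a full state transfer to a remote cluster could be triggered by invoking transferTo through JMX. This is a sketch: the cache name myCache and the target cluster ID 2 are placeholders, and the byte type of the ID parameter is an assumption to verify in a JMX console for your GridGain version.

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class FullStateTransfer {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // "myCache" is a placeholder for the name of your replicated cache.
        ObjectName pattern =
            new ObjectName("*:group=myCache,name=\"Cache data replication\",*");
        Set<ObjectName> names = server.queryNames(pattern, null);

        if (names.isEmpty()) {
            System.out.println("No replication MBean found for myCache");
            return;
        }

        // Transfer the full cache content to remote cluster 2. The parameter
        // type (byte) is an assumption - check the operation signature in a
        // JMX console for your version.
        server.invoke(names.iterator().next(), "transferTo",
            new Object[] { (byte) 2 }, new String[] { "byte" });
    }
}
```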

The following MBean can be obtained on a sender node and allows you to pause/resume the replication process on that node.

MBean’s ObjectName:

group="Data Center Replication",name="Sender Hub"

Attributes:

Attribute Type Description Scope

Operations:

| Operation | Description |
|---|---|
| pause (id) | Pause replication for the given remote cluster ID. This operation stops sending updates to the remote cluster. The updates will be stored in the sender storage on that node until replication is resumed. |
| pause | Pause replication for all remote clusters. |
| resume (id) | Resume replication for the given remote cluster ID. |
| resume | Resume replication for all remote clusters. |

Troubleshooting

When using data replication, you can monitor various statistics provided through JMX beans and events to make sure that the replication process is running smoothly.

The most common problems that you may want to monitor for include:

Problem: Size of the sender storage is growing

If the size of the sender storage on a specific sender node is growing, it means that the sender is not keeping up with the load or that there is a problem with the connection to the remote cluster.

How to monitor:

Monitor the metric that shows the size of the sender storage.

Actions:

Check the network capacity and connectivity, or add more sender nodes (and, if necessary, receiver nodes on the remote cluster).

Problem: Sender storage is full or corrupted

When the sender storage gets full or becomes corrupted (e.g., due to an error), the replication process will stop for all caches.

How to monitor:

Listen to the EVT_DR_STORE_OVERFLOW or EVT_DR_STORE_CORRUPTED events. Refer to the Data Replication Events section for more information.

Actions:

After addressing the issue that caused the sender storage to get full, you have to do a full state transfer.

Problem: There are failed batches

Updated entries are accumulated into batches on the primary nodes; then each batch is sent to one of the sender nodes. If no sender is available, the batch will be marked as failed. Failed batches will never be resent.

How to monitor:

Monitor the GridDr.senderCacheMetrics("myCache").batchesFailed() metric for all caches that are configured to replicate their data.

Actions:

Make sure that at least one sender is available to all server nodes in the master cluster. You will need to do a full state transfer to synchronize the content of the cache with the remote cluster.
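A periodic check of this metric might look like the following sketch. The LongSupplier stands in for the real metric call, GridDr.senderCacheMetrics("myCache").batchesFailed(), so that the example stays self-contained; in a real deployment you would wire in the GridGain call instead.

```java
import java.util.function.LongSupplier;

public class FailedBatchMonitor {
    // Polls a failed-batch counter and reports when it has grown since the
    // last check. Returns the current value so the caller can pass it back
    // on the next poll. In a real deployment the supplier would wrap
    // GridDr.senderCacheMetrics("myCache").batchesFailed().
    static long checkForNewFailures(LongSupplier batchesFailed, long lastSeen) {
        long current = batchesFailed.getAsLong();

        if (current > lastSeen)
            System.out.println("ALERT: " + (current - lastSeen)
                + " new failed batch(es) - a full state transfer is needed");

        return current;
    }

    public static void main(String[] args) {
        long[] fake = {0}; // stands in for the real, growing metric

        long seen = checkForNewFailures(() -> fake[0], 0); // no failures yet
        fake[0] = 3;                                       // simulate three failed batches
        seen = checkForNewFailures(() -> fake[0], seen);   // triggers the alert
    }
}
```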

Data Replication Events

In addition to metrics, you can listen to replication-specific events. DR events must be enabled first and can be handled as regular Ignite events. To learn how to listen to specific events, refer to the Working with Events section.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteEvents;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.lang.IgnitePredicate;
import org.gridgain.grid.cache.dr.CacheDrPauseReason;
import org.gridgain.grid.configuration.GridGainConfiguration;
import org.gridgain.grid.events.DrCacheReplicationEvent;
import org.gridgain.grid.events.EventType;

IgniteConfiguration cfg = new IgniteConfiguration();

// Enable the "replication stopped" event.
cfg.setIncludeEventTypes(EventType.EVT_DR_CACHE_REPLICATION_STOPPED);

// Enable data center replication by assigning this cluster a data center ID.
cfg.setPluginConfigurations(new GridGainConfiguration().setDataCenterId((byte) 1));

Ignite ignite = Ignition.start(cfg);

IgniteEvents events = ignite.events();

// Local listener that is invoked on the node where the event occurs.
IgnitePredicate<DrCacheReplicationEvent> localListener = evt -> {
    // The reason explains why replication stopped (e.g., a sender storage issue).
    CacheDrPauseReason reason = evt.reason();

    System.out.println("Replication stopped. Reason: " + reason);

    return true; // Continue listening.
};

// Register the listener for "replication stopped" events on the local node.
events.localListen(localListener, EventType.EVT_DR_CACHE_REPLICATION_STOPPED);

The replication event types are defined in the org.gridgain.grid.events.EventType class and are listed in the following table.

| Event Type | Event Description | Where Event Is Fired |
|---|---|---|
| EVT_DR_REMOTE_DC_NODE_CONNECTED | A sender node connects to a node in the remote cluster. | The sender node. |
| EVT_DR_REMOTE_DC_NODE_DISCONNECTED | A sender node loses connection to a node in the remote cluster. The sender will try to connect to another receiver node if more than one is configured. If no receiver is available, the sender will try to reconnect after the period defined in the DrSenderConfiguration.reconnectOnFailureTimeout property. | The sender node. |
| EVT_DR_CACHE_REPLICATION_STOPPED | Replication of a specific cache is stopped for any reason. A full state transfer is required before resuming the replication process. | All nodes that host the primary partitions of the cache. |
| EVT_DR_CACHE_REPLICATION_STARTED | Replication of a specific cache is started. | All nodes that host the primary partitions of the cache. |
| EVT_DR_CACHE_FST_STARTED | A full state transfer for a specific cache is started. | All nodes that host the primary partitions of the cache. |
| EVT_DR_CACHE_FST_FAILED | A full state transfer for a specific cache fails. | All nodes that host the primary partitions of the cache. |
| EVT_DR_STORE_OVERFLOW | A sender node cannot store entries in the sender storage because the storage is full. A manual restart and a full state transfer may be required after the issue is fixed. | The sender node. |
| EVT_DR_DC_REPLICATION_RESUMED | Replication to a specific cluster is resumed on a sender node. | The sender node. |