
Configuring Replication

This chapter explains how to configure your GridGain clusters for data replication.

Enabling Replication

To enable replication, you need to perform the following steps:

1. Set Unique Cluster ID

Each cluster participating in the replication process must have a unique ID. The ID is specified in the configuration of every node and must be the same on all nodes of a given cluster.

The following configuration example shows how to set the cluster ID.

<bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
    <property name="pluginConfigurations">
        <list>
            <bean class="org.gridgain.grid.configuration.GridGainConfiguration">
                <!-- Unique ID of this cluster -->
                <property name="dataCenterId" value="1"/>
            </bean>
        </list>
    </property>
</bean>
IgniteConfiguration cfg = new IgniteConfiguration();

GridGainConfiguration ggCfg = new GridGainConfiguration();

ggCfg.setDataCenterId((byte) 1);

cfg.setPluginConfigurations(ggCfg);

Ignite ignite = Ignition.start(cfg);
var cfg = new IgniteConfiguration
{
    PluginConfigurations = new[]
    {
         new GridGainPluginConfiguration()
         {
             DataCenterId = 1
         }
     }
};

var ignite = Ignition.Start(cfg);

Make sure to set the cluster ID on every node in the master cluster. Similarly, set a different cluster ID on every node of the replica cluster.

2. Configure Connection Between Clusters

To connect the master cluster to the replica, configure a few sender nodes in the master cluster and the same number of receiver nodes in the replica cluster. A sender group is a logical group of nodes identified by a name, in which each node is configured to connect to the nodes in the replica. To configure a sender group, specify the name of the group in the configuration of each node you want to be a part of the group.

We recommend that you use client nodes as senders and receivers.

Sender Nodes

A sender node is a node that can connect to the replica cluster and send data to it. Configure each node that you want to be a part of a specific sender group. Basic sender node configuration involves two properties:

  • The name of the sender group this node belongs to. This property is optional. If it’s not specified, the node will be included in the default group, named "<default>", which includes all nodes configured as sender nodes.

  • The connection configuration that establishes a connection to the replica cluster. You can have multiple connection configurations, each connecting to different nodes.

The connection configuration parameters are defined in a DrSenderConnectionConfiguration object. The following example illustrates how to configure a node to be a part of the sender group called "group1" and connect to two remote nodes.

<bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
    <!-- we recommend that sender nodes be clients -->
    <property name="clientMode" value="true"/>
    <property name="pluginConfigurations">
        <list>
            <bean class="org.gridgain.grid.configuration.GridGainConfiguration">
                <!-- Unique ID of this cluster -->
                <property name="dataCenterId" value="1"/>
                <property name="drSenderConfiguration">
                    <bean class="org.gridgain.grid.configuration.DrSenderConfiguration">
                        <!-- this node is part of group1 -->
                        <property name="senderGroups">
                            <list>
                                <value>group1</value>
                            </list>
                        </property>
                        <!-- connection configuration -->
                        <property name="connectionConfiguration">
                            <bean class="org.gridgain.grid.dr.DrSenderConnectionConfiguration">
                                <!-- the ID of the remote cluster -->
                                <property name="dataCenterId" value="2"/>
                                <!-- Addresses of the remote cluster's nodes this node will connect to -->
                                <property name="receiverAddresses">
                                    <list>
                                        <value>172.25.4.200:50001</value>
                                    </list>
                                </property>
                            </bean>
                        </property>
                    </bean>
                </property>
            </bean>
        </list>
    </property>
</bean>
DrSenderConfiguration drSenderCfg = new DrSenderConfiguration();
drSenderCfg.setSenderGroups("group1");

// the addresses of the remote data center's nodes that will receive data updates
drSenderCfg.setConnectionConfiguration(new DrSenderConnectionConfiguration().setDataCenterId((byte) 2)
        .setReceiverAddresses("172.25.4.200:50001"));

GridGainConfiguration ggCfg = new GridGainConfiguration();
ggCfg.setDataCenterId((byte) 1);
ggCfg.setDrSenderConfiguration(drSenderCfg);

IgniteConfiguration igniteCfg = new IgniteConfiguration();

igniteCfg.setPluginConfigurations(ggCfg);

Ignite ignite = Ignition.start(igniteCfg);
var cfg = new IgniteConfiguration
{
    PluginConfigurations = new[]
    {
        new GridGainPluginConfiguration
        {
            DataCenterId = 1,
            DrSenderConfiguration = new DrSenderConfiguration
            {
                SenderGroups = new[] { "group1" },
                ConnectionConfiguration = new[]
                {
                    new DrSenderConnectionConfiguration
                    {
                        DataCenterId = 2,
                        // the addresses of the remote data center's nodes that will receive data updates
                        ReceiverAddresses = new[] { "172.25.4.200:50001" }
                    }
                }
            }
        }
    }
};

var ignite = Ignition.Start(cfg);

In the example above, the sender node connects to one node in the replica cluster. For redundancy, you can specify the addresses of multiple nodes; the sender node will balance the load across them.

 <property name="connectionConfiguration">
    <bean class="org.gridgain.grid.dr.DrSenderConnectionConfiguration">

        <!-- the ID of the replica cluster -->
        <property name="dataCenterId" value="2"/>

        <!-- Addresses of the replica cluster's nodes that will receive data -->
         <property name="receiverAddresses">
            <list>
                <value>172.25.4.200:50001</value>
                <value>172.25.4.201:50001</value>
            </list>
        </property>
    </bean>
</property>
DrSenderConfiguration drSenderCfg = new DrSenderConfiguration();
drSenderCfg.setSenderGroups("group1");

// the addresses of the remote data center's nodes that will receive data updates
drSenderCfg.setConnectionConfiguration(new DrSenderConnectionConfiguration().setDataCenterId((byte) 2)
        .setReceiverAddresses("172.25.4.200:50001", "172.25.4.201:50001"));
var cfg = new IgniteConfiguration
{
    PluginConfigurations = new[]
    {
        new GridGainPluginConfiguration
        {
            DataCenterId = 1,
            DrSenderConfiguration = new DrSenderConfiguration
            {
                SenderGroups = new[] { "group1" },
                ConnectionConfiguration = new[]
                {
                    new DrSenderConnectionConfiguration
                    {
                        DataCenterId = 2,
                        // the addresses of the remote data center's nodes that will receive data updates
                        ReceiverAddresses = new[] { "172.25.4.200:50001", "172.25.4.201:50001" }
                    }
                }
            }
        }
    }
};

The connection parameters that can be specified in the DrSenderConnectionConfiguration object include:

Parameter Description

dataCenterId

The ID of the replica cluster.

receiverAddresses

A list of endpoints in the replica cluster this node will connect to. The nodes in the replica must be configured to listen on the specified ports. See Receiver Nodes.

localOutboundAddress

The network interface to use for cross-cluster communication.

loadBalancingMode

The load balancing policy that is used to distribute batches across the nodes in the replica cluster if more than one address is configured. Possible values include:

  • DR_RANDOM — Default. Random load balancing policy.

  • DR_ROUND_ROBIN — Round-robin load balancing policy.

store

A sender storage where data updates are stored before they are sent to the replica nodes. See Sender Storage.
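In code, these parameters are set on the DrSenderConnectionConfiguration object. The snippet below is a sketch; the setter names for localOutboundAddress and loadBalancingMode are assumed to mirror the parameter names above, so verify them against the DrSenderConnectionConfiguration javadoc before use.

```java
DrSenderConnectionConfiguration connCfg = new DrSenderConnectionConfiguration();
connCfg.setDataCenterId((byte) 2);
connCfg.setReceiverAddresses("172.25.4.200:50001", "172.25.4.201:50001");

// Assumed setters: pin the outbound network interface and switch to
// round-robin balancing across the two receiver addresses.
connCfg.setLocalOutboundAddress("172.25.4.100");
connCfg.setLoadBalancingMode(DrSenderLoadBalancingMode.DR_ROUND_ROBIN);
```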

Receiver Nodes

For each sender node in the master cluster, your replica cluster must have a node (or multiple nodes) that will receive data from that sender node. This node is called a receiver node.

To configure a node as a receiver, define the drReceiverConfiguration property of the GridGainConfiguration plugin. Below is an example of a receiver node configuration.

<bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
    <!-- we recommend that receiver nodes be clients -->
    <property name="clientMode" value="true"/>

    <property name="pluginConfigurations">
        <list>
            <bean class="org.gridgain.grid.configuration.GridGainConfiguration">
                <!-- Unique ID of this data center -->
                <property name="dataCenterId" value="2"/>
                <!--
                Setting up receiver node specific parameters.
                -->
                <property name="drReceiverConfiguration">
                    <bean class="org.gridgain.grid.configuration.DrReceiverConfiguration">
                        <!-- TCP port receiver node of this data center is bound to. -->
                        <property name="localInboundPort" value="50001"/>

                        <property name="messageQueueLimit" value="100"/>
                    </bean>
                </property>
            </bean>
        </list>
    </property>
    <!-- other properties -->
</bean>
IgniteConfiguration igniteCfg = new IgniteConfiguration();

GridGainConfiguration ggCfg = new GridGainConfiguration();

ggCfg.setDrReceiverConfiguration(new DrReceiverConfiguration().setLocalInboundPort(50001));

igniteCfg.setPluginConfigurations(ggCfg);

Ignite ignite = Ignition.start(igniteCfg);
var cfg = new IgniteConfiguration
{
    PluginConfigurations = new[]
    {
        new GridGainPluginConfiguration
        {
            DataCenterId = 2,
            DrReceiverConfiguration = new DrReceiverConfiguration
            {
                LocalInboundPort = 50001
            }
        }
    }
};

var ignite = Ignition.Start(cfg);

Below are some important parameters of the receiver node configuration:

Parameter Description

messageQueueLimit

The maximum number of unprocessed batches in the queue on the receiver node. A growing queue means that the receiver node is getting more data than it can handle. When the limit is exceeded, the node stops accepting new messages from the master cluster until the queue size drops below the limit. The default value is 0, which means that the queue can grow indefinitely. We recommend setting this parameter to 100 to avoid OutOfMemory errors on the receiver node.

For the complete list of parameters, refer to the DrReceiverConfiguration javadoc.
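The recommendation above can be sketched in code; both setters appear in the examples earlier in this chapter:

```java
// Bound the queue of unprocessed batches so that a slow receiver starts
// rejecting new batches instead of accumulating them until memory runs out.
DrReceiverConfiguration drReceiverCfg = new DrReceiverConfiguration();
drReceiverCfg.setLocalInboundPort(50001);
drReceiverCfg.setMessageQueueLimit(100);
```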

Active-Active Mode

In the active-active mode, both clusters send data to each other. This means that each cluster must have both sender and receiver nodes. The sender nodes in cluster 1 will connect to the receiver nodes in cluster 2, and the sender nodes in cluster 2 will connect to the receiver nodes in cluster 1.

You can configure the same node to be both a sender and a receiver by combining the configuration properties described in the two previous sections.

<bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
    <!-- we recommend that sender nodes be clients -->
    <property name="clientMode" value="true"/>
    <property name="pluginConfigurations">
        <list>
            <bean class="org.gridgain.grid.configuration.GridGainConfiguration">
                <!-- Unique ID of this cluster -->
                <property name="dataCenterId" value="1"/>
                <property name="drSenderConfiguration">
                    <bean class="org.gridgain.grid.configuration.DrSenderConfiguration">
                        <!-- this node is part of group1 -->
                        <property name="senderGroups">
                            <list>
                                <value>group1</value>
                            </list>
                        </property>
                        <!-- connection configuration -->
                        <property name="connectionConfiguration">
                            <bean class="org.gridgain.grid.dr.DrSenderConnectionConfiguration">
                                <!-- the ID of the remote cluster -->
                                <property name="dataCenterId" value="2"/>
                                <!-- Addresses of the remote cluster's nodes this node will connect to -->
                                <property name="receiverAddresses">
                                    <list>
                                        <value>172.25.4.200:50001</value>
                                    </list>
                                </property>
                            </bean>
                        </property>
                    </bean>
                </property>
                <!-- receiver node parameters -->
                <property name="drReceiverConfiguration">
                    <bean class="org.gridgain.grid.configuration.DrReceiverConfiguration">
                        <!-- TCP port receiver node of this data center is bound to. -->
                        <property name="localInboundPort" value="50001"/>
                    </bean>
                </property>
            </bean>
        </list>
    </property>
</bean>
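In Java, the same combined configuration sets both the sender and the receiver properties on one GridGainConfiguration object, reusing only the setters shown in the previous sections:

```java
// A node that both sends data to and receives data from the remote cluster.
DrSenderConfiguration drSenderCfg = new DrSenderConfiguration();
drSenderCfg.setSenderGroups("group1");
drSenderCfg.setConnectionConfiguration(new DrSenderConnectionConfiguration()
        .setDataCenterId((byte) 2).setReceiverAddresses("172.25.4.200:50001"));

GridGainConfiguration ggCfg = new GridGainConfiguration();
ggCfg.setDataCenterId((byte) 1);
ggCfg.setDrSenderConfiguration(drSenderCfg);
ggCfg.setDrReceiverConfiguration(new DrReceiverConfiguration().setLocalInboundPort(50001));

IgniteConfiguration igniteCfg = new IgniteConfiguration();
igniteCfg.setPluginConfigurations(ggCfg);

Ignite ignite = Ignition.start(igniteCfg);
```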

Encryption

There are two ways to encrypt communication between the master and replica clusters: you can reuse the SSL context factory configured on the node itself (controlled by the useIgniteSslContextFactory property), or you can provide a dedicated SslContextFactory in the replication configuration.

In this section, we’ll explain the second method.

To enable SSL for data replication, you need to provide an SslContextFactory in both sender node and receiver node configurations.

<bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
    <!-- we recommend that sender nodes be clients -->
    <property name="clientMode" value="true"/>
    <property name="pluginConfigurations">
        <list>
            <bean class="org.gridgain.grid.configuration.GridGainConfiguration">
                <!-- Unique ID of this cluster -->
                <property name="dataCenterId" value="1"/>
                <property name="drSenderConfiguration">
                    <bean class="org.gridgain.grid.configuration.DrSenderConfiguration">
                        <property name="sslContextFactory">
                            <bean class="org.apache.ignite.ssl.SslContextFactory">
                                <property name="keyStoreFilePath" value="/path/dr/server.jks"/>
                                <property name="keyStorePassword" value="123456"/>
                                <property name="trustStoreFilePath" value="/path/dr/trust.jks"/>
                                <property name="trustStorePassword" value="123456"/>
                            </bean>
                        </property>
                        <!-- this node is part of group1 -->
                        <property name="senderGroups">
                            <list>
                                <value>group1</value>
                            </list>
                        </property>
                        <!-- connection configuration -->
                        <property name="connectionConfiguration">
                            <bean class="org.gridgain.grid.dr.DrSenderConnectionConfiguration">
                                <!-- the ID of the remote cluster -->
                                <property name="dataCenterId" value="2"/>
                                <!-- Addresses of the remote cluster's nodes this node will connect to -->
                                <property name="receiverAddresses">
                                    <list>
                                        <value>172.25.4.200:50001</value>
                                    </list>
                                </property>
                            </bean>
                        </property>
                    </bean>
                </property>
            </bean>
        </list>
    </property>
</bean>
DrSenderConfiguration drSenderCfg = new DrSenderConfiguration();
drSenderCfg.setSenderGroups("group1");

// the addresses of the remote data center's nodes that will receive data updates
drSenderCfg.setConnectionConfiguration(new DrSenderConnectionConfiguration().setDataCenterId((byte) 2)
        .setReceiverAddresses("172.25.4.200:50001"));

// use the dedicated SSL context factory configured below instead of the node's one
drSenderCfg.setUseIgniteSslContextFactory(false);

SslContextFactory sslContextFactory = new SslContextFactory();
sslContextFactory.setKeyStoreFilePath("/path/to/server.jks");
sslContextFactory.setKeyStorePassword("123456".toCharArray());

sslContextFactory.setTrustStoreFilePath("/path/to/trust.jks");
sslContextFactory.setTrustStorePassword("123456".toCharArray());

drSenderCfg.setSslContextFactory(sslContextFactory);
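The receiver side needs a matching factory. The sketch below assumes that DrReceiverConfiguration exposes the same sslContextFactory and useIgniteSslContextFactory properties as the sender configuration; check the DrReceiverConfiguration javadoc for the exact setters.

```java
// Sketch: receiver-side SSL, mirroring the sender configuration above.
SslContextFactory sslContextFactory = new SslContextFactory();
sslContextFactory.setKeyStoreFilePath("/path/to/server.jks");
sslContextFactory.setKeyStorePassword("123456".toCharArray());
sslContextFactory.setTrustStoreFilePath("/path/to/trust.jks");
sslContextFactory.setTrustStorePassword("123456".toCharArray());

DrReceiverConfiguration drReceiverCfg = new DrReceiverConfiguration();
drReceiverCfg.setLocalInboundPort(50001);
drReceiverCfg.setUseIgniteSslContextFactory(false);
drReceiverCfg.setSslContextFactory(sslContextFactory);
```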

Sender Storage

Before updates are sent to the replica cluster, they are accumulated in a sender storage in the master cluster. The sender storage is configured on a sender node as part of the connection configuration. If you replicate to multiple clusters, we recommend configuring a separate storage for each connection.

When the sender storage gets full (for example, because the connection to the replica cluster is broken) or corrupted, the replication process stops. You can listen to the EVT_DR_STORE_OVERFLOW and EVT_DR_STORE_CORRUPTED events to get notified when this happens. Refer to the Data Replication Events section.

GridGain ships with two implementations of the sender storage:

  • DrSenderFsStore - A persistent storage that stores data updates on disk and can survive sender node restarts. This implementation is used by default.

  • DrSenderInMemoryStore - An in-memory storage that stores data updates in RAM.

To configure a sender storage, use the DrSenderConnectionConfiguration.setStore(store) method. Refer to the examples below for details.

Disk-Based Storage

The directory where cache updates are stored can be configured by setting the DrSenderFsStore.directoryPath property. The property has a default value, but we recommend setting it explicitly.

<bean class="org.gridgain.grid.configuration.DrSenderConfiguration">
    <!-- this node is part of group1 -->
    <property name="senderGroups">
        <list>
            <value>group1</value>
        </list>
    </property>
    <!-- connection configuration -->
    <property name="connectionConfiguration">
        <bean class="org.gridgain.grid.dr.DrSenderConnectionConfiguration">
            <!-- dr storage -->
            <property name="store">
                <bean class="org.gridgain.grid.dr.store.fs.DrSenderFsStore">
                    <property name="directoryPath" value="/path/to/store"/>
                </bean>
            </property>
            <!-- the ID of the remote cluster -->
            <property name="dataCenterId" value="2"/>
            <!-- Addresses of the remote cluster's nodes this node will connect to -->
            <property name="receiverAddresses">
                <list>
                    <value>172.25.4.200:50001</value>
                </list>
            </property>
        </bean>
    </property>
</bean>
DrSenderConfiguration drSenderCfg = new DrSenderConfiguration();
drSenderCfg.setSenderGroups("group1");

// the addresses of the remote data center's nodes that will receive data updates
DrSenderConnectionConfiguration senderConnectionCfg = new DrSenderConnectionConfiguration()
        .setDataCenterId((byte) 2).setReceiverAddresses("172.25.4.200:50001");
senderConnectionCfg.setStore(new DrSenderFsStore().setDirectoryPath("/path/to/store"));

drSenderCfg.setConnectionConfiguration(senderConnectionCfg);

In-Memory Storage

When the in-memory implementation of the sender storage is used and the sender node goes down, replication stops. All updates that were in the storage at that point are lost, and you have to perform a full state transfer before resuming replication.
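With that caveat in mind, the in-memory storage is configured the same way as the disk-based one, via DrSenderConnectionConfiguration.setStore(store). A sketch (the exact package of DrSenderInMemoryStore is not shown in this chapter; check the javadoc):

```java
DrSenderConnectionConfiguration senderConnectionCfg = new DrSenderConnectionConfiguration()
        .setDataCenterId((byte) 2).setReceiverAddresses("172.25.4.200:50001");

// Updates are kept in RAM only; they are lost if the sender node goes down.
senderConnectionCfg.setStore(new DrSenderInMemoryStore());
```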

3. Configure Caches

Once the connection between the clusters is configured, you can configure a specific cache to be replicated through this connection. This is done by specifying the sender group name in the cache’s configuration.

To configure replication for a specific cache, you must complete the following steps:

  • In the master cluster: Specify a sender group in the configuration of the cache. Data updates in that cache will be sent to the replica cluster through that sender group.

  • In the replica cluster: Create a cache with the same name.

  • Repeat this procedure for all caches you want to replicate.

Below is an example configuration of a cache in the master cluster. The cache will be replicated through the sender group ("group1") that we defined in the 2. Configure Connection Between Clusters section.

<property name="cacheConfiguration">
    <bean class="org.apache.ignite.configuration.CacheConfiguration">
        <!-- Setting up basic cache parameters -->
        <property name="name" value="myCache"/>
        <property name="cacheMode" value="PARTITIONED"/>
        <property name="backups" value="1"/>

        <property name="pluginConfigurations">
            <bean class="org.gridgain.grid.configuration.GridGainCacheConfiguration">
                <!--
                    Activate cache replication.
                -->
                <property name="drSenderConfiguration">
                    <bean class="org.gridgain.grid.cache.dr.CacheDrSenderConfiguration">
                        <property name="senderGroup" value="group1"/>
                    </bean>
                </property>

                <!-- Other parameters. -->
            </bean>
        </property>
    </bean>
</property>
CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>();

cfg.setCacheMode(CacheMode.PARTITIONED);
cfg.setName("myCache");
cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);

// sender cache configuration
CacheDrSenderConfiguration cacheDrSenderCfg = new CacheDrSenderConfiguration();

//set the name of the sender group
cacheDrSenderCfg.setSenderGroup("group1");
cacheDrSenderCfg.setBatchSendSize(4 * 1024);

GridGainCacheConfiguration ggCacheCfg = new GridGainCacheConfiguration();
ggCacheCfg.setDrSenderConfiguration(cacheDrSenderCfg);

cfg.setPluginConfigurations(ggCacheCfg);

IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);
var cfg = new CacheConfiguration()
{
    CacheMode = CacheMode.Partitioned,
    Name = "myCache",
    AtomicityMode = CacheAtomicityMode.Atomic,
    PluginConfigurations = new[]
    {
        new GridGainCachePluginConfiguration
        {
            DrSenderConfiguration = new CacheDrSenderConfiguration
            {
                SenderGroup = "group1",
                BatchSendSize = 4 * 1024
            }
        }
    }
};

var cache = ignite.GetOrCreateCache<int, string>(cfg);

The following table lists some parameters of CacheDrSenderConfiguration that you may want to change:

Parameter Description

stateTransferThrottleBytes

The maximum allowed total size of the unprocessed data that is waiting to be transmitted from the data node to the sender node as part of a full state transfer process for a given cache. When this value is exceeded, the full state transfer is paused until some messages are processed. The default value is 10MB.

For the complete list of parameters, refer to the CacheDrSenderConfiguration javadoc.
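For example, to pause a full state transfer earlier than the 10 MB default, the parameter can be set on the sender cache configuration. The setter name below is assumed to mirror the parameter name; verify it against the javadoc.

```java
CacheDrSenderConfiguration cacheDrSenderCfg = new CacheDrSenderConfiguration();
cacheDrSenderCfg.setSenderGroup("group1");

// Assumed setter: pause the full state transfer once 5 MB of unprocessed
// data is waiting to be shipped from the data node to the sender node.
cacheDrSenderCfg.setStateTransferThrottleBytes(5 * 1024 * 1024);
```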

The replica cluster must have a corresponding cache with the same name. Define a cache in the replica cluster and activate the GridGainCacheConfiguration plugin in the cache’s configuration.

Below is an example configuration of such a cache in the replica cluster:

<property name="cacheConfiguration">
    <bean class="org.apache.ignite.configuration.CacheConfiguration">
        <!-- Setting up basic cache parameters -->
        <property name="name" value="myCache"/>
        <property name="cacheMode" value="PARTITIONED"/>
        <property name="backups" value="1"/>

        <!-- Setting up DR related cache parameters -->
        <property name="pluginConfigurations">
            <bean class="org.gridgain.grid.configuration.GridGainCacheConfiguration">


            </bean>
        </property>
    </bean>
</property>

Dynamically Created Caches

If you want to replicate a dynamically created cache, you need to set the sender group for that cache and make sure that the corresponding cache is created in the replica cluster. The following procedure outlines the steps involved in this process.

  1. For caches created via the Java API, specify a sender group by calling the CacheDrSenderConfiguration.setSenderGroup(String) method, as shown in the example below.

    CacheDrSenderConfiguration senderCfg = new CacheDrSenderConfiguration();
    
    //setting the sender group name
    senderCfg.setSenderGroup("group1");
    
    GridGainCacheConfiguration cachePluginCfg = new GridGainCacheConfiguration()
            .setDrSenderConfiguration(senderCfg);
    
    CacheConfiguration cacheCfg = new CacheConfiguration<>().setPluginConfigurations(cachePluginCfg);
    var cacheCfg = new CacheConfiguration()
    {
        Name = "myCache",
        PluginConfigurations = new[]
        {
            new GridGainCachePluginConfiguration
            {
                DrSenderConfiguration = new CacheDrSenderConfiguration
                {
                    //setting the sender group name
                    SenderGroup = "group1"
                }
            }
        }
    };

    If you create caches using the CREATE TABLE command, the only way to specify the sender group name is to use a predefined cache template. You can create a cache template with the desired sender group (and other replication-specific properties) and pass it as a parameter to the CREATE TABLE command. The created cache will have the properties of the specified template. For more information and examples on how to use cache templates, see the Cache Template page.

  2. Create a similar cache in the replica(s) that will receive replicated data.

  3. Start using the cache. The data will be sent to the replica.

  4. If you started putting data into the cache before creating its counterpart in the replica(s), you have to transfer the cache’s content to the replica. This will ensure that the remote caches have exactly the same data and sending updates will not cause any issues.

    To do a state transfer for the new cache, use the following code snippet:

GridGain gg = ignite.plugin(GridGain.PLUGIN_NAME);

GridDr dr = gg.dr();

//perform a full state transfer for the "myCache" cache to Cluster 2
dr.stateTransfer("myCache", (byte) 2);
// Gets data center replication API
var dr = ignite.GetDataCenterReplication();

var task = dr.StartStateTransferAsync("myCache", 2);

// Wait while the state transfer is being completed.
task.Wait();

Existing Caches

If you have caches that had been created before you configured replication, there is no way to add data replication parameters to those caches, because cache configuration cannot be changed dynamically. To overcome this limitation, you can use the GG_DR_FORCE_DC_ID system property.

When you restart the cluster with this property enabled, all caches that do not have data replication parameters will acquire the default replication configuration and will be replicated via the default sender group (named "<default>").

The property must be set to the cluster ID of the current master cluster (the value of GridGainConfiguration.dataCenterId).

If you remove the GG_DR_FORCE_DC_ID property and restart the cluster, the caches without data replication configuration will stop being replicated.
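For example, if the master cluster’s ID is 1, the property can be passed as a JVM system property when starting each node. The launch command below is illustrative; adapt it to however you start your nodes.

```shell
# Pass the cluster ID as a JVM system property via the node start script.
ignite.sh -J-DGG_DR_FORCE_DC_ID=1 /path/to/node-config.xml
```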

Filtering Cache Entries

By default, all entries of the cache are replicated to the replica cluster. However, you can prevent specific entries from being replicated by providing an entry filter.

An entry filter is set per cache in the master cluster. If the filter returns true, the entry will be replicated. The following code example demonstrates how to set an entry filter.

CacheDrSenderConfiguration cacheDrSenderCfg = new CacheDrSenderConfiguration();

cacheDrSenderCfg.setEntryFilter(new CacheDrEntryFilter<Long, String>() {

    @Override
    public boolean accept(CacheDrEntry<Long, String> entry) {

        // ...

        //return true to replicate the entry
        return true;
    }
});

GridGainCacheConfiguration ggCacheCfg = new GridGainCacheConfiguration();
ggCacheCfg.setDrSenderConfiguration(cacheDrSenderCfg);

CacheConfiguration<Long, String> cacheCfg = new CacheConfiguration<Long, String>("myCache");
cacheCfg.setPluginConfigurations(ggCacheCfg);

IgniteCache<Long, String> cache = ignite.getOrCreateCache(cacheCfg);
[Serializable]
class CacheDrEntryFilter : ICacheDrEntryFilter<long, string>
{
    public bool Invoke(ICacheDrEntry<long, string> entry)
    {
        //return true to replicate the entry
        return true;
    }
}

// ...

var cacheCfg = new CacheConfiguration()
{
    Name = "myCache",
    PluginConfigurations = new[]
    {
        new GridGainCachePluginConfiguration
        {
            DrSenderConfiguration = new CacheDrSenderConfiguration
            {
                EntryFilter = new CacheDrEntryFilter()
            }
        }
    }
};

ignite.GetOrCreateCache<int, string>(cacheCfg);

Conflict Resolution

Conflicts should not occur in the active-passive replication mode because the replica cluster is only meant to receive updates from the master and does not handle user requests. Therefore, make sure that the replica cluster does not receive user requests that change the entries in the cache.

Conflicts can occur in the active-active replication mode because both clusters replicate data to each other as well as handle user requests. This can lead to a situation when a cache entry is updated in both clusters to different values by user requests. Since the caches must have identical content, one value has to be rejected. We refer to this situation as a conflict. To resolve the conflict, you need to implement a custom conflict resolver. Once configured, the conflict resolver is called when an update comes from the remote cluster.

A conflict resolver must implement the CacheConflictResolver interface. The following example shows how to configure a conflict resolver.

CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>();

GridGainCacheConfiguration cacheCfg = new GridGainCacheConfiguration();

cacheCfg.setConflictResolver(new CacheConflictResolver<Integer, String>() {
    @Override
    public void resolve(CacheConflictContext<Integer, String> cacheConflictContext) {
        //always use the value that comes from the remote cluster
        cacheConflictContext.useNew();
    }
});

cfg.setPluginConfigurations(cacheCfg);

IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);

Expiry Policy

Data Center Replication will not work correctly if the cache in the master cluster and its counterpart in the replica cluster use different expiry policies, or if an access-based expiry policy is used.

Here is what happens in both cases when an expiry policy is used for a cache:

  • Each entry has an expiration time that is defined by the expiry policy. When an entry is created, the expiration time of that entry is calculated by the master cluster and propagated to the replica cluster(s) together with the entry. The expiry policy itself is not transferred. In this case, the entry in the replica will expire based on the expiration time calculated by the master cluster.

  • If the expiry policy is based on the last access time, reading an entry in the master cluster updates its expiration time. However, the expiration time in the replica cluster is not updated, because read operations do not change data and are therefore not propagated to the replica. As a result, the entry in the replica cluster will be deleted when it expires, while the entry in the master cluster won’t.