Configuring Replication

This chapter explains how to configure your GridGain clusters for data replication.

Enabling Replication

To enable replication, you need to perform the following steps:

1. Set Unique Cluster ID

Each cluster participating in the replication process must have a unique ID. The ID is specified in the configuration of each node and must be the same on all nodes of a given cluster.

The following configuration example shows how to set the cluster ID.

<bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
    <property name="pluginConfigurations">
        <list>
            <bean class="org.gridgain.grid.configuration.GridGainConfiguration">
                <!-- Unique ID of this cluster -->
                <property name="dataCenterId" value="1"/>
            </bean>
        </list>
    </property>
</bean>
IgniteConfiguration cfg = new IgniteConfiguration();

GridGainConfiguration ggCfg = new GridGainConfiguration();

ggCfg.setDataCenterId((byte) 1);

cfg.setPluginConfigurations(ggCfg);

Ignite ignite = Ignition.start(cfg);
var cfg = new IgniteConfiguration
{
    PluginConfigurations = new[]
    {
         new GridGainPluginConfiguration()
         {
             DataCenterId = 1
         }
     }
};

var ignite = Ignition.Start(cfg);
This API is not presently available for C++. You can use XML configuration.

Make sure to set the cluster ID on every node in the master cluster. Similarly, set a different cluster ID on every node of the replica cluster.

2. Configure Connection Between Clusters

To connect the master cluster to the replica, configure a few sender nodes in the master cluster and the same number of receiver nodes in the replica cluster. A sender group is a logical group of nodes identified by a name, in which each node is configured to connect to the nodes in the replica. To configure a sender group, specify the name of the group in the configuration of each node you want to be a part of the group.

We recommend that you use client nodes as senders and receivers.

Sender Nodes

A sender node is a node that can connect to the replica cluster and send data to it. Configure each node that you want to be a part of a specific sender group. Basic sender node configuration involves two properties:

  • The name of the sender group this node belongs to. This property is optional. If it’s not specified, the node will be included in the default group, named "<default>", which includes all nodes configured as sender nodes.

  • The connection configuration that establishes a connection to the replica cluster. You can have multiple connection configurations, each connecting to different nodes.

The connection configuration parameters are defined in a DrSenderConnectionConfiguration object. The following example illustrates how to configure a node to be a part of the sender group called "group1" and connect to two remote nodes.

<bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
    <!-- we recommend that sender nodes be clients -->
    <property name="clientMode" value="true"/>
    <property name="pluginConfigurations">
        <list>
            <bean class="org.gridgain.grid.configuration.GridGainConfiguration">
                <!-- Unique ID of this cluster -->
                <property name="dataCenterId" value="1"/>
                <property name="drSenderConfiguration">
                    <bean class="org.gridgain.grid.configuration.DrSenderConfiguration">
                        <!-- this node is part of group1 -->
                        <property name="senderGroups">
                            <list>
                                <value>group1</value>
                            </list>
                        </property>
                        <!-- connection configuration -->
                        <property name="connectionConfiguration">
                            <bean class="org.gridgain.grid.dr.DrSenderConnectionConfiguration">
                                <!-- the ID of the remote cluster -->
                                <property name="dataCenterId" value="2"/>
                                <!-- Addresses of the remote cluster's nodes this node will connect to -->
                                <property name="receiverAddresses">
                                    <list>
                                        <value>172.25.4.200:50001</value>
                                    </list>
                                </property>
                            </bean>
                        </property>
                    </bean>
                </property>
            </bean>
        </list>
    </property>
</bean>
DrSenderConfiguration drSenderCfg = new DrSenderConfiguration();
drSenderCfg.setSenderGroups("group1");

// the addresses of the remote replica cluster's nodes that will receive data updates 
drSenderCfg.setConnectionConfiguration(new DrSenderConnectionConfiguration().setDataCenterId((byte) 2)
        .setReceiverAddresses("172.25.4.200:50001"));

GridGainConfiguration ggCfg = new GridGainConfiguration();
ggCfg.setDataCenterId((byte) 1);
ggCfg.setDrSenderConfiguration(drSenderCfg);

IgniteConfiguration igniteCfg = new IgniteConfiguration();

igniteCfg.setPluginConfigurations(ggCfg);

Ignite ignite = Ignition.start(igniteCfg);
var cfg = new IgniteConfiguration
{
    PluginConfigurations = new[]
    {
        new GridGainPluginConfiguration
        {
            DataCenterId = 1,
            DrSenderConfiguration = new DrSenderConfiguration
            {
                SenderGroups = new[] { "group1" },
                ConnectionConfiguration = new[]
                {
                    new DrSenderConnectionConfiguration
                    {
                        DataCenterId = 2,
                        // the addresses of the remote data center's nodes that will receive data updates
                        ReceiverAddresses = new[] { "172.25.4.200:50001" }
                    }
                }
            }
        }
    }
};

var ignite = Ignition.Start(cfg);
This API is not presently available for C++. You can use XML configuration.

In the example above, the sender node will connect to one node in the replica cluster. For redundancy purposes, you can specify the addresses of multiple nodes. The sender node will try to load balance between all of them.

 <property name="connectionConfiguration">
    <bean class="org.gridgain.grid.dr.DrSenderConnectionConfiguration">

        <!-- the ID of the replica cluster -->
        <property name="dataCenterId" value="2"/>

        <!-- Addresses of the replica cluster's nodes that will receive data -->
         <property name="receiverAddresses">
            <list>
                <value>172.25.4.200:50001</value>
                <value>172.25.4.201:50001</value>
            </list>
        </property>
    </bean>
</property>
DrSenderConfiguration drSenderCfg = new DrSenderConfiguration();
drSenderCfg.setSenderGroups("group1");

// the addresses of the remote replica cluster's nodes that will receive data updates 
drSenderCfg.setConnectionConfiguration(new DrSenderConnectionConfiguration().setDataCenterId((byte) 2)
        .setReceiverAddresses("172.25.4.200:50001", "172.25.4.201:50001"));
var cfg = new IgniteConfiguration
{
    PluginConfigurations = new[]
    {
        new GridGainPluginConfiguration
        {
            DataCenterId = 1,
            DrSenderConfiguration = new DrSenderConfiguration
            {
                SenderGroups = new[] { "group1" },
                ConnectionConfiguration = new[]
                {
                    new DrSenderConnectionConfiguration
                    {
                        DataCenterId = 2,
                        // the addresses of the remote data center's nodes that will receive data updates
                        ReceiverAddresses = new[] { "172.25.4.200:50001", "172.25.4.201:50001" }
                    }
                }
            }
        }
    }
};
This API is not presently available for C++. You can use XML configuration.

The connection parameters that can be specified in the DrSenderConnectionConfiguration object include:

Parameter Description

dataCenterId

The ID of the replica cluster.

receiverAddresses

A list of endpoints in the replica cluster this node will connect to. The nodes in the replica must be configured to listen on the specified ports. See Receiver Nodes.

localOutboundAddress

The network interface to use for cross-cluster communication.

loadBalancingMode

The load balancing policy that is used to distribute batches across the nodes in the replica cluster if more than one address is configured. Possible values include:

  • DR_RANDOM — Default. Random load balancing policy.

  • DR_ROUND_ROBIN — Round-robin load balancing policy.
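As an illustration, the connection parameters above can be combined in a single configuration. The following Java sketch assumes the GridGain plugin API used in the earlier examples; the DrSenderLoadBalancingMode enum name, the interface address, and the fluent setters should be verified against your version's javadoc:

```java
// Sketch: a sender connection that pins the local outbound interface and
// load-balances batches across two receiver addresses in round-robin order.
// Class and enum names follow the GridGain plugin API (verify in the javadoc).
DrSenderConnectionConfiguration connCfg = new DrSenderConnectionConfiguration()
        .setDataCenterId((byte) 2)
        .setReceiverAddresses("172.25.4.200:50001", "172.25.4.201:50001")
        .setLocalOutboundAddress("10.0.0.15") // illustrative interface address
        .setLoadBalancingMode(DrSenderLoadBalancingMode.DR_ROUND_ROBIN);
```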

Receiver Nodes

For each sender node in the master cluster, your replica cluster must have a node (or multiple nodes) that will receive data from that sender node. This node is called a receiver node.

To configure a node as a receiver, define the drReceiverConfiguration property of the GridGainConfiguration plugin. Below is an example of a receiver node configuration.

<bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
    <!-- we recommend that receiver nodes be clients -->
    <property name="clientMode" value="true"/>

    <property name="pluginConfigurations">
        <list>
            <bean class="org.gridgain.grid.configuration.GridGainConfiguration">
                <!-- Unique ID of this cluster -->
                <property name="dataCenterId" value="2"/>
                <!--
                Setting up receiver node specific parameters.
                -->
                <property name="drReceiverConfiguration">
                    <bean class="org.gridgain.grid.configuration.DrReceiverConfiguration">
                        <!-- TCP port receiver node of this cluster is bound to. -->
                        <property name="localInboundPort" value="50001"/>

                        <property name="messageQueueLimit" value="10000"/>
                    </bean>
                </property>
            </bean>
        </list>
    </property>
    <!-- other properties -->
</bean>
IgniteConfiguration igniteCfg = new IgniteConfiguration();

GridGainConfiguration ggCfg = new GridGainConfiguration();

ggCfg.setDrReceiverConfiguration(new DrReceiverConfiguration().setLocalInboundPort(50001));

igniteCfg.setPluginConfigurations(ggCfg);

Ignite ignite = Ignition.start(igniteCfg);
var cfg = new IgniteConfiguration
{
    PluginConfigurations = new[]
    {
        new GridGainPluginConfiguration
        {
            DataCenterId = 2,
            DrReceiverConfiguration = new DrReceiverConfiguration
            {
                LocalInboundPort = 50001
            }
        }
    }
};

var ignite = Ignition.Start(cfg);
This API is not presently available for C++. You can use XML configuration.

Below are some important parameters of the receiver node configuration:

Parameter Description

messageQueueLimit

The maximum number of unprocessed batches in the queue on the receiver node. A growing queue means the receiver node is getting more data than it can handle. When the limit is exceeded, the node stops accepting new messages from the master cluster until the queue size drops below the limit. The default value is 0, which means the queue can grow indefinitely. We recommend setting this parameter to 10,000 to avoid OutOfMemoryError on the receiver node.

For the complete list of parameters, refer to the DrReceiverConfiguration javadoc.
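For example, messageQueueLimit can be set together with the inbound port from the Java API; a sketch assuming a setMessageQueueLimit setter that mirrors the XML property shown above:

```java
// Sketch: receiver configuration with a bounded queue of unprocessed batches.
// setMessageQueueLimit is assumed to mirror the messageQueueLimit XML property.
DrReceiverConfiguration drReceiverCfg = new DrReceiverConfiguration();
drReceiverCfg.setLocalInboundPort(50001);
drReceiverCfg.setMessageQueueLimit(10_000); // recommended cap; 0 = unbounded
```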

Active-Active Mode

In the active-active mode, both clusters send data to each other. It means that each cluster must have sender nodes and receiver nodes. The sender nodes in cluster 1 will connect to the receivers in cluster 2, and the sender nodes in cluster 2 will connect to the receiver nodes in cluster 1.

You can configure the same node to be both a sender and a receiver by combining the configuration properties described in the two previous sections.

<bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
    <!-- we recommend that sender nodes be clients -->
    <property name="clientMode" value="true"/>
    <property name="pluginConfigurations">
        <list>
            <bean class="org.gridgain.grid.configuration.GridGainConfiguration">
                <!-- Unique ID of this cluster -->
                <property name="dataCenterId" value="1"/>
                <property name="drSenderConfiguration">
                    <bean class="org.gridgain.grid.configuration.DrSenderConfiguration">
                        <!-- this node is part of group1 -->
                        <property name="senderGroups">
                            <list>
                                <value>group1</value>
                            </list>
                        </property>
                        <!-- connection configuration -->
                        <property name="connectionConfiguration">
                            <bean class="org.gridgain.grid.dr.DrSenderConnectionConfiguration">
                                <!-- the ID of the remote cluster -->
                                <property name="dataCenterId" value="2"/>
                                <!-- Addresses of the remote cluster's nodes this node will connect to -->
                                <property name="receiverAddresses">
                                    <list>
                                        <value>172.25.4.200:50001</value>
                                    </list>
                                </property>
                            </bean>
                        </property>
                    </bean>
                </property>
                <!-- receiver node parameters -->
                <property name="drReceiverConfiguration">
                    <bean class="org.gridgain.grid.configuration.DrReceiverConfiguration">
                        <!-- TCP port receiver node of this cluster is bound to. -->
                        <property name="localInboundPort" value="50001"/>
                    </bean>
                </property>
            </bean>
        </list>
    </property>
</bean>

Encryption

There are two ways to encrypt communication between the master and replica clusters: you can reuse the SSL configuration of the underlying cluster (controlled by the useIgniteSslContextFactory property), or you can configure SSL specifically for the replication connection. In this section, we explain the second method.

To enable SSL for data replication, you need to provide an SslContextFactory in both sender node and receiver node configurations.

<bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
    <!-- we recommend that sender nodes be clients -->
    <property name="clientMode" value="true"/>
    <property name="pluginConfigurations">
        <list>
            <bean class="org.gridgain.grid.configuration.GridGainConfiguration">
                <!-- Unique ID of this cluster -->
                <property name="dataCenterId" value="1"/>
                <property name="drSenderConfiguration">
                    <bean class="org.gridgain.grid.configuration.DrSenderConfiguration">
                        <property name="sslContextFactory">
                            <bean class="org.apache.ignite.ssl.SslContextFactory">
                                <property name="keyStoreFilePath" value="/path/dr/server.jks"/>
                                <property name="keyStorePassword" value="123456"/>
                                <property name="trustStoreFilePath" value="/path/dr/trust.jks"/>
                                <property name="trustStorePassword" value="123456"/>
                            </bean>
                        </property>
                        <!-- this node is part of group1 -->
                        <property name="senderGroups">
                            <list>
                                <value>group1</value>
                            </list>
                        </property>
                        <!-- connection configuration -->
                        <property name="connectionConfiguration">
                            <bean class="org.gridgain.grid.dr.DrSenderConnectionConfiguration">
                                <!-- dr storage -->
                                <property name="store">
                                    <bean class="org.gridgain.grid.dr.store.fs.DrSenderFsStore">
                                        <property name="directoryPath" value="/path/to/store"/>
                                    </bean>
                                </property>

                                <!-- the ID of the remote cluster -->
                                <property name="dataCenterId" value="2"/>
                                <!-- Addresses of the remote cluster's nodes this node will connect to -->
                                <property name="receiverAddresses">
                                    <list>
                                        <value>172.25.4.200:50001</value>
                                    </list>
                                </property>
                            </bean>
                        </property>
                    </bean>
                </property>
            </bean>
        </list>
    </property>
</bean>
DrSenderConfiguration drSenderCfg = new DrSenderConfiguration();
drSenderCfg.setSenderGroups("group1");

// the addresses of the remote replica cluster's nodes that will receive data updates 
drSenderCfg.setConnectionConfiguration(new DrSenderConnectionConfiguration().setDataCenterId((byte) 2)
        .setReceiverAddresses("172.25.4.200:50001"));

drSenderCfg.setUseIgniteSslContextFactory(false);

SslContextFactory sslContextFactory = new SslContextFactory();
sslContextFactory.setKeyStoreFilePath("/path/to/server.jks");
sslContextFactory.setKeyStorePassword("123456".toCharArray());

sslContextFactory.setTrustStoreFilePath("/path/to/trust.jks");
sslContextFactory.setTrustStorePassword("123456".toCharArray());

drSenderCfg.setSslContextFactory(sslContextFactory);

3. Configure Caches

Once the connection between the clusters is configured, you can configure a specific cache to be replicated through this connection. This is done by specifying the sender group name in the cache’s configuration.

To configure replication for a specific cache, you must complete the following steps:

  • In the master cluster: Specify a sender group in the configuration of the cache. Data updates in that cache will be sent to the replica cluster through that sender group.

  • In the replica cluster: Create a cache with the same name.

  • Repeat this procedure for all caches you want to replicate.

Below is an example configuration of a cache in the master cluster. The cache will be replicated through the sender group ("group1") that we defined in the 2. Configure Connection Between Clusters section.

<property name="cacheConfiguration">
    <bean class="org.apache.ignite.configuration.CacheConfiguration">
        <!-- Setting up basic cache parameters -->
        <property name="name" value="myCache"/>
        <property name="cacheMode" value="PARTITIONED"/>
        <property name="backups" value="1"/>

        <property name="pluginConfigurations">
            <bean class="org.gridgain.grid.configuration.GridGainCacheConfiguration">
                <!--
                    Activate cache replication.
                -->
                <property name="drSenderConfiguration">
                    <bean class="org.gridgain.grid.cache.dr.CacheDrSenderConfiguration">
                        <property name="senderGroup" value="group1"/>
                    </bean>
                </property>

                <!-- Other parameters. -->
            </bean>
        </property>
    </bean>
</property>
CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>();

cfg.setCacheMode(CacheMode.PARTITIONED);
cfg.setName("myCache");
cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);

// sender cache configuration
CacheDrSenderConfiguration cacheDrSenderCfg = new CacheDrSenderConfiguration();

//set the name of the sender group
cacheDrSenderCfg.setSenderGroup("group1");
cacheDrSenderCfg.setBatchSendSize(4 * 1024);

GridGainCacheConfiguration ggCacheCfg = new GridGainCacheConfiguration();
ggCacheCfg.setDrSenderConfiguration(cacheDrSenderCfg);

cfg.setPluginConfigurations(ggCacheCfg);

IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);
var cfg = new CacheConfiguration()
{
    CacheMode = CacheMode.Partitioned,
    Name = "myCache",
    AtomicityMode = CacheAtomicityMode.Atomic,
    PluginConfigurations = new[]
    {
        new GridGainCachePluginConfiguration
        {
            DrSenderConfiguration = new CacheDrSenderConfiguration
            {
                SenderGroup = "group1",
                BatchSendSize = 4 * 1024
            }
        }
    }
};

var cache = ignite.GetOrCreateCache<int, string>(cfg);
This API is not presently available for C++. You can use XML configuration.

For the complete list of parameters, refer to the CacheDrSenderConfiguration javadoc.

The replica cluster must have a corresponding cache with the same name. Define a cache in the replica cluster and activate the GridGainCacheConfiguration plugin in the cache’s configuration. The cache in the replica cluster can have configuration properties that are different from those in the master cluster (because only data is replicated, not configuration parameters). If you want to replicate data to a cache with identical configuration, you must copy the cache configuration parameters from the master cluster.

Below is an example cache configuration in the replica cluster:

<property name="cacheConfiguration">
    <bean class="org.apache.ignite.configuration.CacheConfiguration">
        <!-- Setting up basic cache parameters -->
        <property name="name" value="myCache"/>
        <property name="cacheMode" value="PARTITIONED"/>
        <property name="backups" value="1"/>

        <!-- Setting up DR related cache parameters -->
        <property name="pluginConfigurations">
            <bean class="org.gridgain.grid.configuration.GridGainCacheConfiguration">
                <property name="drReceiverEnabled" value="true"/>
            </bean>
        </property>
    </bean>
</property>
CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>();

cfg.setCacheMode(CacheMode.PARTITIONED);
cfg.setName("myCache");
cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);

GridGainCacheConfiguration ggCacheCfg = new GridGainCacheConfiguration();
ggCacheCfg.setDrReceiverEnabled(true);

cfg.setPluginConfigurations(ggCacheCfg);

IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);

Dynamically Created Caches

If you want to replicate a dynamically created cache, you need to set the sender group for that cache and make sure that the corresponding cache is created in the replica cluster. The following procedure outlines the steps involved in this process.

  1. For caches created via the Java API, specify a sender group by calling the CacheDrSenderConfiguration.setSenderGroup(String) method, as shown in the example below.

    CacheDrSenderConfiguration senderCfg = new CacheDrSenderConfiguration();
    
    //setting the sender group name
    senderCfg.setSenderGroup("group1");
    
    GridGainCacheConfiguration cachePluginCfg = new GridGainCacheConfiguration()
            .setDrSenderConfiguration(senderCfg);
    
    CacheConfiguration cacheCfg = new CacheConfiguration<>().setPluginConfigurations(cachePluginCfg);
    var cacheCfg = new CacheConfiguration()
    {
        Name = "myCache",
        PluginConfigurations = new[]
        {
            new GridGainCachePluginConfiguration
            {
                DrSenderConfiguration = new CacheDrSenderConfiguration
                {
                    //setting the sender group name
                    SenderGroup = "group1"
                }
            }
        }
    };
    This API is not presently available for C++.

    If you create caches using the CREATE TABLE command, the only way to specify the sender group name is to use a predefined cache template. You can create a cache template with the desired sender group (and other replication-specific properties) and pass it as a parameter to the CREATE TABLE command. The created cache will have the properties of the specified template. For more information and examples on how to use cache templates, see the Cache Template page.

  2. Create a similar cache in the replica(s) that will receive replicated data.

  3. Start using the cache. The data will be sent to the replica.

  4. If you started putting data into the cache before creating its counterpart in the replica(s), you have to transfer the cache’s content to the replica. This will ensure that the remote caches have exactly the same data and sending updates will not cause any issues.

    To do a state transfer for the new cache, use the following code snippet:

    GridGain gg = ignite.plugin(GridGain.PLUGIN_NAME);
    
    GridDr dr = gg.dr();
    
    //perform a full state transfer for the "myCache" cache to Cluster 2
    dr.stateTransfer("myCache", (byte) 2);
    // Gets data center replication API
    var dr = ignite.GetDataCenterReplication();
    
    var task = dr.StartStateTransferAsync("myCache", 2);
    
    // Wait while the state transfer is being completed.
    task.Wait();
    This API is not presently available for C++.
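For the CREATE TABLE route mentioned in step 1, the workflow might look like the sketch below. The template name and table definition are illustrative; see the Cache Template page for the exact procedure:

```java
// Sketch: register a cache template that carries the DR sender settings,
// then reference it from CREATE TABLE. Names here are illustrative.
CacheConfiguration<Object, Object> templateCfg = new CacheConfiguration<>("drTemplate");

templateCfg.setPluginConfigurations(new GridGainCacheConfiguration()
        .setDrSenderConfiguration(new CacheDrSenderConfiguration()
                .setSenderGroup("group1")));

// make the template available for SQL DDL
ignite.addCacheConfiguration(templateCfg);

// 'cache' is any existing cache instance used to run the DDL statement;
// the table's underlying cache picks up the template's replication settings
cache.query(new SqlFieldsQuery(
        "CREATE TABLE city (id LONG PRIMARY KEY, name VARCHAR) " +
        "WITH \"template=drTemplate\""));
```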

Filtering Cache Entries

By default, all entries of the cache are replicated to the replica cluster. However, you can prevent specific entries from being replicated by providing an entry filter.

An entry filter is set per cache in the master cluster. If the filter returns true, the entry will be replicated. The following code example demonstrates how to set an entry filter.

CacheDrSenderConfiguration cacheDrSenderCfg = new CacheDrSenderConfiguration();

cacheDrSenderCfg.setEntryFilter(new CacheDrEntryFilter<Long, String>() {

    @Override
    public boolean accept(CacheDrEntry<Long, String> entry) {

        // ...  

        //return true to replicate the entry 
        return true;
    }
});

GridGainCacheConfiguration ggCacheCfg = new GridGainCacheConfiguration();
ggCacheCfg.setDrSenderConfiguration(cacheDrSenderCfg);

CacheConfiguration<Long, String> cacheCfg = new CacheConfiguration<Long, String>("myCache");
cacheCfg.setPluginConfigurations(ggCacheCfg);

IgniteCache<Long, String> cache = ignite.getOrCreateCache(cacheCfg);
[Serializable]
class CacheDrEntryFilter : ICacheDrEntryFilter<long, string>
{
    public bool Invoke(ICacheDrEntry<long, string> entry)
    {
        //return true to replicate the entry
        return true;
    }
}

// ...

var cacheCfg = new CacheConfiguration()
{
    Name = "myCache",
    PluginConfigurations = new[]
    {
        new GridGainCachePluginConfiguration
        {
            DrSenderConfiguration = new CacheDrSenderConfiguration
            {
                EntryFilter = new CacheDrEntryFilter()
            }
        }
    }
};

ignite.GetOrCreateCache<long, string>(cacheCfg);
This API is not presently available for C++.

Full State Transfer over Snapshot

When data center replication is working normally, you can let it handle all data transfer. However, in some rare scenarios, you may want to bring the replica cluster up to date by transferring a snapshot of the master cluster to it, restoring the data, and then performing an incremental state transfer. To do this:

  • Create a full snapshot of the cluster you want to transfer data from (master cluster).

  • Disconnect the remote data center that contains the destination cluster. You can do this by shutting down the receiver nodes in the remote data center.

  • Copy the snapshot to the remote data center and restore it on the destination cluster:

    # Getting a list of all the available snapshots.
    # The output will include snapshots' IDs.
    {gridgain}/bin/snapshot-utility.sh list
    
    # Restoring the cluster to a specific snapshot passing its ID.
    {gridgain}/bin/snapshot-utility.sh restore -id=1483663276482
  • Reconnect the destination cluster.

  • Execute an incremental state transfer, specifying the ID of the snapshot that was restored on the destination cluster, to push the updates that may have occurred since the snapshot was created:

GridGain gg = ignite.plugin(GridGain.PLUGIN_NAME);

GridDr dr = gg.dr();

// push the updates accumulated since the snapshot was created:
// pass the cache name, the ID of the restored snapshot, and the destination cluster ID
dr.incrementalStateTransfer("myCache", 1483663276482L, (byte) 2);

Conflict Resolution

Conflicts should not occur in the active-passive replication mode because the replica cluster is only meant to receive updates from the master and does not handle user requests. Therefore, make sure that the replica cluster does not receive user requests that change the entries in the cache.

Conflicts can occur in the active-active replication mode because both clusters replicate data to each other as well as handle user requests. This can lead to a situation when a cache entry is updated in both clusters to different values by user requests. Since the caches must have identical content, one value has to be rejected. We refer to this situation as a conflict. To resolve the conflict, you need to implement a custom conflict resolver. Once configured, the conflict resolver is called when an update comes from the remote cluster.

A conflict resolver must implement the CacheConflictResolver interface. The following example shows how to configure a conflict resolver.

CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>();

GridGainCacheConfiguration cacheCfg = new GridGainCacheConfiguration();

cacheCfg.setConflictResolver(new CacheConflictResolver<Integer, String>() {
    @Override
    public void resolve(CacheConflictContext<Integer, String> cacheConflictContext) {
        //always use the value that comes from the remote cluster
        cacheConflictContext.useNew();
    }
});

cfg.setPluginConfigurations(cacheCfg);

IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);
This API is not presently available for C#/.NET.
This API is not presently available for C++.

Expiry Policy

Data Center Replication will not work correctly if the cache in the master cluster and its counterpart in the replica cluster use different expiry policies, or if an access-based expiry policy is used.

Here is what happens in both cases when an expiry policy is used for a cache:

  • Each entry has an expiration time that is defined by the expiry policy. When an entry is created, the expiration time of that entry is calculated by the master cluster and propagated to the replica cluster(s) together with the entry. The expiry policy itself is not transferred. In this case, the entry in the replica will expire based on the expiration time calculated by the master cluster.

  • In the case when the expiry policy is based on the last access time, if an entry is read in the master cluster, its expiration time will be updated. However, the expiration time in the replica cluster will not be updated, because read operations are not propagated to the replica (because they don’t change the data). Therefore, the entry in the replica cluster will be deleted when its expiration time ends, while the entry in the master cluster won’t.
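For instance, a creation-time-based policy (which does not depend on reads) can be configured identically on both clusters; a sketch using the standard JCache expiry classes:

```java
// Sketch: a creation-time-based expiry policy, configured identically on the
// master and replica caches so that replicated entries expire consistently.
CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
ccfg.setExpiryPolicyFactory(
        CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 30)));
// Avoid AccessedExpiryPolicy / TouchedExpiryPolicy with replication: reads are
// not propagated, so access-based expiration diverges between the clusters.
```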

Tombstone TTL

To ensure correctness of DR when deletes are present, you may need to tune the DEFAULT_TOMBSTONE_TTL property. The value of this property must be greater than the maximum expected lag between the sender and receiver clusters. It’s usually defined by the maximum amount of time you expect the replication to be down due to external causes. For example, if you need the DR to continue normally after the network between DC1 and DC2 has been down for 1 hour, your DEFAULT_TOMBSTONE_TTL value must be at least 1 hour. If the DR is down for more than DEFAULT_TOMBSTONE_TTL, you may need to perform a full-state transfer to make the clusters consistent again.

Switching to Incremental Replication

Newer clusters use incremental data center replication by default. If you configured replication in an older release (GridGain 8.8.4 or earlier), it may use the older approach. When updating to a newer GridGain version, you need to switch to incremental DR.

To perform the update:

  • Stop Data Center Replication for the cluster in one or both of the following ways:

    • Use the stopReplication command.

    • Stop all sender nodes.

  • Perform a rolling upgrade on the cluster.

  • Re-enable Data Center Replication. Most Data Center Replication configuration parameters, values, and classes are compatible; outdated parameters will be ignored.

  • Perform the same set of actions on each cluster you need to update.