
Persistence Tuning

This article summarizes best practices for tuning Ignite native persistence. If you use an external (third-party) store for persistence, refer to the performance guides from that vendor.

Adjusting Page Size

GridGain’s DataStorageConfiguration.pageSize parameter should be set to a value no less than the lower of the two: the page size of your storage media (SSD, Flash, HDD, etc.) and the cache page size of your operating system.

The operating system’s cache page size can be checked with standard system tools (for example, getconf PAGESIZE on Linux).

The page size of a storage device such as an SSD is usually noted in the device specification. If the manufacturer does not disclose this information, run SSD benchmarks to determine the number, or use 4 KB as GridGain’s page size. Many manufacturers have to adapt their drives to 4 KB random-write workloads because a variety of standard benchmarks use 4 KB by default. This white paper from Intel confirms that 4 KB should be enough.

Once you pick the most optimal page size, apply it in your cluster configuration:

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
      <!-- Set the page size to 4 KB -->
      <property name="pageSize" value="#{4 * 1024}"/>
    </bean>
  </property>

  <!-- Additional settings -->
</bean>
// Ignite configuration.
IgniteConfiguration cfg = new IgniteConfiguration();

// Durable memory configuration.
DataStorageConfiguration storageCfg = new DataStorageConfiguration();

// Changing the page size to 4 KB.
storageCfg.setPageSize(4096);

// Applying the new configuration.
cfg.setDataStorageConfiguration(storageCfg);

Keep WALs Separately

Consider using separate drives for the data files and the write-ahead log (WAL). GridGain actively writes to both the data and WAL files. In addition, other features (such as Point-in-Time Recovery) may also write to the WAL files, requiring even more resources. Thus, by having a separate physical disk device for each, you can double the overall write throughput.

The example below shows how to achieve this:

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
   ...
  <!-- Enabling Ignite Native Persistence. -->
  <property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
      <!--
          Sets a path to the root directory where data and indexes are
          to be persisted. It's assumed the directory is on a separate SSD.
      -->
      <property name="storagePath" value="/var/lib/ignite/persistence"/>

      <!--
          Sets a path to the directory where WAL is stored.
          It's assumed the directory is on a separate HDD.
      -->
      <property name="walPath" value="/wal"/>

      <!--
          Sets a path to the directory where WAL archive is stored.
          The directory is on the same HDD as the WAL.
      -->
      <property name="walArchivePath" value="/wal/archive"/>
    </bean>
  </property>
    ...
</bean>
IgniteConfiguration cfg = new IgniteConfiguration();

// Configuring Ignite Native Persistence.
DataStorageConfiguration storeCfg = new DataStorageConfiguration();

// Sets a path to the root directory where data and indexes are to be persisted.
// It's assumed the directory is on a separate SSD.
storeCfg.setStoragePath("/var/lib/ignite/persistence");

// Sets a path to the directory where WAL is stored.
// It's assumed the directory is on a separate HDD.
storeCfg.setWalPath("/wal");

// Sets a path to the directory where WAL archive is stored.
// The directory is on the same HDD as the WAL.
storeCfg.setWalArchivePath("/wal/archive");

// Applying the new configuration.
cfg.setDataStorageConfiguration(storeCfg);

// Starting the node.
Ignition.start(cfg);

Increasing WAL Segment Size

The default WAL segment size (64 MB) may be inefficient in high load scenarios because it causes WAL to switch between segments too frequently and switching/rotation is a costly operation. Setting the segment size to a higher value (up to 2 GB) may help reduce the number of switching operations. However, the tradeoff is that this will increase the overall volume of the write-ahead log.

See Changing WAL Segment Size for details.
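For reference, the segment size is set through DataStorageConfiguration. The sketch below is illustrative: the 512 MB value is an example, not a recommendation, and the right size depends on your workload.

```java
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration();
DataStorageConfiguration storageCfg = new DataStorageConfiguration();

// Increasing the WAL segment size from the default 64 MB to 512 MB.
// setWalSegmentSize() takes the size in bytes.
storageCfg.setWalSegmentSize(512 * 1024 * 1024);

// Applying the new configuration.
cfg.setDataStorageConfiguration(storageCfg);
```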

Changing WAL Mode

Consider other WAL modes as alternatives to the default mode. Each mode provides a different degree of reliability in case of node failure, and that degree is inversely proportional to speed: the more reliable the WAL mode, the slower it is. Therefore, if your use case does not require high reliability, you can switch to a less reliable mode.

See WAL Modes for more details.
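As an illustration, a less reliable but faster mode can be selected via DataStorageConfiguration.setWalMode(). This is a sketch, assuming your use case can tolerate losing the last few updates on a node failure:

```java
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.WALMode;

IgniteConfiguration cfg = new IgniteConfiguration();
DataStorageConfiguration storageCfg = new DataStorageConfiguration();

// BACKGROUND mode acknowledges updates before the WAL records reach disk,
// trading a small window of possible data loss for higher throughput.
storageCfg.setWalMode(WALMode.BACKGROUND);

// Applying the new configuration.
cfg.setDataStorageConfiguration(storageCfg);
```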

WAL Disabling

There are situations where disabling the WAL can help improve performance; a typical one is an initial bulk data load, after which the WAL should be re-enabled.
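Assuming a running node, the WAL can be switched off per cache at runtime via the IgniteCluster API. A hedged sketch (the cache name "myCache" is hypothetical):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

// Start (or connect to) a node; "myCache" is a hypothetical cache name.
Ignite ignite = Ignition.start();

// Disable the WAL for the cache, e.g. before a bulk data load.
ignite.cluster().disableWal("myCache");

// ... load data ...

// Re-enable the WAL once the load is complete.
ignite.cluster().enableWal("myCache");
```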

Pages Writes Throttling

GridGain periodically starts the checkpointing process, which syncs dirty pages from memory to disk. A dirty page is a page that was updated in RAM but has not yet been written to the respective partition file (the update was only appended to the WAL). This process happens in the background without affecting the application’s logic.

However, if a dirty page that is scheduled for checkpointing is updated before being written to disk, its previous state is copied to a special region called the checkpointing buffer. If the buffer overflows, GridGain stops processing all updates until the checkpointing is over. As a result, write performance can drop to zero, as shown in this diagram, until the checkpointing cycle is completed:

Checkpointing Chainsaw

The same situation occurs if the dirty pages threshold is reached again while the checkpointing is in progress. This will force GridGain to schedule one more checkpointing execution and to halt all the update operations until the first checkpointing cycle is over.

Both situations usually arise when either the disk device is slow or the update rate is too intensive. To mitigate and prevent these performance drops, consider enabling the pages write throttling algorithm. The algorithm slows update operations down to the speed of the disk device whenever the checkpointing buffer fills up too quickly or the percentage of dirty pages soars rapidly.

The example below shows how to enable write throttling:

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
   ...
  <!-- Enabling Ignite Native Persistence. -->
  <property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
      <!-- Enable write throttling. -->
      <property name="writeThrottlingEnabled" value="true"/>
    </bean>
  </property>
    ...
</bean>
IgniteConfiguration cfg = new IgniteConfiguration();

// Configuring Ignite Native Persistence.
DataStorageConfiguration storeCfg = new DataStorageConfiguration();

// Enabling the writes throttling.
storeCfg.setWriteThrottlingEnabled(true);

// Applying the new configuration.
cfg.setDataStorageConfiguration(storeCfg);

// Starting the node.
Ignition.start(cfg);

Adjusting Checkpointing Buffer Size

The size of the checkpointing buffer, explained in the previous section, is one of the checkpointing process triggers.

The default buffer size is calculated as a function of the data region size:

Data Region Size         Default Checkpointing Buffer Size
< 1 GB                   MIN (256 MB, Data_Region_Size)
Between 1 GB and 8 GB    Data_Region_Size / 4
> 8 GB                   2 GB
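The defaults in the table above can be restated as a simple function. The sketch below is a plain-Java illustration of that sizing rule, not GridGain's actual implementation:

```java
public class CheckpointBufferDefaults {
    static final long MB = 1024L * 1024;
    static final long GB = 1024L * MB;

    // Mirrors the table above; this helper is illustrative and
    // is not part of the GridGain/Ignite API.
    static long defaultCheckpointBufferSize(long regionSize) {
        if (regionSize < GB)
            return Math.min(256 * MB, regionSize);  // small regions
        if (regionSize <= 8 * GB)
            return regionSize / 4;                  // mid-size regions
        return 2 * GB;                              // large regions are capped
    }

    public static void main(String[] args) {
        System.out.println(defaultCheckpointBufferSize(512 * MB) / MB + " MB"); // 256 MB
        System.out.println(defaultCheckpointBufferSize(4 * GB) / MB + " MB");   // 1024 MB
        System.out.println(defaultCheckpointBufferSize(32 * GB) / MB + " MB");  // 2048 MB
    }
}
```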

The default buffer size can be suboptimal for some of the write-intensive workloads because the page write throttling algorithm will slow down your writes whenever the size reaches the critical mark. To keep write performance at the desired pace while the checkpointing is in progress, consider increasing DataRegionConfiguration.checkpointPageBufferSize and enabling write throttling to prevent performance​ drops:

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
   ...
  <!-- Enabling Ignite Native Persistence. -->
  <property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
      <!-- Enable write throttling. -->
      <property name="writeThrottlingEnabled" value="true"/>

      <property name="defaultDataRegionConfiguration">
        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
          <!-- Enabling persistence. -->
          <property name="persistenceEnabled" value="true"/>

          <!-- Increasing the buffer size to 1 GB. -->
          <property name="checkpointPageBufferSize"
                    value="#{1024L * 1024 * 1024}"/>
        </bean>
      </property>
    </bean>
  </property>
    ...
</bean>
IgniteConfiguration cfg = new IgniteConfiguration();

// Configuring Ignite Native Persistence.
DataStorageConfiguration storeCfg = new DataStorageConfiguration();

// Enabling the writes throttling.
storeCfg.setWriteThrottlingEnabled(true);

// Increasing the buffer size to 1 GB.
storeCfg.getDefaultDataRegionConfiguration().setCheckpointPageBufferSize(
  1024L * 1024 * 1024);

// Applying the new configuration.
cfg.setDataStorageConfiguration(storeCfg);

// Starting the node.
Ignition.start(cfg);

In the example above, the checkpointing buffer size of the default region is set to 1 GB.

Enabling Direct I/O

Usually, whenever an application reads data from disk, the OS gets the data and puts it in a file buffer cache first. Similarly, for every write operation, the OS first writes the data in the cache and transfers it to disk later. To eliminate this process, you can enable Direct I/O in which case the data is read and written directly from/to the disk, bypassing the file buffer cache.

The Direct I/O plugin in GridGain is used to speed up the checkpointing process, which writes dirty pages from RAM to disk. Consider using the Direct I/O plugin for write-intensive workloads.

To enable the Direct I/O plugin, move the {gridgain_dir}/libs/optional/ignite-direct-io folder one level up, to the {gridgain_dir}/libs folder of your GridGain distribution, and then start the nodes with the ignite.{sh|bat} script. The IGNITE_DIRECT_IO_ENABLED system property can be used at runtime to enable or disable the plugin if needed.

Get more details from the Ignite Direct I/O Wiki section.

Purchase Production-Level SSDs

Note that the performance of Ignite Native Persistence may drop after several hours of intensive write load due to the nature of how SSDs are designed and operate. Consider buying fast production-level SSDs to keep the performance high or switch to non-volatile memory devices like Intel Optane Persistent Memory.

SSD Over-provisioning

Performance of random writes on a 50% filled disk is much better than on a 90% filled disk because of SSD over-provisioning (see https://www.seagate.com/tech-insights/ssd-over-provisioning-benefits-master-ti).

Consider buying SSDs with higher over-provisioning rates and make sure the manufacturer provides the tools to adjust it.