
Control Script

GridGain provides a command line script — control.sh (control.bat on Windows) — that you can use to monitor and control your clusters. The script is located in the /bin/ folder of the installation directory.

The control script has the following syntax:

control.bat <connection parameters> <command> <arguments>

Connecting to Cluster

When executed without connection parameters, the control script tries to connect to a node running on localhost (localhost:11211). If you want to connect to a node that is running on a remote machine, specify the connection parameters.

Parameter Description

--host HOST_OR_IP

The host name or IP address of the node.


--port PORT

The port to connect to.


--user USER

The user name.

--password PASSWORD

The user password.

--ping-interval PING_INTERVAL

The ping interval.


--ping-timeout PING_TIMEOUT

Ping response timeout.


--ssl-protocol PROTOCOL1, PROTOCOL2…​

A comma-separated list of SSL protocols to try when connecting to the cluster.


--ssl-cipher-suites CIPHER1,CIPHER2…​

A comma-separated list of SSL cipher suites.

--ssl-key-algorithm ALG

The SSL key algorithm.


--keystore-type KEYSTORE_TYPE

The keystore type.


--keystore KEYSTORE_PATH

The path to the keystore. Specify a keystore to enable SSL for the control script.

--keystore-password KEYSTORE_PWD

The password to the keystore.

--truststore-type TRUSTSTORE_TYPE

The type of the truststore.


--truststore TRUSTSTORE_PATH

The path to the truststore.

--truststore-password TRUSTSTORE_PWD

The password to the truststore.

Activation, Deactivation, and Topology Management

You can use the control script to activate or deactivate your cluster, and manage the Baseline Topology.

Activating Cluster

Activation sets the baseline topology of the cluster to the set of nodes available at the moment of activation. Activation is required only if you use native persistence.

To activate the cluster, run the following command:

control.bat --set-state ACTIVE [--yes]

Deactivating Cluster

To deactivate the cluster, run the following command:

control.bat --set-state INACTIVE [--yes]

Activating Cluster in Read Only Mode

You can also put the cluster into read-only mode. The cluster remains active, but any cache updates are rejected:

control.bat --set-state ACTIVE_READ_ONLY [--yes]

Getting Cluster State

The state of the cluster refers to whether it is activated or not.

To get the state of the cluster, run the following command:

control.bat --state

Getting Nodes Registered in Baseline Topology

To get the list of nodes registered in the baseline topology, run the following command:

control.bat --baseline

The output contains the current topology version, the list of consistent IDs of the nodes included in the baseline topology, and the list of nodes that joined the cluster but were not added to the baseline topology.

Command [BASELINE] started
Arguments: --baseline
Cluster state: active
Current topology version: 3

Current topology version: 3 (Coordinator: ConsistentId=dd3d3959-4fd6-4dc2-8199-bee213b34ff1, Order=1)

Baseline nodes:
    ConsistentId=7d79a1b5-cbbd-4ab5-9665-e8af0454f178, State=ONLINE, Order=2
    ConsistentId=dd3d3959-4fd6-4dc2-8199-bee213b34ff1, State=ONLINE, Order=1
Number of baseline nodes: 2

Other nodes:
    ConsistentId=30e16660-49f8-4225-9122-c1b684723e97, Order=3
Number of other nodes: 1
Command [BASELINE] finished with code: 0
Control utility has completed execution at: 2019-12-24T16:53:08.392865
Execution time: 333 ms

Adding Nodes to Baseline Topology

To add nodes to the baseline topology, run the command below. After a node is added, the rebalancing process starts.

control.bat --baseline add consistentId1,consistentId2,... [--yes]

Removing Nodes from Baseline Topology

To remove nodes from the baseline topology, use the remove command. Only offline nodes can be removed from the baseline topology: shut down the node first, then run the command. This operation starts the rebalancing process, which re-distributes the data across the nodes that remain in the baseline topology.

control.bat --baseline remove consistentId1,consistentId2,... [--yes]

Setting Baseline Topology

You can set the baseline topology by either providing a list of nodes (consistent IDs) or by specifying the desired version of the baseline topology.

To set a list of nodes as the baseline topology, use the following command:

control.bat --baseline set consistentId1,consistentId2,... [--yes]

To restore a specific version of the baseline topology, use the following command:

control.bat --baseline version topologyVersion [--yes]

Enabling Baseline Topology Autoadjustment

Baseline topology autoadjustment is the automatic update of the baseline topology after the topology has been stable for a specific amount of time.

For in-memory clusters, autoadjustment is enabled by default with the timeout set to 0. This means that the baseline topology changes immediately after server nodes join or leave the cluster. For clusters with persistence, automatic baseline adjustment is disabled by default. To enable it, use the following command:

control.bat --baseline auto_adjust enable timeout 30000

The timeout is set in milliseconds. The baseline is set to the current topology when a given number of milliseconds has passed after the last JOIN/LEFT/FAIL event. Every new JOIN/LEFT/FAIL event restarts the timeout countdown.
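
To illustrate the countdown semantics, here is a small Python sketch (the helper function and data shapes are hypothetical, not part of GridGain):

```python
# Baseline autoadjustment timing sketch: the baseline is updated only after
# the topology has been stable for `timeout_ms` milliseconds. Every
# JOIN/LEFT/FAIL event restarts the countdown, so the adjustment happens
# `timeout_ms` after the *last* event.
def autoadjust_time(event_times_ms, timeout_ms):
    """Return the time (ms) at which the baseline would be adjusted."""
    last_event = 0
    for t in event_times_ms:
        last_event = t  # each event restarts the countdown
    return last_event + timeout_ms

# Nodes join at t=0 and t=20000; with a 30000 ms timeout the baseline is
# adjusted at t=50000, not at t=30000.
print(autoadjust_time([0, 20000], 30000))  # 50000
```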

To disable baseline autoadjustment, use the following command:

control.bat --baseline auto_adjust disable

Transaction Management

The control script allows you to get the information about the transactions being executed in the cluster. You can also cancel specific transactions.

The following command returns a list of transactions that satisfy a given filter (or all transactions if no filter is provided):

control.bat --tx --info <transaction filter>

The transaction filter parameters are listed in the following table.

Parameter Description

--xid XID

Transaction ID.

--min-duration SECONDS

Minimum number of seconds a transaction has been executing.

--min-size SIZE

Minimum size of a transaction.

--label LABEL

User label for transactions. You can use a regular expression.


Limit the scope of the operation to either server or client nodes.

--nodes nodeId1,nodeId2…​

The list of consistent IDs of the nodes you want to get transactions from.

--limit NUMBER

Limit the number of transactions to the given value.


The parameter that is used to sort the output.

To cancel transactions, use the following command:

control.bat --tx <transaction filter> --kill

For example, to cancel the transactions that have been running for more than 100 seconds, execute the following command:

control.bat --tx --min-duration 100 --kill

Contention Detection in Transactions

The contention command detects when multiple transactions are in contention to create a lock for the same key. The command is useful if you have long-running or hanging transactions.


# Reports all keys that are a point of contention for at least 5 transactions on all cluster nodes.
control.bat --cache contention 5

# Reports all keys that are a point of contention for at least 5 transactions on a specific server node.
control.bat --cache contention 5 f2ea-5f56-11e8-9c2d-fa7a

If there are any highly contended keys, the utility dumps extensive information including the keys, transactions, and nodes where the contention took place.


[node=TcpDiscoveryNode [id=d9620450-eefa-4ab6-a821-644098f00001, addrs=[], sockAddrs=[/], discPort=47501, order=2, intOrder=2, lastExchangeTime=1527169443913, loc=false, ver=2.5.0#20180518-sha1:02c9b2de, isClient=false]]

// No contention on node d9620450-eefa-4ab6-a821-644098f00001.

[node=TcpDiscoveryNode [id=03379796-df31-4dbd-80e5-09cef5000000, addrs=[], sockAddrs=[/], discPort=47500, order=1, intOrder=1, lastExchangeTime=1527169443913, loc=false, ver=2.5.0#20180518-sha1:02c9b2de, isClient=false]]
    TxEntry [cacheId=1544803905, key=KeyCacheObjectImpl [part=0, val=0, hasValBytes=false], queue=10, op=CREATE, val=UserCacheObjectImpl [val=0, hasValBytes=false], tx=GridNearTxLocal[xid=e9754629361-00000000-0843-9f61-0000-000000000001, xidVersion=GridCacheVersion [topVer=138649441, order=1527169439646, nodeOrder=1], concurrency=PESSIMISTIC, isolation=REPEATABLE_READ, state=ACTIVE, invalidate=false, rollbackOnly=false, nodeId=03379796-df31-4dbd-80e5-09cef5000000, timeout=0, duration=1247], other=[]]
    TxEntry [cacheId=1544803905, key=KeyCacheObjectImpl [part=0, val=0, hasValBytes=false], queue=10, op=READ, val=null, tx=GridNearTxLocal[xid=8a754629361-00000000-0843-9f61-0000-000000000001, xidVersion=GridCacheVersion [topVer=138649441, order=1527169439656, nodeOrder=1], concurrency=PESSIMISTIC, isolation=REPEATABLE_READ, state=ACTIVE, invalidate=false, rollbackOnly=false, nodeId=03379796-df31-4dbd-80e5-09cef5000000, timeout=0, duration=1175], other=[]]
    TxEntry [cacheId=1544803905, key=KeyCacheObjectImpl [part=0, val=0, hasValBytes=false], queue=10, op=READ, val=null, tx=GridNearTxLocal[xid=6a754629361-00000000-0843-9f61-0000-000000000001, xidVersion=GridCacheVersion [topVer=138649441, order=1527169439654, nodeOrder=1], concurrency=PESSIMISTIC, isolation=REPEATABLE_READ, state=ACTIVE, invalidate=false, rollbackOnly=false, nodeId=03379796-df31-4dbd-80e5-09cef5000000, timeout=0, duration=1175], other=[]]
    TxEntry [cacheId=1544803905, key=KeyCacheObjectImpl [part=0, val=0, hasValBytes=false], queue=10, op=READ, val=null, tx=GridNearTxLocal[xid=7a754629361-00000000-0843-9f61-0000-000000000001, xidVersion=GridCacheVersion [topVer=138649441, order=1527169439655, nodeOrder=1], concurrency=PESSIMISTIC, isolation=REPEATABLE_READ, state=ACTIVE, invalidate=false, rollbackOnly=false, nodeId=03379796-df31-4dbd-80e5-09cef5000000, timeout=0, duration=1175], other=[]]
    TxEntry [cacheId=1544803905, key=KeyCacheObjectImpl [part=0, val=0, hasValBytes=false], queue=10, op=READ, val=null, tx=GridNearTxLocal[xid=4a754629361-00000000-0843-9f61-0000-000000000001, xidVersion=GridCacheVersion [topVer=138649441, order=1527169439652, nodeOrder=1], concurrency=PESSIMISTIC, isolation=REPEATABLE_READ, state=ACTIVE, invalidate=false, rollbackOnly=false, nodeId=03379796-df31-4dbd-80e5-09cef5000000, timeout=0, duration=1175], other=[]]

// Node 03379796-df31-4dbd-80e5-09cef5000000 is place for contention on key KeyCacheObjectImpl [part=0, val=0, hasValBytes=false].
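
The gist of the check can be sketched in a few lines of Python (data shapes and names are illustrative, not the utility's internals):

```python
from collections import defaultdict

# Contention-detection sketch: given (key, tx_id) pairs observed waiting for
# locks on a node, report every key that at least `min_queue` distinct
# transactions are contending on. This mirrors the threshold argument of
# `--cache contention 5`.
def contended_keys(lock_waiters, min_queue):
    by_key = defaultdict(set)
    for key, tx_id in lock_waiters:
        by_key[key].add(tx_id)
    return {k: len(txs) for k, txs in by_key.items() if len(txs) >= min_queue}

# Five transactions queue on key 0, one on key 7; only key 0 is reported.
waiters = [(0, f"tx{i}") for i in range(5)] + [(7, "tx99")]
print(contended_keys(waiters, 5))  # {0: 5}
```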

Monitoring Cache State

One of the most important commands that the control script provides is --cache list, which is used for cache monitoring. The command returns a list of deployed caches with their affinity/distribution parameters, as well as distribution within cache groups. There is also a command for viewing existing atomic sequences.

# Displays a list of all caches.
control.bat --cache list .

# Displays a list of caches whose names start with "account-".
control.bat --cache list account-.*

# Displays info about cache group distribution for all caches.
control.bat --cache list . --groups

# Displays info about cache group distribution for the caches whose names start with "account-".
control.bat --cache list account-.* --groups

# Displays info about all atomic sequences.
control.bat --cache list . --seq

# Displays info about the atomic sequences whose names start with "counter-".
control.bat --cache list counter-.* --seq

Resetting Lost Partitions

You can use the control script to reset lost partitions for specific caches. Refer to Partition Loss Policy for details.

control.bat --cache reset_lost_partitions cacheName1,cacheName2,...

Partition Reconciliation

Partition reconciliation is a consistency checking process that verifies internal data consistency invariants and fixes inconsistent entries. The main difference between idle_verify and partition_reconciliation is that the latter can work under load.

If the topology changes while the script is running, or if task execution fails, the command is automatically cancelled.

control.bat --cache partition_reconciliation cache1,cache2,cache3

If caches are not specified, the command will be executed for all caches.

The command has the following options:

Command Description

--repair {option}

If specified, fixes all inconsistent data. You can also choose the repair algorithm to use for keys where the valid value is not obvious. The following values can be used:

  • PRINT_ONLY mode. Default. Only conflicts where the key is present on all nodes are fixed. Conflicts with missing keys are printed in the report.

  • LATEST picks the latest value.

  • PRIMARY picks the value from the primary partition. Be careful with this option, as keys may be removed as a result.

  • MAJORITY picks the most common value, or one of the most common values at random if there is no consensus. If a missing value is in the majority, the key is removed.

  • REMOVE removes a key if a conflict cannot be resolved without a user algorithm.
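
As an illustration, the MAJORITY mode can be sketched like this (a simulation under stated assumptions, not GridGain's implementation; `None` stands for a missing key on a node):

```python
from collections import Counter

# MAJORITY repair-mode sketch: given the value each node holds for a
# conflicting key (None = key missing on that node), pick the most common
# value. If the missing value wins the vote, the key is removed, which we
# model by returning None.
def resolve_majority(values):
    counts = Counter(values)
    winner, _ = counts.most_common(1)[0]
    return winner  # None means "remove the key"

print(resolve_majority(["a", "a", "b"]))    # 'a' wins 2-to-1
print(resolve_majority([None, None, "a"]))  # missing value wins: key removed
```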


This option checks and repairs only partitions that did not pass validation during the last partition map exchange. If not specified, all partitions are taken into account.


The maximum number of threads that can be involved in reconciliation activities. If not specified, the number of cores on the node is used.


The number of keys to retrieve within one job. The default value is 1000.


Include sensitive information (keys and values) in the output. The default value is false.


The number of recheck attempts for potentially inconsistent keys. The recommended value is between 1 and 5. The default value is 2.

Stopping Reconciliation

You can safely stop the partition reconciliation process with the partition_reconciliation_cancel command. All changes made before the cancellation are preserved.

control.bat --cache partition_reconciliation_cancel

Consistency Check Commands

The control script includes a set of consistency check commands that enable you to verify internal data consistency.

First, the commands can be used for debugging and troubleshooting purposes especially if you’re in active development.

Second, if there is a suspicion that a query (such as a SQL query, etc.) returns an incomplete or wrong result set, the commands can verify whether there is inconsistency in the data.

Finally, the consistency check commands can be utilized as part of regular cluster health monitoring.

Let’s review these usage scenarios in more detail.

Verifying Partition Checksums

The idle_verify command compares the hash of a primary partition with that of its backup partitions and reports any differences. The differences might be the result of a node failure or an incorrect shutdown during an update operation. If any inconsistency is detected, we recommend removing the incorrect partitions.

# Checks that the partitions of all caches contain the same data.
control.bat --cache idle_verify

# Checks that the partitions of the specified caches contain the same data.
control.bat --cache idle_verify cache1,cache2,cache3

If any partitions diverge, a list of conflict partitions is printed out, as follows:

idle_verify check has finished, found 2 conflict partitions.

Conflict partition: PartitionKey [grpId=1544803905, grpName=default, partId=5]
Partition instances: [PartitionHashRecord [isPrimary=true, partHash=97506054, updateCntr=3, size=3, consistentId=bltTest1], PartitionHashRecord [isPrimary=false, partHash=65957380, updateCntr=3, size=2, consistentId=bltTest0]]
Conflict partition: PartitionKey [grpId=1544803905, grpName=default, partId=6]
Partition instances: [PartitionHashRecord [isPrimary=true, partHash=97595430, updateCntr=3, size=3, consistentId=bltTest1], PartitionHashRecord [isPrimary=false, partHash=66016964, updateCntr=3, size=2, consistentId=bltTest0]]
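
The comparison itself can be sketched as follows (hypothetical helper and data layout, using the partition hashes from the sample output above):

```python
# idle_verify sketch: for each partition, compare the hash computed on each
# copy (primary and backups) and collect the partitions whose copies diverge.
def find_conflicts(partition_hashes):
    """partition_hashes: {part_id: {consistent_id: partHash}}"""
    conflicts = []
    for part_id, copies in partition_hashes.items():
        if len(set(copies.values())) > 1:  # more than one distinct hash
            conflicts.append(part_id)
    return sorted(conflicts)

hashes = {
    5: {"bltTest1": 97506054, "bltTest0": 65957380},  # diverged
    6: {"bltTest1": 97595430, "bltTest0": 66016964},  # diverged
    7: {"bltTest1": 11111111, "bltTest0": 11111111},  # consistent
}
print(find_conflicts(hashes))  # [5, 6]
```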

Checking Index Sizes on Cluster Nodes

Every entry in a SQL index has a constant size that is calculated during index creation, called the index inline size. Different inline sizes on different nodes in a cluster may lead to performance issues. The check_index_inline_sizes command verifies that index inline sizes are the same on all cluster nodes:

control.bat --cache check_index_inline_sizes

Validating SQL Index Consistency

The validate_indexes command validates the indexes of given caches on all cluster nodes.

The following is checked by the validation process:

  1. All the key-value entries that are referenced from a primary index have to be reachable from secondary SQL indexes.

  2. All the key-value entries that are referenced from a primary index have to be reachable. A reference from the primary index shouldn't point to nowhere.

  3. All the key-value entries that are referenced from secondary SQL indexes have to be reachable from the primary index.
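
The cross-checks above can be sketched with plain sets (an illustrative simulation; index names and shapes are hypothetical):

```python
# validate_indexes sketch: every key reachable from the primary index must
# appear in each secondary SQL index, and every key in a secondary index must
# exist in the primary index. Mismatches are reported as issues.
def index_issues(primary_keys, secondary_indexes):
    """secondary_indexes: {index_name: set_of_keys}"""
    issues = []
    for idx_name, idx_keys in secondary_indexes.items():
        for key in sorted(primary_keys - idx_keys):
            issues.append(f"key={key} missing from {idx_name}")
        for key in sorted(idx_keys - primary_keys):
            issues.append(f"key={key} in {idx_name} but not in primary index")
    return issues

primary = {0, 1, 2}
secondary = {"PERSON_ORGID_ASC_IDX": {1, 2, 3}}
print(index_issues(primary, secondary))
```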

# Checks the indexes of all caches on all cluster nodes.
control.bat --cache validate_indexes

# Checks the indexes of the specified caches on all cluster nodes.
control.bat --cache validate_indexes cache1,cache2

# Checks the indexes of the specified caches on the node with the given node ID.
control.bat --cache validate_indexes cache1,cache2 f2ea-5f56-11e8-9c2d-fa7a

If indexes refer to non-existing entries (or some entries are not indexed), errors are dumped to the output, as follows:

PartitionKey [grpId=-528791027, grpName=persons-cache-vi, partId=0] ValidateIndexesPartitionResult [updateCntr=313, size=313, isPrimary=true, consistentId=bltTest0]
IndexValidationIssue [key=0, cacheName=persons-cache-vi, idxName=_key_PK], class org.apache.ignite.IgniteCheckedException: Key is present in CacheDataTree, but can't be found in SQL index.
IndexValidationIssue [key=0, cacheName=persons-cache-vi, idxName=PERSON_ORGID_ASC_IDX], class org.apache.ignite.IgniteCheckedException: Key is present in CacheDataTree, but can't be found in SQL index.
validate_indexes has finished with errors (listed above).

Working with Cluster Properties

The control script provides the ability to work with cluster properties, such as the SQL statistics functionality.

To get the full list of available properties, use the --property list command:

control.bat --property list

You can set a property value with the --property set command. For example, to enable or disable SQL statistics in the cluster, specify the ON, OFF, or NO_UPDATE value:

control.bat --property set --name 'statistics.usage.state' --val 'ON'

You can also get a property value with the --property get command. For example:

control.bat --property get --name 'statistics.usage.state'

Offsetting Checkpoints

When you have a large distributed database with checkpoints set up in the same way on all nodes, all checkpoints triggering at the same time can create a spike in traffic and, consequently, negatively affect the performance.

To avoid this, you can set the checkpoint.deviation property. The property offsets checkpoint start times so that they happen within the specified percentage of checkpoint.frequency. For example, if your checkpoints trigger every 100 seconds and you set checkpoint.deviation to 10%, checkpoints trigger every 95-105 seconds.
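
The arithmetic behind the 95-105 second example can be checked with a couple of lines (a hypothetical helper, not part of the control script):

```python
# checkpoint.deviation sketch: a deviation of d percent spreads checkpoint
# triggers over an interval of width d% of the frequency, centered on the
# frequency, i.e. [frequency - frequency*d/200, frequency + frequency*d/200].
def checkpoint_interval(frequency_s, deviation_pct):
    """Return the (min, max) time between checkpoint triggers, in seconds."""
    half = frequency_s * deviation_pct / 100 / 2
    return (frequency_s - half, frequency_s + half)

# 100-second frequency with 10% deviation -> triggers every 95 to 105 seconds.
print(checkpoint_interval(100, 10))  # (95.0, 105.0)
```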

The example below offsets the checkpoints by 10% to reduce traffic spikes:

control.bat --property set --name checkpoint.deviation --val 10

Cluster Diagnostics

The control script can provide diagnostic information for your cluster. Use the --diagnostic command to get it:

control.bat --diagnostic

Page Locks Information

The control script can also get information on which pages are currently locked and put that information in a file.

You can dump either the current state or the log, with the dump and dump_log subcommands respectively.

Use the --path option to specify the absolute path to the folder to write data to. If not specified, the data will be saved in the work folder.

Use the --all option to get information on all nodes, or specify the list of nodes you need with the --nodes option.

control.bat --diagnostic pageLocks dump [--path path_to_directory] [--all] [--nodes nodeId1,nodeId2,..]

control.bat --diagnostic pageLocks dump_log [--path path_to_directory] [--all] [--nodes nodeId1,nodeId2,..]

Connectivity Information

If you have issues with your nodes communicating with each other, use the connectivity subcommand:

control.bat --diagnostic connectivity

Cluster Encryption

You can manage cluster encryption parameters from the control script. For more information about encryption keys, see the Transparent Data Encryption page.

Managing Master Keys

Use the get_master_key_name and change_master_key subcommands to get or set the master key that is used in cluster encryption:

control.bat --encryption get_master_key_name

control.bat --encryption change_master_key newMasterKeyName

Managing Cache Keys

If you use cache-level encryption keys as well, change the encryption key with the change_cache_key subcommand, and view the IDs of cache group encryption keys with the cache_key_ids subcommand.
control.bat --encryption change_cache_key cacheGroupName

control.bat --encryption cache_key_ids cacheGroupName

Re-encryption Management

When re-encryption is initiated on the cluster, you can monitor it with the reencryption_status subcommand.

If you need to pause re-encryption, use the suspend_reencryption subcommand. When you no longer need to keep it paused, restart it with the resume_reencryption subcommand.
control.bat --encryption reencryption_status cacheGroupName

control.bat --encryption suspend_reencryption cacheGroupName

control.bat --encryption resume_reencryption cacheGroupName

Encryption Rate Limiting

Re-encryption requires a large amount of system resources. If you do not want to pause re-encryption to let the system handle the workload, set the re-encryption rate limit with the reencryption_rate_limit subcommand instead. The rate is specified in MB/s.
control.bat --encryption reencryption_rate_limit [new_limit]

Rolling Upgrade Configuration

You can manage rolling upgrades from the control script. The rolling-upgrade command manages everything related to it.

Use the start and finish subcommands to enable or disable rolling upgrades. The --yes option automatically answers yes to all questions during the upgrade.
control.bat --rolling-upgrade start [--yes]

control.bat --rolling-upgrade finish [--yes]

Enabling Force Rolling Upgrades

Use the force subcommand to force a node to stay in the cluster. The use of this method is not recommended.
control.bat --rolling-upgrade force [--yes]

Getting Status of Rolling Upgrade

The status subcommand provides the current status of the rolling upgrade on your cluster:
control.bat --rolling-upgrade status

Cluster IDs and Tags

You can set cluster ID and tag values with the change-id and change-tag commands respectively:
control.bat --change-id newIdValue [--yes]
control.bat --change-tag newTagValue [--yes]

Data Center Replication

The control script can be used to track data center replication.

Data Center Replication Status

Use the state subcommand to get the current state of data center replication. With the --verbose option, you can get extended status information.
control.bat --dr state [--verbose]

Current Topology

Print the topology of the cluster with details about data center replication:

--sender-hubs

Displays information about sender nodes.

--receiver-hubs

Displays information about receiver nodes.

--data-nodes

Displays information about data nodes in the cluster.

--other-nodes

Displays information about nodes not involved in DR at the moment.

control.bat --dr topology [--sender-hubs] [--receiver-hubs] [--data-nodes] [--other-nodes]

Pausing and Resuming DR

You can pause and resume data center replication on all caches in the cluster with the pause and resume commands. You need to specify the ID of the remote data center:

control.bat --dr pause <remoteDataCenterId> [--yes]

control.bat --dr resume <remoteDataCenterId> [--yes]

Full State Transfer

If the caches in the master cluster already have data, you need to perform a full state transfer. Execute a full state transfer on all caches in a cluster with the full-state-transfer subcommand:
control.bat --dr full-state-transfer [--yes]

Node Commands

You can also get information about the node status during data center replication.

--config

Displays node configuration.

--metrics

Displays node metrics.

--clear-store

Cleans the store after command execution.

--yes

Automatically confirms all choices during command execution.

control.bat --dr node <nodeId> [--config] [--metrics] [--clear-store] [--yes]

Cache-specific Information

You can get data center replication details related to specific caches with the cache subcommand.

--config

Displays cache configuration.

--metrics

Displays cache metrics.

--cache-filter

Specifies the cache filter. Possible values: ALL, SENDING, RECEIVING, PAUSED, ERROR.

--sender-group

Specifies the group of sender caches. Possible values: groupName, ALL, DEFAULT, NONE.

--action

The action to perform. Possible values: stop, start, full-state-transfer.

--yes

Automatically confirms all choices during command execution.

control.bat --dr cache <regExp> [--config] [--metrics] [--cache-filter ALL|SENDING|RECEIVING|PAUSED|ERROR] [--sender-group <groupName>|ALL|DEFAULT|NONE] [--action stop|start|full-state-transfer] [--yes]

Shutdown Policy

You can use the control script to manage shutdown policies.

  • Set the shutdown policy to GRACEFUL to have a node complete its ongoing tasks before it shuts down.

  • Set the policy to IMMEDIATE to have nodes shut down immediately instead.

control.bat --shutdown-policy [IMMEDIATE|GRACEFUL]

Warmup Configuration

You can disable cache warm-up on the cluster by using the warm-up command:

control.bat --warm-up --stop

Persistence Configuration

These commands change the default behavior of automatic deletion of corrupted backup files:

control.bat --persistence

The same information is printed when the info subcommand is passed:

control.bat --persistence info

Clean Directories
control.bat --persistence clean corrupted

control.bat --persistence clean all

control.bat --persistence clean caches cache1,cache2,cache3

Backup Data Files
control.bat --persistence backup corrupted

control.bat --persistence backup all

control.bat --persistence backup caches cache1,cache2,cache3


Defragmentation

When persistence is enabled, you may need to defragment the data on disk. Use the defragmentation schedule command to plan defragmentation. You can run defragmentation on all caches of the specified nodes, or select specific caches to defragment.
control.bat --defragmentation schedule --nodes consistentId0,consistentId1

control.bat --defragmentation schedule --nodes consistentId0,consistentId1 --caches cache1,cache2,cache3

If the scheduled or active defragmentation interferes with operation, use the cancel subcommand to stop it:
control.bat --defragmentation cancel

Manage Binary Types

control.bat --meta list

Prints info about all available binary types:

  • typeId=<ID>

  • typeName=<name>

  • fields=<fields_count>

  • schemas=<schemas_count>

  • isEnum=<bool>

control.bat --meta details (--typeId <ID> | --typeName <name>)

Prints detailed information about the specified type. The type can be specified with a type name or a type ID.

Below is the command output example:

typeId=0x1FBFBC0C (532659212)
name=fld3, type=long[], fieldId=0x2FFF95 (3145621)
name=fld2, type=double, fieldId=0x2FFF94 (3145620)
name=fld1, type=Date, fieldId=0x2FFF93 (3145619)
name=fld0, type=int, fieldId=0x2FFF92 (3145618)
schemaId=0x6C5CC179 (1818018169), fields=[fld0]
schemaId=0x70E46431 (1894016049), fields=[fld0, fld1, fld2, fld3]

control.bat --meta remove (--typeId <ID> | --typeName <name>) [--out <file_name>]

Removes metadata for the specified type from the cluster and saves the removed metadata to the specified file. If the file name is not specified, the output file name is <typeId>.bin. This command requires confirmation. Note that all thin client sessions (ODBC, JDBC, thin clients) are closed in order to remove the binary metadata from the local cache.
control.bat --meta update [--in <file_name>]

Updates cluster metadata from the specified file. This command requires confirmation.