
Java Thin Client


The Java thin client is a lightweight client that connects to the cluster via a standard socket connection. It does not become a part of the cluster topology, never holds any data, and is not used as a destination for compute calculations. The thin client simply establishes a socket connection to a standard node and performs all operations through that node.

To start a single node cluster that you can use to run examples, refer to the Java Quick Start Guide.

Setting Up

If you use Maven or Gradle, add the ignite-core dependency to your application:


def igniteVersion = '8.9.6'

dependencies {
    compile group: 'org.apache.ignite', name: 'ignite-core', version: igniteVersion
}
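If you use Maven, the equivalent dependency declaration might look like the snippet below (the coordinates and version mirror the Gradle example above):

```xml
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-core</artifactId>
    <version>8.9.6</version>
</dependency>
```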

Alternatively, you can use the ignite-core-8.9.6.jar library from the GridGain distribution package.

Connecting to Cluster

To initialize a thin client, use the Ignition.startClient(ClientConfiguration) method. The method accepts a ClientConfiguration object, which defines client connection parameters.

The method returns the IgniteClient interface, which provides various methods for accessing data. IgniteClient is an auto-closable resource. Use the try-with-resources statement to close the thin client and release the resources associated with the connection.

ClientConfiguration cfg = new ClientConfiguration().setAddresses("");
try (IgniteClient client = Ignition.startClient(cfg)) {
    ClientCache<Integer, String> cache = client.cache("myCache");
    // Get data from the cache
}

You can provide addresses of multiple nodes. In this case, the thin client randomly tries all the servers in the list and throws ClientConnectionException if none is available.

try (IgniteClient client = Ignition.startClient(new ClientConfiguration().setAddresses("node1_address:10800",
        "node2_address:10800", "node3_address:10800"))) {
} catch (ClientConnectionException ex) {
    // All the servers are unavailable
}

Note that the code above provides a failover mechanism in case of server node failures but with a caveat that cache queries may return duplicate results. Refer to the Handling Node Failures section for more information.

Automatic Server Node Discovery

Thin clients connect to one or more servers. The set of endpoints is defined in ClientConfiguration and cannot change after a client starts. However, clusters can change their topology dynamically: nodes can start and stop, IP addresses can change, etc. To account for that, thin clients can discover server nodes automatically when connected to any of them, and maintain an up-to-date list of servers at all times.

The automatic node discovery is enabled by default. To disable this behavior, set the ClientConfiguration#clusterDiscoveryEnabled property to "false".

The client performs the discovery as follows:

1. Connects to one or more endpoints from ClientConfiguration.
2. Retrieves the current topology.
3. Connects to every server from Step 2 it hasn’t connected to before:
   a. Uses the node UUID to make sure it doesn’t connect to the same node twice.
   b. Drops the connection in case of a UUID mismatch.
4. For every response to every operation, compares the topology version from the header to the topology version from Step 2.
5. Connects to the newly discovered servers and removes connections to the removed servers.
6. Repeats Step 4.
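At its core, steps 4 and 5 amount to diffing the set of currently connected node UUIDs against the latest topology. That reconciliation can be sketched in plain Java. This is only an illustration of the idea, not Ignite's internal code; the UUID sets are hypothetical:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

public class TopologyDiff {
    final Set<UUID> toConnect = new HashSet<>();    // newly discovered servers
    final Set<UUID> toDisconnect = new HashSet<>(); // servers that left the cluster

    // Compare the nodes the client is connected to with a freshly
    // retrieved topology and compute the connection changes to apply.
    TopologyDiff(Set<UUID> connected, Set<UUID> topology) {
        for (UUID node : topology)
            if (!connected.contains(node))
                toConnect.add(node);

        for (UUID node : connected)
            if (!topology.contains(node))
                toDisconnect.add(node);
    }
}
```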

Server discovery is an asynchronous process - it happens in the background. Moreover, the thin client receives topology updates only when it performs operations (to minimize server load and network traffic from idle connections).

Partition Awareness

Partition awareness allows the thin client to send query requests directly to the node that owns the queried data.

Without partition awareness, an application that is connected to the cluster via a thin client executes all queries and operations via a single server node that acts as a proxy for the incoming requests. These operations are then re-routed to the node that stores the data that is being requested. This results in a bottleneck that could prevent the application from scaling linearly.

Without Partition Awareness

Notice how queries must pass through the proxy server node, where they are routed to the correct node.

With partition awareness in place, the thin client can directly route queries and operations to the primary nodes that own the data required for the queries. This eliminates the bottleneck, allowing the application to scale more easily.

With Partition Awareness

The partition awareness functionality helps avoid an additional network hop in the following scenarios:

  • Single-key operations, like put(), get(), etc. However, the functionality has no effect on operations within explicit transactions (initiated via ClientTransactions#txStart() described in the Transactions section).

  • ScanQuery and IndexQuery accept a partition number as a parameter with which the query is routed to a server node that stores the requested data. Refer to Executing Scan Queries and Executing Index Queries sections for more information.
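To build intuition for what partition awareness does, the routing decision can be sketched in plain Java, without any Ignite APIs. The partition function and partition map below are deliberately simplified stand-ins: Ignite's real affinity function (rendezvous hashing) is more involved, and the client obtains the partition map from the cluster rather than building it locally:

```java
import java.util.HashMap;
import java.util.Map;

public class RoutingSketch {
    // Simplified stand-in for an affinity function: map a key to a partition.
    static int partition(Object key, int partitionCount) {
        return Math.abs(key.hashCode() % partitionCount);
    }

    // Pick the node to send the request to, given a partition -> node map.
    static String routeTo(Object key, int partitionCount, Map<Integer, String> partitionMap) {
        return partitionMap.get(partition(key, partitionCount));
    }

    public static void main(String[] args) {
        Map<Integer, String> partitionMap = new HashMap<>();
        for (int p = 0; p < 4; p++)
            partitionMap.put(p, "node" + (p % 2 + 1)); // two nodes own two partitions each

        // Every key maps to exactly one primary node, so the request
        // goes straight there - no proxy hop is needed.
        System.out.println(routeTo(42, 4, partitionMap));
    }
}
```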

The following code sample illustrates how to use the partition awareness feature with the Java thin client.

ClientConfiguration cfg = new ClientConfiguration()
        .setAddresses("node1_address:10800", "node2_address:10800", "node3_address:10800");

try (IgniteClient client = Ignition.startClient(cfg)) {
    ClientCache<Integer, String> cache = client.cache("myCache");
    // Put, get or remove data from the cache...
} catch (ClientException e) {
    e.printStackTrace();
}

If the list of server nodes changes dynamically, or the cluster scales, you can configure the connection with a custom implementation of ClientAddressFinder. It should return the current list of server addresses each time the client asks for them. The following code sample illustrates how to use it.

ClientAddressFinder finder = () -> {
    String[] dynamicServerAddresses = fetchServerAddresses();

    return dynamicServerAddresses;
};

ClientConfiguration cfg = new ClientConfiguration()
        .setAddressesFinder(finder);

try (IgniteClient client = Ignition.startClient(cfg)) {
    ClientCache<Integer, String> cache = client.cache("myCache");
    // Put, get, or remove data from the cache...
} catch (ClientException e) {
    e.printStackTrace();
}

The code snippet shows how an example implementation might look if you want clients to retrieve server addresses dynamically.

  • ClientAddressFinder is a functional interface that declares a single method, getAddresses().

  • fetchServerAddresses() is a custom function that dynamically provides server addresses.

  • Configure the client with ClientConfiguration.setAddressesFinder(finder).

You can also check a real implementation of this interface: ThinClientKubernetesAddressFinder, which was created to handle scalable Kubernetes environments.

Using Key-Value API

The Java thin client supports most of the key-value operations available in the thick client. To execute key-value operations on a specific cache, you need to get an instance of the cache and use one of its methods.

Getting a Cache Instance

The ClientCache interface provides the key-value API. You can use the following methods to obtain an instance of ClientCache:

  • IgniteClient.cache(String): assumes a cache with the specified name exists. The method does not communicate with the cluster to check if the cache really exists. Subsequent cache operations fail if the cache does not exist.

  • IgniteClient.getOrCreateCache(String), IgniteClient.getOrCreateCache(ClientCacheConfiguration): get existing cache with the specified name or create the cache if it does not exist. The former operation creates a cache with default configuration.

  • IgniteClient.createCache(String), IgniteClient.createCache(ClientCacheConfiguration): create a cache with the specified name and fail if the cache already exists.

Use IgniteClient.cacheNames() to list all existing caches.

ClientCacheConfiguration cacheCfg = new ClientCacheConfiguration().setName("References");

ClientCache<Integer, String> cache = client.getOrCreateCache(cacheCfg);

Basic Cache Operations

The following code snippet demonstrates how to execute basic cache operations from the thin client.

Map<Integer, String> data = IntStream.rangeClosed(1, 100).boxed()
        .collect(Collectors.toMap(i -> i, Object::toString));

cache.putAll(data);

assert !cache.replace(1, "2", "3");
assert "1".equals(cache.get(1));
assert cache.replace(1, "1", "3");
assert "3".equals(cache.get(1));

cache.put(101, "101");

cache.removeAll(data.keySet());
assert cache.size() == 1;
assert "101".equals(cache.get(101));

cache.removeAll();
assert 0 == cache.size();

Executing Scan Queries

Use the ScanQuery<K, V> class to get a set of entries that satisfy a given condition. The thin client sends the query to the cluster node where it is executed as a normal scan query.

The query condition is specified by an IgniteBiPredicate<K, V> object that is passed to the query constructor as an argument. The predicate is applied on the server side. If you don’t provide any predicate, the query returns all cache entries.

The results of the query are transferred to the client page by page. Each page contains a specific number of entries and is fetched to the client only when the entries from that page are requested. To change the number of entries in a page, use the ScanQuery.setPageSize(int pageSize) method (default value is 1024).
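As a quick sanity check on the paging behavior: the number of page fetches (server round trips) needed to drain a cursor is ceil(resultCount / pageSize). A plain-Java illustration, with no Ignite APIs involved:

```java
public class PagingMath {
    // Number of pages needed to fetch `results` entries when each
    // page holds at most `pageSize` entries (integer ceiling division).
    static int pagesNeeded(int results, int pageSize) {
        return (results + pageSize - 1) / pageSize;
    }

    public static void main(String[] args) {
        // With the default page size of 1024, 2500 matching entries
        // arrive in 3 pages.
        System.out.println(pagesNeeded(2500, 1024));
    }
}
```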

ClientCache<Integer, Person> personCache = client.getOrCreateCache("personCache");

Query<Cache.Entry<Integer, Person>> qry = new ScanQuery<Integer, Person>(
        (i, p) -> p.getName().contains("Smith"));

try (QueryCursor<Cache.Entry<Integer, Person>> cur = personCache.query(qry)) {
    for (Cache.Entry<Integer, Person> entry : cur) {
        // Process the entry ...
    }
}
The IgniteClient.query(…​) method returns an instance of FieldsQueryCursor. Make sure to always close the cursor after you obtain all results.


Client transactions are supported for caches with AtomicityMode.TRANSACTIONAL mode.

Executing Transactions

To start a transaction, obtain the ClientTransactions object from IgniteClient. ClientTransactions has a number of txStart(…​) methods, each of which starts a new transaction and returns an object (ClientTransaction) that represents the transaction. Use this object to commit or rollback the transaction.

ClientCache<Integer, String> cache = client.cache("my_transactional_cache");

ClientTransactions tx = client.transactions();

try (ClientTransaction t = tx.txStart()) {
    cache.put(1, "new value");

    t.commit();
}

Transaction Configuration

Client transactions can have different concurrency modes, isolation levels, and execution timeouts, which can be set for all transactions or on a per-transaction basis.

The ClientConfiguration object supports setting the default concurrency mode, isolation level, and timeout for all transactions started with this client interface.

ClientConfiguration cfg = new ClientConfiguration();

cfg.setTransactionConfiguration(new ClientTransactionConfiguration()
        .setDefaultTxTimeout(10000)
        .setDefaultTxConcurrency(TransactionConcurrency.OPTIMISTIC));

IgniteClient client = Ignition.startClient(cfg);

You can specify the concurrency mode, isolation level, and timeout when starting an individual transaction. In this case, the provided values override the default settings.

ClientTransactions tx = client.transactions();
try (ClientTransaction t = tx.txStart(TransactionConcurrency.OPTIMISTIC,
        TransactionIsolation.REPEATABLE_READ)) {
    cache.put(1, "new value");

    t.commit();
}

Working with Binary Objects

The thin client fully supports the Binary Object API described in the Working with Binary Objects section. Use ClientCache.withKeepBinary() to switch the cache to binary mode and start working directly with binary objects to avoid serialization/deserialization. Use IgniteClient.binary() to get an instance of IgniteBinary and build an object from scratch.

IgniteBinary binary = client.binary();

BinaryObject val = binary.builder("Person")
        .setField("id", 1, int.class)
        .setField("name", "Joe", String.class)
        .build();

ClientCache<Integer, BinaryObject> cache = client.cache("persons").withKeepBinary();

cache.put(1, val);

BinaryObject value = cache.get(1);

Refer to the Working with Binary Objects page for detailed information.

Executing SQL Statements

The Java thin client provides a SQL API to execute SQL statements. SQL statements are declared using the SqlFieldsQuery objects and executed through the IgniteClient.query(SqlFieldsQuery) method.

client.query(new SqlFieldsQuery(String.format(
        "CREATE TABLE IF NOT EXISTS Person (id INT PRIMARY KEY, name VARCHAR) WITH \"VALUE_TYPE=%s\"",
        Person.class.getName())).setSchema("PUBLIC")).getAll();

int key = 1;
Person val = new Person(key, "Person 1");

client.query(new SqlFieldsQuery("INSERT INTO Person(id, name) VALUES(?, ?)")
        .setArgs(val.getId(), val.getName()).setSchema("PUBLIC"))
        .getAll();

FieldsQueryCursor<List<?>> cursor = client
        .query(new SqlFieldsQuery("SELECT name from Person WHERE id=?").setArgs(key).setSchema("PUBLIC"));

// Get the results; the getAll() method closes the cursor; you do not have to
// call cursor.close();
List<List<?>> results = cursor.getAll();

results.forEach(columns -> {
    System.out.println("name = " + columns.get(0));
});

The query(SqlFieldsQuery) method returns an instance of FieldsQueryCursor, which can be used to iterate over the results. After getting the results, the cursor must be closed to release the resources associated with it.

Read more about using SqlFieldsQuery and SQL API in the Using SQL API section.

Logical Nodes Grouping

You can use the ClientClusterGroup interface of the cluster APIs to create various groups of cluster nodes. For instance, one group can comprise all server nodes, while another can include only those nodes that match a specific TCP/IP address format. The example below shows how to create a group of server nodes located in the dc1 data center:

try (IgniteClient client = Ignition.startClient(clientCfg)) {
    ClientClusterGroup serversInDc1 = client.cluster().forServers().forAttribute("dc", "dc1");
    serversInDc1.nodes().forEach(n -> System.out.println("Node ID: " +;
}

Refer to the main cluster groups documentation page for more details on the capability.

Executing Compute Tasks

Presently, the Java thin client supports basic compute capabilities by letting you execute those compute tasks that are already deployed in the cluster. You can either run a task across all cluster nodes or a specific cluster group. The deployment assumes that you create a JAR file with the compute tasks and add the JAR to the cluster nodes' classpath.

By default, the execution of tasks, triggered by the thin client, is disabled on the cluster side. You need to set the ThinClientConfiguration.maxActiveComputeTasksPerConnection parameter to a non-zero value in the configuration of your server nodes and thick clients:

<bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
    <property name="clientConnectorConfiguration">
        <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
            <property name="thinClientConfiguration">
                <bean class="org.apache.ignite.configuration.ThinClientConfiguration">
                    <property name="maxActiveComputeTasksPerConnection" value="100" />
                </bean>
            </property>
        </bean>
    </property>
</bean>
ThinClientConfiguration thinClientCfg = new ThinClientConfiguration()
        .setMaxActiveComputeTasksPerConnection(100);

ClientConnectorConfiguration clientConnectorCfg = new ClientConnectorConfiguration()
        .setThinClientConfiguration(thinClientCfg);

IgniteConfiguration igniteCfg = new IgniteConfiguration()
        .setClientConnectorConfiguration(clientConnectorCfg);

Ignite ignite = Ignition.start(igniteCfg);

The example below shows how to get access to the compute APIs via the ClientCompute interface and execute the compute task named MyTask:

try (IgniteClient client = Ignition.startClient(new ClientConfiguration().setAddresses(""))) {
    // Suppose the MyTask compute task is already deployed in the cluster
    client.compute().execute("MyTask", null);
}

Executing Ignite Services

You can use the ClientServices APIs of the Java thin client to invoke an Ignite Service that is already deployed in the cluster.

The example below shows how to invoke the service named MyService:

try (IgniteClient client = Ignition.startClient(clientCfg)) {
    // Executing the service named MyService
    // that is already deployed in the cluster.
    client.services().serviceProxy(
            "MyService", MyService.class).myServiceMethod();
}

The deployed service can be implemented using Java or .NET.

Handling Exceptions

Handling Node Failures

When you provide the addresses of multiple nodes in the client configuration, the client automatically switches to the next node if the current connection fails and retries any ongoing operation.

In the case of atomic operations, failover to another node is transparent to the user. However, if you execute a scan query or a SELECT query, it may return duplicate results. This can happen because queries return data in pages, and if the node that the client is connected to goes down while the client retrieves the pages, the client connects to another node and executes the query again. To avoid this, you have to write some code in your application that checks if the entries returned by the client are duplicated. Consider the following code:

Query<Cache.Entry<Integer, Person>> qry = new ScanQuery<Integer, Person>(
        (i, p) -> p.getName().contains("Smith"));

try (QueryCursor<Cache.Entry<Integer, Person>> cur = cache.query(qry)) {
    // Collecting the results into a map removes the duplicates;
    // the merge function (a, b) -> a keeps the first of two entries
    // with the same key (without it, toMap would throw on duplicates)
    Map<Integer, Person> res = cur.getAll().stream()
            .collect(Collectors.toMap(Cache.Entry::getKey, Cache.Entry::getValue, (a, b) -> a));
}

Enabling Logging

You can enable logging of the thin client’s events with a logger implementation of your choice:

ClientConfiguration cfg = new ClientConfiguration()
    .setLogger(new JavaLogger());
IgniteClient client = Ignition.startClient(cfg);

Listening to Events

You can listen to events that happen on the cluster by using the ConnectionEventListener interface. Create an event listener object and set it as the event listener in the client configuration:

ConnectionEventListener lsnr = new ConnectionEventListener() {
    @Override public void onHandshakeSuccess(HandshakeSuccessEvent event) {
        System.out.println("onHandshakeSuccess: " + event.connectionDescription().serverNodeId());
    }

    @Override public void onConnectionClosed(ConnectionClosedEvent event) {
        System.out.println("onConnectionClosed: " + event.connectionDescription().serverNodeId());
    }
};

ClientConfiguration cfg = new ClientConfiguration().setAddresses("").setEventListeners(lsnr);

try (IgniteClient client = Ignition.startClient(cfg)) {
    // ...
}
For more information about the events, see the Events page.



Security

SSL/TLS

To use encrypted communication between the thin client and the cluster, you have to enable SSL/TLS in both the cluster configuration and the client configuration. Refer to the Enabling SSL/TLS for Thin Clients section for the instruction on the cluster configuration.

To enable encrypted communication in the thin client, provide a keystore that contains the encryption key and a truststore with the trusted certificates in the thin client configuration.

ClientConfiguration clientCfg = new ClientConfiguration().setAddresses("");

clientCfg
        .setSslMode(SslMode.REQUIRED)
        .setSslClientCertificateKeyStorePath("client.jks")
        .setSslClientCertificateKeyStoreType("JKS")
        .setSslClientCertificateKeyStorePassword("123456")
        .setSslTrustCertificateKeyStorePath("trust.jks")
        .setSslTrustCertificateKeyStoreType("JKS")
        .setSslTrustCertificateKeyStorePassword("123456")
        .setSslKeyAlgorithm("SunX509")
        .setSslTrustAll(false)
        .setSslProtocol(SslProtocol.TLS);

try (IgniteClient client = Ignition.startClient(clientCfg)) {
    // ...
}

The following list explains the encryption parameters of the client configuration. The parameter names correspond to the ClientConfiguration setters (setSslClientCertificateKeyStorePath(...), and so on):

  • sslClientCertificateKeyStorePath: the path to the keystore file with the private key.

  • sslClientCertificateKeyStoreType: the type of the keystore. Default: JKS.

  • sslClientCertificateKeyStorePassword: the password to the keystore.

  • sslTrustCertificateKeyStorePath: the path to the truststore file.

  • sslTrustCertificateKeyStoreType: the type of the truststore. Default: JKS.

  • sslTrustCertificateKeyStorePassword: the password to the truststore.

  • sslKeyAlgorithm: the key manager algorithm that is used to create a key manager. Default: SunX509.

  • sslTrustAll: if this parameter is set to true, the certificates are not validated. Default: false.

  • sslProtocol: the name of the protocol that is used for data encryption. Default: TLS.



Authentication

Configure authentication on the cluster side and provide the user name and password in the client configuration.

ClientConfiguration clientCfg = new ClientConfiguration().setAddresses("").setUserName("joe")
        .setUserPassword("passwd");

try (IgniteClient client = Ignition.startClient(clientCfg)) {
    // ...
} catch (ClientAuthenticationException e) {
    // Handle authentication failure
}

Authorization

You can configure thin client authorization in the cluster. Refer to the Authorization page for details.

Async APIs

Most network-bound thin client APIs have an async counterpart, for example, ClientCache.get and ClientCache.getAsync.

IgniteClient client = Ignition.startClient(clientCfg);
ClientCache<Integer, String> cache = client.getOrCreateCache("cache");

IgniteClientFuture<Void> putFut = cache.putAsync(1, "hello");
putFut.get(); // Blocking wait.

IgniteClientFuture<String> getFut = cache.getAsync(1);
getFut.thenAccept(val -> System.out.println(val)); // Non-blocking continuation.
  • Async methods do not block the calling thread;

  • Async methods return IgniteClientFuture<T>, which is a combination of Future<T> and CompletionStage<T>;

  • Async continuations are executed using ClientConfiguration.asyncContinuationExecutor, which defaults to ForkJoinPool#commonPool(). For example, cache.getAsync(1).thenAccept(val -> System.out.println(val)) executes the println call using a thread from the common pool.
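Since IgniteClientFuture combines the two standard JDK abstractions, both the blocking and the non-blocking styles shown above are available on every async result. The same dual usage can be sketched with the JDK's own CompletableFuture, with no Ignite APIs involved:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncSketch {
    // Returns "blocking / NON-BLOCKING": the first part obtained via the
    // Future-style blocking get(), the second via a CompletionStage chain.
    static String demo() throws Exception {
        CompletableFuture<String> fut = CompletableFuture.supplyAsync(() -> "blocking");

        String viaGet = fut.get(); // Future style: blocks until the value arrives

        // CompletionStage style: attach a continuation; nothing blocks here
        CompletableFuture<String> chained =
                CompletableFuture.supplyAsync(() -> "non-blocking")
                        .thenApply(String::toUpperCase);

        return viaGet + " / " + chained.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // blocking / NON-BLOCKING
    }
}
```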

Data Center Replication

Sender Groups

Configure sender groups for sender nodes by using the ClientCacheDrSenderConfiguration.

The example below assigns the cache to the group1 sender group:

IgniteClient client = Ignition.startClient(
        new ClientConfiguration().setAddresses("localhost"));

ClientCacheConfiguration cacheCfg = new ClientCacheConfiguration()
        .setName("myCache")
        .setPluginConfigurations(new GridGainClientCacheConfiguration()
                .setDrSenderConfiguration(new ClientCacheDrSenderConfiguration()
                        .setSenderGroup("group1")));

ClientCache<Integer, Integer> cache = client.createCache(cacheCfg);