GridGain Benchmarks

Run on the Yardstick Benchmarking Framework

Yardstick Benchmarks

GridGain® benchmarks, like all of the other benchmarks described here, are written on top of the Yardstick Benchmarking Framework.

The Yardstick framework is hosted on GitHub, where you can find its full documentation.

Yardstick is a framework for writing benchmarks. Specifically, it helps with writing benchmarks for clustered or otherwise distributed systems.

The framework comes with a default set of probes that collect various metrics during benchmark execution. Probes can be turned on or off in the configuration. For example, you can use a probe that measures throughput and latency, or a probe that gathers vmstat statistics. At the end of a benchmark run, Yardstick automatically produces files with the collected probe points.
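Conceptually, a throughput/latency probe records how many operations completed and how long each one took, then reduces those samples into the numbers that end up in the probe-point files. The following standalone Java sketch illustrates that computation only; it is not the Yardstick probe API, and all names in it are hypothetical:

```java
import java.util.Arrays;

// Hypothetical, self-contained illustration of the arithmetic behind a
// throughput/latency probe. This is NOT the Yardstick probe API.
public class ProbeSketch {
    /** Operations per second, given total ops and elapsed time in nanoseconds. */
    public static double throughputOpsPerSec(long ops, long elapsedNanos) {
        return ops * 1_000_000_000.0 / elapsedNanos;
    }

    /** Average latency in milliseconds over per-operation latencies in nanoseconds. */
    public static double avgLatencyMs(long[] latenciesNanos) {
        return Arrays.stream(latenciesNanos).average().orElse(0) / 1_000_000.0;
    }

    public static void main(String[] args) {
        // Synthetic probe points: 4 operations, each taking 2 ms.
        long[] latencies = {2_000_000, 2_000_000, 2_000_000, 2_000_000};
        long elapsed = Arrays.stream(latencies).sum(); // 8 ms total

        // 4 ops in 8 ms -> 500 ops/sec, 2.0 ms average latency.
        System.out.printf("throughput: %.0f ops/sec%n",
            throughputOpsPerSec(latencies.length, elapsed));
        System.out.printf("avg latency: %.1f ms%n", avgLatencyMs(latencies));
    }
}
```

The real probes additionally sample these values on a fixed interval so the results can be plotted over time.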

Running Benchmarks

The Yardstick framework comes with several scripts in the ‘bin’ folder. The easiest way to execute benchmarks is to run the ‘bin/’ script, which starts the remote server nodes specified in the ‘config/’ file as well as the local benchmark driver.

$ bin/ config/

Here is an example of a ‘’ file that starts 2 server nodes on the local host and executes GridGainPutBenchmark:

# Note that -dn and -sn, which stand for data node and server node,
# are native Yardstick parameters and are documented in the Yardstick framework.
CONFIGS="-b 1 -sm PRIMARY_SYNC -dn GridGainPutBenchmark -sn GridGainNode"
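Yardstick accepts further driver parameters in the same CONFIGS string. A hedged variant of the configuration above is sketched below; -w (warmup seconds), -d (run duration in seconds), and -t (driver thread count) are standard Yardstick flags, but verify them against the Yardstick documentation for your version:

```shell
# Same benchmark, with an explicit warmup, run duration, and thread count.
# The -w, -d, and -t values shown here are illustrative, not recommendations.
CONFIGS="-b 1 -sm PRIMARY_SYNC -w 60 -d 300 -t 64 -dn GridGainPutBenchmark -sn GridGainNode"
```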

Generating Graphs

At the end of the run, Yardstick generates a results folder containing the benchmark probe points. To plot these points on a graph, you can execute the ‘bin/’ script and pass one or more benchmark result folders to it, like so:

bin/ -i results_2014-05-20_03-19-21 results_2014-05-20_03-20-35

You can view the graphs by opening the ‘Results.html’ file in the generated folder.