GridGain Benchmarks

Run GridGain Benchmarks on the Yardstick Benchmarking Framework

Yardstick is a framework for writing benchmarks. Specifically, it helps with writing benchmarks for clustered or otherwise distributed systems, which makes it a natural fit for running GridGain benchmarks.

The framework comes with a default set of probes that collect various metrics during benchmark execution. Probes can be turned on or off in the configuration: for example, there is a probe for measuring throughput and latency, and a probe that gathers vmstat statistics. At the end of a benchmark run, Yardstick automatically produces files with the collected probe points.
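
For example, the set of enabled probes is typically listed in benchmark.properties. The snippet below is a sketch only: the BENCHMARK_DEFAULT_PROBES property and the probe class names follow Yardstick's defaults and should be checked against your Yardstick version.

    # Comma-separated list of probes to run during benchmark execution (assumed names).
    BENCHMARK_DEFAULT_PROBES=ThroughputLatencyProbe,PercentileProbe,VmStatProbe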

Yardstick Benchmarks

All GridGain® benchmarks are written on top of the Yardstick Benchmarking Framework.
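
In practice, a Yardstick benchmark is a Java class that implements the framework's driver interface; Yardstick repeatedly invokes its test() method from multiple threads and records probe points around each call. The sketch below is a minimal, hedged example: the BenchmarkDriverAdapter base class and the method signatures follow Yardstick's public API, while MyPutBenchmark and the cache calls in the comments are hypothetical.

    import java.util.Map;

    import org.yardstickframework.BenchmarkConfiguration;
    import org.yardstickframework.BenchmarkDriverAdapter;

    // Hypothetical benchmark: performs one cache put per iteration.
    public class MyPutBenchmark extends BenchmarkDriverAdapter {
        @Override public void setUp(BenchmarkConfiguration cfg) throws Exception {
            super.setUp(cfg);

            // Obtain the client or cache to benchmark here, e.g. connect to GridGain.
        }

        // Called in a loop by Yardstick; each call is one measured operation.
        @Override public boolean test(Map<Object, Object> ctx) throws Exception {
            // cache.put(key, value); // hypothetical GridGain cache operation

            return true; // Returning true tells Yardstick to keep iterating.
        }
    }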

Hosted On GitHub

The Yardstick framework is hosted on GitHub, where you can also find its full documentation.

Running Benchmarks

The Yardstick framework comes with several scripts in the bin folder. The easiest way to execute benchmarks is to start the bin/benchmark-run-all.sh script, which starts the remote server nodes, if any are specified in the config/benchmark.properties file, and the local benchmark driver.

$ bin/benchmark-run-all.sh config/benchmark.properties

Here is an example of a benchmark.properties file that starts 2 server nodes on the local host and executes GridGainPutBenchmark:


    HOSTS=localhost,localhost
    # -b sets the number of cache backups and -sm the cache write synchronization mode.
    # Note that -dn and -sn, which specify the benchmark driver name and the
    # server name, are native Yardstick parameters and are documented in the
    # Yardstick framework.
    CONFIGS="-b 1 -sm PRIMARY_SYNC -dn GridGainPutBenchmark -sn GridGainNode"
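
CONFIGS can also carry several run configurations; Yardstick treats the value as a comma-separated list and runs each configuration in turn. In the sketch below, GridGainPutGetBenchmark is a hypothetical second benchmark name:

    # Two run configurations, executed one after another (GridGainPutGetBenchmark is hypothetical).
    CONFIGS="-b 1 -sm PRIMARY_SYNC -dn GridGainPutBenchmark -sn GridGainNode, -b 1 -sm PRIMARY_SYNC -dn GridGainPutGetBenchmark -sn GridGainNode"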

Generating Graphs

At the end of the run, Yardstick generates a results folder with the benchmark probe points. To plot these points on a graph, execute the bin/jfreechart-graph-gen.sh script and pass one or more benchmark result folders to it, like so:

$ bin/jfreechart-graph-gen.sh -i results_2014-05-20_03-19-21 results_2014-05-20_03-20-35

You can view graphs by opening the Results.html file in the generated folder.

For further information, visit our Running Benchmarks page.