Daniel Mitterdorfer

Microbenchmarking in Java with JMH: Hello JMH

This is the fourth post in a series about microbenchmarking on the JVM with the Java Microbenchmark Harness (JMH).

part 1: Microbenchmarking in Java with JMH: An Introduction

part 2: Microbenchmarks and their environment

part 3: Common Flaws of Handwritten Benchmarks

part 5: Digging Deeper

In the previous post I showed different problems that we might miss when writing microbenchmarks from scratch. In this post I'll introduce JMH and show how it helps us avoid these problems.

Java Microbenchmark Harness: Hello World Walkthrough

By now you might think there is no way to write a correct microbenchmark on the JVM without being an engineer working on HotSpot. Fortunately, some people on the OpenJDK team, most prominently Aleksey Shipilёv, have written the Java Microbenchmark Harness, or JMH for short. JMH takes all sorts of countermeasures to eliminate or reduce the problems I have described earlier and lets you concentrate on writing the microbenchmark instead of satisfying the JVM. To get a grasp of JMH's approach to microbenchmarking, let's write a hello world benchmark, which you can also find in the accompanying project on GitHub:

package name.mitterdorfer.benchmark.jmh;

import org.openjdk.jmh.annotations.Benchmark;

public class HelloJMHMicroBenchmark {
    @Benchmark
    public void benchmarkRuntimeOverhead() {
        //intentionally left blank
    }
}
A JMH microbenchmark is a plain Java class. Each microbenchmark is implemented as a method that is annotated with @Benchmark (in earlier versions of JMH the annotation was called @GenerateMicroBenchmark). But how do we run it? Before we can run the microbenchmark, we have some work to do. To see why, let's have a look at the basic workflow with JMH:

[Figure: runtime diagram of the forked JVM runs of JMH]

This workflow might strike you as a bit odd at first. Why is JMH generating code? Why do we have to create a shaded JAR? Wouldn't it be easier to run a microbenchmark just like a JUnit test? Let's go through this process step by step.

We have already completed the first step by annotating a method with @Benchmark. The second step is carried out when the microbenchmark class is compiled. JMH provides multiple annotation processors that generate the final microbenchmark class. This generated class contains setup and measurement code as well as code that is required to minimize unwanted JIT compiler optimizations in the microbenchmark. The generated class for name.mitterdorfer.benchmark.jmh.HelloJMHMicroBenchmark is name.mitterdorfer.benchmark.jmh.generated.HelloJMHMicroBenchmark_benchmarkRuntimeOverhead; if you are curious, you can find the corresponding .java file below build/classes/main. As you can see, JMH generates one class per method that is annotated with @Benchmark, but that is transparent to JMH users.

JMH contains a Runner class somewhat similar to JUnit's, so it is also possible to run microbenchmarks embedded in an application using the JMH Java API. However, let's use the JAR-based workflow for now and create a shaded JAR which we'll run. JMH allows multiple microbenchmark classes in the same JAR and can run all of them in the same microbenchmarking run.
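As a brief aside, running a benchmark through the Java API looks roughly like this. This is a minimal sketch: the main class name is made up, and the options shown are just one reasonable choice, not the project's actual configuration:

```java
package name.mitterdorfer.benchmark.jmh;

import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class EmbeddedBenchmarkRunner {
    public static void main(String[] args) throws RunnerException {
        // Select all benchmarks whose fully-qualified name matches this pattern
        Options options = new OptionsBuilder()
                .include(HelloJMHMicroBenchmark.class.getSimpleName())
                .build();
        new Runner(options).run();
    }
}
```

This is handy when you want to trigger benchmarks from existing tooling, but the shaded-JAR workflow below remains the recommended way to get trustworthy numbers.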

To run the microbenchmark we will now tackle step three and create a shaded JAR. I'll use Gradle for that as I prefer it over Maven. If you want or have to use Maven, just look at the JMH example POM or at the sample POM of my benchmarking project. Just type gradle shadow to create the shaded JAR, i.e. a single JAR that contains your microbenchmark and all of its dependencies. When you type java -jar build/libs/benchmarking-experiments-0.1.0-all.jar, JMH runs the microbenchmarks that are contained in the JAR and prints something similar to this:

# Run progress: 0,00% complete, ETA 00:06:40
# Warmup: 20 iterations, 1 s each
# Measurement: 20 iterations, 1 s each
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Throughput, ops/time
# Benchmark: name.mitterdorfer.benchmark.jmh.HelloJMHMicroBenchmark.benchmarkRuntimeOverhead
# VM invoker: /Library/Java/JavaVirtualMachines/jdk1.7.0_10.jdk/Contents/Home/jre/bin/java
# VM options: -Dfile.encoding=UTF8
# Fork: 1 of 10
# Warmup Iteration   1: 1442257053,080 ops/s
# Warmup Iteration   2: 1474088913,188 ops/s
[...]
# Warmup Iteration  19: 435080374,496 ops/s
# Warmup Iteration  20: 436917769,398 ops/s
Iteration   1: 1462176825,349 ops/s
Iteration   2: 1431427218,067 ops/s
[...]

# Run complete. Total time: 00:08:06

Benchmark                                                   Mode   Samples        Score  Score error    Units
n.m.b.j.HelloJMHMicroBenchmark.benchmarkRuntimeOverhead    thrpt       200 1450534078,416 29308551,722    ops/s

You can see that JMH creates multiple JVM forks. For each fork, it runs n warmup iterations (shown in blue in the picture below), which are not measured and are just needed to reach a steady state, before m measurement iterations are run (shown in red in the picture below). In this example, both n and m are 20, but you can change this with command line parameters.
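For example, the warmup iteration count, measurement iteration count, and fork count can be overridden with JMH's standard short options (the JAR name below is the one produced by this project's build):

```shell
# 10 warmup iterations, 10 measurement iterations, 5 forks
java -jar build/libs/benchmarking-experiments-0.1.0-all.jar -wi 10 -i 10 -f 5

# -h prints the full list of supported command line options
java -jar build/libs/benchmarking-experiments-0.1.0-all.jar -h
```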

[Figure: runtime diagram of the forked JVM runs of JMH]

At the end, JMH summarizes the results of all microbenchmarking runs. The two most important measures are the "score" (the mean in throughput mode), which allows you to estimate the performance of the benchmarked code, and the "score error", which allows you to estimate the noisiness of the measurements taken by the microbenchmark. As this post is not intended to be an introduction to statistics, I suggest "Explained: Key Mathematic Principles for Performance Testers", written by Microsoft's patterns & practices group. If you are more the intuitive type, you'll like the articles by Kalid Azad, especially "How To Analyze Data Using the Average".
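To make the two measures a bit more concrete, here is a small, self-contained sketch. This is not JMH code and not JMH's exact error calculation (JMH reports a confidence-interval-based error); it is plain Java with hypothetical sample values, computing the mean and the standard error of the mean:

```java
import java.util.Arrays;

public class ScoreStats {
    // Arithmetic mean of the samples ("score" in throughput mode)
    static double mean(double[] samples) {
        return Arrays.stream(samples).average().orElse(Double.NaN);
    }

    // Standard error of the mean: sample standard deviation / sqrt(n).
    // A large value relative to the mean indicates noisy measurements.
    static double standardError(double[] samples) {
        double m = mean(samples);
        double sumSquaredDiffs = Arrays.stream(samples)
                .map(s -> (s - m) * (s - m))
                .sum();
        double sampleStdDev = Math.sqrt(sumSquaredDiffs / (samples.length - 1));
        return sampleStdDev / Math.sqrt(samples.length);
    }

    public static void main(String[] args) {
        // Hypothetical throughput samples in ops/s
        double[] samples = {1462176825.349, 1431427218.067, 1450534078.416, 1474088913.188};
        System.out.printf("score: %.3f ops/s%n", mean(samples));
        System.out.printf("score error: %.3f ops/s%n", standardError(samples));
    }
}
```

The intuition: the score tells you how fast the code ran on average, while the error tells you how much you should trust that average.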

That's basically the whole microbenchmarking process with JMH. Congratulations, you have mastered the first step of writing microbenchmarks with JMH! In the next post we'll get to know more of JMH's concepts.

Questions or comments?

Just ping me on Twitter