Microbenchmarking Java code with JMH

πŸ”” This article was originally posted on my site, MihaiBojin.com. πŸ””


As a general best practice, you should benchmark your code.

JMH, or Java Microbenchmark Harness, is a tool for analyzing the performance of code written in Java and other JVM languages.

Since I wanted to profile the performance of the Props library, I integrated JMH into the codebase.

The following is a simple step-by-step tutorial on integrating the JMH Gradle plugin into a Java codebase.

Gradle configuration

First, add the JMH Gradle plugin to your build.gradle.kts file:

plugins {
    id("me.champeau.jmh").version("0.6.6")
}

Doing so will add a few tasks to your Gradle project:

  • gradle jmh: Runs all benchmarks
  • gradle jmhJar: Generates a portable JAR that you can run on a different machine

The second task is helpful for running the benchmarks on a dedicated machine (not your developer laptop), producing predictable, comparable, and reproducible results.
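
For example, a typical workflow is to build the portable JAR locally and execute it on the benchmark machine. A sketch, assuming a module named my-module (the exact JAR name depends on your project's name and version):

gradle jmhJar
# copy build/libs/my-module-jmh.jar to the benchmark machine, then:
java -jar my-module-jmh.jar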

Writing a JMH benchmark

The plugin expects all benchmark code to live in src/jmh/java and src/jmh/resources. This avoids having to create a separate project that imports all of the main code, while also keeping the benchmark code out of the library's main artifact (src/main/java).
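
With this setup, the layout looks roughly like this (module name illustrative):

my-module/
├── build.gradle.kts
└── src/
    ├── main/java/       (library code, shipped)
    ├── jmh/java/        (benchmark code, not shipped)
    └── jmh/resources/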

Let's create the first benchmark. Save this file as src/jmh/java/MyBenchmark.java in your module (avoid naming the class Benchmark itself, as that would clash with JMH's @Benchmark annotation).

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Warmup;

@Fork(value = 1, warmups = 1)
@Warmup(iterations = 1)
@Measurement(iterations = 1)
@OutputTimeUnit(TimeUnit.SECONDS)
public class MyBenchmark {
  @Benchmark
  public void oneBenchmark() {
    // do something
  }
}

The code above is just scaffolding. Let's look at each annotation.

@Fork: configures how many times the benchmark is forked into a fresh JVM. If value = 0, the benchmark runs in the same JVM as the harness. The warmups parameter defines how many additional forks are run first, with their results discarded.

The main benefit of warming up is to let the JVM load all the required classes and compile the hot code paths. Since the JVM loads classes lazily and compiles code Just-In-Time, the first iterations of a benchmark would otherwise incur the cost of all these actions and skew the results.

@Warmup determines how many warmup iterations are performed and discarded per fork.

@Measurement specifies how many measured iterations to execute per fork.

And finally, @OutputTimeUnit specifies the time unit used to report the results.

There are more annotations and parameters, but I won't get into the weeds of it just yet.
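
Still, to make the settings above concrete, here is an illustrative sketch (the class name, method, and values are mine, not recommendations): two measured forks, each preceded by one discarded warmup fork; within each fork, three warmup iterations followed by five measured iterations of one second each.

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Warmup;

@Fork(value = 2, warmups = 1)
@Warmup(iterations = 3, time = 1, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
public class TunedBenchmark {
  @Benchmark
  public int sumOfSquares() {
    int sum = 0;
    for (int i = 0; i < 1_000; i++) {
      sum += i * i;
    }
    return sum; // returning the value keeps the loop from being optimized away (see below)
  }
}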

"Consuming" results

There is a small caveat when writing benchmark code: the JVM is smart enough to optimize away code whose results are never actually used.

For example, in the following code, the result of tested.get() is never used (consumed), so the running JVM may decide to skip the call altogether, making the benchmark invalid.

import java.util.function.Supplier;
import org.openjdk.jmh.annotations.Benchmark;

public class DeadCodeBenchmark {
  // assume an object under test
  private static final Supplier<String> tested = () -> "some value";

  @Benchmark
  public void oneBenchmark() {
    // the result is discarded, so the JVM may eliminate this call
    tested.get();
  }
}

JMH introduces the concept of a Blackhole. The code above can be rewritten to ensure the result is always consumed, and therefore the code under test is actually executed:

import java.util.function.Supplier;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.infra.Blackhole;

public class BlackholeBenchmark {
  // the same object under test
  private static final Supplier<String> tested = () -> "some value";

  @Benchmark
  public void oneBenchmark(Blackhole blackhole) {
    // the code under test is always executed
    blackhole.consume(tested.get());
  }
}
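When a benchmark produces a single result, an alternative is to simply return it: JMH implicitly consumes returned values. A minimal sketch, reusing the hypothetical tested supplier from above:

import java.util.function.Supplier;
import org.openjdk.jmh.annotations.Benchmark;

public class ReturnValueBenchmark {
  // the same hypothetical object under test
  private static final Supplier<String> tested = () -> "some value";

  @Benchmark
  public String oneBenchmark() {
    // returned values are implicitly consumed by JMH
    return tested.get();
  }
}

A Blackhole remains the better choice when a single invocation needs to consume several values.
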

You can now run all the benchmarks with the gradle jmh command.
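
If you find yourself repeating the same annotation values everywhere, the plugin also exposes a jmh { } extension in build.gradle.kts for setting defaults across all benchmarks. The options below mirror JMH's own settings, but treat the exact names as an assumption and double-check them against the plugin's README for your version:

jmh {
    includes.set(listOf("MyBenchmark.*")) // only run matching benchmarks
    fork.set(1)
    warmupIterations.set(1)
    iterations.set(1)
}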

Et voilà! A super simple intro to JMH in a Gradle project!

Further reading

For additional details, see:

Until next time!


If you liked this article and want to read more like it, please subscribe to my newsletter; I send one out every few weeks!
