DEV Community

Discussion on: Java Daily Coding Problem #001

Andrew (he/him) • Edited

Joan and Andrei both provided alternative solutions to this problem. I wondered which of our solutions performed best in the worst case, so I created a Java Microbenchmark Harness (JMH) project that runs each piece of code and times it. So I present...

A mini-tutorial on using the Java Microbenchmark Harness to benchmark Java code

Setup

Use Maven to set up a simple JMH project:

$ mvn archetype:generate \
    -DinteractiveMode=false \
    -DarchetypeGroupId=org.openjdk.jmh \
    -DarchetypeArtifactId=jmh-java-benchmark-archetype \
    -DgroupId=<org.sample> \
    -DartifactId=<test> \
    -Dversion=1.0

My package is daily, so I use that in place of <org.sample>, and I set the artifactId to 001 because this is the first Daily Coding Problem I'm solving in this fashion. This makes a directory called 001 with the following structure:

$ tree 001
001
├── pom.xml
└── src
    └── main
        └── java
            └── daily
                └── MyBenchmark.java

4 directories, 2 files

The contents of MyBenchmark.java are as follows:


/*
 * ...Oracle copyright...
 */

package daily;

import org.openjdk.jmh.annotations.Benchmark;

public class MyBenchmark {

    @Benchmark
    public void testMethod() {
        // This is a demo/sample template for building your JMH benchmarks. Edit as needed.
        // Put your benchmark code here.
    }

}

...but we're going to delete all of this and write our own code.

Benchmarking

Christophe Schreiber has a good example of using JMH on Dev.To. We need to pass parameters to our methods, though. In the worst-case scenario, we'll have a very long array in which no two numbers add to k. These numbers should also all be unique, so that the JIT compiler can't optimise anything away and so that Andrei's Set keeps growing.
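To see why this input really is a worst case, here's a small standalone sketch (my own illustration, not part of the benchmark; `WorstCaseDemo` is a made-up name): it builds a shuffled array of the unique values 1..n with k = Integer.MAX_VALUE and confirms by brute force that no pair of elements sums to k, so every algorithm is forced to scan the entire input.

```java
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class WorstCaseDemo {
    public static void main(String[] args) {
        int n = 1000;
        int k = Integer.MAX_VALUE; // larger than any sum of two elements of 1..n

        // unique values 1..n, shuffled so ordering can't help any algorithm
        List<Integer> list = IntStream.rangeClosed(1, n).boxed().collect(Collectors.toList());
        Collections.shuffle(list);
        int[] given = list.stream().mapToInt(i -> i).toArray();

        // brute-force check: no two distinct elements sum to k
        boolean found = false;
        for (int i = 0; i < given.length - 1; i++)
            for (int j = i + 1; j < given.length; j++)
                if (given[i] + given[j] == k)
                    found = true;

        System.out.println(found); // false: no early exit is possible
    }
}
```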

I will be using this file on GitHub by Bruno Doolaeghe as a template:

package daily;

import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

import java.util.concurrent.TimeUnit;

import java.util.stream.Collectors;
import java.util.stream.IntStream;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

@BenchmarkMode(Mode.AverageTime)
@Warmup(iterations = 10, time=500, timeUnit=TimeUnit.MILLISECONDS)
@Measurement(iterations = 10, time=500, timeUnit=TimeUnit.MILLISECONDS)
@Fork(1)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
public class MyBenchmark {

  @State(Scope.Benchmark)
  public static class Setup {

    @Param({ "10", "15", "20", "25", "30", "40", "60", "80", "100", "200",
             "400", "800", "1000", "10000", "100000", "1000000", "10000000" })
    int N;

    // the List of numbers 1...N (built in get_given(), because @Param
    // fields are injected after field initializers have already run)
    List<Integer> list;

    // always larger than the largest possible sum of two elements
    int k = Integer.MAX_VALUE;

    // build the List on first use, then shuffle the array before returning it
    int[] get_given() {

      // lazily create the List so that it sees the injected value of N
      if (list == null)
        list = IntStream.rangeClosed(1, N).boxed().collect(Collectors.toList());

      // shuffle the List
      Collections.shuffle(list);

      // convert the List to an array
      return list.stream().mapToInt(i -> i).toArray();
    }

    // return k
    int get_k() {
      return k;
    }

  }

  //----------------------------------------------------------------------------
  //
  //  TESTS
  //
  //----------------------------------------------------------------------------

  @Benchmark
  public boolean testAndrew(Setup d) {

    // setup
    int[] given = d.get_given();
    int k = d.get_k();

    // algorithm

    int len = given.length;

    for (int outer = 0; outer < (len - 1); ++outer)
      for (int inner = outer + 1; inner < len; ++inner)
        if (given[outer] + given[inner] == k)
          return true;

    return false;
  }

  @Benchmark
  public boolean testAndrei(Setup d) {

    // setup
    int[] given = d.get_given();
    int k = d.get_k();

    // algorithm

    Set<Integer> set = new TreeSet<Integer>();

    for (int number : given){
      if (set.contains(number))
        return true;
      set.add(k - number);
    }

    return false;
  }

  @Benchmark
  public boolean testJoan(Setup d) {

    // setup
    int[] given = d.get_given();
    int k = d.get_k();

    // algorithm

    Arrays.sort(given);

    int i = 0;
    int j = given.length - 1;

    while (i < j) {

      int sum = given[i] + given[j];

      if      (sum == k) return true;
      else if (sum  > k) --j;
      else               ++i;

    }

    return false;
  }

}
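Before trusting the timings, it's worth checking that the three algorithms agree. Here's a quick standalone sanity check (my own sketch, outside of JMH; `AgreementCheck` and the helper names are made up) that runs all three on a few small inputs, including the edge cases of a one-element and an empty array:

```java
import java.util.Arrays;
import java.util.Set;
import java.util.TreeSet;

public class AgreementCheck {
    // Andrew's double loop: try every pair
    static boolean doubleLoop(int[] a, int k) {
        for (int i = 0; i < a.length - 1; i++)
            for (int j = i + 1; j < a.length; j++)
                if (a[i] + a[j] == k) return true;
        return false;
    }

    // Andrei's approach: remember each needed complement in a Set
    static boolean withSet(int[] a, int k) {
        Set<Integer> seen = new TreeSet<>();
        for (int x : a) {
            if (seen.contains(x)) return true;
            seen.add(k - x);
        }
        return false;
    }

    // Joan's approach: sort, then walk two pointers inward
    static boolean sortTwoPointer(int[] a, int k) {
        int[] s = a.clone();
        Arrays.sort(s);
        int i = 0, j = s.length - 1;
        while (i < j) {
            int sum = s[i] + s[j];
            if (sum == k) return true;
            else if (sum > k) --j;
            else ++i;
        }
        return false;
    }

    public static void main(String[] args) {
        int[][] inputs = { {10, 15, 3, 7}, {1, 2, 3}, {5}, {} };
        int k = 17;
        for (int[] input : inputs) {
            System.out.println(doubleLoop(input, k) + " "
                             + withSet(input, k) + " "
                             + sortTwoPointer(input, k));
        }
    }
}
```

All three methods print the same answer for every input, so any timing differences really are down to the algorithms, not their results.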

All code is available on my GitHub. The results of the benchmarking are below. Please note that I don't use JMH very often so this may not be the optimal way to lay out this benchmark. If you have any suggestions, be sure to let me (politely) know :)

Benchmark                    (N)  Mode  Cnt  Score   Error  Units
MyBenchmark.testAndrei        10  avgt   10  0.053 ± 0.004  us/op
MyBenchmark.testAndrei        15  avgt   10  0.056 ± 0.002  us/op
MyBenchmark.testAndrei        20  avgt   10  0.052 ± 0.003  us/op
MyBenchmark.testAndrei        25  avgt   10  0.057 ± 0.008  us/op
MyBenchmark.testAndrei        30  avgt   10  0.053 ± 0.003  us/op
MyBenchmark.testAndrei        40  avgt   10  0.052 ± 0.001  us/op
MyBenchmark.testAndrei        60  avgt   10  0.053 ± 0.002  us/op
MyBenchmark.testAndrei        80  avgt   10  0.056 ± 0.007  us/op
MyBenchmark.testAndrei       100  avgt   10  0.057 ± 0.008  us/op
MyBenchmark.testAndrei       200  avgt   10  0.054 ± 0.006  us/op
MyBenchmark.testAndrei       400  avgt   10  0.055 ± 0.009  us/op
MyBenchmark.testAndrei       800  avgt   10  0.054 ± 0.001  us/op
MyBenchmark.testAndrei      1000  avgt   10  0.065 ± 0.031  us/op
MyBenchmark.testAndrei     10000  avgt   10  0.056 ± 0.002  us/op
MyBenchmark.testAndrei    100000  avgt   10  0.053 ± 0.003  us/op
MyBenchmark.testAndrei   1000000  avgt   10  0.063 ± 0.004  us/op
MyBenchmark.testAndrei  10000000  avgt   10  0.052 ± 0.001  us/op
MyBenchmark.testAndrew        10  avgt   10  0.051 ± 0.002  us/op
MyBenchmark.testAndrew        15  avgt   10  0.051 ± 0.002  us/op
MyBenchmark.testAndrew        20  avgt   10  0.055 ± 0.006  us/op
MyBenchmark.testAndrew        25  avgt   10  0.052 ± 0.001  us/op
MyBenchmark.testAndrew        30  avgt   10  0.053 ± 0.002  us/op
MyBenchmark.testAndrew        40  avgt   10  0.054 ± 0.005  us/op
MyBenchmark.testAndrew        60  avgt   10  0.050 ± 0.002  us/op
MyBenchmark.testAndrew        80  avgt   10  0.059 ± 0.009  us/op
MyBenchmark.testAndrew       100  avgt   10  0.053 ± 0.007  us/op
MyBenchmark.testAndrew       200  avgt   10  0.055 ± 0.003  us/op
MyBenchmark.testAndrew       400  avgt   10  0.051 ± 0.002  us/op
MyBenchmark.testAndrew       800  avgt   10  0.062 ± 0.017  us/op
MyBenchmark.testAndrew      1000  avgt   10  0.080 ± 0.034  us/op
MyBenchmark.testAndrew     10000  avgt   10  0.051 ± 0.002  us/op
MyBenchmark.testAndrew    100000  avgt   10  0.063 ± 0.038  us/op
MyBenchmark.testAndrew   1000000  avgt   10  0.062 ± 0.014  us/op
MyBenchmark.testAndrew  10000000  avgt   10  0.058 ± 0.005  us/op
MyBenchmark.testJoan          10  avgt   10  0.059 ± 0.007  us/op
MyBenchmark.testJoan          15  avgt   10  0.068 ± 0.010  us/op
MyBenchmark.testJoan          20  avgt   10  0.060 ± 0.002  us/op
MyBenchmark.testJoan          25  avgt   10  0.055 ± 0.003  us/op
MyBenchmark.testJoan          30  avgt   10  0.058 ± 0.008  us/op
MyBenchmark.testJoan          40  avgt   10  0.056 ± 0.003  us/op
MyBenchmark.testJoan          60  avgt   10  0.061 ± 0.010  us/op
MyBenchmark.testJoan          80  avgt   10  0.064 ± 0.013  us/op
MyBenchmark.testJoan         100  avgt   10  0.061 ± 0.006  us/op
MyBenchmark.testJoan         200  avgt   10  0.059 ± 0.014  us/op
MyBenchmark.testJoan         400  avgt   10  0.059 ± 0.008  us/op
MyBenchmark.testJoan         800  avgt   10  0.060 ± 0.001  us/op
MyBenchmark.testJoan        1000  avgt   10  0.058 ± 0.001  us/op
MyBenchmark.testJoan       10000  avgt   10  0.057 ± 0.002  us/op
MyBenchmark.testJoan      100000  avgt   10  0.066 ± 0.009  us/op
MyBenchmark.testJoan     1000000  avgt   10  0.059 ± 0.007  us/op
MyBenchmark.testJoan    10000000  avgt   10  0.058 ± 0.008  us/op

Even in the worst-case scenario with a 10-million element array, I see no difference between the three methods in terms of time:

MyBenchmark.testAndrei  10000000  avgt   10  0.052 ± 0.001  us/op
MyBenchmark.testAndrew  10000000  avgt   10  0.058 ± 0.005  us/op
MyBenchmark.testJoan    10000000  avgt   10  0.058 ± 0.008  us/op

Within the error bars, all three results could be identical (they could all be 0.053 us/op, say). That's anticlimactic!
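To back that claim up with arithmetic, here's a tiny sketch (my own; `IntervalOverlap` is a made-up name) that intersects the three reported score ± error intervals and confirms they share a common value:

```java
public class IntervalOverlap {
    public static void main(String[] args) {
        // score ± error for testAndrei, testAndrew, testJoan at N = 10,000,000 (us/op)
        double[][] results = { { 0.052, 0.001 }, { 0.058, 0.005 }, { 0.058, 0.008 } };

        // intersect the intervals [score - error, score + error]
        double lo = Double.NEGATIVE_INFINITY, hi = Double.POSITIVE_INFINITY;
        for (double[] r : results) {
            lo = Math.max(lo, r[0] - r[1]); // highest lower bound
            hi = Math.min(hi, r[0] + r[1]); // lowest upper bound
        }

        // a small tolerance guards against floating-point rounding
        System.out.println(lo <= hi + 1e-9); // true: one value (0.053) fits all three intervals
    }
}
```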

Andrew (he/him) • Edited

Follow-up with larger arrays:

Benchmark                      (N)  Mode  Cnt  Score   Error  Units
MyBenchmark.testAndrei    10000000  avgt  100  0.071 ± 0.004  us/op
MyBenchmark.testAndrew    10000000  avgt  100  0.049 ± 0.001  us/op
MyBenchmark.testJoan      10000000  avgt  100  0.887 ± 0.365  us/op

MyBenchmark.testAndrei   100000000  avgt  100  0.057 ± 0.001  us/op
MyBenchmark.testAndrew   100000000  avgt  100  0.050 ± 0.001  us/op
MyBenchmark.testJoan     100000000  avgt  100  0.129 ± 0.080  us/op

MyBenchmark.testAndrei  1000000000  avgt  100  0.058 ± 0.001  us/op
MyBenchmark.testAndrew  1000000000  avgt  100  0.053 ± 0.001  us/op
MyBenchmark.testJoan    1000000000  avgt  100  0.057 ± 0.003  us/op

Does the double-for win??

charles1303

Interesting stats, though! But could it be that the N-squared approach never really got to its worst-case scenario by not iterating through all the elements?