
Petr Filaretov

[Tiny] Elapsed Time in Java: currentTimeMillis vs nanoTime

When you need to measure how long some code takes to execute, the first thing that usually comes to mind is currentTimeMillis:

long start = System.currentTimeMillis();
doSomeWork();
long elapsedMillis = System.currentTimeMillis() - start;

However, there is a potential problem here - elapsedMillis may turn out negative in some unfortunate scenarios, or positive but incorrect.

This is because currentTimeMillis measures wall-clock time, i.e. it reads the system clock. And the system clock gets corrected automatically (for example, by NTP synchronization) because no computer's clock is perfect. Or you can simply change the system date and time manually.

So, if such a correction happens during doSomeWork(), the calculated elapsedMillis will be incorrect.

On the other hand, there is the nanoTime method, which is meant exactly for elapsed-time measurements. As the Javadoc says:

This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time.

The usage is the same as with currentTimeMillis:

long start = System.nanoTime();
doSomeWork();
long elapsedNanos = System.nanoTime() - start;

But wait, you don't have to trust me!

You can test it yourself and see that elapsed time can come out negative with currentTimeMillis. Here is the code:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ElapsedTimeTest {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<Long> elapsedFuture = executor.submit(() -> {
            long start;
            long elapsed;
            while (true) {
                start = System.currentTimeMillis();

                // do some work
                Thread.sleep(1000);

                elapsed = System.currentTimeMillis() - start;
                if (elapsed < 0) {
                    return elapsed;
                }

                if (Thread.currentThread().isInterrupted()) {
                    return null;
                }
            }
        });

        try {
            Long elapsed = elapsedFuture.get(5, TimeUnit.MINUTES);
            System.out.println("Elapsed time: " + elapsed);
        } catch (TimeoutException e) {
            System.out.println("No luck so far");
        }

        executor.shutdownNow();
    }
}

Here we submit the elapsed-time calculation to a separate thread and wait up to 5 minutes for the result - just enough time to tweak the system clock.

Elapsed time is calculated with currentTimeMillis in an infinite loop, and the "work" (Thread.sleep(1000)) takes approximately 1 second.

The loop will break, and the calculation will be completed if the elapsed time is negative or the thread is interrupted.

Now, run this class, open the system clock settings, and set the time back, say, by decrementing the minutes. You will see a negative elapsed time printed, and the program will complete.

If you change currentTimeMillis to nanoTime and repeat the experiment, nothing happens when you decrement the minutes. The program runs for the full 5 minutes and prints "No luck so far" once elapsedFuture.get() throws a TimeoutException.
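
By the way, nanoTime returns nanoseconds, so if you want to report the result in coarser units, java.util.concurrent.TimeUnit can do the conversion. A minimal sketch:

long start = System.nanoTime();
doSomeWork();
long elapsedNanos = System.nanoTime() - start;

// Convert only for reporting; the measurement itself stays in nanoseconds.
long elapsedMillis = TimeUnit.NANOSECONDS.toMillis(elapsedNanos);
System.out.println("Elapsed: " + elapsedMillis + " ms");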


Dream your code, code your dream.

Top comments (22)

Vincent A. Cicirello

Both System.currentTimeMillis() and System.nanoTime() measure "wall clock" time, at least in the sense that the difference between 2 measurements is how much time has passed on A clock.

There are 2 main differences between them. The first is that they operate at different levels of precision. The second, which your clock-changing trick demonstrates, is that System.currentTimeMillis() gives you the current time (as in date and time of day) since Jan 1, 1970, which is why changing your computer's clock changes what it returns; while System.nanoTime() uses the running JVM instance's own time source to give you time since some origin that varies per JVM instance. It still gives you time that has elapsed on a clock, so in that sense it is still sort of "wall clock" time. It is just not the literal wall clock, so changing the computer's clock shouldn't affect it.
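
A tiny illustration of that difference (just a sketch): a single nanoTime value is meaningless on its own, while a currentTimeMillis value is an actual timestamp.

// Opaque offset from an arbitrary, per-JVM origin:
System.out.println(System.nanoTime());

// Milliseconds since 1970-01-01T00:00:00Z - an actual point in time:
System.out.println(System.currentTimeMillis());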

If you really want to time code, ideally use a microbenchmarking framework like JMH (a sketch follows below). Otherwise, better than either System.currentTimeMillis or System.nanoTime is to measure elapsed CPU time with something like the following:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

ThreadMXBean bean = ManagementFactory.getThreadMXBean();
long start = bean.getCurrentThreadCpuTime();
// do something to time here
long end = bean.getCurrentThreadCpuTime();
long elapsedCpuTimeInNanoseconds = end - start;

This will give you how much time the code utilized the CPU.
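
And the JMH version might look something like this - a minimal sketch, assuming the JMH dependencies and annotation processor are configured; the benchmarked method body is just a stand-in for whatever you want to time:

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;

public class SomeWorkBenchmark {

    // JMH takes care of JVM warmup, forking, and statistics.
    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    public long measureSomeWork() {
        // Return a value so the JIT cannot eliminate the work as dead code.
        long sum = 0;
        for (int i = 0; i < 1_000; i++) {
            sum += i;
        }
        return sum;
    }
}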

Sergiy Yevtushenko

There is a difference in resolution, not precision.

Vincent A. Cicirello

No. Precision is the units the method returns. In this case the one method is in milliseconds and the other is in nanoseconds. Thus, precision is different for the 2 methods.

Resolution is how frequently the values change. System.nanoTime is defined as nanosecond precision in the documentation, but its resolution may be (and probably is) different than that. From the javadocs:

This method provides nanosecond precision, but not necessarily nanosecond resolution (that is, how frequently the value changes)

The resolution of the 2 methods is probably also different. The resolution of System.nanoTime is whatever the resolution of the JVM's "high-resolution clock" is, which likely varies by system, but is guaranteed by definition to be at least the resolution of System.currentTimeMillis. From the javadocs:

no guarantees are made except that the resolution is at least as good as that of currentTimeMillis()

So precision is definitely different for the 2 methods, while resolution is probably different but could be the same.

Sergiy Yevtushenko

Hmm. That is somewhat different from what I was taught. Precision defines the repeatability of the result, so even if you take measurements in nanoseconds but the value actually changes in steps of thousands of nanoseconds, the precision remains "milliseconds". Resolution is what is "written at the scale" (in our case, the units of the value returned by each method). Another definition of resolution - the smallest detectable difference - also agrees with my version.
Another view on the issue: you can multiply the value returned by currentTimeMillis by 1000 and get nanosecond resolution, but this doesn't give you the ability to measure half a millisecond, because the precision remains unchanged.

Vincent A. Cicirello

I think you might be mixing up precision with accuracy. Here is a good concise explanation of the difference (from wp.stolaf.edu/it/gis-precision-acc...):

Precision is how close measure values are to each other, basically how many decimal places are at the end of a given measurement. Accuracy is how close a measure value is to the true value.

You'll notice that the javadocs for the 2 methods in question say absolutely nothing about accuracy.

I think your notion of resolution, "resolution - smallest detectable difference", is correct. Although System.nanoTime is nanosecond precision, its resolution may vary by system. The "smallest detectable difference" in your words is basically how frequently the high resolution clock changes.

Your example of multiplying milliseconds by 1000 increases precision, but doesn't increase accuracy or resolution. Although to go from milliseconds to nanoseconds you'd actually need to multiply by 1 million (1 second = 1000 milliseconds = 1 billion nanoseconds).
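
A quick sketch of that conversion, to make the point concrete:

long millis = System.currentTimeMillis();

// Milliseconds to nanosecond units is a factor of 1,000,000.
long scaledNanos = millis * 1_000_000L;

// scaledNanos is now expressed in nanoseconds, but it still changes only
// once per millisecond - the resolution of the underlying clock.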

Sergiy Yevtushenko

I don't think I'm mixing these two. Even the text you quoted mentions "decimal places" for precision, as the number of valid digits in a measurement.

You're right about scaling from millis to nanos. And no, scaling doesn't change precision, but it does change resolution.
Here is one randomly chosen quote about the difference between the two:

Precision lets the operator know how well repeated measurements of the same object will agree with one another. Resolution is the total weighing range of a scale divided by the readability of the display.

(taken from here). As you can see, this definition uses the terms in the same way as I do.

Vincent A. Cicirello

The term precision is used in a couple different contexts. The relevant context for this case, the precision of the values returned by different Java methods for timing, is essentially the precision of the representation of the time values, so we're talking about precision in computer science or in mathematics (and not the laboratory science context of the term).

mathworld.wolfram.com/Precision.html
simple.m.wikipedia.org/wiki/Arithm...
en.m.wikipedia.org/wiki/Precision_...

Sergiy Yevtushenko

This variant is perfectly fine for arithmetic, where each new bit in the representation increases precision. That is not the case with measurements, which is what we have here. As I've already mentioned, you can scale milliseconds to nanoseconds, but this doesn't change the precision with which you're measuring time. In the arithmetic context, precision should increase when you scale the value.

Vincent A. Cicirello

The very javadocs of the method in question define the "precision" of System.nanoTime as nanoseconds.

docs.oracle.com/en/java/javase/17/...

The computer science meaning of the term is absolutely what is meant here.

Sergiy Yevtushenko

It doesn't necessarily mean that such usage is correct. For some reason we forget that the people working on the JDK are just people and can make mistakes too.

Vincent A. Cicirello

Their usage is correct.

Sergiy Yevtushenko

Usage correctness depends on the point of view. You prefer the CS-specific one for formal reasons. I prefer the measurement-specific one because we're talking about time measurement. Your version does not explain why the precision doesn't change despite the scaling.

Vincent A. Cicirello

A call to nanoTime or to currentTimeMillis is polling a clock. Such a call isn't measuring anything. The difference between 2 calls is a measurement of time, so I guess you can apply that interpretation to repetitions of a timing experiment. But the precision of those methods even in that case is akin to the precision of the measuring tool, which still leads to nanosecond precision for nanoTime and millisecond precision for currentTimeMillis.

Sergiy Yevtushenko

The nanoTime() Javadoc:

Returns the current value of the running Java Virtual Machine's high-resolution time source, in nanoseconds. This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time.

In other words, there is no other intended usage for this method, only measurements.

Vincent A. Cicirello

"... can only be used to measure elapsed time...." To do that you need 2 calls to it. One call doesn't measure anything. One call just polls the "high resolution clock."

Sergiy Yevtushenko

That's the norm for any relative-value measurement. And time is always measured as a relative value. It's just that for many practical cases we have well-known starting points - AD, 12 AM, the Unix epoch, etc. But as soon as you need to measure an interval that is not aligned to those well-known starting points, you perform the same steps - record the start, record the end, calculate the difference. Returning to your answer: one call measures time since the "start of the epoch", and while you might not be truly interested in that particular value, it is still a measurement. Since in most cases we're interested in some other interval, one not aligned to the start, we have to perform 2 measurements.

Vincent A. Cicirello

No. The measurement of time occurs when you calculate the difference between the result of 2 calls. A single call does not measure anything. It is like reading one number from one arbitrary line on a ruler. That isn't a measurement.

In the case of nanoTime there isn't even any well-defined epoch.

Sergiy Yevtushenko

You can't measure some random values, subtract one from the other, and magically get time. And nanoTime doesn't have an AI to predict your intent either. In fact, you measure time with every call, but to get elapsed time you need two measurements. It's exactly the same with a ruler - every time you read a number (a position), but to measure a distance you need two readings, unless you start from 0. And even in that case you must make sure that you placed the ruler correctly at 0, which is nothing other than reading the first position.

Vincent A. Cicirello

And you've circled back to my earlier point on precision of the measuring tool, which is still nanosecond precision for the nanoTime method.

Sergiy Yevtushenko

Nope. We just agreed that nanoTime performs measurements, hence the measurement-based usage is more appropriate.

Just consider the following scenario: let's assume that nanoTime() is just a wrapper around currentTimeMillis() which scales the result (according to the nanoTime() documentation, that's a perfectly valid implementation). Then imagine that we start quickly calling nanoTime() in a loop (there is a sketch of this after the list below). For a whole millisecond we'll be getting the same value, although each call takes some time. It means that the last few digits of the value returned by nanoTime() are not used, and the effective (real) precision (here I'm using your definition) is not even close to nanoseconds. It's actually milliseconds, because that's the real precision (again, your definition) of the underlying source. And that is exactly why the measurement-based approach makes more sense here.

In the field of measurements, such a situation is pretty common. Take, for example, electronics - a field close enough to CS to share many similarities. If you open the description of any digital-to-analog converter (DAC) IC, you'll see "resolution XX bits". That number of bits is exactly the same thing as the number of bits in fixed-width arithmetic (which is called "precision" in CS). But the real precision of the measurements performed with the same DAC IC in a real circuit depends on many factors and is usually considerably less than the number of resolution bits.

The use of "precision" instead of "resolution" in CS might be justified by the following:

  • they are the same for the use case of arithmetic operations
  • "precision" refers to the precision of the arithmetic operations themselves (which directly corresponds to the number of bits), not the precision of the input values

But here we're talking not about the precision of arithmetic operations; we're talking about measurements.
Vincent A. Cicirello

There are three interrelated concepts here: the methods that can be used to poll clocks, the clocks themselves, and the use of these to measure elapsed time.

The Methods: The precision of System.nanoTime is nanoseconds. The precision of System.currentTimeMillis is milliseconds. This is not at all debatable. It is what it is.

The Clocks That These Methods Poll: System.currentTimeMillis polls the system's clock (as in the "wall" clock). The resolution of the clock (how frequently it changes) that System.currentTimeMillis polls varies by system. You can find lots of sites claiming that the resolution is in tens of milliseconds on most systems. I've never investigated this directly myself, but it will vary such as by OS, etc. In the case of System.nanoTime, the clock it polls is the "high-resolution time source". The only guarantee that is made is that the resolution of that clock is at least that of the clock polled by System.currentTimeMillis. On systems where I regularly run experiments, the "high-resolution time source" has a resolution of approximately 1 microsecond (1024 nanoseconds in particular). But that may be different on other systems.

Use of the Above to Measure Elapsed Time: This is what you've been focused on. Although the precision of such measurements of elapsed time would be influenced by the characteristics of the underlying clock and any method used to poll it, it is also influenced by other things. For example, was the JVM warmed up by the point you began your timing experiment? What else is running on the system? Did the garbage collector kick in? Etc.

You can certainly compute the precision of such elapsed time. For example, repeat (poll-clock, do-the-thing-you-want-to-time, poll-clock, compute-difference) many times, compute the average along with confidence intervals, etc. What will you find from this? There isn't enough information to answer. It depends far more on the do-the-thing-you-want-to-time step than it does on how you are polling the clock. And it also depends on other things outside of our control (e.g., background processes or other applications utilizing the CPU, the garbage collector, whether the JVM is warmed up sufficiently).

Some of that can be dealt with by using neither of the methods we're talking about, and instead using ThreadMXBean.getCurrentThreadCpuTime (or ThreadMXBean.getThreadCpuTime if you need it for a specific thread in a multithreaded case). For example, that would eliminate the impact of background processes and other applications. Or you could use JMH or some other microbenchmarking framework that can handle JVM warmup, etc.
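
If you want to check the resolution on your own system, a quick probe (just a sketch, not a rigorous benchmark - the result also includes the cost of the calls themselves) is to look for the smallest non-zero difference between consecutive nanoTime readings:

public class NanoTimeResolutionProbe {
    public static void main(String[] args) {
        long min = Long.MAX_VALUE;
        for (int i = 0; i < 1_000_000; i++) {
            long t1 = System.nanoTime();
            long t2 = System.nanoTime();
            long diff = t2 - t1;
            if (diff > 0 && diff < min) {
                min = diff;
            }
        }
        // The smallest observed non-zero step approximates the clock's tick.
        System.out.println("Smallest observed step: " + min + " ns");
    }
}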

Sergiy Yevtushenko

The Methods: The precision of System.nanoTime is nanoseconds. The precision of System.currentTimeMillis is milliseconds. This is not at all debatable. It is what it is.

I see no reason why it's "not at all debatable". Actually, this is the very position we started from at the beginning.