Benchmarking Algorithm Speed

Learning about Big O notation has helped me understand just how shallowly I've been thinking about my code. Which is fine - you don't know what you don't know. Besides, I haven't worked with humongous data sets or built speed-sensitive applications, so the world of efficient coding hasn't been (or felt) relevant to me. Obviously, that's changing.

Here's one thing I've learned: searching linearly through a sorted array for a specific item is less efficient than applying a binary search to the same problem. The linear search's performance can be described as O(n), and the binary search's as O(log n).
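
To make that concrete, here's a minimal sketch of the two approaches -- not the exact code I benchmarked, but the same idea. Both assume a sorted array and return the target's index (or -1 if it's missing):

function linearSearch(arr, target) {
  // O(n): scan every element in order until we find the target
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i;
  }
  return -1;
}

function binarySearch(arr, target) {
  // O(log n): cut the search window in half with every comparison
  let low = 0;
  let high = arr.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) low = mid + 1;
    else high = mid - 1;
  }
  return -1;
}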

The theory is great. But I always like to see something work, not just know about it. So I thought it'd be helpful -- and interesting -- to see it in action, and I wrote some very simple benchmarking tests: I applied two functions, linearSearch() and binarySearch(), to the same data sets and compared their performance (speed).
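
The harness itself was nothing fancy. Roughly this shape, bracketing each call with performance.now() -- again a sketch, with a placeholder array size and target rather than my actual test data:

// Time each search over the same sorted data and log the elapsed ms.
const data = Array.from({ length: 1000000 }, (_, i) => i); // sorted 0..999999
const target = 987654;

for (const search of [linearSearch, binarySearch]) {
  const start = performance.now();
  search(data, target);
  const end = performance.now();
  console.log(`${search.name}: ${(end - start).toFixed(4)} ms`);
}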

Well, it was a big dud. Instead of the search time scaling linearly in the linearSearch() case, the results appeared exponential. I tried quite a few approaches while debugging. Apparently JavaScript's built-in timing tools (performance.now()) aren't terribly consistent at small time scales, so I increased the number of tests I ran for every case and tracked the averages across those runs. That only strengthened the exponential trend. I thought maybe the function call itself (the call sandwiched between the 'start' and 'stop' timestamps) was slowing things down, so I stripped the search down to a bare for-loop and ran that. Same results.
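
For what it's worth, that run-many-times-and-average pattern fits neatly into a helper. Something like this -- a hypothetical helper, since my actual tests inlined the same loop:

// Average fn's elapsed time over many runs to smooth out timer noise.
function averageTime(fn, runs = 500) {
  let total = 0;
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    fn();
    total += performance.now() - start;
  }
  return total / runs;
}

// e.g. averageTime(() => linearSearch(data, target), 1000)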

Hm. Feeling stuck, I thought I'd conduct a sanity check. Here's what I did....

....

....

Ok, I'll be honest here. I went back into the code so I could copy and paste it here, and after looking at it, I think the code I was testing actually was scaling exponentially. So maybe I was reading accurate results and just misinterpreting them. I also got some other funky results. And then eventually I got some results that matched my expectations (read: hopes).

So here it is. Literally just a test to check whether the performance of an extremely bare-bones for-loop scales linearly with its input. Geez, I can't even call it a data set. It's just a loop count. Ah well.


// Sanity check: does a bare for-loop's run time scale linearly
// with its iteration count?
function sanityCheck() {
  const RUNS = 500; // runs per factor, averaged to smooth out timer noise

  for (let factor = 1; factor < 122; factor += 5) {
    let timeSum = 0;

    for (let x = 0; x < RUNS; x++) {
      const t0 = performance.now(); // start the clock fresh for each run
      let zeroInteger = 0;
      for (let i = 0; i < 10000 * factor; i++) {
        zeroInteger += 0; // do-nothing work so the loop has a body
      }
      timeSum += performance.now() - t0; // elapsed time for this run only
    }

    console.log(timeSum / RUNS); // average ms per run at this factor
  }
}

sanityCheck();


As a reminder, we're timing how long it takes to run through a useless loop -- anywhere from ten thousand to over a million iterations, depending on the factor. The chart of the averages looks like this:

[results chart: average run time vs. loop factor]

Looks pretty linear to me. 'Looks' probably isn't the most precise term, but after the struggle I had, you look for wins wherever you can get 'em.

This was pretty much my entire Saturday. I'd be embarrassed if I didn't expect to improve massively. And on some level I'm sure this moved me forward, as much as it feels like I have been running in place (or backwards) all day.

I have a few takeaways. Make sure you know what you're testing for, and test for that. Carelessness (or stupidity) in testing can really throw you off. It can produce false positives that fool you into thinking you know what you don't know. It can also have you running in loops trying to make sense of results that don't make sense. I think I got a taste of that today.

Another lesson I'm digesting as I type: all of this prose might be interesting to me, but it sure takes up a lot of time. It might be cool to reflect on someday, but for the sake of productivity, I'm going to try to keep the narrative to myself and commit to posting code and working demonstrations of the things I've learned.
