The concept of the Big O and Time Complexities is DAUNTING for a new software engineer, which is why I won't try to reteach it here. I will, however, dive a little into the two fastest "Order of N" complexities, with a concentration on using a Binary Search.
TL;DR - Constant vs. Logarithmic Complexities + Binary Search
I recently watched an excellent webinar from SkilledInc.com on the Big O in which Michael Mroczka broke down the concept in a fun and interesting way. Many of you have probably seen this chart floating around on the internet:
If you look at the bottom of the graph, you'll see that the two fastest Time Complexities (TCs) are Constant O(1) and Logarithmic O(log N). "N" is the variable at play. In my Ruby project "Welcome to Westeros," the variable "house" below returns a parsed, JSON response, and serves as our "N" variable:
def display_house_details(house)
  puts "Name: " + house
end
This method simply prints the name of the house in Game of Thrones. Fortunately, I drastically reduced the number of houses returned by the API so I wasn't dealing with a larger Max Input (the highest constraint an algorithm can handle before timing out). The above example would constitute a Constant O(1) TC because only one action is carried out, and it will always execute in the same amount of time, no matter the size of the input.
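As a runnable sketch of that constant-time behavior (the house names here are illustrative, not pulled from the API), the method performs the same single print whether the dataset behind it holds one house or thousands:

```ruby
def display_house_details(house)
  puts "Name: " + house
end

# One operation per call -- the size of the overall dataset never matters:
display_house_details("House Stark of Winterfell")
display_house_details("House Lannister of Casterly Rock")
```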
However, sometimes you have more complex methods. Take a LeetCode challenge during an interview. If you've ever noticed the below section at the bottom of the problem description:
This tells you that the minimum input will be 1 and the maximum will be 10,000. (Side note: the Max Input for anything in the "horrible" region of our chart could not handle this input, as it is generally capped at around 5,000. This eliminates the possibility of using some algorithms, like a Bubble Sort.) We need to use anything between "bad" and "excellent."
"Great, Natalie, but what does that mean?"
Let's take a look at the next step down the TC tree at Logarithmic O(log N), more specifically, a binary search, whose average complexity is O(log N). I was taught this by a very patient mock interviewer, and now I shall pass it on to you.
The concept of the binary search is to cut your workload in half with each pass of the loop. If you have a sorted array of numbers (our N), you won't know if it will contain 2, 12, or 2,000,000 numbers. If you have 2,000,000 numbers, a sequential search would have to perform up to 2,000,000 operations. Oh boy. Let that run and come back next week. Maybe it'll be done by then. But with the binary search, imagine going from 2,000,000 to 1 in roughly 21 moves. Much better than 2,000,000! Let's see it in action.
- The low is set at index 0.
- The high is set to length (17) - 1, which is index 16.
- Mid is set to (0 + 16) / 2, giving us index 8 (value is 23).
In the example, they are searching for the number 37. If 23 === 37, we'd return it and be done. It's not, so we check whether 37 > 23. It is, so we shrink the search area by setting the low parameter to mid + 1, which is index 9 (value of 29). Had the target been less than 23, the high parameter would have changed instead. The loop continues in that manner until it narrows down to the target itself.
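The walkthrough above can be sketched as a small Ruby method. The 17-element sorted array and the target of 37 mirror the example; the specific values in the array are my own illustration, since the post's chart isn't reproduced here:

```ruby
def binary_search(sorted_array, target)
  low = 0
  high = sorted_array.length - 1

  while low <= high
    mid = (low + high) / 2
    if sorted_array[mid] == target
      return mid                 # found the target; return its index
    elsif target > sorted_array[mid]
      low = mid + 1              # discard the lower half
    else
      high = mid - 1             # discard the upper half
    end
  end

  nil # target is not in the array
end

# 17 elements: index 8 holds 23, index 9 holds 29, as in the walkthrough
numbers = [2, 5, 7, 11, 13, 17, 19, 21, 23, 29, 31, 37, 41, 43, 47, 53, 59]
binary_search(numbers, 37) # => 11
```

And that halving is where the "roughly 21 moves" figure comes from: Math.log2(2_000_000).ceil # => 21.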
Since the binary search only touches a fraction of the original input, it finishes in far fewer steps. This concept could also be applied as a binary search tree, if you're into that kind of thing.
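For the curious, here's a minimal binary search tree sketch (the class and method names are my own, not from the post). The same halving idea applies: at each node you go left or right, discarding the other subtree entirely:

```ruby
class Node
  attr_accessor :value, :left, :right

  def initialize(value)
    @value = value
  end

  # Walk down the tree, choosing one side at each node
  def include?(target)
    return true if target == value
    child = target < value ? left : right
    child ? child.include?(target) : false
  end
end

root = Node.new(23)
root.left = Node.new(12)
root.right = Node.new(37)
root.include?(37) # => true
```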
I hope I scratched the surface of understanding for you with regard to the Big O. I plan to blog again with other TCs as more examples unfurl. In the meantime, if you need a cheat sheet of how the TCs rank, consider this handy guide, which I heartily approve of:
Now go back and look at that joke in the header and see if it clicks. :)