Big O complexities of log N are important for a couple of reasons. First, they're a desirable complexity to shoot for when designing a function, since the efficiency of O(log N) functions is close to that of O(1) functions. Second, it's a common runtime complexity, so it's important to be able to recognize it. Examples include binary searches, finding the smallest or largest value in a binary search tree, and certain divide-and-conquer algorithms.

Let's look at the binary search example. If we want to find a value in an ordered array we could just iterate through it until we found our value, but we might have to go through the entire array. A binary search offers a more efficient method.

## Binary Search

Here are the pseudocode steps that a binary search goes through:

1. Set two variables: min = 0 and max = n - 1.
2. Find the mid index by averaging min and max and rounding down.
3. If array[mid] === target, return mid.
4. If array[mid] < target, set min = mid + 1.
5. Otherwise, set max = mid - 1.
6. If min > max, the target isn't in the array; otherwise, go back to step 2.
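The steps above can be sketched in JavaScript like this (returning -1 when the target isn't found is a common convention, not part of the pseudocode):

```javascript
// Binary search over a sorted (ascending) array.
// Returns the index of target, or -1 if it's not present.
function binarySearch(array, target) {
  let min = 0;
  let max = array.length - 1;
  while (min <= max) {
    // Average min and max and round down to get the midpoint.
    const mid = Math.floor((min + max) / 2);
    if (array[mid] === target) return mid;
    if (array[mid] < target) {
      min = mid + 1; // target must be in the upper half
    } else {
      max = mid - 1; // target must be in the lower half
    }
  }
  return -1; // search area is empty; target not found
}

binarySearch([4, 8, 10, 14, 27, 31, 46, 52], 46); // → 6
```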

That sounds simple enough, but let's look at how it plays out with an actual array and target value.

Let's say array = [4, 8, 10, 14, 27, 31, 46, 52] and our target is 46.

So,

min = 0, max = 7, and mid = (0 + 7)/2 = 3.5 -> round to 3

array[3] = 14 and therefore less than 46, so min = mid + 1 = 4

min = 4, max = 7, and mid = (4 + 7)/2 = 5.5 -> round to 5

array[5] = 31 and therefore less than 46, so min = mid + 1 = 6

min = 6, max = 7, and mid = (6 + 7)/2 = 6.5 -> round to 6

array[6] = 46, which equals our target! Return mid.
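To reproduce that walkthrough in code, here's an instrumented sketch (the function name is made up) that prints min, max, and mid on each iteration:

```javascript
// Binary search that logs each iteration's bounds and midpoint.
function binarySearchTrace(array, target) {
  let min = 0;
  let max = array.length - 1;
  while (min <= max) {
    const mid = Math.floor((min + max) / 2);
    console.log(`min = ${min}, max = ${max}, mid = ${mid}`);
    if (array[mid] === target) return mid;
    if (array[mid] < target) min = mid + 1;
    else max = mid - 1;
  }
  return -1;
}

binarySearchTrace([4, 8, 10, 14, 27, 31, 46, 52], 46);
// min = 0, max = 7, mid = 3
// min = 4, max = 7, mid = 5
// min = 6, max = 7, mid = 6
```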

## O(log N)

In the example we just looked at, we were able to find the target value in only 3 iterations of the code. The binary search algorithm accomplishes this by dividing the search area in half on each iteration. So at the start we have N elements to search. By the second step we only have N/2 elements to search, and by the third we only have N/4 elements to search.

In the above case that looks like this,

N = 8, [4, 8, 10, 14, 27, 31, 46, 52] //Compared mid to target and divided the search area by 2

N = 4, [27, 31, 46, 52] //Compared mid to target and divided the search area by 2

N = 2, [46, 52] //Compared mid to target. They matched, so returned mid.

Notice that this took three steps, dividing by 2 each time. If we multiplied by 2 each time instead, we would have 2 x 2 x 2 = 8, or 2^{3} = 8.

2^{3} = 8 -> log_{2} 8 = 3

2^{k} = N -> log_{2} N = k

So we can see that since the code divides the search area by 2 each time, starting with N elements in our ordered array, it takes log N iterations of the binary search algorithm to find the target value. Therefore, the Big O complexity of a binary search is O(log N).
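To see that halving-equals-log relationship concretely, here's a quick sketch (the helper name is made up) that counts how many times N can be halved before only one element remains:

```javascript
// Count how many times n can be halved before reaching 1.
// This mirrors how binary search shrinks its search area.
function halvingSteps(n) {
  let steps = 0;
  while (n > 1) {
    n = Math.floor(n / 2); // search area halves each iteration
    steps++;
  }
  return steps;
}

halvingSteps(8);    // → 3, and Math.log2(8) === 3
halvingSteps(1024); // → 10, and Math.log2(1024) === 10
```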

And you may be wondering how to notate the base of the log when you're writing O(log N). Well, you don't, because it doesn't matter. Why doesn't it matter? Because logs of different bases differ only by a constant factor (for example, log_{2} N = log_{10} N / log_{10} 2), and Big O notation ignores constant factors.

So what does this mean about how to spot O(log N) complexities? It means that when you're evaluating the runtime complexity, if the algorithm is dividing the number of elements being considered by 2 on each iteration, then it most likely has a runtime complexity of O(log N).

## Conclusion

Takeaways,

- O(log N) is a common runtime complexity.
- Examples include binary searches, finding the smallest or largest value in a binary search tree, and certain divide and conquer algorithms.
- If an algorithm is dividing the elements being considered by 2 each iteration, then it likely has a runtime complexity of O(log N).

### Song of the Week

- Summer Luv - Slipfunc | Spotify | SoundCloud

## References

McDowell, Gayle Laakmann. *Cracking the Coding Interview*. CareerCup, LLC, 2019, p. 44.

Khan Academy - Implementing Binary Search of an Array
