Waverley Leung

Big-O Notation from a Non-CS Perspective

Hi everyone!


Welcome to the second post in our Data Structures & Algorithms series! Last time we reviewed the crossovers in JavaScript arrays and strings. This time we will cover Big-O notation, diving into time and space complexity.

Both of us (Megan and I) graduated from bootcamp. After learning Ruby on Rails, JavaScript, React, etc., we had to spend a lot of our time learning Big-O notation through many online resources. We hope this will be the place for you if you are looking for a “plain English” explanation of Big-O notation!

Introduction

In computer science, Big O notation is used to classify the run time or space requirements of an algorithm as its input size grows. CS students in college have to learn the different types of asymptotic notation (Big O, Big Theta, Big Omega). But for the sake of software engineering technical interviews, all we care about are the best and worst case scenarios, and Big O gives us a tight description of how run time and space grow.

Big O Notation Graph
This graph clearly demonstrates how run time and space change depending on the input size for each Big-O notation. O(1) and O(log n) have the best run time and space complexity, while O(n!), O(n²) and O(2ⁿ) have the worst.

In this article, we will break down all of these notations, with examples and Leetcode questions provided at the end of each part.

What do brute force and optimized solutions mean?

Before we start, we would like to explain what brute force and optimized solutions mean, as you might see these keywords later in the article.

The easiest way to understand a brute force solution is that it is whatever solution comes to your head first. An optimized solution, on the other hand, comes afterwards: once you have the brute force solution, you think of ways to either simplify the code or minimize the time and space complexity where possible.

For instance, your brute force solution might have an O(n²) time complexity, and with an optimized solution you are able to reduce it to O(n).
Understanding this concept is important since it is something you would discuss with your interviewer: how you would take your solution from brute force to something more optimized.
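
To make this concrete, here is a sketch of our own using the classic Two Sum problem (see the Stanford link in the resources below); the function names here are just ours for illustration:

// Brute force: check every pair of numbers — O(n²)
function twoSumBruteForce(nums, target) {
  for (let i = 0; i < nums.length; i++) {
    for (let j = i + 1; j < nums.length; j++) {
      if (nums[i] + nums[j] === target) return [i, j]
    }
  }
  return null
}

// Optimized: trade a little space for time with a hash map — O(n)
function twoSumOptimized(nums, target) {
  const seen = new Map() // value -> index
  for (let i = 0; i < nums.length; i++) {
    const complement = target - nums[i]
    if (seen.has(complement)) return [seen.get(complement), i]
    seen.set(nums[i], i)
  }
  return null
}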

Complexity Comparison

| Name | Big O Notation |
| ----------------- | ---------- |
| Constant Time | O(1) |
| Logarithmic Time | O(log n) |
| Linear Time | O(n) |
| Linearithmic Time | O(n log n) |
| Quadratic Time | O(n²) |
| Exponential Time | O(2ⁿ) |
| Factorial Time | O(n!) |

Constant Time: O(1)

Often referred to as “constant time”, O(1) has the least complexity. I like to think of it this way: no matter how large or small the input, you can always expect the same number of steps to execute inside the function.

Example:

function sayHelloToFirstFriend(friends) {
  // Always a single operation, no matter how many friends there are
  return `Hello ${friends[0]}`
}

sayHelloToFirstFriend(['spongebob', 'patrick', 'sandy', 'squidward', 'gary'])
// "Hello spongebob"

Logarithmic Time: O(log n)

Don’t be afraid of the math! When you see a logarithm, it’s asking you: “What power must we raise this base to, in order to get this answer?” In other words, we use logarithms to solve for a variable when that variable is an exponent. For example, log₂(8) = 3, because we must raise 2 to the power of 3 to get 8.

In terms of computer science this translates to: “How many times must we divide n in half in order to get down to 1?” Therefore, solutions with O(log n) essentially divide the problem in half, determine which half to continue with, divide that half again, and repeat the same idea until they find what they need or rule out the rest of the set. As a result, while these solutions do grow more than constant time, they nonetheless grow slowly compared to other time complexities.
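
Example: binary search cuts a sorted list in half at every step until it finds the target (or rules it out):

const binarySearch = (list, target) => {
  let start = 0
  let end = list.length - 1

  while (start <= end) {
    const middle = Math.floor((start + end) / 2)
    const guess = list[middle]

    if (guess === target) {
      return middle
    }

    if (guess > target) {
      // the guess was too high, search the left side of the list
      end = middle - 1
    } else {
      // the guess was too low, search the right side of the list
      start = middle + 1
    }
  }
  return null // if target is not found
}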

Note: Notice that for use cases like this, the input is sorted and we are searching for something!

Linear Time: O(n)

Probably the most familiar is O(n), or “linear time”. This is because as the size of the input grows, the number of operations grows proportionally. In other words, if an array has 10 items, a for loop over it executes 10 times, whereas if the array has 10,000 items the same for loop executes 10,000 times.

Example:

function sayHelloToFriends(friends) {
  // One console.log per friend — the work grows with the input
  for (let i = 0; i < friends.length; i++) {
    console.log(`Hello ${friends[i]}`)
  }
}

sayHelloToFriends(['spongebob', 'patrick', 'sandy', 'squidward', 'gary'])
// "Hello spongebob"
// "Hello patrick"
// "Hello sandy"
// "Hello squidward"
// "Hello gary"

Linearithmic Time: O(n log n)

Building off of typical solutions for O(log n), the extra “n” in O(n log n) comes from touching every element at each of the log n levels of dividing and merging. Therefore, a lot of sorting algorithms have a complexity of O(n log n). On the other hand, while it does take more time than O(log n), it’s also important to remember that logarithms grow very slowly. As a result, its curve is similar to that of linear time. To explain a bit more about the role n plays, let’s take a look at merge sort.

Starting the same as O(log n), in merge sort you begin by dividing the array in half. Next you sort the two halves and then merge them into one sorted whole. However, in order to sort the two halves you repeat the same idea of dividing them, sorting them, and merging the sorted halves until you’ve sorted everything.

Example:

function merge(left, right) {
  let arr = []
  // Keep looping until either array is empty
  while (left.length && right.length) {
    // Pick the smaller of the two front elements
    if (left[0] < right[0]) {
      arr.push(left.shift())
    } else {
      arr.push(right.shift())
    }
  }

  // Concatenate the leftover elements
  // (in case we didn't go through the entire left or right array)
  return [...arr, ...left, ...right]
}

function mergeSort(array) {
  // Base case or terminating case
  if (array.length < 2) {
    return array
  }

  const half = Math.floor(array.length / 2)
  const left = array.splice(0, half)
  return merge(mergeSort(left), mergeSort(array))
}
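
For instance, a quick sanity check of our own (not from the original post):

mergeSort([6, 3, 9, 1])
// => [1, 3, 6, 9]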

Quadratic Time: O(n²)

A function with quadratic time complexity has a growth rate of n². Meaning? If the input size is 2, the function performs roughly 4 operations. If the input size is 3, roughly 9 operations. If the input size is 1,000, roughly 1,000,000 (1 million) operations.
In other words, O(n²) is going to run really slowly, especially when the input size is large.

Most of the time, we describe an algorithm as quadratic when we have to iterate over the input inside another iteration, for example with nested for loops.

Finding duplicates and bubble sort are two examples of quadratic algorithms that you will run into. Bubble sort (as well as insertion sort and selection sort) is like the naive version of merge sort and quick sort. It is slow, but it is always the first concept you learn when studying sorting algorithms, and it builds a great foundation for the rest of the more complicated sorting algorithms.
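
As a quick sketch of our own (not from the original post), a brute force duplicate check looks like this, and the nested loops are exactly what make it O(n²):

// Compare every pair of elements — O(n²)
function hasDuplicates(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) return true
    }
  }
  return false
}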

What bubble sort does is repeatedly swap adjacent elements if they are in the wrong order. Let's say we are sorting an unordered array of numbers from smallest to largest. Bubble sort examines the numbers pair by pair, swapping them one by one until everything is in the right order.

Example of Bubble Sort:

function bubbleSort(arr, n) {
  for (let i = 0; i < n - 1; i++) {
    for (let j = 0; j < n - i - 1; j++) {
      // Swap adjacent elements if they are in the wrong order
      if (arr[j] > arr[j + 1]) {
        swap(arr, j, j + 1);
      }
    }
  }
}

// swap helper method
function swap(arr, first, second) {
  let temp = arr[first];
  arr[first] = arr[second];
  arr[second] = temp;
}

Compared to merge sort, where the array is repeatedly cut in half, bubble sort goes through each element of the array one by one until everything is sorted in the right place (and then it goes through one more pass even though the array is already sorted).

Exponential Time: O(2ⁿ)

Base-2 exponential running time means the number of calculations doubles every time the input size grows by one.
2² => 4
2³ => 8
2⁴ => 16
...
2¹⁰⁰ => 1,267,650,600,228,229,401,496,703,205,376

As you can see, whenever n is increased by 1, the result doubles. Essentially, the count starts very low but ends up astronomically large.

In most cases, please avoid exponential time solutions, since the run time gets out of hand very quickly. It's not the worst complexity there is, but it's obviously not great.

Fibonacci Example

function fib(n) {
  if (n <= 1) {
    return n
  }
  // Each call branches into two more recursive calls,
  // so the work roughly doubles every time n grows by 1
  return fib(n - 1) + fib(n - 2)
}
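
As an aside (a sketch of our own, not from the original post), this is also a nice place to see the brute force vs. optimized idea from earlier: caching results that were already computed ("memoization") brings the same calculation down from O(2ⁿ) to O(n):

function fibMemo(n, memo = {}) {
  if (n <= 1) return n
  if (memo[n] !== undefined) return memo[n] // reuse a cached result
  memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo)
  return memo[n]
}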

Factorial Time: O(n!)

If you know how a factorial works, this is what it looks like:
5! = 5 x 4 x 3 x 2 x 1, in other words,
n! = n x (n - 1) x (n - 2) x (n - 3) x ... x 1

As the input size increases, the run time gets bigger and bigger and BIGGER! I personally have not encountered a factorial problem, so I will attach an example below for reference.

Typical use cases:
- Permutations (see the sketch below)
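
Here is a rough sketch of our own (not from the original post) of generating permutations recursively; an n-element array has n! permutations, so the run time grows factorially:

function permutations(arr) {
  // Base case: an empty or one-element array has a single permutation
  if (arr.length <= 1) return [arr]

  const result = []
  for (let i = 0; i < arr.length; i++) {
    // Fix arr[i] as the first element and permute the rest
    const rest = [...arr.slice(0, i), ...arr.slice(i + 1)]
    for (const perm of permutations(rest)) {
      result.push([arr[i], ...perm])
    }
  }
  return result
}

permutations([1, 2, 3])
// => [[1,2,3], [1,3,2], [2,1,3], [2,3,1], [3,1,2], [3,2,1]]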

Conclusion

We hope this article gives you a better understanding of Big-O notation! This concept is important since you will often need to analyze the Big-O notation of your solution during interviews. Furthermore, knowing it can help you understand which solutions have better or worse runtime as you come up with approaches. If you are still having trouble understanding, we have provided more resources below for you to reference!


Resources

  1. Examples of Algorithms which has O(1), O(n log n) and O(log n) complexities 👀 (Stack Overflow)
  2. Big-O Cheat Sheet
  3. What is Big O Notation Explained: Space and Time Complexity (FreeCodeCamp)
  4. Big-O notation (Wikipedia)
  5. 8 time complexities that every programmer should know (With Videos and Examples)
  6. Comparing different solutions for Two Sum (Stanford)

Top comments (4)

Sylwia Vargas

This is such a fantastic blog post. Thank you both for putting in so much energy and passion into it!!

Waverley Leung

Thank you so much for the support Sylwia 💕

Rafael Leitão

Thanks a lot for the post, Waverley! It's always tough to understand this! :)

Waverley Leung

Thank you for the feedback Rafael, I'm glad you enjoyed it 😁