
Megan Lo

Big-O Notation from a Non-CS Perspective

Hi everyone!


Welcome to the second post in our Data Structures & Algorithms series! Last time we reviewed the crossovers between JavaScript arrays and strings. This time we will cover Big-O notation, diving into time and space complexity.

Since both of us (Waverley and I) graduated from a bootcamp, where we learned Ruby on Rails, JavaScript, React, etc., we had to spend a lot of our own time learning Big-O notation through many online resources. We hope this will be the place for you if you are looking for a “plain English” explanation of Big-O notation!

Introduction

In computer science, Big-O notation is used to classify the run time or space requirements of an algorithm as its input size grows. CS students in college also learn the other asymptotic notations used in the analysis of algorithms (Big Theta, Big Omega) alongside Big O.

But for the sake of software engineering technical interviews, what we mostly care about is the best and worst case scenarios. Although Big O formally describes an upper bound on the running time, the industry uses Big O to try to offer the tightest description of the runtime. (Cracking the Coding Interview by Gayle McDowell provides a really great summary of these concepts; see p. 39.)

Big-O Notation Graph

This graph clearly demonstrates how run time and space requirements grow with the input size for each Big-O notation: O(1) and O(log n) have the best run time and space complexity, while O(n!), O(n²), and O(2ⁿ) have the worst.

In this article, we will break down each of these notations, with examples and Leetcode questions at the end of each part.

What do “brute force” and “optimized solution” mean?

Before we start, we would like to explain what brute force and optimized solution mean, as you might see these keywords later in the article.

Put simply, a brute force solution is whatever solution comes to your head first. An optimized solution is what comes after: once you have the brute force solution, you look for ways to simplify the code or to minimize the time and space complexity where possible.

For instance, say your brute force solution has O(n²) time complexity; with an optimized solution, you might be able to reduce it to O(n), as sketched below.
Understanding this concept is important, since it is something you will discuss with your interviewer: how to take your solution from brute force to something more optimized.
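To make this concrete, here is a sketch using the classic Two Sum problem (also linked in the resources at the end). The function names twoSumBrute and twoSumOptimized are ours, for illustration only:

// Two Sum: find the indices of two numbers in `nums` that add up to `target`.

// Brute force: check every pair of elements -- O(n^2) time, O(1) space.
function twoSumBrute(nums, target) {
  for (let i = 0; i < nums.length; i++) {
    for (let j = i + 1; j < nums.length; j++) {
      if (nums[i] + nums[j] === target) return [i, j]
    }
  }
  return null
}

// Optimized: trade space for time with a hash map -- O(n) time, O(n) space.
function twoSumOptimized(nums, target) {
  const seen = new Map() // value -> index of each number we've passed
  for (let i = 0; i < nums.length; i++) {
    const complement = target - nums[i]
    if (seen.has(complement)) return [seen.get(complement), i]
    seen.set(nums[i], i)
  }
  return null
}

twoSumBrute([2, 7, 11, 15], 9)      // [0, 1]
twoSumOptimized([2, 7, 11, 15], 9)  // [0, 1]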

Complexity Comparison

Name                Big-O Notation
Constant Time       O(1)
Logarithmic Time    O(log n)
Linear Time         O(n)
Linearithmic Time   O(n log n)
Quadratic Time      O(n²)
Exponential Time    O(2ⁿ)
Factorial Time      O(n!)

Constant Time: O(1)

Often referred to as “constant time”, O(1) has the least complexity. I like to think of it this way: no matter how large or small the input, you can always expect the same number of steps to execute inside the function.

Example:

function sayHelloToFirstFriend(friends) {
  return `Hello ${friends[0]}`
}

sayHelloToFirstFriend(['spongebob', 'patrick', 'sandy', 'squidward', 'gary'])
// "Hello spongebob"

Logarithmic Time: O(log n)

Don’t be afraid of math! When you see a logarithm, it’s asking you: “What power must we raise this base to, in order to get this answer?” In other words, we use logarithms to solve for a variable when that variable is an exponent. For example, log₂(8) = 3, because we must raise 2 to the power of 3 to get 8.

In terms of computer science, this translates to: “How many times must we divide n in half in order to get down to 1?” Solutions with O(log n) complexity essentially divide the problem in half, determine which half to continue with, divide that half again, and repeat until they find what they need or rule out the rest of the set. As a result, while these solutions grow more than constant time, they nonetheless grow slowly compared to other time complexities.

Note: Notice that for these use cases the input is sorted and we are searching for something! A classic example is binary search, shown below.

Example:

const binarySearch = (list, target) => {
  let start = 0
  let end = list.length - 1

  while (start <= end) {
    const middle = Math.floor((start + end) / 2)
    const guess = list[middle]

    if (guess === target) {
      return middle
    }

    if (guess > target) {
      // the guess is too high -- search the left half of the list
      end = middle - 1
    } else {
      // the guess is too low -- search the right half of the list
      start = middle + 1
    }
  }
  return null // if target is not found
}
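For instance, with the binarySearch function above and a sorted list:

binarySearch([1, 3, 5, 7, 9, 11], 9)  // 4 (the index of the target)
binarySearch([1, 3, 5, 7, 9, 11], 4)  // null (not found)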

Linear Time: O(n)

Probably the most familiar is O(n), or “linear time”. This is because as the size of the input grows, the number of operations the function executes grows at the same rate. In other words, if an array has 10 items, a for loop over it executes 10 times, whereas if the array has 10,000 items, the same for loop executes 10,000 times.

Example:

function sayHelloToFriends(friends) {
  for (let i = 0; i < friends.length; i++) {
    console.log(`Hello ${friends[i]}`)
  }
}

sayHelloToFriends(['spongebob', 'patrick', 'sandy', 'squidward', 'gary'])
// "Hello spongebob"
// "Hello patrick"
// "Hello sandy"
// "Hello squidward"
// "Hello gary"

Linearithmic Time: O(n log n)

Building off of typical O(log n) solutions, the extra n comes from the extra cost of sorting: at each of the log n levels of dividing, all n elements still need to be processed. Therefore, a lot of sorting algorithms have a complexity of O(n log n). On the other hand, while it does take more time than O(log n), it’s important to remember that logarithms grow very slowly; as a result, its curve is similar to that of linear time. To explain the role n plays a bit more, let’s take a look at merge sort.

Starting the same as O(log n), in merge sort you begin by dividing the array in half. Next you sort the two halves, and then merge the two sorted halves into one sorted whole. However, in order to sort the two halves, you repeat the same idea of dividing them, sorting them, and merging the sorted halves until everything is sorted.

Example:

function merge(left, right) {
  let arr = []
  // Keep going while both subarrays still have elements
  while (left.length && right.length) {
    // Pick the smaller of the two front elements
    if (left[0] < right[0]) {
      arr.push(left.shift())
    } else {
      arr.push(right.shift())
    }
  }

  // Concatenate the leftover elements
  // (in case we didn't go through the entire left or right array)
  return [...arr, ...left, ...right]
}

function mergeSort(array) {
  const half = Math.floor(array.length / 2)

  // Base case or terminating case
  if (array.length < 2) {
    return array
  }

  const left = array.splice(0, half)
  return merge(mergeSort(left), mergeSort(array))
}
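For instance (note that this implementation mutates its input array via splice):

mergeSort([4, 8, 7, 2, 11, 1, 3])  // [1, 2, 3, 4, 7, 8, 11]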

Quadratic Time: O(n²)

A function with quadratic time complexity has a growth rate of n². Meaning? If the input size is 2, the function takes 4 operations; if the input size is 3, it takes 9 operations; if the input size is 1,000, it takes 1,000,000 (one million) operations.

In other words, O(n²) runs really slowly, especially when the input size is large.

Most of the time, we describe an algorithm as quadratic when we have to iterate over the input within another iteration over the input, i.e. nested for loops.

Finding duplicates and bubble sort are two examples of quadratic algorithms you will run into. Bubble sort (as well as insertion sort and selection sort) is like the naive version of merge sort and quick sort. It is slow, but it is usually the first concept you learn when studying sorting algorithms, and it builds a great foundation for the more complicated ones.

Bubble sort repeatedly swaps adjacent elements if they are in the wrong order. Say we are sorting an unordered array of numbers from smallest to largest: bubble sort examines adjacent numbers and swaps them one by one until everything is in the right order.

Example of Bubble Sort:

function bubbleSort(arr, n) {
  // double loop of size n, so O(n^2)
  for (let i = 0; i < n - 1; i++) {
    for (let j = 0; j < n - i - 1; j++) {
      if (arr[j] > arr[j + 1]) {
        swap(arr, j, j + 1);
      }
    }
  }
}

// swap helper method
function swap(arr, first, second) {
  let temp = arr[first];
  arr[first] = arr[second];
  arr[second] = temp;
}
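Since bubbleSort sorts the array in place:

const numbers = [5, 1, 4, 2, 8]
bubbleSort(numbers, numbers.length)
console.log(numbers)  // [1, 2, 4, 5, 8]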

With the nested loops, we have a time complexity of O(n²).

Compared to merge sort, where the array is repeatedly cut in half, bubble sort goes through each element of the array one by one until everything is sorted into the right place (and then it makes one more pass even though the array is already sorted).

Exponential Time: O(2ⁿ)

Base-2 exponential running time means that the number of calculations doubles every time the input size grows by one:
2² = 4
2³ = 8
2⁴ = 16
...
2¹⁰⁰ = 1,267,650,600,228,229,401,496,703,205,376

As you can see, whenever n increases by 1, the result doubles. Essentially, the number starts very low, but by the end it becomes enormous.

In most cases, avoid exponential time, since the run time degrades very quickly as the input grows. It is not quite the worst complexity, but it is obviously not great.

Example: Fibonacci

function fib(n) {
  if (n <= 1) {
    return n
  }
  return fib(n - 1) + fib(n - 2)
}
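As a side note, this recursive fib is the brute force version. One common optimization, included here as a sketch of our own (the name fibMemo is hypothetical, not from the original article), is memoization: cache each result so every n is computed only once, bringing the time complexity down from O(2ⁿ) to O(n).

// Memoized Fibonacci: each value is computed once and cached -- O(n) time.
function fibMemo(n, memo = {}) {
  if (n <= 1) {
    return n
  }
  if (memo[n] === undefined) {
    memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo)
  }
  return memo[n]
}

fibMemo(50)  // 12586269025, returned almost instantly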

Factorial Time: O(n!)

If you know how a factorial works, this is what it looks like:
5! = 5 x 4 x 3 x 2 x 1, in other words,
n! = n x (n - 1) x (n - 2) x (n - 3) x ... x 1

As the input size increases, the run time gets bigger and bigger and BIGGER! I personally have not encountered a factorial problem in the wild, so I have attached an example below, with a link as reference.

Typical use cases
Permutations
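For example, here is a sketch of our own (the function name permutations is ours, for illustration): generating every permutation of an array. There are n! orderings of n elements, so both the number of results and the run time grow as O(n!).

function permutations(arr) {
  // Base case: an empty or single-element array has exactly one permutation
  if (arr.length <= 1) return [arr]

  const result = []
  for (let i = 0; i < arr.length; i++) {
    // Remove the i-th element, permute the rest, then put it back in front
    const rest = [...arr.slice(0, i), ...arr.slice(i + 1)]
    for (const perm of permutations(rest)) {
      result.push([arr[i], ...perm])
    }
  }
  return result
}

permutations([1, 2, 3])
// [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]]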

Conclusion

We hope this article gives you a better understanding of Big-O notation! This concept is important because during interviews you will often need to analyze the Big-O of your solution. Furthermore, knowing it helps you compare which approaches have better or worse runtimes as you come up with solutions. If you are still having trouble understanding, we have provided more resources below for you to reference!


Resources

  1. Examples of Algorithms which has O(1), O(n log n) and O(log n) complexities 👀 (Stack Overflow)
  2. Big-O Cheat Sheet
  3. What is Big O Notation Explained: Space and Time Complexity (FreeCodeCamp)
  4. Big-O notation (Wikipedia)
  5. 8 time complexities that every programmer should know (With Videos and Examples)
  6. Comparing different solutions for Two Sum (Stanford)

Top comments (4)

Megan Lo

Hey Lukáš! Thank you so much for the feedback!

For your first and second comments, I actually paraphrased those from the resources I found. I apologize for the inaccuracy; I intended to use less technical words, which may have come off as inaccurate. I am going to fix that.
For your third comment, now that I am rereading it, it does sound like two non-nested loops, so I will rearrange the wording.

Please let us know if there is any other inaccuracy you find. We don't want to give false information to other learners who are also trying to understand Big-O notation.

Armstrong Subero

This was a great read! Good work!

When you state:

"For CS students at college, they have to learn different types of Big-O notation (Big O, Big Theta, Big Omega)."

This is a false statement. Theta, Big Omega, Little Omega, Little O, et al. are NOT types of Big O notation; they are different types of notation used in the analysis of algorithms to estimate their complexity asymptotically. Big O is ONE such notation, as is Theta notation, etc. Big O is a subset of asymptotic analysis, not the set from which the other forms of notation are derived.

Also, I think your article title is contradictory when you say "non-CS perspective"; Big O notation is a mathematical concept that is at the heart of computer science.

That's like saying we will discuss medieval history from a non-history perspective. You could have said non-technical, or a web programmer's perspective or standpoint?

Applications of asymptotic analysis include the use of a subset, which is Big O notation, in web and mobile development; however, it doesn't mean all CS is supposed to be applied in one particular industry or else it's not useful, which is the perspective that is being put across to me.

Computability, complexity, and algorithmic analysis techniques form part of CS theory; they ARE CS.

Take automata theory, for example: FSMs and CFGs aren't encountered too often in web development, but along with Markov models they are rampant in AI and robotics. We also get cellular automata from computational complexity theory, which can be applied to entirely different fields altogether, not just CS.

Also, you can't say 'software engineering technical interviews', since in embedded software engineering, space complexity is a real concern in those memory-constrained systems. Maybe phrase it as 'web and mobile programming ("engineering") technical interviews'?

Your understanding of Big O is confusing. Look at Big Omega, Little Omega, Big O, Little O, and Theta, and at asymptotic analysis in general, a little more.

I think it's great what you are doing, but when you say:

"But for the sake of software engineering technical interviews, all we care about is the best and worst case scenarios."

It seems to me you are saying that the other stuff is not necessary, which is not true; not to mention Big O measures asymptotic upper bounds, since it bounds the growth of the running time from the top.

You need to understand all forms of asymptotic notation to understand algorithmic complexity. Computation, computability, and complexity are all important so you can have a much larger toolbox to understand what your algorithm is doing and perform analysis on it; the most used doesn't mean it is the best in all scenarios.

"CS students in college" learn all of these because not all of them end up being web or mobile developers, some of them end up writing embedded systems software, designing compilers and eventually into academic research positions or R and D in industry and need to analyse their algorithms from differing performance metrics. Even a web developer can benefit from analysing how their algorithm performs from varying standpoints not just upper bound.

Great article by the way, and I think it's great that programmers can learn this kind of stuff from such a great community!

Waverley Leung

Same as Megan, thank you for the feedback and insight Lukáš!

Armstrong Subero

Agreed.