Travis Ramos

Originally published at travislramos.com

Big-O For the Non-CS Degree - Part 2

If you’re reading this and haven’t read part one in the series, I recommend reading that first. There we went over constant, logarithmic, and linear time complexities, as well as examples of each.

In this half of the series we will be going over:

  • O(n log n)
  • O(n^2)
  • O(2^n)
  • O(n!)

So let’s get right into it!

O(n log n) - Linearithmic Time

Linearithmic time complexity, as you can probably tell by the name, is a cross between linear and logarithmic time complexity. It takes the same divide-and-conquer approach as a logarithmic algorithm, but instead of discarding half of the data at each step, it works through every item in the dataset, breaking your list down into individual sublists containing no more than two items.

In our example below, we have a list of 20 items. These items will first be broken down into 10 sublists, each containing two items. This is where the linear portion comes into play, since every item in the dataset is touched. Once each item is broken down into its sublist, we sort each sublist and then merge them back together, continuing to sort along the way. This example of linearithmic time is called a merge sort.

function merge(left, right) {
    let arr = [];

    // Keep taking the smaller front item until one side runs out
    while (left.length && right.length) {
        if (left[0] < right[0]) {
            arr.push(left.shift());
        } else {
            arr.push(right.shift());
        }
    }

    // One side is now empty, so whatever remains on the other is already sorted
    return arr.concat(left, right);
}

function mergeSort(arrayToSort) {
    // A list of zero or one items is already sorted
    if (arrayToSort.length < 2) {
        return arrayToSort;
    }

    // Split the list in half, sort each half, then merge the sorted halves
    let middle = Math.floor(arrayToSort.length / 2);
    let left = arrayToSort.slice(0, middle);
    let right = arrayToSort.slice(middle);

    return merge(mergeSort(left), mergeSort(right));
}

const array = [10, 15, 2, 5, 17, 9, 14, 11, 6, 19, 4, 20, 1, 18, 3, 7, 13, 8, 12, 16];

mergeSort(array.slice());

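If you want to see the n log n growth for yourself, one option is to count how many comparisons the merge step performs as the input grows. The mergeCounted and mergeSortCounted helpers below are my own hypothetical instrumentation, not part of the original example:

// Hypothetical instrumentation: count the comparisons made while merging.
let comparisons = 0;

function mergeCounted(left, right) {
    let arr = [];
    while (left.length && right.length) {
        comparisons++;
        if (left[0] < right[0]) {
            arr.push(left.shift());
        } else {
            arr.push(right.shift());
        }
    }
    return arr.concat(left, right);
}

function mergeSortCounted(arrayToSort) {
    if (arrayToSort.length < 2) return arrayToSort;
    let middle = Math.floor(arrayToSort.length / 2);
    return mergeCounted(
        mergeSortCounted(arrayToSort.slice(0, middle)),
        mergeSortCounted(arrayToSort.slice(middle))
    );
}

mergeSortCounted(Array.from({ length: 1000 }, () => Math.random()));
console.log(comparisons); // just under 1000 * log2(1000) ≈ 9,966 comparisons

Compare that with the 1,000 × 1,000 = 1,000,000 steps a quadratic approach (covered next) would imply, and it becomes clear why merge sort scales so well.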

O(n^2) - Quadratic Time

Quadratic time complexity is when the performance of an algorithm is directly proportional to the square of the size of the input data. Put simply, the number of operations grows with the input size squared.

So for example, if our data set contains 2 items, there would be 4 operations on it. If the set contained 4 items, there would be 16 operations, 6 items would be 36 operations, and so on.
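
To make those numbers concrete, here is a minimal sketch (my own addition, not from the original post; countPairs is a hypothetical helper) where every item is compared against every item, so n items always means n × n operations:

function countPairs(items) {
    let operations = 0;
    // For every item, visit every item again — n * n visits in total
    for (let i = 0; i < items.length; i++) {
        for (let j = 0; j < items.length; j++) {
            operations++;
        }
    }
    return operations;
}

countPairs([1, 2]);             // 4
countPairs([1, 2, 3, 4]);       // 16
countPairs([1, 2, 3, 4, 5, 6]); // 36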

In our example below, we are performing a classic quadratic-time algorithm known as a bubble sort. We do this by nesting one loop inside another, stepping through the array and swapping adjacent elements whenever they are in the wrong order.

let arr = [89, 14, 3, 847, 153, 219, 18, 24, 473];

function bubbleSort(arr) {
    let swapped;
    do {
        swapped = false;
        // Each pass bubbles the largest unsorted value toward the end
        for (let i = 0; i < arr.length - 1; i++) {
            if (arr[i] > arr[i + 1]) {
                // Swap the adjacent pair if it is out of order
                let temp = arr[i];
                arr[i] = arr[i + 1];
                arr[i + 1] = temp;
                swapped = true;
            }
        }
    } while (swapped); // Stop once a full pass makes no swaps
    return arr;
}

bubbleSort(arr);

For smaller datasets, this is a good method to use because it is easy to implement, but as your dataset grows the time it takes to execute grows quadratically. With that in mind, it becomes easier to see how a solution like this wouldn’t scale well.

O(2^n) - Exponential Time

Exponential time complexity shows up in algorithms whose work roughly doubles each time a new item is added to the dataset. This usually happens when an algorithm brute-forces its way through a problem using recursion. With smaller datasets this works fine, but as your dataset grows the time your algorithm takes to finish executing can quickly get out of hand.

A good example of this is the recursive calculation of Fibonacci numbers, which is what we are doing in our example below.

function fibonacci(n) {
    if (n <= 1) return n; // Base cases: fibonacci(0) = 0, fibonacci(1) = 1
    // Each call branches into two more calls, so the work roughly doubles as n grows
    return fibonacci(n - 1) + fibonacci(n - 2);
}

fibonacci(4); // returns 3
fibonacci(5); // returns 5
fibonacci(6); // returns 8
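
To watch the doubling happen, you can add a simple call counter. The fibonacciCounted function below is my own instrumentation, not part of the original snippet:

// Hypothetical instrumentation: count how many times the function is invoked.
let calls = 0;

function fibonacciCounted(n) {
    calls++;
    if (n <= 1) return n;
    return fibonacciCounted(n - 1) + fibonacciCounted(n - 2);
}

calls = 0; fibonacciCounted(10); console.log(calls); // 177 calls
calls = 0; fibonacciCounted(20); console.log(calls); // 21891 calls

Adding just 10 to the input made the function do over one hundred times as much work.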

O(n!) - Factorial Time

Factorial time complexity is when the work an algorithm does grows factorially with the size of the dataset. This is quite possibly the worst type of time complexity to run into, because the time it takes to execute grows astronomically compared to the growth of the dataset.

2! = 2 x 1 = 2
3! = 3 x 2 x 1 = 6
4! = 4 x 3 x 2 x 1 = 24
...
8! = 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1 = 40320

As you can see, the number of executions scales horribly with each addition to the input size.

A good example of this is the recursive function below. It takes an input size n, loops n times, and on every pass calls itself with n - 1, which in turn loops n - 1 times, and so on until the input reaches 0. The amount of work therefore multiplies at every level of recursion, and as you can see from our example below, the time it takes to execute quickly gets out of hand as the dataset grows.

const factorial = n => {
    if (n === 0) return 1; // Base case: 0! = 1

    let num = n;
    // Calling factorial(n - 1) on every pass of the loop is what makes this
    // O(n!): each call fans out into n more calls, and so on down to 0.
    for (let i = 0; i < n; i++) {
        num = n * factorial(n - 1);
    }

    return num;
};

factorial(1);  // returns 1
factorial(5);  // returns 120
factorial(9);  // returns 362880
factorial(11); // returns 39916800
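
To see the factorial growth rather than just the return values, you can count the recursive calls with a small wrapper. The factorialCounted function below is my own instrumentation, not part of the original snippet:

// Hypothetical instrumentation: count invocations of the factorial-time function above.
let callCount = 0;

const factorialCounted = n => {
    callCount++;
    if (n === 0) return 1;

    let num = n;
    for (let i = 0; i < n; i++) {
        num = n * factorialCounted(n - 1);
    }
    return num;
};

callCount = 0; factorialCounted(5); console.log(callCount); // 326 calls
callCount = 0; factorialCounted(8); console.log(callCount); // 109601 calls

Going from an input of 5 to an input of 8 multiplied the number of calls by more than 300.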

Final Thoughts

It’s important to take Big O into account when coming up with an algorithmic solution to a problem. Not all algorithms will perform the same, and some will be more efficient than others depending on the size of the dataset being passed in.

If you enjoyed this series and would like to see more of what I have written, check out my blog! Also, connect with me on Twitter if you want to see what I’m up to!
