Michael N.
Algorithms: Time And Space Complexity


The concept of time and space complexity is one I struggled to wrap my head around at first, but I prevailed, and in this article I'm going to do my best to share my understanding of these concepts with you. We will cover Big O notation, the most common classes of time and space complexity, and of course what time and space complexity actually mean.

What is Time Complexity?


Time complexity is not really about seconds, minutes, or hours; the goal of time complexity is efficiency. Time complexity describes how time-efficient a data structure or algorithm remains as its input (the data it works on) approaches infinity.


Imagine company (A) finds houses for people. It always takes the company one day to find houses for all its customers, no matter how many customers it has that day: 50, 100, 200, 350, or 1000 customers all get their homes in a day.


Company (B) does the same thing, but it can get you a house in 5 hours. Every customer it takes on in a day adds 5 hours to the wait time: 10 customers means 50 hours; 50 customers means 250 hours.

Which company is more efficient? Company (A) has shown that regardless of input size, the results arrive after 24 hours (one day). Company (B), on the other hand, sees its wait time grow with the input size. This is why time complexity is so valuable in programming. If we look at companies (A) and (B) as algorithms, we can say that when the input size is less than 5, company (B) is more efficient, but when the input size is 5 or more, company (A) is far more efficient (at 5 customers, company (B) already needs 25 hours, more than a day). In a situation where we know the input size will never reach 5, company (B) is the best option, but when we don't know, or we expect the input size to be larger, company (A) is the better choice in the long run.
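The two companies can be sketched as simple wait-time functions (the function names and hour figures come straight from the story above and are purely illustrative):

```javascript
// Company A: constant cost — one day (24 hours), no matter how many customers.
function companyAWait(customers) {
  return 24;
}

// Company B: linear cost — 5 hours per customer.
function companyBWait(customers) {
  return 5 * customers;
}

console.log(companyAWait(4), companyBWait(4));     // small input: B is faster
console.log(companyAWait(100), companyBWait(100)); // large input: A is faster
```

Running this with a few input sizes shows the crossover point at 5 customers that makes "which algorithm is better?" depend on the situation.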

The purpose of checking time complexity is not to determine which algorithm is better than the other, but which algorithm is better for which situation.

What is Space Complexity?


Jeff and Bob are both given the task of sorting a box of coloured cubes. Each box contains cubes of 2 different colours mixed together, and their task is to sort each box so the cubes are layered by colour rather than mixed up.


Jeff's way of sorting is to grab a new box and put all the cubes of one colour into it first, before inserting the other coloured cubes on top.


Bob's way of sorting is to take the cubes out of the box and return them by colour, rearranging them within the same box.

Space complexity asks the question: how much memory does an algorithm use while doing its task? In the early days of computer programming, memory was very limited, so programmers had to write code that was as efficient as possible with the memory available.


If we look at Jeff's and Bob's approaches to sorting as algorithms: if Jeff is given 20 boxes to sort, he's going to need 20 new boxes; 100 boxes will require 100 new boxes. That's a lot of boxes. Bob, on the other hand, doesn't need any new boxes at all. In a situation where memory is limited, Bob's algorithm is more efficient. Space complexity is about understanding the memory requirements and memory efficiency of algorithms.
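Jeff's and Bob's approaches map directly onto out-of-place and in-place algorithms. A minimal sketch in JavaScript, assuming boxes are arrays of colour strings (the function names and two-colour setup are illustrative):

```javascript
// Jeff's approach: allocates a second array -> O(n) extra space.
function sortCubesJeff(cubes, colour) {
  const newBox = [];
  for (const cube of cubes) if (cube === colour) newBox.push(cube); // chosen colour first
  for (const cube of cubes) if (cube !== colour) newBox.push(cube); // other colour on top
  return newBox;
}

// Bob's approach: swaps cubes inside the same array -> O(1) extra space.
function sortCubesBob(cubes, colour) {
  let next = 0; // index where the next cube of the chosen colour should go
  for (let i = 0; i < cubes.length; i++) {
    if (cubes[i] === colour) {
      [cubes[next], cubes[i]] = [cubes[i], cubes[next]];
      next++;
    }
  }
  return cubes;
}
```

Both produce the same sorted box; the difference is purely in how much extra memory they need along the way.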

Big O Notation

Now that we understand what time and space complexity are, how do we find the time and space complexity of our own and other people's algorithms? We need a way to convey the time and space efficiency of algorithms that is easily understood, and that is what Big O notation does.

"Big O notation is a mathematical notation that describes the limiting behaviour of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others collectively called Bachmann-Landau notation or asymptotic notation." — Wikipedia

Big O notation is a way of expressing how the time or space cost of an algorithm changes with the size of its input. There are many different Big O classes that convey the time and space complexity of algorithms, but in this article we are going to focus on the 6 most common ones and what they mean.

Constant O(1)

Constant complexity graph

Big O of 1, also known as constant complexity, denotes algorithms that are not affected by increases in input size. An example of this is company (A), which gets its customers a house in 24 hours regardless of how many customers it gets in a day. Algorithms with a time or space complexity of O(1) are considered the best and most efficient.


function firstIndex(array){
   return array[0]
}


The above code is an algorithm with a time complexity of O(1): no matter how large the array is, returning its first element takes the same amount of work. Its space complexity is also O(1), since no new arrays are created.

Logarithmic O(log n)

Logarithmic complexity graph

Big O of log n, also known as logarithmic complexity, is considered the best complexity after constant. Imagine we have another housing company (C): at company (C) it takes 10 hours to find a home for 1 customer, 20 hours for 10 customers, 30 hours for 100 customers, and 40 hours for 1000 customers. This is how logarithmic complexity works: the cost in time or space grows much more slowly than the input size, in proportion to the logarithm of N, where N is the size of the input. An example of an algorithm with logarithmic time complexity is binary search.
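Binary search halves the remaining search space on every step, which is what gives it its O(log n) running time. A standard implementation over a sorted array looks like this:

```javascript
// Binary search: each comparison discards half of the remaining
// elements, so at most log2(n) + 1 comparisons are needed -> O(log n).
function binarySearch(sorted, target) {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid;  // found: return its index
    if (sorted[mid] < target) low = mid + 1; // target is in the upper half
    else high = mid - 1;                     // target is in the lower half
  }
  return -1; // not found
}

console.log(binarySearch([1, 3, 5, 7, 9, 11], 7)); // 3
```

Doubling the array length adds only one extra comparison, which is exactly the slow growth company (C) illustrates.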

Linear O(N)

Linear complexity graph

Big O of N, also known as linear complexity, is the next acceptable complexity after logarithmic. In an algorithm with linear complexity, the cost in time or space grows in direct proportion to the size of the input. Housing company (B) is a good example of linear time complexity, as the wait time grows with the number of customers.


function printAll(array){
   for(let i = 0; i < array.length; i++){
      console.log(array[i])
   }
}


The above code is an example of an algorithm with linear time complexity.

Quadratic O(N²)

Quadratic complexity graph

Big O of N squared, also known as quadratic complexity, refers to an algorithm whose cost in time or space equals the square of the input size. Imagine you wanted to throw a party, invited 10 people, and told each of them to bring enough food for 10 people; you'd end up with enough food for 100 people. That's how quadratic complexity works. An example of an algorithm with quadratic time complexity is the nested loop shown below.


function party(array){
   for(let i = 0; i < array.length; i++){
      for(let j = 0; j < array.length; j++){
         console.log(array[i]) // runs n * n times
      }
   }
}


Exponential O(2^N)

Exponential complexity graph

Big O of 2 to the power of N is also known as exponential complexity. In an algorithm with exponential time complexity, every single additional element in the input doubles the cost in time.

A pizza restaurant has several toppings to choose from: pepperoni, chilli peppers, and pineapple. Customers may choose any combination of toppings, or none at all, for their pizza. Now consider an algorithm that finds every possible unique combination of toppings. This is an exponential algorithm with time complexity O(2^n). Look how the possible combinations grow (exponentially) when you add a new topping to the menu:

0 toppings: 1 combination (no toppings at all)
1 topping: 2 combinations (none, a)
2 toppings: 4 combinations (none, a, b, ab)
3 toppings: 8 combinations (none, a, b, c, ab, ac, bc, abc)
...
10 toppings: 1,024 combinations
20 toppings: 1,048,576 combinations

So with just 20 types of toppings, there are over 1 million possible combinations! (Example adapted from Stack Overflow.)
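The topping example can be sketched as a power-set function: every topping is either on the pizza or not, so n toppings produce 2^n combinations. (The function name is illustrative:)

```javascript
// Builds every combination of toppings. Each topping doubles the
// number of combinations, so the result has 2^n entries -> O(2^n).
function toppingCombos(toppings) {
  let combos = [[]]; // start with the plain pizza (no toppings)
  for (const topping of toppings) {
    // every existing combo spawns a copy that also includes this topping
    combos = combos.concat(combos.map(c => [...c, topping]));
  }
  return combos;
}

console.log(toppingCombos(['pepperoni', 'chilli', 'pineapple']).length); // 8
```

Note that the doubling happens on every iteration of the loop, which is exactly why adding one topping to the menu doubles the work.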

Factorial O(N!)

Factorial complexity graph

Big O of N factorial, also known as factorial complexity, is the worst efficiency an algorithm can have; algorithms with factorial time complexity should be avoided whenever possible. Say we have an algorithm that takes a string and creates every possible different string from the characters of the original. If we have the 2-character string "ab", we can make 2 different strings: "ab" and "ba". Watch how the number of possible strings grows as we add characters to the original string.


createNewStrings('ab') // ab, ba...
// n = 2, f(n) = 2;
createNewStrings('abc') // abc, acb, bac, bca, cab, cba...
// n = 3, f(n) = 6;
createNewStrings('abcd') // abcd, abdc, acbd, acdb, adbc, adcb, bacd...
// n = 4, f(n) = 24;
createNewStrings('abcde') // abcde, abced, abdce, abdec, abecd, abedc, acbde...
// n = 5, f(n) = 120;


A string with 10 characters results in 3,628,800 different strings, which is the same as multiplying 1 * 2 * 3 * 4 * 5 * 6 * 7 * 8 * 9 * 10, or 10!. A string with 11 characters would result in almost 40 million different strings. Factorial algorithms are highly inefficient in both time and space, which is why they are considered the worst complexity class.
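A possible implementation of the createNewStrings function used above (the recursive approach shown here is just one of several ways to build permutations):

```javascript
// Builds every ordering of the characters: n choices for the first
// character, n-1 for the second, and so on -> n! results, O(n!).
function createNewStrings(str) {
  if (str.length <= 1) return [str];
  const results = [];
  for (let i = 0; i < str.length; i++) {
    // remove the i-th character and permute the rest
    const rest = str.slice(0, i) + str.slice(i + 1);
    for (const perm of createNewStrings(rest)) {
      results.push(str[i] + perm);
    }
  }
  return results;
}

console.log(createNewStrings('abc')); // ['abc', 'acb', 'bac', 'bca', 'cab', 'cba']
```

Even calling this on a 10-character string already means building millions of strings, which is why factorial algorithms blow up so quickly.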

Calculating Time And Space Complexity

Calculating time and space complexity can be done easily by following these 2 simple rules:

  • Ignore constants
  • The worst part of the algorithm determines the complexity of the whole algorithm.

Let's look at a couple of algorithms to understand what this means.


function printAll(array){
   for(let i = 0; i < array.length; i++){
      console.log(array[i]) // will execute array.length, or O(n), times
   }

   return array[0] // will execute only once: O(1)
}


In the above function, the overall efficiency is dominated by the loop. We could technically say that the Big O notation for the whole function is O(n + 1), which would be correct but is unnecessary: as n approaches infinity, that additional 1 becomes negligible, so it's better to state the time complexity of the function as O(n).

function partyPlus(array){

   for(let i = 0; i < array.length; i++){
      console.log(array[i]) // O(n)
   }

   for(let i = 0; i < array.length; i++){
      console.log(array[i]) // O(n)
   }

   for(let i = 0; i < array.length; i++){
      for(let j = 0; j < array.length; j++){
         console.log(array[i]) // O(n^2)
      }
   }
}


The above function has 2 loops and a nested loop. If we follow the second rule, the function has a time complexity of O(n²), because the nested loop has the highest cost, and as the size of the array increases, the additional cost of the other loops becomes negligible.

Conclusion

Time and space complexity is something all developers should understand, because it teaches us to write code that is not just effective but also efficient. If you'd like to learn about the time and space complexity of different algorithms, check out the Big O cheat sheet. If you would like to learn more about Big O notation, check out this awesome, easy-to-understand video. Thanks for reading, and have a wonderful day ❤️👋.
