
The Math Behind Fast Algorithms 📈

Release .2

Choosing the right algorithm for your problem is crucial to prevent TLE (Time Limit Exceeded) verdicts and to save you time. In this post, we will learn about time complexity and asymptotic notations, which are essential concepts for understanding and comparing the efficiency of different algorithms.

Introduction:
Let's say that you have a bag of apples and you want to count them. If you have a small bag of apples, you can count them quickly and easily, because the number of steps you have to take to count them is also small.

But now let's say that you have a big bag of apples. It will take more time to count all the apples, because the number of steps you have to take to count them is larger.

The time it takes you to count the apples is related to the number of apples: the more apples you have, the longer it takes to count them. This is the same idea as time complexity.
Or, simply put......
More apples ∝ More time

Time complexity is a measure of how long an algorithm takes to run.
The time complexity of an algorithm is usually expressed as a function of the size of the input.

  • For example, an algorithm that runs in constant time takes the same amount of time to run regardless of the size of the input.
  • An algorithm that runs in linear time takes time that is proportional to the size of the input.
  • An algorithm that runs in quadratic time takes time that is proportional to the square of the size of the input.
  • and so on.... (the sketch below shows the first three classes in code)
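
To make these classes concrete, here is a minimal Python sketch. The function names are illustrative, not from any particular library:

```python
def get_first(items):
    # Constant time: one step, no matter how large the input is.
    return items[0]

def total(items):
    # Linear time: one pass over the input.
    s = 0
    for x in items:
        s += x
    return s

def has_duplicate(items):
    # Quadratic time: compares (almost) every pair of elements.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```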

It's important to get a hold of this concept so that you can choose the appropriate algorithm for a problem. If you have a small input, almost any algorithm will do. But if you have a big input, you need an algorithm with a low time complexity, or it may never finish in time.

| Time Complexity | Description |
| --------------- | ----------- |
| Constant time | The running time does not depend on the size of the input. |
| Linear time | The running time is proportional to the size of the input. |
| Quadratic time | The running time is proportional to the square of the size of the input. |
| Exponential time | The running time is proportional to some exponential function of the size of the input (see the example below). |
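
The first three classes are sketched above; exponential time deserves its own example. The classic one is naive recursive Fibonacci, where the number of calls grows exponentially with n:

```python
def fib(n):
    # Exponential time: each call spawns two more, so the call count
    # grows exponentially with n (roughly 1.6^n).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```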

Asymptotic Notations:
Asymptotic notation describes how an algorithm's running time grows as the input gets large. That gives us a machine-independent way to judge how good a given algorithm is compared to some other algorithm.

Now let's look at the mathematical definition of 'order of' (given right after this list). Primarily, there are three widely used asymptotic notations:

  1. Big oh notation ( O )
  2. Big omega notation ( Ω )
  3. Big theta notation ( θ ) – the most widely used one
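
Formally, for functions f and g that map an input size n to a non-negative running time, the standard textbook definitions are:

```latex
f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 \ge 0 \ \text{ such that } \ f(n) \le c \cdot g(n) \ \text{ for all } n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 \ge 0 \ \text{ such that } \ f(n) \ge c \cdot g(n) \ \text{ for all } n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{ and } \ f(n) = \Omega(g(n))
```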

In rookie terms:
Let's say that you are running a race. Big O notation is like a ceiling on your time: a guarantee that you will finish within that time, no matter what. Big omega notation is like a floor: a guarantee that you can't possibly finish faster than that. Big theta notation pins your time down from both sides, so it tells you roughly how long the race will actually take.

In real world language:

  • Big O notation: Big O notation describes an upper bound on an algorithm's running time, and is most often quoted for the worst case. For example, an algorithm that runs in linear time can be written as O(n), where n is the size of the input. This means that, for large enough inputs, the algorithm never takes more than c · n steps, for some fixed constant c.
  • Big omega notation: Big omega notation describes a lower bound on an algorithm's running time, which corresponds to its best-case behaviour. For example, an algorithm whose running time is Ω(n^2) never takes fewer than c · n^2 steps on large enough inputs, for some fixed constant c.
  • Big theta notation: Big theta notation describes a tight bound: the running time is bounded both above and below by the same function, up to constant factors. For example, an algorithm that runs in linear time can be written as θ(n), because its running time always lies between c₁ · n and c₂ · n for some constants c₁ and c₂. A short worked example follows.
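
As a quick sanity check of the θ definition, here is a worked example with explicit constants:

```latex
\text{Claim: } f(n) = 3n + 5 = \Theta(n).
\text{Upper bound: } 3n + 5 \le 4n \ \text{ for all } n \ge 5 \quad (c_2 = 4,\ n_0 = 5).
\text{Lower bound: } 3n + 5 \ge 3n \ \text{ for all } n \ge 1 \quad (c_1 = 3).
\text{Both bounds hold, so } f(n) = \Theta(n).
```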

Best Case, Worst Case and Average Case of an Algorithm:
Analysis of the best case, expected (average) case, and worst case for finding the maximum element in an array (a code sketch follows the list):

  • Best case: The best case is O(1) only if we already know the array is sorted in descending order; then the maximum is simply the first element and can be read off directly. Without that guarantee, even the best case must look at every element.
  • Expected case: When the array is randomly arranged, the maximum could be anywhere, so a scan must examine the whole array, performing n - 1 comparisons. This means that the expected case is O(n).
  • Worst case: When the array is in ascending order, the scan still performs n - 1 comparisons, and on top of that the running maximum is updated at every single step. This means that the worst case is also O(n).
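
Here is a minimal sketch of that scan (find_max is an illustrative name, and the array is assumed non-empty). The comments mark where the cases differ:

```python
def find_max(arr):
    # Single left-to-right scan; assumes arr is non-empty.
    maximum = arr[0]
    for x in arr[1:]:        # always n - 1 comparisons, whatever the order
        if x > maximum:
            maximum = x      # runs 0 times (descending) up to n - 1 times (ascending)
    return maximum

# If descending order is guaranteed, the O(1) "best case" is simply arr[0].
```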

As you can see, the best case is O(1) only under the extra assumption of descending order; without any assumptions, both the expected case and the worst case are O(n). Finding the maximum element in an array is therefore a relatively efficient operation: a single pass over the input, regardless of how the data is arranged.

In general, it is important to consider the best case, expected case, and worst case when choosing an algorithm. The best algorithm for a given problem will depend on the specific requirements of the problem.

Space Complexity:
Space complexity is a measure of how much memory an algorithm uses, as a function of the size of its input. It is typically expressed using asymptotic notations, such as Big O notation.

It is different from time complexity, which is a measure of how long an algorithm takes to run as a function of the size of its input.

For example, the bubble sort algorithm (we will learn about it in a future post) has a space complexity of O(1). This means that the amount of extra memory bubble sort uses is constant, regardless of the size of the input array.

On the other hand, the quicksort algorithm has a space complexity of O(log n) on average, where n is the size of the input array. This means that the amount of extra memory quicksort uses grows logarithmically with the size of the input array.

As you can see, bubble sort is a more space-efficient algorithm than quicksort. This is because bubble sort only needs to store a few temporary variables, while quicksort needs stack space for its recursive calls, one frame per level of recursion.
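
A minimal in-place bubble sort sketch makes the O(1) claim visible (the full algorithm is covered in a later post, as noted above):

```python
def bubble_sort(arr):
    # Sorts arr in place. Beyond the input itself, only the loop
    # variables i and j are needed, so auxiliary space is O(1).
    n = len(arr)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]  # swap in place
    return arr
```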

In general, time complexity is more important than space complexity for real-time applications. This is because real-time applications need to respond to user input quickly, and a slow algorithm can lead to a poor user experience.

However, space complexity can become important for memory-constrained applications. For example, a mobile app might not have enough memory to run an algorithm with a high space complexity.

In the end, the best way to choose an algorithm is to consider both time complexity and space complexity. The best algorithm for a given problem will depend on the specific requirements of the problem.

Happy Coding 👍🏻!
Thank You
