Olha Zadorozhna

Time complexity of an algorithm

What is an algorithm?

An algorithm is a sequence of specific instructions for solving a given problem. No doubt the problem can be solved in different ways. The methods might differ in lines of code, readability, number of steps, or additional resources used, but eventually all of them should give us the same result.

Obviously we want to solve the problem in the most efficient way. That's why we need some way to evaluate these methods.

The two most commonly used estimates are performance, or time complexity (how long it will take, or how many operations must be performed to get the result), and efficiency, or space complexity (how many additional resources or how much storage our method will use while running).

In this article we will focus on the time complexity of an algorithm as a function of the input size.

For clarity, time complexity does not mean the actual time (in seconds/ms) required to run the code but rather the number of times a statement/operation executes.
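
As a small Python sketch (the helper below is written just for this illustration), we can literally count how many times a statement executes for inputs of different sizes:

```python
def count_operations(values):
    """Count how many times the loop body executes for a given input."""
    operations = 0
    for _ in values:
        operations += 1  # one elementary operation per element
    return operations

# The count grows with the input size, not with wall-clock time.
print(count_operations(range(10)))     # 10
print(count_operations(range(1000)))   # 1000
```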

How to estimate time complexity? Big O notation

Big O, also known as Big O notation, is commonly used as a way to represent an algorithm's complexity.

Big O identifies how the performance of your algorithm will change as the input size grows to infinity.

Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. Big O is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation. The letter O was chosen by Bachmann to stand for Ordnung, meaning the order of approximation.
(Wikipedia definition of O-notation)

Ω - O - Θ

Big O, Big Omega (Ω), and Big Theta (Θ) are notations used to express the computational complexity of an algorithm. The rest of this article focuses on Big O, but here is a summary of what each notation means:

Big O, Big Omega and Big Theta
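
Since these notations are easy to mix up, it may help to recall the standard textbook definitions they are based on (sketched here in LaTeX):

```latex
% f(n) = O(g(n)): g is an asymptotic upper bound for f
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 : \forall n \ge n_0,\ \ f(n) \le c \cdot g(n)

% f(n) = \Omega(g(n)): g is an asymptotic lower bound for f
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : \forall n \ge n_0,\ \ f(n) \ge c \cdot g(n)

% f(n) = \Theta(g(n)): g is an asymptotically tight bound for f
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))
```

In words: Big O gives an upper bound, Big Omega a lower bound, and Big Theta a tight bound.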

Worst case, best case and average case scenarios

When we estimate time complexity, we might run into specific input sets on which our algorithm runs faster or slower than usual.

For example, in a typical quick sort implementation, the worst case occurs when the input array is already sorted, and the best case occurs when the pivot elements always divide the array into two halves.

For insertion sort, the worst case happens when the array is sorted in reverse order, and the best case occurs when the array is already sorted in the same order as the output.

It might happen that for some algorithms there is no distinct worst or best case. For example, merge sort always performs in O(n log n) regardless of the dataset.

In short,

  • best case: our method performs the minimum number of steps on input data of n elements;
  • worst case: the function performs the maximum number of steps on input data of size n;
  • average case: the algorithm performs an average number of steps on an input set of n elements.

However, we should not rely on cases where our algorithm's time complexity is good only because the input set is, let's say, already ordered. We usually care about the worst-case scenario, because then we can guarantee the given time complexity regardless of the characteristics of the input data.
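
To make the difference concrete, here is a minimal Python sketch of insertion sort (the comparison counter is added purely for illustration) run on a sorted and a reverse-sorted input:

```python
def insertion_sort_with_count(arr):
    """Sort a copy of arr and return (sorted_list, number_of_comparisons)."""
    a = list(arr)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one comparison of key against a[j]
            if a[j] <= key:
                break
            a[j + 1] = a[j]           # shift the larger element to the right
            j -= 1
        a[j + 1] = key
    return a, comparisons

n = 1000
_, best = insertion_sort_with_count(range(n))           # already sorted input
_, worst = insertion_sort_with_count(range(n, 0, -1))   # reverse-sorted input
print(best, worst)   # 999 vs 499500
```

On the already sorted array the inner loop stops immediately, so only n−1 comparisons are made; on the reverse-sorted array every element is compared with all the elements before it, giving roughly n²/2 comparisons.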

Common time complexities, ranked from fastest to slowest

  1. O(1) - Constant.
    Execution time does not depend on the input size n; it runs in the same time regardless of the input (e.g., accessing an element of an array by index). See the code sketches after this list.

  2. O(log(log n)) - Double logarithmic.
    Interpolation search; van Emde Boas trees (vEB trees).

  3. O(log n) - Logarithmic. Algorithms where we divide the initial input into two (or more) parts and then solve the problem on only one part (e.g., binary search).

  4. O(n) - Linear. Any algorithm with one loop iterating through the input of size n (e.g., finding the max element in an array; printing all the values in the array; finding an element in a collection).

  5. O(n∙log n) - Quasi-linear. Typical of divide-and-conquer algorithms, which break the problem into subproblems of the same type, recursively solve those subproblems and combine the answers (e.g., merge sort); efficient comparison sorts such as heap sort also fall into this class.

  6. O(n^2) - Quadratic. Any algorithm with two nested loops iterating through the input set (e.g., bubble sort, insertion sort).

  7. O(n^3) - Cubic. Any algorithm with three nested loops iterating through the input set.

  8. O(2^n) - Exponential. Finding all subsets of a given set; the travelling salesman problem solved with dynamic programming.

  9. O(n!) - Factorial. Finding all possible permutations of a given string; the travelling salesman problem solved with brute-force search.
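
To give a feel for some of these classes, here are a few illustrative Python sketches (written just for this post, not production implementations):

```python
import bisect

def get_first(items):
    # O(1): a single array access, independent of the input size
    return items[0]

def binary_search(sorted_items, target):
    # O(log n): the search range is halved at each step
    index = bisect.bisect_left(sorted_items, target)
    return index < len(sorted_items) and sorted_items[index] == target

def find_max(items):
    # O(n): one pass over the input
    best = items[0]
    for value in items[1:]:
        if value > best:
            best = value
    return best

def merge_sort(items):
    # O(n log n): divide and conquer - split, sort halves, merge
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

def has_duplicate_pair(items):
    # O(n^2): two nested loops over the input
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

data = [5, 3, 8, 1, 9, 3]
print(get_first(data), find_max(data), has_duplicate_pair(data))
print(merge_sort(data), binary_search(sorted(data), 8))
```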

Data Structures and Time Complexity

Algorithm complexity heavily depends on the data structures we use. Choosing the right data structure can lead to better performance.

Data structures define how data is stored, accessed and maintained. Depending on which operations our algorithm uses most, we might need a different data structure to improve time complexity.

The most common operations are: search, access, insert, delete.

In each data structure, these operations require a different amount of time to execute, and their cost is evaluated in terms of Big O. Knowing the cost of each operation, we can improve the time complexity of the algorithm simply by switching to a data structure where the cost of these operations is lower than in our current implementation.
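
For example, a minimal Python sketch: a membership check is O(n) on a list but O(1) on average on a set, so an algorithm dominated by lookups can often be sped up simply by switching containers:

```python
values_list = list(range(100_000))
values_set = set(values_list)

# Membership test: O(n) scan on the list vs. average O(1) hash lookup on the set.
print(99_999 in values_list)   # True, but may scan the whole list
print(99_999 in values_set)    # True, found via a hash lookup
```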

You can use the well-known cheat sheet below, which covers different data structures and their complexities:

Data Structures Cheat Sheet

Conclusion

In this post we tried to understand the key points about an algorithm's time complexity. Here are some takeaways:

  • time complexity doesn't mean actual time in milliseconds but the number of elementary operations the algorithm performs
  • when analyzing time complexity, we usually estimate the worst-case scenario
  • time complexity can be improved by switching to a data structure with a lower cost for the operation used most often in our algorithm

Thanks for reading to the end! Happy to answer questions in the comments!

Top comments (1)

Vincent A. Cicirello

The explanation column of the image that you used to summarize the differences between Big O, Big Omega, and Theta is wrong.

The Myth

There is a common error that I see in posts like this that attempt to introduce asymptotic complexity. Namely, it is common to see explanations that indicate Big O is for worst case, Big Omega is for best case, and Theta is for average case. This myth is wrong.

The Facts

Big O specifies an upper bound. You can use Big O to specify an upper bound on any of worst case, best case, or average case.

Big Omega specifies a lower bound. You can use Big Omega to specify a lower bound on any of worst case, best case, or average case.

Theta specifies a tight bound. You can use Theta to specify a tight bound on any of worst case, best case, or average case.