In programming, a problem can usually be solved in many ways. But which way is the best, and how do we decide to pick the right algorithm?
An algorithm is a set of instructions (or steps) that tells what to do and how to do it.
Some algorithms finish within a second; others take minutes even on smaller data sets. How do we compare their performance and come to a conclusion?
The answer is to measure the time complexity of an algorithm. Time complexity represents the number of times a statement is executed. The execution time of code does not depend only on the algorithm; it also depends on factors like the programming language, operating system, processing power, etc. However, keeping the system configuration constant, time complexity is determined by the algorithm itself and its input.
To express time complexity, "Big O Notation" is used. It is written as a function of the input size n, where n is the size of the input and O() describes the worst-case growth rate of the running time.
A cheat sheet, to keep handy.
Below is a list of some common time complexity terms which we knowingly or unknowingly use in our day-to-day coding. It is arranged in order of their execution time as the input size increases.
| S.No | Big O Notation | Name |
|------|----------------|------|
| 1. | O(1) | Constant Time Complexity |
| 2. | O(log n) | Logarithmic Time Complexity |
| 3. | O(n) | Linear Time Complexity |
| 4. | O(n log n) | Linearithmic Time Complexity |
| 5. | O(n^2) | Quadratic Time Complexity |
| 6. | O(n^3) | Cubic Time Complexity |
| 7. | O(n^y) | Polynomial Time Complexity |
| 8. | O(2^n) | Exponential Time Complexity |
| 9. | O(n!) | Factorial Time Complexity |
Let's go through them one by one, and decode and define each for a better understanding.
(1) O(1): When our algorithm takes the same time regardless of the input size, we call it O(1) or constant time complexity. Even if we increase the input, the time remains the same.
Example - adding 2 numbers, or finding whether a number is divisible by 2 or not.
See the below graph for more understanding.
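The two examples above can be sketched in Python as follows (the function names are illustrative, not from the original text):

```python
# O(1): constant time - the work done does not depend on the input size.
def add(a, b):
    # A single arithmetic operation, the same cost for any a and b.
    return a + b

def is_divisible_by_two(n):
    # One modulo check, regardless of how large n is.
    return n % 2 == 0
```

No matter how large the inputs get, each function performs a fixed number of operations.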
(2) O(log n): As the input grows, the time increases quickly at first and then flattens out, becoming almost constant with respect to the horizontal axis. Even if the input increases drastically, beyond a certain point the time grows very slowly.
Logarithmic time complexities usually apply to algorithms that divide the problem in half every time. For instance, let's say we want to look up the word "book" in a dictionary. We do not need to go through all the words before and after it; we just eliminate half of the remaining pages at every stage until we reach "book".
- Open the book in the middle and check the first word on it.
- If the word you are looking for comes alphabetically later, look in the right half. Otherwise, look in the left half.
- Divide the remainder in half again, and repeat step #2 until you find the word you are looking for.
Example - Binary Search.
See the below example for a better understanding.
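The dictionary steps above can be sketched as a binary search over a sorted list (an illustrative Python version, assuming the input is already sorted):

```python
# O(log n): binary search halves the search space on every comparison.
def binary_search(sorted_list, target):
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_list[mid] == target:
            return mid           # found: return the index
        elif sorted_list[mid] < target:
            low = mid + 1        # target is in the right half
        else:
            high = mid - 1       # target is in the left half
    return -1                    # target is not present
```

Doubling the list length adds only one more comparison, which is exactly the flattening curve described above.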
(3) O(n): As the input grows, the algorithm takes proportionally longer to complete the operation. Linear time implies that the program visits every element of the input.
Example - finding the maximum in an array. As the size of the input grows, the time grows proportionally.
This can usually be identified when the function has a single for loop.
See the below graph for a better understanding.
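A minimal Python sketch of the maximum-in-an-array example (illustrative, not from the original):

```python
# O(n): finding the maximum must visit every element exactly once.
def find_max(values):
    maximum = values[0]
    for v in values[1:]:     # a single loop over the input -> linear time
        if v > maximum:
            maximum = v
    return maximum
```

One pass, one loop: the running time scales directly with the length of the list.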
(4) O(n log n): Linearithmic time complexity is slightly slower than linear, but still much better than quadratic.
For example, sorting algorithms like MergeSort have this time complexity.
See the below graph; it combines the curves for the different time complexities. In practice, algorithms with linearithmic time complexity are among the most appreciated.
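A sketch of MergeSort in Python, to show where the n log n comes from (an illustrative version, not from the original):

```python
# O(n log n): merge sort splits the list in half (log n levels of recursion)
# and merges each level in linear time.
def merge_sort(values):
    if len(values) <= 1:
        return values
    mid = len(values) // 2
    left = merge_sort(values[:mid])
    right = merge_sort(values[mid:])
    # Merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

Halving gives log n levels; merging every level touches all n elements, hence n log n overall.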
(5) O(n^2): In this type of algorithm, the running time grows in proportion to the square of the input size: 1 unit of input takes 1 unit of time, 2 units of input take 4 units of time, 3 units take 9, and so on.
In most scenarios, and particularly for large data sets, algorithms with quadratic time complexity take a lot of time to execute and should be avoided.
This can be identified when the function has a nested for loop (one for loop inside another).
Example - printing a two-dimensional array, or bubble sort.
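The bubble sort example can be sketched as follows (illustrative Python, not from the original) - note the nested loops that mark it as quadratic:

```python
# O(n^2): bubble sort compares adjacent pairs using two nested loops.
def bubble_sort(values):
    items = list(values)            # work on a copy
    n = len(items)
    for i in range(n):              # outer loop: up to n passes
        for j in range(n - 1 - i):  # inner loop: up to n comparisons per pass
            if items[j] > items[j + 1]:
                # swap out-of-order neighbours
                items[j], items[j + 1] = items[j + 1], items[j]
    return items
```

Doubling the input roughly quadruples the number of comparisons, matching the 1 → 1, 2 → 4 growth described above.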
(6) O(n^y): Polynomial time complexity is represented as O(n^y) with y > 1, where n is the input size and y is an integer exponent greater than 1. As you already saw, two nested loops typically translate to O(n^2), since they go through the array twice in most cases. Are three nested loops cubic? If each one visits all elements, then yes! And that turns out to be cubic time complexity, O(n^3).
Usually we want to stay away from polynomial running times (quadratic, cubic, n^y, etc.), since they take much longer to compute as the input grows. However, they are not the worst :) .
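As a sketch of three nested loops turning into O(n^3), here is naive multiplication of two n x n matrices (an illustrative example, not from the original):

```python
# O(n^3): naive multiplication of two n x n matrices uses three nested loops,
# each visiting all n indices.
def matrix_multiply(a, b):
    n = len(a)
    result = [[0] * n for _ in range(n)]
    for i in range(n):              # loop 1: rows of a
        for j in range(n):          # loop 2: columns of b
            for k in range(n):      # loop 3: the dot product
                result[i][j] += a[i][k] * b[k][j]
    return result
```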
(7) O(2^n): In exponential time algorithms, the number of operations doubles with each addition to the input (n); such algorithms often iterate through all subsets of the input elements. Every time the input grows by 1 unit, the number of operations performed doubles.
Example - creating all subsets of a string like 'abc', 'abcd', 'abcd.....z'.
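The subsets example can be sketched like this (illustrative Python, not from the original) - note how the output doubles with every extra character:

```python
# O(2^n): generating every subset of a string doubles the result count
# each time one more character is added.
def all_subsets(s):
    subsets = [""]                  # the empty subset
    for ch in s:
        # every existing subset spawns one new subset that includes ch
        subsets += [subset + ch for subset in subsets]
    return subsets
```

For 'abc' there are 2^3 = 8 subsets; add a 'd' and there are 2^4 = 16 - exactly the doubling described above.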
(8) O(n!): A factorial is the product of all whole numbers from the chosen number down to 1.
Example: 5! = 5 x 4 x 3 x 2 x 1 = 120
It grows enormously as the input increases:
20! = 2,432,902,008,176,640,000
With a small increase in the input data set, there is a huge increase in execution time. We should stay away from such algorithms as much as possible.
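A classic O(n!) task is listing every ordering (permutation) of n items, since there are n! of them. A minimal Python sketch (illustrative, not from the original):

```python
# O(n!): generating all permutations produces n! results, so the running
# time grows factorially with the input size.
def permutations(items):
    if len(items) <= 1:
        return [list(items)]
    result = []
    for i, item in enumerate(items):
        rest = items[:i] + items[i + 1:]     # everything except items[i]
        for perm in permutations(rest):      # permute the remaining items
            result.append([item] + perm)
    return result
```

Three items give 3! = 6 orderings; four items already give 24, which is why these algorithms blow up so quickly.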
These are the basic time complexity terms you might come across daily while programming or writing algorithms.
The next step would be to dive deeper: decode a function, write down the time complexity of each line, and sum them up to get the exact time complexity of the program code you have written.
To conclude, I would say: the better the time complexity of an algorithm, the better its performance will be. Before finalizing the algorithm for your code, the worst-case scenario should be considered. It will help in the long term with respect to the scalability and sustainability of the overall system.