There is usually more than one solution to any problem, but it is hard to say which approach is the best way to solve a given programming problem.

Writing an algorithm that solves a specific problem gets harder when we need to handle a large amount of data. Every line of code we write matters.

There are two main measures of complexity that help us choose the best way to write an efficient algorithm:

### 1. Time Complexity - the time an algorithm takes to run

### 2. Space Complexity - the total space or memory an algorithm needs

When we write an algorithm, we give the machine a set of instructions to carry out, and every task takes the machine some time to complete. Yes, that time may be tiny, but it still adds up. So the question arises: does time really matter?

Let's take an example: suppose you search for something on Google and it takes about two minutes to get a result. That practically never happens, but if it did, what do you think would be going on in the back end? Developers at Google understand time complexity, and they write smart algorithms so that execution takes the least time possible and results come back as fast as they can.

So a challenge arises: how do we define time complexity?

## What is Time Complexity?

It quantifies the amount of time taken by an algorithm. We can understand the difference in time complexity with an example.

*Suppose you need to create a function that takes a number and returns the sum of all numbers from 1 up to that number.
E.g. addUpTo(10);
should return the sum of the numbers 1 to 10, i.e. 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10.*

We can write it this way:

```javascript
function addUpTo(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

addUpTo(5);    // takes less time
addUpTo(1000); // takes more time
```

Now you can see why the same function takes a different amount of time for different inputs: the loop inside the function runs according to the size of the input. If 5 is passed in, the loop runs five times; if the input is 1,000 or 10,000, the loop runs that many times.
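For contrast, the same sum can be computed in constant time with the arithmetic-series formula n(n + 1) / 2, so the running time no longer grows with the input. Here is a sketch (the `Fast` suffix on the name is mine, not from the original code):

```javascript
// Constant-time version: uses the arithmetic-series formula
// 1 + 2 + ... + n = n * (n + 1) / 2, so no loop is needed.
function addUpToFast(n) {
  return (n * (n + 1)) / 2;
}

addUpToFast(5);    // 15
addUpToFast(1000); // 500500, in the same time as addUpToFast(5)
```

However large n gets, this version always does the same three operations: one multiplication, one addition, and one division.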

But there is a problem: different machines record different timings. The processor in my machine is different from yours, and the same goes for every other user, so the same code produces different measurements on different hardware.
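You can see this for yourself by timing the function directly; the numbers you get will differ on every machine and even between runs, which is exactly why raw timings are a poor measure. A minimal sketch, using the `performance` global available in modern Node.js and browsers:

```javascript
// Naive timing: the absolute numbers depend on the machine, the
// JS engine, and whatever else is running at that moment.
function addUpTo(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) total += i;
  return total;
}

const start = performance.now();
addUpTo(1_000_000);
const elapsed = performance.now() - start;
console.log(`Took ${elapsed} ms`); // different on every machine and run
```

Big O sidesteps this problem by counting how the number of operations grows with the input, instead of measuring wall-clock time.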

## So, how can we measure this time complexity?

Here, Big O notation helps us solve this problem. According to Wikipedia, *Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. The letter O is used because the growth rate of a function is also referred to as the order of the function.*

According to Big O notation, constant factors are dropped, and when comparing growth rates we prefer the slower-growing one:

instead of O(2n), simply write O(n);

prefer O(n) over O(n^2);

prefer O(log n) over O(n);

prefer O(n) over O(n log n);
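To make the O(log n) vs O(n) difference concrete, here is the classic comparison of linear search against binary search on a sorted array (a minimal sketch: binary search halves the remaining range on every step, which is where the logarithm comes from):

```javascript
// O(n): check every element until we find the target.
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i;
  }
  return -1;
}

// O(log n): on a *sorted* array, halve the search range each iteration.
function binarySearch(arr, target) {
  let lo = 0;
  let hi = arr.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}

const sorted = [1, 3, 5, 7, 9, 11];
linearSearch(sorted, 9); // index 4, after checking 5 elements
binarySearch(sorted, 9); // index 4, after only 2 halving steps
```

On an array of a million elements, linear search may check up to 1,000,000 elements, while binary search needs at most about 20 steps.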

For a better understanding, let's look at some algorithms we use daily that have O(n), O(n^2), and O(log n) complexities.

On Quora, Mark Gitters said:

> O(n): buying items from a grocery list by proceeding down the list one item at a time, where "n" is the length of the list
>
> O(n): buying items from a grocery list by walking down every aisle (now "n" is the length of the store), if we assume list-checking time is trivial compared to walking time
>
> O(n): adding two numbers in decimal representation, where n is the number of digits in the number
>
> O(n^2): trying to find two puzzle pieces that fit together by trying all pairs of pieces exhaustively
>
> O(n^2): shaking hands with everybody in the room; but this is parallelized, so each person only does O(n) work
>
> O(n^2): multiplying two numbers using the grade-school multiplication algorithm, where n is the number of digits
>
> O(log n): work done by each participant in a phone tree that reaches n people. Total work is obviously O(n), though.
>
> O(log n): finding where you left off in a book that your bookmark fell out of, by successively narrowing down the range

and Arav said:

> If you meant algorithms that we use in our day-to-day lives when we aren't programming:
>
> O(log n): Looking for a page in a book/word in a dictionary.
>
> O(n): Looking for and deleting the spam emails (newsletters, promos) in unread emails.
>
> O(n^2): Arranging icons on the desktop in an order of preference (insertion or selection sort depending on the person).
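Arranging icons "in order of preference" really is O(n^2) in the worst case. A minimal insertion sort sketch shows why: each new item may have to be compared against everything already placed.

```javascript
// Insertion sort: O(n^2) worst case, because each element may be
// compared against every element already in sorted position.
function insertionSort(arr) {
  const a = [...arr]; // copy so the input is not mutated
  for (let i = 1; i < a.length; i++) {
    const current = a[i];
    let j = i - 1;
    while (j >= 0 && a[j] > current) {
      a[j + 1] = a[j]; // shift larger items one slot to the right
      j--;
    }
    a[j + 1] = current;
  }
  return a;
}

insertionSort([5, 2, 9, 1]); // [1, 2, 5, 9]
```

On an already-sorted input the inner loop never runs, so the best case is only O(n), which is why insertion sort still gets used for small or nearly-sorted data.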

I hope you are now familiar with these complexities.

This article does not cover the whole topic; I will write a follow-up in the future.

If you have any questions or suggestions, please leave a comment or feel free to contact me.

Thanks for taking the time to read this article.
