# The Big O notation - O(n)

### Paweł ・4 min read

## 1. Why did I choose to write this article?

I have found that the Big O notation can be scary for many beginner programmers (*including me some time ago*). The subject is really simple, and it is worth dedicating a few minutes to read this article.

## 2. What is this notation and why is it important?

The Big O notation describes the complexity of an algorithm. Basically, it tells you how fast the algorithm is. Simple enough. The **O** in the name stands for 'order of' ('Ordnung' in German). It comes from the German number theorist Edmund Landau and was later adopted by the computer science world.

As a programmer you use, or will use, many algorithms written by other people. Suppose you want to use another person's solution (*e.g. from Stack Overflow*). Then you would love to know how fast that algorithm is.

Take a look at the first trivial example. You have a sheet of paper and you have to divide it into 16 parts.

How would you do that? The very first thought is that you could take a pen and draw the 16 squares one by one.

If you think about it as an algorithm, you will realize that this is not an optimal solution: you would have to make 16 steps to finish the task.

Someone else could say that you can fold the sheet 4 times, since each fold doubles the number of parts, and you would also have the sheet divided into 16 squares.

Now you have 2 solutions, one better than the other, and you would like to describe them. With Big O notation it is really simple.
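The two solutions can be sketched as step counters (a minimal illustration; the function names are mine, not part of any library):

```python
def draw_squares(parts):
    """Draw every square individually: one step per part (linear)."""
    steps = parts  # 16 parts -> 16 pen strokes
    return steps

def fold_sheet(parts):
    """Each fold doubles the number of parts: log2(parts) steps."""
    folds = 0
    pieces = 1
    while pieces < parts:
        pieces *= 2  # one fold doubles the pieces
        folds += 1
    return folds  # 16 parts -> 4 folds

print(draw_squares(16))  # 16 steps
print(fold_sheet(16))    # 4 steps
```

Both approaches end with 16 squares, but the folding one gets there in 4 steps instead of 16.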

## 3. What does Big-O notation really describe?

Imagine that you are a librarian and you have to find Mr. Smith's library card. What would you do if a drawer contained 1000 unordered cards, with Smith's card hidden somewhere inside? The first thought is to check every card one by one until you find the right one. Maybe the first card is the one you are looking for, which is good, but maybe it is the last one... You would then have to check all 1000. That is the worst scenario, and Big-O notation describes exactly this: **the worst scenario** of an algorithm.

If **n** is the number of cards, our algorithm performs at most **n** checks (*in the worst case*). For this particular drawer that means 1000 operations, but since 1000 is just one possible value of **n**, we say the algorithm has a complexity of **O(n)**.
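The librarian's card hunt can be sketched as a simple sequential search (the card names below are made up for illustration):

```python
def sequential_search(cards, name):
    """Check every card one by one; worst case n comparisons -> O(n)."""
    for index, card in enumerate(cards):
        if card == name:
            return index
    return -1  # card not in the drawer

# Worst case: "Smith" is the very last of 1000 unordered cards,
# so all 1000 cards have to be checked.
cards = [f"Card {i}" for i in range(999)] + ["Smith"]
print(sequential_search(cards, "Smith"))  # 999 (the last index)
```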

#### How would we describe an example with a sheet of paper?

**O(n)** - scenario with a pen and drawing squares one by one.

**O(log₂(n))** - scenario with folding the sheet of paper.

## 4. The most common complexities.

Below you have the most common examples of the Big-O notation shown in a graph. I decided to use this graph because it perfectly shows why, for example, **O(n)** is better than **O(n!)**.

*Graph of the most common complexities. Source: lukasmestan.com*

The graph is simple to read: the more elements (**n**) there are, the more operations the algorithm has to make. Let's compare the best case, **O(log₂(n))**, with the mentioned **O(n)**. Binary search is a perfect example of **O(log₂(n))** complexity. We can see in the plot above that it is almost constant even if we multiply the number of elements. See the example below, where we look for the number **37** using binary search compared with sequential search.
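A minimal binary-search sketch that also counts its comparisons (the 1..100 range is my assumption for illustration, not the exact data from the plot):

```python
def binary_search(sorted_items, target):
    """O(log2(n)): halve the remaining search range on every step."""
    low, high, steps = 0, len(sorted_items) - 1, 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid, steps
        if sorted_items[mid] < target:
            low = mid + 1   # target is in the upper half
        else:
            high = mid - 1  # target is in the lower half
    return -1, steps

numbers = list(range(1, 101))  # sorted numbers 1..100
index, steps = binary_search(numbers, 37)
print(index, steps)  # 36 3
```

Binary search finds 37 in just 3 comparisons, while sequential search would need 37 checks to reach it.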

*Binary search vs. sequential search for the number 37. Source: mathwarehouse*

The situation with binary search (**O(log₂(n))**) is almost perfect, because no matter how many elements we have, the number of operations does not grow much. Believe it or not, if we had one million elements in the example above, binary search would have to make at most 20 operations (**log₂(1 000 000) ≈ 20**) to find the number we are looking for. AMAZING!

With this knowledge you can choose the best algorithm.
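You can verify that claim in one line with the standard library:

```python
import math

# For one million elements, binary search needs at most
# ceil(log2(1_000_000)) comparisons in the worst case.
print(math.ceil(math.log2(1_000_000)))  # 20
```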

You might say that there is no chance you would ever decide to use an **O(n!)** algorithm.

Well, there is a chance. There is a problem called the Travelling Salesman Problem. It asks the question: *"Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city?"*. The straightforward exact solution checks every possible route, which is an algorithm with **O(n!)** complexity.
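A hypothetical brute-force sketch of that idea (the four cities and their distances are made-up numbers, not real data):

```python
from itertools import permutations

# Symmetric distances between four imaginary cities.
dist = {
    ("A", "B"): 10, ("A", "C"): 15, ("A", "D"): 20,
    ("B", "C"): 35, ("B", "D"): 25, ("C", "D"): 30,
}

def d(a, b):
    """Distance lookup that works in both directions."""
    return dist.get((a, b)) or dist[(b, a)]

def shortest_tour(cities):
    """Try all (n-1)! routes starting and ending at the first city."""
    start, rest = cities[0], cities[1:]
    best = None
    for perm in permutations(rest):  # this loop is the O(n!) part
        route = (start, *perm, start)
        length = sum(d(route[i], route[i + 1]) for i in range(len(route) - 1))
        if best is None or length < best[0]:
            best = (length, route)
    return best

print(shortest_tour(["A", "B", "C", "D"]))  # (80, ('A', 'B', 'D', 'C', 'A'))
```

With 4 cities this checks only 6 routes, but with 20 cities it would be over 10¹⁷ routes, which is why the graph for **O(n!)** explodes so quickly.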

Despite that, you would always like to use an algorithm of **O(log₂(n))** complexity, but sometimes it is not possible, because some problems require more complicated solutions.

This article was inspired by the book *Grokking Algorithms* written by *Aditya Y. Bhargava*.

Thank you very much for the article! I finally understand what `O(n)` means. Lol, I thought it was more complex but it is rather simple. So Paweł, what is the result of Big O notation when looking for the library card of Adam Smith? :D

Well, it depends on which algorithm you would choose. You can find the card with **O(n)**, but you can also solve the same problem with **O(log₂(n))** complexity.