Welcome to this series of articles dedicated to **Big O** notation. What is Big O, anyway? Before we dig in, a warning: this concept can seem a bit confusing if you do not have a basic knowledge of data structures and algorithms. If that is your case, check out some great resources on the topic at the bottom of this article!

**Big O** is a notation that describes the performance, or complexity, of an algorithm. It is written with a capital letter **O**, not a zero, and read as "Big Oh." **Big O** measures and describes the time an algorithm needs to run. In short, we are talking about efficiency! We use it to evaluate code for both running time and space.

Does this definition seem a bit abstract? OK, let's try an analogy!

Suppose you occasionally enjoy baking cakes at home. Baking one cake takes you **"T"** time. Now your birthday is approaching, and you decide to invite 5 of your friends.

You can easily provide 3 cakes, which takes **"3T"** to prepare everything. Not too demanding, eh?

Now imagine you invite 200 people. It becomes very hard to keep up, because as the number of guests increases, so does the time needed to bake cakes for everyone.

"We use 'n' to denote the variable size of the input of the algorithm. We also use Big O notation to describe how much space an algorithm uses."

An alternative is available to you: you can order the cakes online. Whether you order 10, 50, or 200, the delivery takes the same amount of time.

We then notice that even as the number of guests increases, the time to receive the cakes remains constant.

You may ask: where do algorithms and data structures fit in? In part 2 of this series we will look in detail at the different data structures that can be used to store data, along with some algorithms.

**Take the case of a simple array, and let’s break that down!**

An array is a collection of items stored in contiguous memory locations; each element can be identified by its index in the array. Since an array occupies computer memory, we want to use that memory optimally. This is why **Big O** also measures space complexity: the amount of storage a program needs, and how that amount grows as the input grows.
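To make the index idea concrete, here is a minimal sketch (the list contents are illustrative). Because the items sit at known positions, looking one up by index takes a single step, no matter how long the list is:

```python
# A small Python list standing in for an array.
numbers = [4, 8, 15, 16, 23, 42]

# Index lookup is a single step, whether the list
# holds 6 items or 6 million.
print(numbers[2])  # → 15
```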

There are several operations we can perform on an array: adding elements, reading the element at a given position (index), sorting numerically or alphabetically, and so on.

These operations are algorithms we apply to achieve a desired result, which leads us to time complexity. It represents the number of times a statement or operation is executed in a program. Big O notation expresses the run time of an algorithm in terms of how quickly it grows relative to the input.
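A simple way to see "number of operations" is to count them. The sketch below (a hypothetical `linear_search` helper, not from the article) counts the comparisons made while scanning an array for a value; the count grows with the size of the input, which is exactly what Big O captures:

```python
def linear_search(items, target):
    """Return (index of target, number of comparisons made), or (-1, count)."""
    comparisons = 0
    for index, value in enumerate(items):
        comparisons += 1          # one operation per element examined
        if value == target:
            return index, comparisons
    return -1, comparisons

# The comparison count scales with the input size: this scan is O(N).
print(linear_search([3, 1, 4, 1, 5], 5))    # → (4, 5)
print(linear_search(list(range(100)), 99))  # → (99, 100)
```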

It is not easy to determine the exact runtime of an algorithm, since it depends on the speed of the computer's processor. So instead of talking about the run time directly, we use Big O notation to describe how quickly the runtime grows. There are more runtimes than the one in the cake example. The best known are:

- Constant O(1),
- Logarithmic O(log N),
- Log-linear O(N log N),
- Linear O(N),
- Quadratic O(N²),
- Cubic O(N³), and
- Exponential O(2^N)
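To get a feel for how differently these runtimes scale, here is a small sketch (the `growth` helper is illustrative) that evaluates each function at a single input size:

```python
import math

def growth(n):
    """Rough number of steps each runtime class implies for input size n."""
    return {
        "O(1)":       1,
        "O(log N)":   math.log2(n),
        "O(N)":       n,
        "O(N log N)": n * math.log2(n),
        "O(N^2)":     n ** 2,
        "O(N^3)":     n ** 3,
        "O(2^N)":     2 ** n,
    }

# Even at a modest n = 16, the gap is already dramatic:
for name, steps in growth(16).items():
    print(f"{name:<12}{steps:,.0f}")
```

For n = 16, O(log N) is only 4 steps while O(2^N) is already 65,536, and that gap widens explosively as n grows.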

This series of articles is devoted to the details and examples of these runtimes!

> "Without huge memory, we can address huge problems."

## 1. O(1) — Constant time complexity. (Read as Big Oh of 1)

With this complexity, no matter how big or small the input is, the execution time will always be the same.

Such an algorithm takes the same amount of time to execute regardless of the size of its input.
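A minimal sketch of two O(1) operations (the function names are illustrative): each performs a fixed number of steps no matter how large the input is.

```python
def first_item(items):
    """Index lookup: one step, even on a million-element list."""
    return items[0]

def is_even(n):
    """One arithmetic operation, regardless of how big n is."""
    return n % 2 == 0

print(first_item([7, 8, 9]))   # → 7
print(is_even(1_000_000))      # → True
```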

## Helpful links

https://en.khanacademy.org/computing/computer-science/algorithms

https://www.udacity.com/course/data-structures-and-algorithms-in-python--ud513
