## Time Complexity, Space Complexity, and Big O Notation

Before we dive into time complexity, space complexity, and Big O notation, let's start with a slightly different, but closely related, question about the code we write.

### When can we call a piece of code “good code”?

**Answer:** A piece of code is considered "good" when it is readable, maintainable, and scalable.

**Now, the question arises: what exactly are readability, maintainability, and scalability? Let me explain each one simply.**

#### What is Readability?

**Answer:** Readability means writing your code in such a clean manner that other developers can easily comprehend it.

#### What is Maintainability?

**Answer:** Maintainable code simply means code that is easy to modify or extend.

#### What is Scalability?

**Answer:** Scalable code refers to how well your code performs when the size of the input increases.

Let's try to understand this with an example: say you write a solution to a problem, and the code works perfectly for `1,000` inputs. After some time, you try it with `5,000` inputs, and it still works fine. But when you use `10,000` inputs, the program seems to slow down. Now, when you use `1 million` inputs, you notice that the program doesn't produce any output for `4-5 seconds`. Only after `10 seconds` do you finally get the output. This means that your algorithm works well with small inputs but fails to perform efficiently with larger ones.

So, from this example, we understand that when we write code or implement an algorithm, we need to consider how well it performs with small inputs and how it will handle a much larger volume of inputs. This helps us understand the scalability of the code.
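As a rough sketch of this idea (my own illustration, not from the original example), we can time the same function at growing input sizes and watch how the runtime scales. `sumArray` here is a hypothetical linear-time function used only for demonstration:

```javascript
// A hypothetical linear-time function: sums all elements of an array.
function sumArray(array) {
  let total = 0;
  for (let i = 0; i < array.length; i++) {
    total += array[i];
  }
  return total;
}

// Run the same code at growing input sizes to observe scalability.
for (const n of [1000, 100000, 1000000]) {
  const data = Array.from({ length: n }, (_, i) => i);
  const start = Date.now();
  sumArray(data);
  console.log(`n = ${n}: ${Date.now() - start} ms`);
}
```

If the measured time grows roughly in proportion to `n`, the code scales linearly; if it grows much faster than that, larger inputs will eventually become impractical.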

**When implementing an algorithm, we need to consider a few things:**

- Will my algorithm produce a result within the specified time?
- What is the maximum size of input my algorithm can handle?
- How much memory does my algorithm consume?

Keep these points in mind, as we’ll discuss them in more detail later.

## What is Time Complexity and Space Complexity?

Let’s first try to understand time complexity in detail.

**Here’s an example:**

Let’s say you have a low-configuration computer, and your friend has a MacBook. Both of you enjoy problem-solving. You both find a problem online to solve: the problem requires printing 1,000 data points from an array. You write a loop and print all the data. Your friend also writes the exact same code to solve the problem. But here’s the issue: although you both wrote the same code, your code takes 10 seconds to run, whereas your friend’s code takes only 1 second.

**Now, the question is, which computer has better time complexity?**

If your answer is: **Your friend’s MacBook**

Then, congratulations! Your answer is wrong.

If your answer is: **Your computer**

Then, congratulations again! Your answer is still wrong.

**So, what is the correct answer? The correct answer is: Both computers have the same time complexity.**

But how is this possible? How can both computers have the same time complexity? Let me explain. When we first start learning about time complexity, we often think that time complexity refers to how long it takes for a program to run. So, by that logic, we might assume that our friend’s MacBook has better time complexity.

#### So, What is Time Complexity?

**Answer:** Time complexity is a function that describes how the running time grows as the input size grows.

**Time Complexity ≠ Time Taken**

In simple terms, time complexity describes how the running time of a function changes as the size of the input increases.

Look at the image below:

If we closely observe this image, we’ll see that the time taken increases linearly with the input size in both cases (the old computer and the MacBook). This is the essence of time complexity, and this is the mathematical function of time complexity.

**Now, let's discuss Big O notation in detail and how to determine the time complexity of a program.**

## What is Big O Notation?

Big O notation is a mathematical notation that describes the limiting behavior of a function as the input approaches a particular value or infinity.

To explain more simply, let’s recall what we mentioned earlier. We learned that time complexity refers to how long a function will take to execute as the size of the input increases.

Now, how do we express the time complexity of a program? We express it using Big O Notation. Big O Notation is a mathematical way to express time complexity.

`Note:` Even if you don't fully grasp this concept yet, don't worry! Be patient and keep reading, and it will become clearer.

**Big O Notation is written like this: O(n)**

This is the graph of Big O Notation. Now, we will gradually discuss each notation, Insha'Allah.

#### Problem:

Let’s start by discussing a simple programming problem. Since you're still reading this post, I assume you already know how to solve this problem.

You have an array with n elements (let's say n names). From these n elements, you need to print either the first or the last element of the array. You're probably thinking, "Is this even a problem? I can just print the first element by writing `arrayName[0]`." (Let me know in the comments how to print the last element.) You're absolutely correct! This is exactly how we'll solve the problem.

I'll solve it using JavaScript, as I will in future posts. Feel free to solve it using your preferred programming language.

#### Solution:

**JavaScript code:**

```javascript
const students = ["Jhon", "Nozibul", "Islam", "Maria", "Samira"];

function printFirstElement(array) {
  console.log(array[0]);
}

printFirstElement(students);
```

**Now, the question is, what is the time complexity of this program?**

Answer: **The time complexity of this program is O(1) (Big O of one).**

**O(1) is known as a constant time algorithm.**

Now, let's look at some operations that each take constant time to execute:

- Declaring a variable, e.g., `var name = "John";`
- Performing an operation, e.g., `var a = 10; var b = 20; var c = a + b;`
- Calling a function, e.g., `myFunction();`

For each of these tasks, the time taken is O(1), meaning constant time. If a program consists of several such constant-time operations, we can say that the overall time complexity of the program is O(1).

If we want to visualize the time complexity of this program through a graph, the graph representation will look like this:

#### Let’s look at another program:

**JavaScript code:**

```javascript
var name = "Jhon"; // O(1)
console.log(name); // O(1)

function logFirstItem(array) {
  console.log(array[0]); // O(1)
}

logFirstItem([2, 4, 6]); // O(1)

// Total complexity:
// O(1) + O(1) + O(1) + O(1)
// = O(4)
// = O(1)
// So, the final complexity is O(1)
```

If the time complexity of a program is `O(4)`, `O(100)`, or `O(1000)`, the key point is that it does not depend on the input size. For instance, in this program, regardless of the size of your array, the time complexity will always be `O(4)`. Therefore, this algorithm is referred to as a constant time algorithm.

**Such constant time complexities, like O(4), O(100), or O(1000), can simply be written as O(1), meaning the complexity of this program is constant.**
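To see that constant-time work really is independent of the input size, here is a small sketch (my addition, not from the post). The hypothetical function `constantWork` below performs the same fixed number of operations whether the array has 3 elements or 5 million:

```javascript
// Hypothetical example: a fixed number of O(1) operations,
// regardless of the array's length.
function constantWork(array) {
  const first = array[0];                // O(1): index access
  const last = array[array.length - 1];  // O(1): index access
  const sum = 1 + 2;                     // O(1): arithmetic
  return [first, last, sum];             // O(1): return
}

console.log(constantWork([10, 20, 30]));               // [10, 30, 3]
console.log(constantWork(new Array(5000000).fill(7))); // [7, 7, 3]
```

Both calls do the same four constant-time steps, so the function is O(4), which simplifies to O(1).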

#### Graph Representation:

So, now I hope you can understand why the time complexity of our first program is O(1).

**We discussed the constant time algorithm, Big O(1). Now, we will talk about other time complexities, Insha'Allah.**

First, let's discuss a programming problem. Suppose there are `1,000 student names` in an array, and you need to print the name of each student from that array. You might be smiling a little, thinking, "What's the problem? I will first get the length of the array and then run a loop to print all the data." Your idea is completely correct. We will solve the problem this way. However, to make the program easier to understand, we will use the names of just 5 students for now.

**Let’s take a look at the following program:**

**JavaScript code:**

```javascript
let students = ["Jhon", "Nozibul", "Islam", "Maria", "Samira"];
let length = students.length;

for (let i = 0; i < length; i++) {
  console.log(students[i]);
}
```

**What is the time complexity of this program?**

Answer: **Big O(n)**

Now, the question is, how is it Big `O(n)`? If you observe the program carefully, you will see that the students array contains a total of 5 elements, so here n equals 5. Whether there are `1,000` or `100,000` elements, regardless of how many there are, if we print each element using iteration (iteration means touching each item in the array from start to finish using a loop), then the time complexity of that program will be Big `O(n)`.

Now, let's think about the same program in another case. Suppose we want to check if a student named "Islam" exists in the array. If we find our desired data, we will break the loop. After that, even if there are 1 million data points, it doesn't matter because we have found our desired data.

Let's observe the following program:

**JavaScript code:**

```javascript
let students = ["Jhon", "Nozibul", "Islam", "Maria", "Samira"];
let length = students.length;

for (let i = 0; i < length; i++) {
  if (students[i] === "Islam") {
    console.log("Found!");
    break;
  }
}
```

**What is the time complexity of this program?**

Answer: **Big O(n)**

Now, **is there a bit of confusion?** This time we didn't need to run the loop all the way to n; we found our data before that, and the loop didn't run any further. So how can the time complexity of this program be Big `O(n)`?

Let's try to understand this better. When we solve a programming problem, the time and space complexity of that program can be categorized into `three types`:

- Best Case
- Worst Case
- Average Case

The mathematical notation for Best Case, Worst Case, and Average Case is as follows:

- Best Case → Omega (Ω)
- Worst Case → Big O
- Average Case → Theta (θ)

#### When and Why are These Cases Used?

**Best Case → Omega (Ω):** This represents the lower bound, meaning it is used to express the minimum time complexity of a program.

**Worst Case → Big O:** This represents the upper bound, meaning it is used to express the maximum time complexity of a program. A program's time complexity will not exceed this value in its worst-case scenario (it can be less, but not more).

**Average Case → Theta (θ):** This represents the combined upper and lower bounds. If the upper and lower bound time complexities of a program are the same, we use Theta (θ) to express that time complexity mathematically.

`Note:` For simplicity, I have only used the term "time complexity" here. These cases apply to both time complexity and space complexity.
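To make these cases concrete, here is a small sketch (my addition, not from the post) that counts how many comparisons a linear search actually makes; `linearSearchSteps` is a hypothetical helper written just for this illustration:

```javascript
// Hypothetical helper: linear search that returns how many
// comparisons were needed before stopping.
function linearSearchSteps(array, target) {
  let steps = 0;
  for (let i = 0; i < array.length; i++) {
    steps++; // one comparison per element visited
    if (array[i] === target) {
      return steps; // found: stop early
    }
  }
  return steps; // not found: we touched all n elements
}

const students = ["Jhon", "Nozibul", "Islam", "Maria", "Samira"];
console.log(linearSearchSteps(students, "Jhon"));   // 1 (best case: Ω(1))
console.log(linearSearchSteps(students, "Samira")); // 5 (worst case: n comparisons, O(n))
```

Searching for the first element is the best case (one comparison), while searching for the last element, or for a name that isn't there at all, is the worst case: all n elements must be checked.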

**Let's return to our previous discussion. Why is the time complexity of our earlier program Big O(n)? Even though we found our desired data at the 3rd position, what is the worst-case scenario? Our desired data could be at the very last position, making the maximum time complexity Big O(n). Programmers should always consider what the worst-case scenario of the algorithm they develop could be. Big O notation is used to express the worst case of a program.**

In simple terms, if we iterate through n number of elements, then the time complexity of that program will be Big O(n).

**Graph representation:**

**An algorithm with a time complexity of Big O(n) is called a linear time algorithm.**

Now, let's talk about a new programming problem. How would all ordered pairs of the letters A, B, and C look? They would be: AA, AB, AC, BA, BB, BC, CA, CB, CC. Now we will solve a problem like this.

Let’s look at the following program:

**JavaScript code:**

```javascript
let arr = ['a', 'b', 'c', 'd', 'e', 'f'];
let lengthOfArray = arr.length;

for (let i = 0; i < lengthOfArray; i++) {
  for (let j = 0; j < lengthOfArray; j++) {
    console.log(arr[i], arr[j]);
  }
}
```

Does the above program seem too difficult? If you understand nested loops, it shouldn't seem too hard. I hope you know about nested loops.

**What is the time complexity of this program?**

Answer: **Big O(n²)**

#### How?

If we look closely at the above program, we can see that in the first line we declared an array. What is the time complexity of declaring a variable? It's Big `O(1)`, right? In the second line, we calculate the length of the array and store it in a variable, and its time complexity is also Big `O(1)`.

Now, let's analyze the two lines where we have the nested loops. Pay close attention to how the time complexity works here. When the first loop runs once, the second loop runs n times. When the first loop runs a second time, the second loop again runs n times. This pattern continues: for each of the n iterations of the first loop, the second loop runs n times. This means the two loops' time complexities are multiplicative. Therefore, we can conclude that the time complexity of the loops is Big `O(n * n) = Big O(n²)`.

**So, what is the overall time complexity of the program now?**

**Code Explanation:**

```javascript
let arr = ['a', 'b', 'c', 'd', 'e', 'f']; // O(1)
let lengthOfArray = arr.length; // O(1)

for (let i = 0; i < lengthOfArray; i++) {
  for (let j = 0; j < lengthOfArray; j++) {
    console.log(arr[i], arr[j]);
  }
} // O(n * n)

// Total complexity:
// O(1) + O(1) + O(n * n)
// = O(2) + O(n²)
// = O(n²)
// So, the final complexity is O(n²)
```

**Array Declaration:** `let arr = ['a', 'b', 'c', 'd', 'e', 'f'];` is a constant time operation: O(1).

**Length Calculation:** `let lengthOfArray = arr.length;` is also a constant time operation: O(1).

**Nested Loops:** The outer loop runs n times (where n is the length of the array). For each iteration of the outer loop, the inner loop also runs n times. Thus, the total operations for the nested loops: O(n * n) = O(n²).

**Total Complexity:** Combining these: `O(1) + O(1) + O(n²) = O(2) + O(n²) = O(n²)`
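As a quick sanity check (my addition, not from the post), we can count the operations the nested loops actually perform and confirm they grow as n²; `countPairOperations` is a hypothetical counter written for this purpose:

```javascript
// Hypothetical counter: how many (i, j) pairs do the nested loops visit?
function countPairOperations(n) {
  let count = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      count++; // one operation per (i, j) pair
    }
  }
  return count;
}

console.log(countPairOperations(3)); // 9  (3²)
console.log(countPairOperations(6)); // 36 (6²)
```

Doubling the array size quadruples the number of operations, which is exactly the n² growth that Big `O(n²)` describes.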

**Graph Representation**

The final time complexity of the above program, after simplification, is Big `O(n²)`. In the next part, we will try to learn which programming language to study for data structures and algorithms!
