# Big O Notation

### Allen Shin ・4 min read

In my last blog post, I talked about a particular problem that involved checking 3 different indices to test a condition on 3 different numbers at the same time. The solution I proposed ended up using a for loop, within a for loop... within a for loop (I heard you like for loops). Let's take another look at the structure of the problem:

```
function pythagoreanCheck(array){
  for(let i = 0; i < array.length - 2; i++){
    for(let j = i + 1; j < array.length - 1; j++){
      for(let k = j + 1; k < array.length; k++){
        // condition for returning true goes here
      }
    }
  }
}
```

This function takes in an array and checks every unique combination of 3 numbers. As I mentioned before, this was an extremely inefficient way to solve the problem, even though it technically works. The specific way to talk about this kind of inefficiency is called Big O notation.

The 'notation' looks like this: O(n)

The O stands for the "order of" the function, and the n stands for the size of the input. Together they describe how the time it takes for an algorithm to run to completion scales with that input.

To look at some basic examples for familiarity, an operation with O(1) runtime is one that does a constant amount of work no matter the size of the input. That would be something like a basic console log, or any single operation. The size of the input is irrelevant to the runtime.
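As a minimal sketch (the function name here is mine, just for illustration), a constant-time operation might look like this:

```javascript
// O(1): reading the first element is a single operation,
// regardless of whether the array has 3 items or 3 million.
function firstElement(array){
  return array[0]
}

console.log(firstElement([1, 2, 3])) // prints 1
```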

Next we have O(n), which would be the runtime of a single for loop. If we have a for loop that iterates over the entire input, the runtime is proportional to the size of the input: an input of size n gives a runtime proportional to n. The important concept to remember here is that O(n) is measuring the runtime **in proportion to the input**.
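One rough way to see this proportionality (countOperations is a made-up helper for this sketch) is to count how many times a single loop's body runs:

```javascript
// O(n): the loop body runs once per element, so the total
// work grows in direct proportion to the input's length.
function countOperations(array){
  let operations = 0
  for(let i = 0; i < array.length; i++){
    operations++ // one unit of work per element
  }
  return operations
}

console.log(countOperations([1, 2, 3, 4, 5])) // prints 5
```

Double the array's length and the count doubles with it, which is exactly what "proportional to the input" means.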

If we take a look back at our previous example, we can see this proportionality in play. Since we have 3 nested loops, the runtime grows with the cube of the input size. With an input of size 100, we would have to run 100 operations for each of 100 operations for each of the 100 inputs, or 1,000,000 operations in total. This gives a Big O notation of O(n^3). It would scale down to O(n^2) if the function had only one nested loop.

We considered an input of size 100 because the next point to consider is that Big O notation describes the growth of the runtime **as the input grows arbitrarily big**. The difference between O(n) and O(n^2) runtimes is barely felt with an input of size 2, but it becomes enormous when the input grows to 100 or 1000. This is the kind of "worst case scenario" that we want to capture when comparing different operations.
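To make that growth concrete, here is a small sketch (compareGrowth is a made-up helper) that prints the raw operation counts side by side:

```javascript
// Compare how a single loop (n operations) and a nested loop
// (n * n operations) scale as the input size grows.
function compareGrowth(n){
  return { linear: n, quadratic: n * n }
}

console.log(compareGrowth(2))    // { linear: 2, quadratic: 4 }
console.log(compareGrowth(100))  // { linear: 100, quadratic: 10000 }
```

At n = 2 the two are close; at n = 100 the nested version is doing 100 times more work.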

To cover this topic a bit more, let's consider this example:

```
let nums = [1,2,3,4,5,6,7,8,9,10]

function twoLoops(nums){
  for(let i = 0; i < nums.length; i++){
    console.log(nums[i])
  }
  for(let i = 0; i < nums.length; i++){
    console.log(nums[i])
  }
}
```

We have two for loops running in this function, but in this example they are not nested. What would our Big O notation be in that case? Since two separate operations run for every input element, technically we get an O(2n) runtime. However, one thing to remember about Big O is that since we only care about how the runtime changes as the input gets arbitrarily large, constant multiples are ignored. With 100 inputs, this function performs about 200 operations, whereas a nested version would perform 10,000. In light of this, the factor of 2 is insignificant, so this function is actually O(n) runtime.

This dropping of constants can be applied to less significant terms in general as well. For example, a function like this one provides another interesting case:

```
let nums = [1,2,3,4,5,6,7,8,9,10]

function anothaOne(nums){
  for(let i = 0; i < nums.length; i++){
    for(let j = 0; j < nums.length; j++){
      console.log(nums[i], nums[j])
    }
  }
  for(let i = 0; i < nums.length; i++){
    console.log(nums[i])
  }
}
```

This function gives us a nested for loop plus another, separate iteration alongside it. That adds up to a runtime of O(n^2 + n), but just as we can drop constants, we can also drop the less significant n term because, again, as the input grows arbitrarily large, the n becomes insignificant next to the n^2. So this function is O(n^2).

Understanding Big O notation may seem insignificant at earlier levels of coding, but it is very relevant for applications that deal with large amounts of input, and using this knowledge can be the difference between an application users will use and one they won't. For anyone looking to find a job in programming, this is also an important topic to understand for the interview process as well as on the job. I expect I will continue with more posts on this topic with more specific examples.