### What is Big O?

Big O notation is a mathematical tool commonly used in computer science for describing the complexity of algorithms. It's often used to compare the efficiency of different approaches to a problem. More specifically, Big O describes the "worst-case scenario" for an algorithm's runtime and space usage as the input size grows.

### Time Complexity

Instead of measuring the actual running time of an algorithm, we use time complexity, which is a method of determining how many times an algorithm's statements execute as a function of the input size.

### Most common time complexities

*The n in O() represents the input size.*

**O(1)** — Constant runtime. No matter the input size, the code executes in the same amount of time. For example, if a group of programmers all decide to say hello world at the same time, it would cost the same amount of time as only one programmer saying hello world. The following code executes in constant time:
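As a minimal sketch in Java (the method name `firstElement` is illustrative, not from the original post), reading one element of an array takes the same time whether the array holds three elements or three million:

```java
public class ConstantTime {
    // Returns the first element of the array. The work done does not
    // depend on the array's length, so this runs in O(1) time.
    static int firstElement(int[] arr) {
        return arr[0];
    }

    public static void main(String[] args) {
        System.out.println(firstElement(new int[]{7, 8, 9})); // prints 7
    }
}
```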

**O(n)** — Linear runtime. Linear means the code executes n times. For example, picture a long line of programmers waiting for a job interview. To find out all of their names, you would have to ask each programmer individually. The following code executes in linear time:
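A simple Java sketch of linear time (the method name `sum` is illustrative): the loop body runs once per element, so n elements mean n executions:

```java
public class LinearTime {
    // Adds up every element. The loop body runs once per element,
    // so for an array of n elements this is O(n) time.
    static int sum(int[] arr) {
        int total = 0;
        for (int x : arr) {
            total += x; // executes n times
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{1, 2, 3, 4})); // prints 10
    }
}
```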

**O(n²)** — Quadratic time. Quadratic means the number of executions increases in relation to the square of n. This is not an ideal runtime, as it grows quickly as the input size increases. An example in code would be a nested for loop as follows:
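A Java sketch of the nested-loop case (`countPairs` is an illustrative name): for each of the n elements the inner loop scans all n elements again, so the inner statement runs n × n = n² times:

```java
public class QuadraticTime {
    // For each of the n elements, the inner loop runs n times,
    // so the innermost statement executes n * n = n² times: O(n²).
    static int countPairs(int[] arr) {
        int count = 0;
        for (int i = 0; i < arr.length; i++) {
            for (int j = 0; j < arr.length; j++) {
                count++; // executes n² times
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countPairs(new int[]{1, 2, 3})); // prints 9
    }
}
```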

**O(log n)** — Logarithmic time. This means the runtime increases in relation to the logarithm of n (the input size). A real-world example of this runtime would be searching a phonebook for a specific number. You open the phonebook halfway, and if you don't find the number, you repeat the process of halving only the section that still needs to be checked until you eventually find the number you're looking for. This is binary search, a common algorithm that runs in logarithmic time.
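The phonebook idea maps directly to binary search over a sorted array. A standard Java sketch: each iteration halves the remaining search range, so at most about log₂(n) iterations run:

```java
public class LogarithmicTime {
    // Classic binary search over a sorted array. Each pass halves the
    // remaining range, so at most ~log₂(n) iterations run: O(log n).
    static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2; // "open the book halfway"
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1; // keep right half
            else hi = mid - 1;                      // keep left half
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] pages = {1, 3, 5, 7, 9};
        System.out.println(binarySearch(pages, 7)); // prints 3
    }
}
```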

### Drop the Constants and Non-Dominant Terms

When determining a Big O runtime, the constants should be dropped. For example, a runtime expression like *O(n + 1 + 1 + 1)* boils down to *O(n)*, because Big O describes how the runtime grows as n gets large, and constant terms don't affect that growth rate. Non-dominant terms should also be dropped: for example, *O(n² + n)* becomes *O(n²)*, since the n² term dominates for large n.
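To make the constant-dropping concrete, here's a small Java sketch (the method name `workCount` is invented for illustration) that performs three constant-time statements plus n loop iterations, i.e. n + 3 statement executions in total, which simplifies to O(n):

```java
public class DropConstants {
    // Three constant-time statements plus n loop iterations:
    // n + 3 statement executions total, which is O(n + 3) = O(n).
    static int workCount(int n) {
        int ops = 0;
        ops++; // constant work
        ops++; // constant work
        ops++; // constant work
        for (int i = 0; i < n; i++) {
            ops++; // the n iterations dominate as n grows
        }
        return ops;
    }

    public static void main(String[] args) {
        System.out.println(workCount(10)); // prints 13
    }
}
```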

### Space Complexity

In Big O, space complexity describes the total amount of memory used by an algorithm as a function of its input size n. It is expressed the same way as time complexity; for example, an array of n elements would be expressed as *O(n)* space.
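As a Java sketch (`buildArray` is an illustrative name), a function that allocates an array of n elements needs memory proportional to n, so it uses O(n) space:

```java
public class LinearSpace {
    // Allocates an array of n elements. The memory used grows in
    // proportion to n, so this function requires O(n) space.
    static int[] buildArray(int n) {
        int[] result = new int[n]; // n ints allocated
        for (int i = 0; i < n; i++) {
            result[i] = i;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(buildArray(5).length); // prints 5
    }
}
```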

### Analyzing Space Complexity

To determine the space complexity of this code, we can list the size in bytes of the variables used.

*int = 4 bytes, double = 8 bytes*
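The code being analyzed did not survive here; a hypothetical Java snippet consistent with the byte count of 4n + 4 + 8 + 4 (an `int` array of n elements, an `int`, a `double`, and an `int` loop counter, with all names invented for illustration) could look like:

```java
public class SpaceAnalysis {
    // Space used, variable by variable:
    //   values : int array of n elements -> 4n bytes
    //   n      : int                     -> 4 bytes
    //   total  : double                  -> 8 bytes
    //   i      : int loop counter        -> 4 bytes
    // Total: 4n + 4 + 8 + 4 bytes, which is O(n) space.
    static double average(int[] values) {
        int n = values.length;
        double total = 0.0;
        for (int i = 0; i < n; i++) {
            total += values[i];
        }
        return total / n;
    }

    public static void main(String[] args) {
        System.out.println(average(new int[]{2, 4, 6})); // prints 4.0
    }
}
```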

The space for this example code would be measured as 4n + 4 + 8 + 4 bytes. Since the n term is dominant, this evaluates to O(n) space.

#### Additional Big O Resources

https://en.wikipedia.org/wiki/Big_O_notation

What is Big O Notation Explained: Space and Time Complexity

https://www.geeksforgeeks.org/understanding-time-complexity-simple-examples/

*Originally published at* *https://coderjay06.github.io* *on October 15, 2020.*
