As I prepare for technical interviews, I have learned how important it is to include data structures and algorithms in my daily study schedule. Big O notation is a subject that comes up constantly in tech interviews. I'll be going over some of the basic concepts I've encountered as I begin wrapping my head around Big O.
Big O is the language used to describe how efficiently an algorithm runs. An algorithm's efficiency can be measured by its runtime and by the amount of space it requires to execute.
Runtime is measured by how the time an algorithm takes grows as the input gets larger or approaches infinity, rather than by a number of seconds on a particular machine.
There are some common runtimes you'll encounter as you learn about Big O. Here are some I've learned about. N or n is the size of the input.
O(1) means the algorithm will always execute in the same amount of time or space no matter what the input size (n) is. The runtime is constant.
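For example, accessing a list element by index is constant time. Here's a quick sketch in Python (the function name is just illustrative):

```python
def get_first(items):
    # Constant time: one operation, no matter how long the list is.
    return items[0]

print(get_first([3, 1, 2]))  # → 3
```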
O(N) or O(n) means the runtime grows in direct proportion to the size of the input n. The runtime will increase linearly as size n increases.
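A classic linear-time example is scanning a list once, say to find the largest value. A minimal Python sketch:

```python
def find_max(numbers):
    # Linear time: we visit each of the n elements exactly once.
    largest = numbers[0]
    for n in numbers:
        if n > largest:
            largest = n
    return largest

print(find_max([2, 9, 4]))  # → 9
```

Doubling the list's length roughly doubles the work.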
O(N²) means the runtime grows in direct proportion to the square of the input size. This runtime is common in algorithms with nested loops over the same input.
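A simple nested-loop example, sketched in Python, is checking a list for duplicates by comparing every pair of elements:

```python
def has_duplicate(items):
    # Quadratic time: the nested loops compare every pair,
    # roughly n * n / 2 comparisons in the worst case.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

print(has_duplicate([1, 2, 3, 2]))  # → True
```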
O(2^N) means the runtime doubles with each additional element in the input. Exponential runtimes grow extremely fast and often show up in naive recursive algorithms.
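The textbook example of exponential growth is a naive recursive Fibonacci, where each call branches into two more calls:

```python
def fib(n):
    # Exponential time: fib(n) spawns two recursive calls,
    # so the call tree roughly doubles with each increment of n.
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # → 55
```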
O(log N) means the runtime grows in proportion to the logarithm of the input size: doubling the input adds only one extra step.
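Binary search is the standard logarithmic example, since each step cuts the remaining search range in half. A sketch in Python (the list must already be sorted):

```python
def binary_search(sorted_items, target):
    # Logarithmic time: each iteration halves the search range.
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # target not found

print(binary_search([1, 3, 5, 7, 9], 7))  # → 3
```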
Algorithms are typically described in their worst-case scenario. This means considering the longest amount of time or most space the algorithm would take to execute.
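Linear search illustrates why the worst case matters: if the target happens to be first, it finishes immediately, but Big O describes the scenario where it's last or missing. A quick Python sketch:

```python
def linear_search(items, target):
    # Best case: target is the first element, one comparison.
    # Worst case (what Big O describes): target is last or absent,
    # requiring all n comparisons, so this is O(n).
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

print(linear_search([4, 5, 6], 6))  # → 2
```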
There is much more to cover regarding Big O, data structures, and algorithms.