In plain English, Big O is a way to describe how complex an algorithm is using algebraic terms. It is a fundamental tool that software engineers and computer scientists use to analyze the cost of an algorithm. When using Big O, we express the runtime of an algorithm in terms of how quickly it grows relative to the input, as the input gets larger. When calculating Big O, we default to assuming the worst-case scenario of the algorithm, because the worst case is when performance matters the most.
Constant time:
An algorithm that runs in constant time ( or O(1) ) takes the same amount of work regardless of its input. Simply put, no matter the size of the input, the function will always require one step.
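As a rough sketch (the function name here is just an example I made up), accessing an element by index is a classic constant-time operation:

```python
def get_first_item(items):
    # Looking up an element by index takes the same single step
    # whether the list holds 10 items or 10 million: O(1).
    return items[0]
```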
Linear time:
An algorithm that runs in linear time ( or O(n) ) grows at a steady pace, in direct proportion to the size of the input data set. Here n is the number of items we are given, for example the length of an array. Simply put, the number of items in the array is the number of times we print, or the number of steps the function has to take, as in the sketch below.
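A minimal illustration (again, the function name is my own): a single loop that touches each item once is linear in the size of the input.

```python
def print_all_items(items):
    # The loop body runs once per element, so the number of steps
    # grows in direct proportion to len(items): O(n).
    for item in items:
        print(item)
```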
Quadratic time:
An algorithm that runs in quadratic time ( or O(n²) ) usually involves nested iteration over the data set. Simply put, the performance of this algorithm is proportional to the square of the size of the input data set.
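One way to picture this (an illustrative sketch, not taken from any particular library): printing every pair of items requires a loop inside a loop, so the work grows as n × n.

```python
def print_all_pairs(items):
    # For each element we loop over every element again,
    # so the total number of steps is n * n: O(n²).
    for a in items:
        for b in items:
            print(a, b)
```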
There may be times when we want to optimize an algorithm to use less memory instead of (or in addition to) using less time. Talking about memory cost is similar to talking about time cost: we look at the total size (relative to the size of the input) of any new variables we allocate. Most of the time when we talk about memory cost, we're talking about additional space, so there is no need to include the space taken up by the inputs. There are always trade-offs when deciding whether to save memory or time, and it's up to the creator to choose which is more important in each scenario.
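To make the space side concrete, here is a small sketch (both function names are hypothetical): the first function allocates only one extra variable no matter how big the input is, while the second builds a new list that grows with the input.

```python
def sum_items(items):
    # Only one extra variable is allocated regardless of input size:
    # O(1) additional space.
    total = 0
    for item in items:
        total += item
    return total

def copy_items(items):
    # The new list grows with the input,
    # so this uses O(n) additional space.
    duplicate = []
    for item in items:
        duplicate.append(item)
    return duplicate
```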
Big O is a complex topic, and I have tried my best to explain its basic fundamentals. I hope you learned a little and enjoyed the read. Thanks for stopping by!