- This series contains all my notes on anything related to computer science.
So, if you are like me, you probably know a little something about
Big-O notation. You can google Big-O notation and get plenty of results. However, I have found the definitions to be lacking. They usually fall into one of two categories:
1) too academic, with a heavy emphasis on the mathematics
2) too simple, without going into any detail
So hopefully this blog post will connect those two worlds
- I stumbled across the book
Introduction to Algorithms: A Creative Approach by
Udi Manber, and all this information comes from chapter 3 of that book.
Big-O notation allows us to perform an algorithm analysis to predict the behaviour of an algorithm without implementing it on a specific machine. It is important to note that it is impossible to predict the exact behaviour of an algorithm, as there are simply too many influencing factors. Big-O simply allows us to get an approximation of the algorithm's running time.
With Big-O we use the running time of algorithms for comparison. To be more specific, we ignore constant factors and machine details and focus on how the running time grows as the input size goes to infinity. So the result of Big-O indicates how long the algorithm in question is expected to run for a particular input. The number of different possible inputs is enormous, and most algorithms behave differently for different inputs (that is how we get best and worst running-time scenarios). To simplify this wide range of inputs, Big-O focuses on the worst case. So when we see an algorithm described as O(n), we can assume that, over the whole range of inputs this algorithm can run on, its worst-case running time is O(n).
Why focus on worst-case inputs? Why not best case or average case? As it turns out, we don't focus on the best case because best-case inputs are usually trivial and not representative of the real world. The average case is avoided because it is hard to determine what exactly the
average case is, and if we are not careful the average case can contain many inputs that would never happen. The worst case is the main focus because an algorithm that achieves the best performance on its worst-case input also performs very well on all other inputs.
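To make the best-case/worst-case distinction concrete, here is a small sketch of my own (not from the book): a linear search makes a single comparison in the best case and n comparisons in the worst case.

```python
def linear_search(items, target):
    """Return (index, comparisons) for target, counting comparisons made."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

data = list(range(100))

# Best case: the target is the very first element -- one comparison.
_, best = linear_search(data, 0)

# Worst case: the target is absent -- we compare against all n elements.
_, worst = linear_search(data, -1)

print(best, worst)  # 1 100
```

Both runs are "linear search", yet their costs differ by a factor of n; Big-O's worst-case view reports the 100, not the 1.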
So to summarize,
O(log n) tells the reader that on the worst possible range of inputs, this algorithm's running time will grow like log n. However, that does not mean this algorithm cannot perform better on a different range of inputs. It is simply stating the worst-case scenario.
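As an illustration of a logarithmic worst case (again a sketch of my own, not from the book): binary search on n sorted items halves the remaining range on every probe, so even its worst input needs at most about log2(n) + 1 probes.

```python
import math

def binary_search(items, target):
    """Binary search on a sorted list, counting how many elements we probe."""
    lo, hi = 0, len(items) - 1
    probes = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1
        if items[mid] == target:
            return mid, probes
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, probes

n = 1024
data = list(range(n))

# Worst case: the target is missing, so the search keeps halving the
# range until it is empty -- at most floor(log2(n)) + 1 probes.
_, probes = binary_search(data, -1)
print(probes, "<=", math.floor(math.log2(n)) + 1)
```

For n = 1024 the worst case is bounded by 11 probes, while a linear scan would need 1024.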
- Technically speaking,
Big-O notation is a mathematical notation, which means it has a mathematical definition:
f(n) is O(g(n)) if there exists positive numbers c and N such that f(n) ≤ cg(n) for all n ≥ N
Which can be read as,
f is Big-O of g if there is a positive number c such that f is not larger than c times g for sufficiently large n's, that is, for all n's larger than some number N
This is really just fancy math talk stating that the
Big-O bound we can write for an algorithm depends on the constants c and N we choose. To see this mathematically, we can look at an example:
5n^2 + 15 = O(n^2), since 5n^2 + 15 ≤ 6n^2 for all n ≥ 4.
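We can sanity-check this inequality numerically (a quick sketch, not from the book), including the fact that it first holds at the threshold n = 4:

```python
# Check that 5n^2 + 15 <= 6n^2 holds for every n >= 4 (sampled up to 1000).
ok = all(5 * n**2 + 15 <= 6 * n**2 for n in range(4, 1001))
print(ok)  # True

# ...and that it fails just below the threshold: at n = 3,
# 5*9 + 15 = 60 is greater than 6*9 = 54.
print(5 * 3**2 + 15 <= 6 * 3**2)  # False
```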
- However, the
constants change when we bound by n^3 instead; there the inequality only holds for inputs 6 and above, resulting in:
5n^2 + 15 = O(n^3), since 5n^2 + 15 ≤ n^3 for all n ≥ 6.
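Again we can check numerically (a quick sketch) that the cubic bound only kicks in at n = 6:

```python
# 5n^2 + 15 <= n^3 fails at n = 5 (140 > 125)...
print(5 * 5**2 + 15 <= 5**3)  # False

# ...but holds from n = 6 onward (sampled up to 1000).
ok = all(5 * n**2 + 15 <= n**3 for n in range(6, 1001))
print(ok)  # True
```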
- You might now have the question: doesn't this mean that
Big-O notation is kind of vague? Yes, you are correct. The definition states only that the constants c and N must exist; it does not give us any details on their restrictions or how to calculate them. This is another reason why Big-O focuses on the worst-case scenario: beyond the threshold N, we know the bound will not change.
- Thank you for taking the time out of your day to read this blog post of mine. If you have any questions or concerns please comment below or reach out to me on Twitter.