Just like writing your very first for loop, understanding time complexity is an integral milestone to learning how to write efficient complex progr...
Your article has a few common misunderstandings about what Big O is measuring.
The statement, "Big O notation gives us our algorithm's approximate run time in the worst case", is not true, not even in it's common use in programming. If it were, then a HashMap insertion would be
O(n^2)
instead of the commonly quotedO(1)
, which results from measuring an average complexity over a good data set, not the worst case. Similarly, adding an item to a vector is often quoted asO(1)
, which is it's amortized time, the worst-time beingO(n)
.It's often misused in programming to mean a tight bound, when it is actually an upper-bound. A simple for-loop can be
O(n)
, butO(n^3)
is technically also an upper bound for it. There's a related Ω for lower-bound, and Θ for a tight bound. When we measure an algorithm we typically want the Θ, not O, but this is commonly misused, probably just because we don't have Θ on our keyboards. Big-Oh is a good way to specify a limit, whereas Θ is a good way to measure it.Your initial diners solution is
O(n)
, notO(n^2)
. Adding a constant loop, like the 0..3 in this case, does not alter the complexity. That would be3*N
which simplifies toO(n)
. That is, ignoring theslice
andappend
, which have an unknown complexity here.Thank you for the response!
Thank you for the response!
You're right that the way the term Big O is used in an informal context isn't completely mathematically correct. As I was writing this article, I had a lengthy discussion with someone who has an advanced mathematics background. We discussed the differences between upper and tight bounds, and amortized and worst-case complexities specifically. Ultimately, the goal of this article is to present the concepts of time complexity in a way that programmers would encounter in practical, daily terms. In terms of the wording, see the Wikipedia entry for Big O notation, which states:
"For example, if T(n) represents the running time of a newly developed algorithm for input size n, the inventors and users of the algorithm might be more inclined to put an upper asymptotic bound on how long it will take to run without making an explicit statement about the lower asymptotic bound."
Ultimately, in practical use, the colloquial "misuse" of Big O suffices. I found that going further into asymptotic analysis in this article was not helpful from a pragmatic standpoint.
I probably should have mentioned in the article that these things were considered, but not included, so thank you for the feedback! I'll make better note of those things in the future.
I understand that the casual use in coding is different from the math, but saying that it is the "worst case" is also wrong for common coding.
We tend to consider hash maps O(1), vector add as O(1), map insert as O(log N), quicksort as O(N * log N), but none of those are true in the "worst case". We use the average input case, typical case, and amortized case on a regular basis, likely more commonly than the worst case when it comes to collections and common algorithms.
It is not my domain of expertise, but assuming I am not wrong, maybe it's worth adding something like this: Big O measures the upper bound on a function. Big O can, but does not necessarily, refer to the running time in the worst case. We can determine Big O for various scenarios, including the best case, worst case, typical case, etc.
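For example, here is a rough Go sketch (mine, not from the article or the original comment) of why appending to a slice is usually described as amortized O(1): a single append is O(n) when the backing array has to grow and be copied, but reallocations happen so rarely that the average cost per append stays constant.

```go
package main

import "fmt"

// Append n items to a slice and count how often the backing array is
// reallocated (visible as a change in capacity). Each reallocation is an
// O(n) copy, but they are rare enough that the cost per append averages
// out to O(1) -- the amortized case, not the worst case.
func main() {
	var s []int
	reallocs := 0
	lastCap := cap(s)
	for i := 0; i < 1000000; i++ {
		s = append(s, i)
		if cap(s) != lastCap {
			reallocs++
			lastCap = cap(s)
		}
	}
	fmt.Printf("appends: %d, reallocations: %d\n", len(s), reallocs)
}
```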
Update: I re-read the part of the sentence about this in the article:
I think @vickylai meant that the function would not have worse performance than the given upper bound, so it was a "worst case" in that sense.
However, the input was not specified. For example, if we have a (contrived) function that is O(n^2) for any even number as input, but O(n) for odd-numbered input, the "worst case" in that sense is when it receives an even number and the "best case" is when it receives an odd number.
The use of the term "worst case" refers to two different things in each of the two paragraphs above.
It doesn't always make a difference, but in practice we can't study the big O of any algorithm without some context around input. It's worth keeping that in mind.
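To illustrate the point about input context, here is a small Go sketch of the contrived even/odd function described above (the code itself is just an illustration): its cost is quadratic for even inputs and linear for odd ones, so "the worst case" depends entirely on which inputs you consider.

```go
package main

import "fmt"

// contrived does roughly n*n steps for even n (O(n^2)) and n steps for
// odd n (O(n)). Whether its "worst case" is quadratic or linear depends
// on which inputs we allow it to receive.
func contrived(n int) int {
	steps := 0
	if n%2 == 0 {
		for i := 0; i < n; i++ {
			for j := 0; j < n; j++ {
				steps++
			}
		}
	} else {
		for i := 0; i < n; i++ {
			steps++
		}
	}
	return steps
}

func main() {
	fmt.Println(contrived(1000)) // even input: 1000000 steps
	fmt.Println(contrived(1001)) // odd input:  1001 steps
}
```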
Hello there!
A small mistake I've seen in your post is in the example of giving forks. The initial program is O(n), not O(n^2). That's because the first loop is constant (it is not looping over n).
I suggest correcting that, because that example could lead someone who has just started in algorithm analysis to think that every program with, let's say, k nested loops will be at least O(n^k), which is not true (see the sketch after this comment).
Apart from that, great introduction to this awesome subject that is algorithm analysis!
Greetings :)
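As a counterexample for the k-nested-loops intuition, here is a small Go sketch (not from the post): two nested loops that still do only O(n) total work, because the inner loop advances the same index the outer loop resumes from.

```go
package main

import "fmt"

// countRuns counts the runs of equal adjacent values. The loops are
// nested, but both advance the same index j, which moves from 0 to
// len(xs) exactly once over the whole call. Total work is about 2n
// steps -- O(n), not O(n^2), despite the nesting.
func countRuns(xs []int) int {
	runs := 0
	j := 0
	for j < len(xs) {
		runs++
		start := xs[j]
		for j < len(xs) && xs[j] == start { // shares j with the outer loop
			j++
		}
	}
	return runs
}

func main() {
	fmt.Println(countRuns([]int{1, 1, 2, 2, 2, 3})) // prints 3
}
```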
I really enjoyed this article - you have made these concepts a little more accessible. I have always had a hard time visualizing O(log n) and your explanation - reduces the size of the input at every step - really did the trick for me.
Thank you!
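For anyone else who wants to see the halving in code, here is a minimal binary search sketch in Go (not taken from the article): each iteration throws away half of the remaining range, so the number of steps grows with log2(n).

```go
package main

import "fmt"

// binarySearch looks for target in a sorted slice. Each iteration halves
// the remaining range [lo, hi), so for n elements it takes at most about
// log2(n) iterations -- O(log n).
func binarySearch(xs []int, target int) int {
	lo, hi := 0, len(xs)
	for lo < hi {
		mid := (lo + hi) / 2
		switch {
		case xs[mid] == target:
			return mid
		case xs[mid] < target:
			lo = mid + 1
		default:
			hi = mid
		}
	}
	return -1 // not found
}

func main() {
	xs := []int{1, 3, 5, 7, 9, 11}
	fmt.Println(binarySearch(xs, 7)) // prints 3
}
```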
You're welcome! :) And I'm glad.
Great article, took me back to my DSA class. Oh how I miss not worrying about what Microprocessor Architecture to mug up next
I really appreciate the time in writing an article like this, while keeping it creative and interesting.
I am a newbie here in analyzing the complexity of algorithms. But aren't the last 2 giveForks examples O(n) complexity? From the referenced Wikipedia article, there are some examples which have constant time complexity despite having multiple for loops.
Regarding the time improvement that you see, might it be because the second version of the code is more cache-friendly?
Hi Vicky, great article! However I think your pizza examples still have O(n) complexity although they have nested loops. The inner loops can be assumed to run in constant time O(1) as the number of slices per pizza and the number of pizzas per box is more or less constant.
Great post. Detailed yet simple to follow. Thanks for the write up.
I try and make things accessible. Glad that came across! :)
Hi Vicky, thanks for that article.
I always had a hard time understanding that complex notion of algorithm complexity.
Your article is very detailed and I think that I am going to read it a few more times to better understand all of this.
Great article. And the cover image is spot on.
Haha. Thanks Kasey!
This is excellent!!
I loved this post, thanks a lot Vicky :D
Exceptional explanation! I'm adding this to my list of helpful references.
(Also, you had me at TARDIS.)
Thank you Vicky for this write up, you've made big O very accessible to me 😃.
That's fantastic news :D
Fantastic post Vicky! The food analogies were quite awesome, although I was disappointed there wasn't one for binary search :).
Fantastic refresher! It's been a while since algorithms class. Easy to follow food examples that we can all relate to :D
Vicky this is great work thank you!
Just read it (at last :D), thanks for the update and great writing style!
Great explanation!
And, FYI, in Italy it's pretty common for takeaway places to serve unsliced pizza. Often you have to explicitly ask for a sliced pizza. ;-)
:O That's good to know for when I make it to Italy one day!
Thanks! :)
Thanks for the article! Very accessible and easy to read :)
Great Post! Good explanation, thank you
Wonderful article
Thanks Chris! :)
This is amazing! Thanks for taking the time to write this up. You have described a very complex topic in quite understandable terms :-)
Such a neat explanation of the Big O notation. Kudos!
well explained
Glad you thought so!
Thank you so much. I'd heard about its importance in interview questions but didn't know where to begin. This is brilliant! :)
Glad you find it useful Lorna! :)