What is our best-case scenario for this algorithm? If state is equal to danger, we will only perform one operation and return. That would be O(1). What if state is not equal to danger? Then we perform multiple operations and the order of alertColor() is n.
This is wrong. To claim that alertColor is O(n) in the worst case suggests that the function performance scales linearly with the size of the input—it doesn't. That's what Big-O (and other) notations really measure: How a routine's performance scales with the size of the input.
Even though you have potentially n or k or whatever number of if/else if statements in that function, that does not mean that the routine performs at O(n) complexity in the worst case.
Why? Because n is not variable—it remains constant regardless of how complex the input becomes. And hence the routine is really O(1).
Keep in mind that Big-O notation is about asymptotic performance. So if n, the number of if statements, is 5 or 1,000, it doesn't matter—that's a constant function of the input:
O(input size) = 1,000 = O(1)
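To make this concrete, here is a minimal sketch of what a function like alertColor might look like (the original snippet isn't shown, so the state names and return values here are assumptions). However many branches the chain has, it performs at most that fixed number of comparisons for any input:

```javascript
// Hypothetical reconstruction of alertColor. The chain has a fixed number
// of branches written into the source; that count never grows with the
// input, so the worst case is a constant number of checks: O(1).
function alertColor(state) {
  if (state === 'danger') {
    return 'red';
  } else if (state === 'warning') {
    return 'yellow';
  } else if (state === 'success') {
    return 'green';
  }
  return 'gray'; // fallback for unknown states
}
```

Whether this chain has 3 branches or 1,000, that number is fixed when the code is written, which is exactly why it collapses to a constant.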
One final remark, for the beginners out there:
Big O notation is not a big deal. It’s very easy to understand and you don’t need to be a math whiz to do so.
As much as I'd like this to be true, it's not. The examples you gave in this post are trivial. You don't need to be a math "whiz", but knowledge of math (specifically summations and the rules for simplifying them) is pretty important for more complex problems. Trying to present the examples in this post as anything but trivial is disingenuous.
I've probably read a dozen articles on Big-O notation and this is the first one I was able to get through by reading it exactly once.
Jared's examples are intentionally trivial because his intended audience is beginners. The content of your comment makes it pretty clear that you are not the target audience.
Coming into an introductory article and saying, "the examples ... are trivial", as though it were a bad thing, is not helpful; in fact it's hurtful. Why should beginners be given complex examples first? What possible gain is there?
Thanks for catching my error, Aleksandr. That was a holdover from an earlier draft. The perils of being your own editor. But it generated a great discussion! Update forthcoming.
Question though. What if there are nested ifs? Then it would be O(n), isn't it? Since it will try to evaluate the nested ifs?
No, the only thing that matters is what you're doing inside the if statement that evaluates to true. If it's an O(n) routine, like calling a method that loops over an array input, then yes:
if (true) { doBigOofNRoutine(); }
But in this case, simply nesting if statements doesn't change the overall complexity. That just adds +1 to the number of atomic operations that have to be done in the worst case: you perform n atomic checks before you get to the else statement, then one more for the nested if inside, and +1 (presumably) for whatever that if is doing. I say presumably because I don't know—maybe it's doing two, three, four, or a variable number of operations.
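As a sketch of the two cases (the function bodies here are made up for illustration): nesting ifs only adds a constant number of checks, while a branch whose body loops over the input is what actually makes the worst case O(n):

```javascript
// Nesting adds one constant-time check per level; still O(1) overall.
function nestedChecks(state) {
  if (state !== 'danger') {
    if (state !== 'warning') {
      return 'calm';
    }
    return 'caution';
  }
  return 'alert';
}

// Only a branch like this one introduces O(n): the work inside it
// scales with the length of the input array.
function doBigOofNRoutine(arr) {
  let sum = 0;
  for (const value of arr) { // n iterations for an array of length n
    sum += value;
  }
  return sum;
}
```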
But again, the n here is a constant, not a variable that depends on the input size. So it collapses to O(1) asymptotically. It's about limits.
Ohh. I really thought it's O(n) because it will try to evaluate every if. Unless it's a switch statement; the compiler will try its best to optimize it into a much faster lookup. I'm talking about C++, btw. Thanks.
Nope. A good way to understand why it's O(1) is to consider a function that accepts an array of length n and does the same thing as in this example:
Let's use k to denote the number of if/else if/else conditions we evaluate, to avoid confusing this with n, the length of the array we pass in to the function.
If we pass in an array of length 50, the worst-case complexity is O(k) = O(1), because all we're doing is evaluating k conditionals and returning k strings. If we pass in an array of length 1,000,000, we still only evaluate k conditionals, each one performing the same atomic operation as when we passed in a smaller array. So it's still O(k) = O(1).
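A sketch of such a function (the predicate names match the ones used in the discussion; their bodies are assumptions): each call evaluates at most k = 3 conditionals, and none of them does work that grows with the array:

```javascript
// Each predicate is a constant-time check, regardless of array length.
const something = (arr) => arr.length === 0;
const somethingElse = (arr) => arr.length < 10;
const yetSomethingElse = (arr) => arr[0] === 42;

// k = 3 fixed conditionals: worst case O(k) = O(1), whether the
// array holds 50 elements or 1,000,000.
function doSomething(arrayOfLengthN) {
  if (something(arrayOfLengthN)) {
    return 'first';
  } else if (somethingElse(arrayOfLengthN)) {
    return 'second';
  } else if (yetSomethingElse(arrayOfLengthN)) {
    return 'third';
  }
  return 'default';
}
```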
Now, suppose instead that one of these conditionals runs a loop over the array:
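The snippet this sentence introduces doesn't appear in the text, so here is a plausible reconstruction (doSomething and yetSomethingElse come from the discussion; the predicates and the loop body are assumptions):

```javascript
const something = (arr) => arr.length === 0;
const somethingElse = (arr) => arr.length < 10;
const yetSomethingElse = (arr) => arr[0] === 42;

function doSomething(arrayOfLengthN) {
  if (something(arrayOfLengthN)) {
    return 'first';                 // O(1) branch
  } else if (somethingElse(arrayOfLengthN)) {
    return 'second';                // O(1) branch
  } else if (yetSomethingElse(arrayOfLengthN)) {
    // Worst case: this branch loops over the entire array, so the
    // work here scales with the input size. O(n).
    let total = 0;
    for (const value of arrayOfLengthN) {
      total += value;
    }
    return total;
  }
  return 'default';
}
```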
The worst case is if we reach yetSomethingElse and it's true. In that case, the worst-case complexity of the overall function, doSomething, is O(n). Unlike k, notice that n is a variable and not a constant/fixed value—it depends on the size of the input.
I hope that clears things up.
It's also good to keep in mind that there is nothing special about n.
Saying an algorithm is O(n) without defining n is a little misleading, even though it is traditionally taken to be the size of the input.
In the example with all the if statements, if we define n to be the number of if statements, then the algorithm is in fact O(n). But n is just a constant, so it's really O(1).
Yea, I got you now. In your example, since arrayOfLengthN is just being compared, the run time would always be constant, unless n is being run through or evaluated by another method such as forEach. In other words, a compare instruction is O(1) regardless of how many are chained.
> In the example with all the if statements, if we define n to be the number of if statements, then the algorithm is in fact O(n). But n is just a constant, so it's really O(1).
I'm guessing the author is talking about n as the number of ifs; that's how I was understanding it. That's why he had to create a hash for a constant lookup.
Exactly, unless they're doing something with the input data that would scale with the input size.
Gotcha. Much love, sir. Thanks.