Question though: what if there's a nested `if`? Then it would be O(n), wouldn't it? Since it will try to evaluate the nested `if`s?
No, the only thing that matters is what you're doing inside the `if` statement that evaluates to `true`. If it's an O(n) routine, like calling a method that loops over an array input, then yes: `if (true) { doBigOofNRoutine(); }`.

But in this case, simply nesting `if` statements doesn't change the overall complexity. That just adds +1 to the number of atomic operations that have to be done in the worst case: you perform n atomic checks before you get to the `else` statement, then one more for the nested `if` inside, and +1 (presumably) for whatever that `if` is doing. I say presumably because I don't know; maybe it's doing two, three, four, or a variable number of operations.

But again, the n here is a constant, not a variable that depends on the input size. So it collapses to O(1) asymptotically. It's about limits.

Ohh, I really thought it was O(n) because it will try to evaluate every `if`. Unless it's a `switch` statement, where the compiler will try its best to optimize it into a much faster lookup. I'm talking about C++, btw. Thanks.
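As an aside, the nested case described above can be sketched minimally (the function name and conditions here are invented for illustration): however deep the nesting goes, each call still performs a fixed number of O(1) checks.

```javascript
// Hypothetical sketch: a nested `if` only adds a constant number of checks.
// None of these branches ever visit every element of `arr`.
function classify(arr) {
  if (arr.length === 0) {
    return "empty";
  } else if (arr.length === 1) {
    return "single";
  } else {
    if (arr.length % 2 === 0) { // nested if: just one more O(1) check
      return "even length";
    }
    return "odd length";
  }
}
```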
Nope. A good way to understand why it's O(1) is to consider a function that accepts an array of length n and does the same thing: it evaluates a fixed chain of conditions and returns one of a fixed set of strings.
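A minimal sketch of such a function, with the names `doSomething`, `yetSomethingElse`, and `arrayOfLengthN` taken from this discussion; the specific conditions and return strings are invented:

```javascript
// Sketch: k fixed conditionals, none of which depend on how long
// `arrayOfLengthN` is, so the whole function is O(1).
function doSomething(arrayOfLengthN) {
  if (arrayOfLengthN.length > 100) {
    return "big";
  } else if (arrayOfLengthN.length > 10) {
    return "medium";
  } else if (yetSomethingElse(arrayOfLengthN)) {
    return "something else";
  } else {
    return "small";
  }
}

// Hypothetical helper: an O(1) check (it only inspects the first element).
function yetSomethingElse(arr) {
  return arr[0] === 0;
}
```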
Let's use k to denote the number of `if`/`else if`/`else` conditions we evaluate, to avoid confusing this with n, the length of the array we pass in to the function.

If we pass in an array of length 50, the worst-case complexity is O(k) = O(1), because all we're doing is evaluating k conditionals and returning one of k strings. If we pass in an array of length 1,000,000, we still only evaluate k conditionals, each one performing the same atomic operation as when we passed in a smaller array. So it's still O(k) = O(1).

Now, suppose instead that one of these conditionals runs a loop over the array.
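A sketch of that variant, under the assumption that `yetSomethingElse` itself scans every element (the branch bodies are again invented):

```javascript
// Sketch: one conditional now scans the whole array, so the worst case
// scales with n, the input length.
function yetSomethingElse(arr) {
  // O(n): looks at every element.
  let total = 0;
  arr.forEach(function (x) { total += x; });
  return total > 0;
}

function doSomething(arrayOfLengthN) {
  if (arrayOfLengthN.length === 0) {
    return "empty";
  } else if (arrayOfLengthN.length === 1) {
    return "single";
  } else if (yetSomethingElse(arrayOfLengthN)) { // O(n) in the worst case
    return "positive sum";
  } else {
    return "non-positive sum";
  }
}
```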
The worst case is if we reach `yetSomethingElse` and it's true. In that case, the worst-case complexity of the overall function, `doSomething`, is O(n). Unlike k, notice that n is a variable and not a constant/fixed value: it depends on the size of the input.

I hope that clears things up.
It's also good to keep in mind that there is nothing special about n. Saying an algorithm is O(n) without defining n is a little misleading, even though n is traditionally taken to be the size of the input. In the example with all the `if` statements, if we define n to be the number of `if` statements, then the algorithm is in fact O(n). But that n is just a constant, so it's really O(1).

Yeah, I got you now. In your example, since `arrayOfLengthN` is just being compared, the run time would always be constant unless n is looped over or evaluated by some other method, such as `forEach`. In other words, a compare instruction is O(1) regardless of how many are chained.

> In the example with all the if statements, if we define n to be the number of if statements, then the algorithm is in fact O(n). But n is just a constant, so it's really O(1).

I'm guessing the author is talking about n as the number of `if`s; that's how I was understanding it. That's why he had to create a hash for a constant-time lookup.

Exactly, unless they're doing something with the input data that would scale with the input size.
gotcha. much love sir. thanks.
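For reference, the "hash for a constant lookup" pattern mentioned above can be sketched like this; the keys and messages here are invented examples:

```javascript
// Sketch: replacing a chain of ifs over known keys with an object lookup.
const messages = {
  400: "Bad Request",
  404: "Not Found",
  500: "Internal Server Error",
};

function statusMessage(code) {
  // One O(1) lookup instead of k chained comparisons.
  return messages[code] || "Unknown";
}
```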