Ohh, I really thought it was O(n) because it will try to evaluate every if. Unless it's a switch statement, in which case the compiler will try its best to optimize and it will be a much faster lookup. I'm talking about C++ btw. Thanks.
Nope. A good way to understand why it's O(1) is to consider a function that accepts an array of length n and does the same thing as in this example.

Let's use k to denote the number of if/else if/else conditions we evaluate, to avoid confusing this with n, the length of the array we pass in to the function. If we pass in an array of length 50, the worst-case complexity is O(k) = O(1), because all we're doing is evaluating k conditionals and returning one of k strings. If we pass in an array of length 1,000,000, we still only evaluate k conditionals, each one performing the same atomic operation as when we passed in a smaller array. So it's still O(k) = O(1).
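The code block that accompanied this explanation didn't survive in this copy of the thread. A minimal sketch of the kind of function being described (the name arrayOfLengthN comes from later in the thread; the specific conditions are invented for illustration) might be:

```javascript
// A chain of k conditionals, none of which depends on how long the
// array is: each comparison is one atomic operation, so the function
// runs in O(k) = O(1) regardless of the array's length.
function doSomething(arrayOfLengthN) {
  if (arrayOfLengthN.length === 0) {
    return "empty";
  } else if (arrayOfLengthN.length === 1) {
    return "single";
  } else if (arrayOfLengthN.length % 2 === 0) {
    return "even length";
  } else {
    return "odd length";
  }
}
```

Whether the array has 50 elements or 1,000,000, the same fixed number of comparisons runs.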
Now, suppose instead that one of these conditionals runs a loop over the array:
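The snippet itself is missing from this copy of the thread; a sketch consistent with the names it uses (doSomething, yetSomethingElse), where the looping condition is invented for illustration:

```javascript
// yetSomethingElse scans the whole array, so this one condition
// costs O(n) instead of O(1).
function yetSomethingElse(arr) {
  for (const x of arr) {
    if (x === 42) return true; // the value 42 is invented for illustration
  }
  return false;
}

function doSomething(arrayOfLengthN) {
  if (arrayOfLengthN.length === 0) {
    return "empty";
  } else if (arrayOfLengthN.length === 1) {
    return "single";
  } else if (yetSomethingElse(arrayOfLengthN)) { // worst case: a full O(n) scan
    return "found";
  } else {
    return "not found";
  }
}
```

Reaching the yetSomethingElse branch forces a pass over the whole array, so even though the other branches are constant time, the function's worst case grows with the input.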
The worst case is if we reach yetSomethingElse and it's true. In that case, the worst-case complexity of the overall function, doSomething, is O(n). Unlike k, notice that n is a variable and not a constant/fixed value: it depends on the size of the input.

I hope that clears things up.
It's also good to keep in mind that there is nothing special about n. Saying an algorithm is O(n) without defining n is a little misleading, even though it is traditionally taken to be the size of the input. In the example with all the if statements, if we define n to be the number of if statements, then the algorithm is in fact O(n). But n is just a constant, so it's really O(1).

Yea, I got you now. In your example, since arrayOfLengthN is just being compared, the run time would always be constant unless n is being iterated or evaluated by some other method such as forEach. In other words, a compare instruction is O(1) regardless of how many are being chained.

I'm guessing the author is talking about n as the number of ifs; that's how I was understanding it. That's why he had to create a hash for a constant lookup.

Exactly, unless they're doing something with the input data that would scale with the input size.
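The "hash for a constant lookup" mentioned above presumably means replacing a chain of equality checks with a single keyed lookup; a minimal sketch, with the keys and messages invented for illustration:

```javascript
// Instead of k chained equality checks, store the k answers in an
// object and do one constant-time key lookup.
const responses = {
  small: "use the simple path",
  medium: "use the batched path",
  large: "use the streaming path",
};

function describe(sizeClass) {
  // A single hash lookup, no matter how many keys exist: O(1).
  return responses[sizeClass] ?? "unknown";
}
```

Adding more cases grows the object, not the number of operations performed per call.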
gotcha. much love sir. thanks.