Hello everyone, a few weeks ago I started studying some computer science algorithms using JavaScript as the programming language, and normally after...
Like it - just a point of clarification - a sliced array is a shallow copy and changing the original array won't modify it as you seem to suggest:
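For example, a quick sketch (the arrays and values here are made up just to illustrate the point):

```js
const original = [1, 2, 3];
const copy = original.slice();

original.push(4);
console.log(copy); // [1, 2, 3]: the sliced copy is not affected by changes to the original array

const objs = [{ n: 1 }];
const objsCopy = objs.slice();

objs[0].n = 99;
console.log(objsCopy[0].n); // 99: the copy is shallow, so both arrays reference the same object
```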
If it's an array of objects, clearly it's a shallow copy so changing an object will change the one referenced by both arrays.
You're right! Thank you for sharing this clarification. 👍
Thx for the article. I'm sure it's very important for the frontend community.
People these days often use `.reduce` with spread (`...`), so it becomes O(n^2). There are lots of articles (even on dev.to) which showcase such an approach. My experience of interviewing tells me that people don't understand that there's a problem.

Also, I think that the Big O of `.splice` depends on the arguments. I don't think, for example, that a 1-to-1 replacement would be O(n), because that's a pretty simple optimization.

I would update this to "can be a problem". Often time/space complexity only matters at scale, and in many cases it will never be a problem; for instance, when populating a select dropdown. In other cases, for instance big data, time/space complexity may always be a consideration, especially for an iterative process like that of graphing or machine learning.
Absolutely. That's often the case with Big-O. Big-O is the worst-case scenario, but the worst case may rarely be reached. Big-O means f(n) is bounded above by a constant multiple of g(n) for sufficiently large n. So in the worst case, `splice` may affect every element, but on average it won't, and in a best-case scenario it would be zero or one element (if 0 isn't practical) at the end of the stack.

For the best case, big-omega is used (e.g., Ω(n)), and for the average case, big-theta is used (e.g., Θ(n)). Notice the internal bar of the big-theta vs. that of a big O, as it may be easy to mistake one for the other.

I'm not sure I understand what you mean, do you have an example that would illustrate this kind of situation? Thanks!
I think he means something like:
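Something along these lines, as a rough sketch (the `users` array and its `id`/`name` fields are made up for illustration):

```js
// Made-up sample data, just for illustration.
const users = [
  { id: 1, name: 'Ada' },
  { id: 2, name: 'Grace' },
];

// Spreading the accumulator on every iteration copies it again each time,
// so the whole reduce ends up doing O(n^2) work.
const usersById = users.reduce(
  (agg, user) => ({ ...agg, [user.id]: user }),
  {}
);

// Building one object with a plain loop does the same job in O(n).
const byId = {};
users.forEach((user) => {
  byId[user.id] = user;
});
```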
Where `...agg` is potentially being spread for each iteration over `users`. Instead of that internal traversal, use any kind of loop for O(n) time instead of O(n^2). `reduce` could still be used, but was abandoned because it's absolutely unnecessary. `for..of` (or any loop structure) could be used instead of `.forEach`; the latter was used as a matter of preference.

Note: it's actually something closer to O(n(n+1)/2) + O(n), since spreading an accumulator of size i on iteration i gives roughly 1 + 2 + ... + n copies; once coefficients and smaller terms are dropped, this simplifies to O(n^2).
What reference did you lean on to find their time complexity?
If their complexities are assumed or inferred, I'm not sure the assumptions hold, since each engine implements JS arrays differently. See: stackoverflow.com/a/61713477/380607
PS: I think it would be wise to mention that `.some()` is only O(n) in the worst case; normally it's O(k), where k is the index of the first match, which on average will be the middle (median) index, so k = n/2. While still a factor of n, it is different from other O(n) functions which necessarily have to go through the entire array and thus amount to the full n (a small sketch below illustrates the early exit).

Big O is already the worst case. Otherwise, we have Omega and Theta.
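To illustrate the `.some()` early-exit point above, here is a small sketch (the array and predicate are made up):

```js
const values = [3, 8, 15, 42, 7, 1];

let checks = 0;
const hasLarge = values.some((v) => {
  checks++; // count how many elements are actually visited
  return v > 10;
});

console.log(hasLarge); // true
console.log(checks);   // 3: iteration stopped at the first match, not at values.length
```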
Big O is not the worst case. Big O tells you the upper bound growth rate function. That's it. Worst case tells you a specific example of input which would make the algorithm run for the longest time. Big O can be used to describe the upper bound in worst case, average case, best case. Both are orthogonal.
This may be accurate, but for all practical purposes of conceptual understanding, upper bound ≈ worst case.

It's simpler to describe Big-O vs. Big-Theta vs. Big-Omega and to stay away from little-o. Again, this is in simplified terms, and anyone other than a data scientist or seasoned engineer should probably stick to the simplified understanding and the colloquial terms.
This is targeting entry-level devs, and I think it does a good service in how it describes the topic:
educative.io/blog/a-big-o-primer-f...
Crisp and to-the-point post. Well done!
What about the `.join()` method?
It should be O(n)
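For a rough intuition (a tiny made-up example):

```js
// join visits each of the n elements once to build the output string,
// so for fixed-size elements the work grows roughly linearly: O(n).
const parts = ['a', 'b', 'c', 'd'];
console.log(parts.join('-')); // "a-b-c-d"
```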
Very useful. I had about the same idea on the complexity but it's good to know that you took the time to summarize this data.
@luiscastillokc for the `indexOf` method, I think it's O(log(n)) time, because it uses something like binary search
Binary search can only be executed on a sorted collection
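In other words, on an arbitrary (unsorted) array there is nothing to bisect, so the lookup has to scan. A rough model of what `indexOf` must do (not the engine's actual implementation):

```js
// Simplified model: check each element in order until a match is found.
// On an unsorted array this linear scan is the best a general lookup can do: O(n).
function linearIndexOf(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i;
  }
  return -1;
}

console.log(linearIndexOf([42, 7, 19, 3], 19)); // 2
```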
Very nice article. Straight and to the point. Thanks for sharing!