Thanks for the article. I'm sure it's very important for the frontend community.
People these days often use .reduce with the spread operator (...), so it becomes O(n^2). There are lots of articles (even on dev.to) that showcase such an approach. My interviewing experience tells me that people don't realize there's a problem.
Also, I think the Big O of .splice depends on the arguments. I don't think, for example, that a 1-to-1 replacement would cause O(n), because that's a pretty simple optimization.
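For example, a 1-to-1 replacement with .splice looks like this (a small illustration; whether a given engine actually optimizes this to O(1) is implementation-dependent):

```javascript
const arr = ['a', 'b', 'c', 'd'];

// Remove exactly one element at index 1 and insert one in its place.
// The array length doesn't change, so no trailing elements need shifting.
arr.splice(1, 1, 'x');
// arr is now ['a', 'x', 'c', 'd']
```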
> People these days often use .reduce with the spread operator (...), so it becomes O(n^2). There are lots of articles (even on dev.to) that showcase such an approach. My interviewing experience tells me that people don't realize there's a problem.
I'm not sure I understand what you mean. Do you have an example that would illustrate this kind of situation? Thanks!
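I think he means something like this (a reconstructed sketch; the original snippet isn't shown, so the variable names are assumed from the discussion below):

```javascript
const users = [
  { id: 'a', name: 'Ann' },
  { id: 'b', name: 'Bob' },
];

// Builds an id → user map, but spreads the entire accumulator
// on every iteration, so the total work is O(n^2).
const usersById = users.reduce(
  (agg, user) => ({ ...agg, [user.id]: user }),
  {}
);
```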
There, ...agg is potentially being spread on each iteration over users. Instead of that internal traversal, use any kind of loop to get O(n) time instead of O(n^2).
.reduce could be used, but was abandoned because it's absolutely unnecessary. for..of (or any loop structure) could be used instead of .forEach; the latter was used as a matter of preference.
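The O(n) version might look like this (a sketch, assuming users is an array of objects with an id field):

```javascript
const users = [
  { id: 'a', name: 'Ann' },
  { id: 'b', name: 'Bob' },
];

// Mutates a single accumulator object instead of copying it,
// so each iteration is O(1) and the whole pass is O(n).
const usersById = {};
users.forEach((user) => {
  usersById[user.id] = user;
});
```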
Note: it's actually something closer to O(n(n+1)/2) + O(n), which, once coefficients and lower-order terms are dropped, simplifies to O(n^2).
I would update this to "can be a problem." Often time/space complexity only matters at scale, and in many cases it will never be a problem, for instance when populating a select dropdown. In other cases, such as big data, time/space complexity may always be a consideration, especially for an iterative process like graphing or machine learning.
> I think the Big O of .splice depends on the arguments
Absolutely. That's often the case with Big O. Big O describes the worst-case scenario, but the worst case may rarely be reached. Saying f(n) is O(g(n)) means f(n) is bounded above by a constant multiple of g(n). So in the worst case, splice may touch every element, but on average it won't, and in the best case it would touch zero or one element (if zero isn't practical) at the end of the array.
For the best case, big omega is used (e.g., Ω(n)), and for the average case, big theta is used (e.g., Θ(n)). Note the horizontal bar inside the big theta versus the plain big O, as it can be easy to mistake one for the other.