Hello everyone! A few weeks ago I started studying some computer science algorithms using JavaScript as the programming language, and after finishing an implementation I usually like to calculate its complexity with Big O notation. That is why I wanted to write this post: to understand the time complexity of the most commonly used JS Array methods.
So, let's start with a quick definition of each method, its time complexity, and a small example.
Mutator methods
1. push()
- O(1)
Add a new element to the end of the array.
const names = ['Luis','John','Jose'];
names.push("Aaron");
console.log(names); // (4) ["Luis", "John", "Jose", "Aaron"]
2. pop()
- O(1)
Delete the last element of the array.
const names = ['Luis','John','Jose','Aaron'];
console.log(names.pop()); //Aaron
console.log(names); // (3) ["Luis", "John", "Jose"]
3. shift()
- O(n)
Delete the first element of the array (every remaining element has to be shifted down one index, which is why it is O(n)).
const names = ['Luis','John','Jose','Aaron'];
console.log(names.shift()); // Luis
console.log(names); // (3) ["John", "Jose", "Aaron"]
4. unshift()
- O(n)
Add one or more elements at the beginning of the array; every existing element has to be re-indexed, which is why it is O(n) as well.
const names = ['Luis','John','Jose'];
console.log(names.unshift("Aaron")); // 4
console.log(names); // (4) ["Aaron", "Luis", "John", "Jose"]
5. splice()
- O(n)
Remove, add, or replace elements starting at the index you indicate; it returns an array with the removed elements.
const names = ['Luis','John','Jose','Aaron'];
console.log(names.splice(0,0,"Fernando")); // [] (adds "Fernando" at index 0, removes nothing)
console.log(names.splice(0,1,"Michelle")); // ["Fernando"] (replaces "Fernando" with "Michelle")
console.log(names.splice(0,1)); // ["Michelle"] (removes "Michelle")
console.log(names); // (4) ["Luis", "John", "Jose", "Aaron"]
6. sort()
- O(n log(n))
Sort the array in place according to a compare function; if no compare function is provided, the elements are converted to strings and ordered by their Unicode (UTF-16) code unit values.
const names = ['Luis','Jose','John','Aaron'];
console.log(names.sort()); // (4) ["Aaron", "John", "Jose", "Luis"]
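That default string-based ordering is easy to trip over when sorting numbers, so here is a small extra example (the ages array is just illustrative):
const ages = [25, 100, 9];
console.log(ages.sort()); // (3) [100, 25, 9] (compared as strings: "100" < "25" < "9")
console.log(ages.sort((a, b) => a - b)); // (3) [9, 25, 100] (numeric compare function)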
/*complex sorting*/
const users = [
{name:'Luis', age:25},
{name:'Jose', age:20},
{name:'Aaron', age:40}
];
const compareFunc = (item1, item2) => {
return item1.age - item2.age;
};
console.log(users.sort(compareFunc));
/**
[{name: "Jose", age: 20}, {name: "Luis", age: 25}, {name: "Aaron", age:40}]
*/
Accessor methods
1. concat()
- O(n)
Create a new array that joins two or more arrays.
const names1 = ["Luis","Jose"];
const names2 = ["John","Aaron"];
const newArray = names1.concat(names2,["Michelle"]);
console.log(newArray); // (5) ["Luis", "Jose", "John", "Aaron", "Michelle"]
2. slice()
- O(n)
Return a shallow copy of a portion of the array between two indexes, start and end (end not included).
Important note: because the copy is shallow, if the array contains objects, both the original and the copy reference the same objects, so mutating one of those objects is visible through both arrays (the short snippet after the example illustrates this).
const users = [
{name:'Luis', age:15},
{name:'Jose', age:18},
{name:'Aaron', age:40}
];
const adults = users.slice(1, users.length);
console.log(adults); // (2) [{name: "Jose", age: 18}, {name: "Aaron", age: 40}]
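To see that shallow-copy behavior, a short extra snippet continuing from the same users array:
users[1].age = 99; // mutate an object through the original array
console.log(adults[0].age); // 99, both arrays reference the same object
users.push({name:'Michelle', age:30});
console.log(adults.length); // still 2, the copy's structure is independent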
3. indexOf()
- O(n)
Return the first index at which a given element can be found in the array, or -1 if it is not present.
const names = ['Luis','Jose','John','Aaron'];
console.log(names.indexOf("John")); // 2
console.log(names.indexOf("Michelle")); // -1
Iteration methods
1. forEach()
- O(n)
Execute a function once for each element in the array.
const names = ['Luis','Jose','John','Aaron'];
names.forEach(item => {
console.log(item);
});
/* Print all user names
Luis Jose John Aaron
*/
2. map()
- O(n)
Create a new array with the results of calling the callback function on every element (the callback runs once per item, just like with forEach).
const users = [
{name:'Luis', age:15},
{name:'Jose', age:18},
{name:'Aaron', age:40}
];
const userDescriptions = users.map(item => {
return `Hello, my name is ${item.name} and I am ${item.age} years old.`
});
console.log(userDescriptions);
/*["Hello my name is Luis and I have 15 years old.",
"Hello my name is Jose and I have 18 years old.",
"Hello my name is Aaron and I have 40 years old."] */
3. filter()
- O(n)
Create a new array containing only the elements for which the given filter condition returns true.
const users = [
{name:'Luis', admin:true},
{name:'Jose', admin:true},
{name:'Aaron'}
];
const adminUsers = users.filter(item => item.admin);
console.log(adminUsers); // [{name: "Luis", admin: true},{name: "Jose", admin: true}]
4. reduce()
- O(n)
Return a single value that results from applying the reducer function to each element, accumulating as it goes.
const users = [
{name:'Luis', age:15},
{name:'Jose', age:18},
{name:'Aaron', age:40}
];
const reducer = (accumulator, item) => accumulator + item.age;
const totalAge = users.reduce(reducer,0);
const ageAverage = totalAge / users.length;
console.log(`Total ${totalAge}, Average ${ageAverage}`); // Total 73, Average 24.333333333333332
Bonus!!!
1. some()
- O(n)
Return true if at least one item satisfies the given condition, and false otherwise (including when the array is empty).
const users = [
{name:'Luis', admin:true},
{name:'Jose'},
{name:'Aaron'}
];
const adminExists = users.some(item => item.admin);
console.log(adminExists); // true
2. every()
- O(n)
Return true if all the items satisfy the given condition, and false otherwise.
const users = [
{name:'Luis', active:true},
{name:'Jose', active:true},
{name:'Aaron', active:false}
];
const isAllUsersActive = users.every(item => item.active);
console.log(isAllUsersActive); // false
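One edge case worth knowing: for an empty array, every() returns true (in contrast to some(), which returns false for an empty array).
console.log([].every(item => item.active)); // true, there is no item that fails the condition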
Conclusion
I think it is very important to understand the time complexity of the common Array methods we use to build our algorithms, so that we can calculate the time complexity of the whole algorithm.
I hope this information was helpful to you. If you have any questions, please leave them in the comments section. All comments are welcome. 😉
Top comments (17)
Like it - just a point of clarification - a sliced array is a shallow copy, and changing the original array won't modify it as you seem to suggest.
If it's an array of objects, clearly it's a shallow copy, so changing an object will change the one referenced by both arrays.
You're right! Thank you for sharing this clarification. 👍
Thx for the article. I'm sure it's very important for the frontend community.
People these days often use .reduce with spread (...). So it becomes O(n^2). There are lots of articles (even on dev.to) which showcase such an approach. My experience of interviewing tells me that people don't understand that there's a problem.
Also, I think that the Big O of .splice depends on the arguments. I don't think, e.g., that a 1-to-1 replacement would cause O(n), because it's a pretty simple optimization.
I would update this to "can be" a problem. Often time/space complexity only matters at scale, and in many cases it will never be a problem; for instance, when populating a select dropdown. In other cases, for instance big data, time/space complexity may always be a consideration, especially for an iterative process like that of graphing or machine learning.
Absolutely. That's often the case with Big-O. Big-O is the worst-case scenario, but the worst case may rarely be reached. Big-O means f(n) is bounded above by a constant multiple of g(n). So in the worst case splice may touch every element, but on average it won't, and in a best-case scenario it would be zero or one element (if 0 isn't practical) at the end of the stack.
For the best case big-omega is used (e.g., Ω(n)), and for the average case big-theta is used (e.g., Θ(n)). Notice the internal ligature of the big-theta vs that of a big O, as it may be easy to mistake one for the other.
I'm not sure I understand what you mean, do you have an example that would illustrate this kind of situation? Thanks!
I think he means something like:
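A sketch of the kind of code being described (the users array and the namesByAge names here are just illustrative):
const users = [{name:'Luis', age:25}, {name:'Jose', age:20}, {name:'Aaron', age:40}];
// O(n^2): the accumulator object is copied (spread) on every iteration
const namesByAge = users.reduce((agg, user) => ({ ...agg, [user.name]: user.age }), {});
// O(n): build the object with a plain loop instead
const namesByAge2 = {};
users.forEach(user => {
  namesByAge2[user.name] = user.age;
});
console.log(namesByAge, namesByAge2); // {Luis: 25, Jose: 20, Aaron: 40} in both cases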
Where ...agg is potentially being spread for each iteration over users. Instead of an internal traversal, use any kind of loop for O(n) time instead of O(n^2). reduce could be used, but was abandoned because it's absolutely unnecessary. for..of (or any loop structure) could be used instead of .forEach; the latter was used as a matter of preference.
Note: it's actually something closer to O(n(n+1)/2) + O(n), which once coefficients and smaller terms are dropped simplifies to O(n^2).
What reference did you lean on to find their time complexity?
If their complexities are assumed or inferred, I'm not sure the assumptions hold, since each engine implements JS arrays differently. See: stackoverflow.com/a/61713477/380607
PS: I think it would be wise to mention that .some() is only O(n) in the worst case; normally it is O(k), where k is the index at which the element is found, which on average will be the middle (median) index, so k = n/2. While a factor of n, it is different from other O(n) functions, which necessarily have to go through the entire array and thus amount to the full n.
is only O(n) in the worst case, but rather O(k) normally, where k is the index to be found, which on average will be the middle (median) index, so k = n/2. While a factor of n, it is different than other O(n) functions which necessarily will have to go through the entire array and thus amount to the full n.Big O is already the worst case. otherwise, we have Omega and Theta
Big O is not the worst case. Big O tells you the upper bound on the growth rate function. That's it. Worst case tells you a specific example of input which would make the algorithm run for the longest time. Big O can be used to describe the upper bound of the worst case, the average case, or the best case. The two concepts are orthogonal.
This may be accurate, but for all practical purposes of conceptual understanding, upper bound ≈ worst case.
It's simpler to describe Big-O vs Big-Theta vs Big-Omega and also stay away from little-o. Again, this is in simplistic terms, and anyone who isn't a data scientist or seasoned engineer should probably stick to the simplified understanding as a colloquial term.
This is targeting entry-level devs, and I think it does a service in how it describes the topic:
educative.io/blog/a-big-o-primer-f...
Crisp and to-the-point post. Well done!
What about the .join() method?
It should be O(n).
Very useful. I had about the same idea on the complexity but it's good to know that you took the time to summarize this data.
@luiscastillokc For the method "indexOf", I think it's O(log(n)) time, because it uses something like binary search.
Binary search can only be executed on a sorted collection.
Very nice article. Straight and to the point. Thanks for sharing!