I recently became curious about the subtle differences in use and performance between the various methods of accessing the DOM in JavaScript. Here I'm going to take a look at getElementById, querySelector, getElementsByClassName, getElementsByTagName, and querySelectorAll and try to sort out the differences. Perhaps the most obvious difference is that querySelector and querySelectorAll accept a wide range of search terms and can be far more precise than the other functions. While each of the other functions is a specialist (it only searches by one kind of selector), querySelector and querySelectorAll can make use of all of the fancy CSS selecting magic; check out this article for a more complete list.
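To make that concrete, here is a quick sketch of the difference (the ids, classes, and selectors below are made up purely for illustration):

// The specialist functions each search by exactly one kind of selector
document.getElementById("nav");            // id only
document.getElementsByClassName("card");   // class name only
document.getElementsByTagName("li");       // tag name only

// The query functions accept any valid CSS selector
document.querySelector("#nav li.card > a[href^='https']");   // first match
document.querySelectorAll("ul.menu li:not(.disabled)");      // every match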
Single Element Search
Let's begin with the functions that only return a single element from the DOM: getElementById and querySelector. Both of these functions return the HTML element matching the given search term, or null if there is no matching element in the DOM. getElementById will return the one element with the provided ID, and querySelector will return the first node it finds that matches the search criteria. Let's take them for a spin and see which is faster!
<div id="div1"></div>
// use querySelector 5 million times and time it
function querySelectorLoop() {
  console.time("querySelector");
  for (let i = 0; i < 5000000; i++) {
    document.querySelector("#div1");
  }
  console.timeEnd("querySelector");
}

// use getElementById 5 million times and time it
function getByIdLoop() {
  console.time("getElementById");
  for (let i = 0; i < 5000000; i++) {
    document.getElementById("div1");
  }
  console.timeEnd("getElementById");
}
querySelectorLoop();
// => querySelector: 653.566162109375 ms
getByIdLoop();
// => getElementById: 567.281005859375 ms
(Note: All tests were done on Chrome version 87.0.4280.67; non-reported tests were also done on Safari with similar results.)
Well, that settles it, querySelector is slower than getElementById... sort of. It took querySelector about 86ms longer to access the DOM 5 million times. That is not a lot of time. The reason for the discrepancy is likely that many browsers cache all of the ids when the DOM is first accessed, and getElementById has access to this information, while querySelector performs a depth-first search of all nodes until it finds what it's looking for. This suggests that searching for a more deeply nested HTML element might increase the performance discrepancy.
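I haven't benchmarked that hunch here, but a sketch of how you might test it (the nested markup below is made up for the sake of the example) would look a lot like the loops above:

<div id="outer">
  <section>
    <article>
      <span id="deep-span"></span>
    </article>
  </section>
</div>

// Compare a descendant selector against a direct id lookup, 5 million times each
function queryNestedLoop() {
  console.time("querySelector (nested)");
  for (let i = 0; i < 5000000; i++) {
    document.querySelector("#outer section article span");
  }
  console.timeEnd("querySelector (nested)");
}

function getNestedByIdLoop() {
  console.time("getElementById (nested)");
  for (let i = 0; i < 5000000; i++) {
    document.getElementById("deep-span");
  }
  console.timeEnd("getElementById (nested)");
}

The structure mirrors the test above; only the markup and the selector change.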
Multiple Element Search
Before we investigate getElementsByClassName, getElementsByTagName, and querySelectorAll, we need to talk about what each of these functions returns. getElementsByClassName and getElementsByTagName each return an HTML Collection, while querySelectorAll returns a Node List. These are both array-like, ordered collections of values. They both have a length property and can be accessed via numbered indices. The major difference between an HTML Collection and a Node List is that an HTML Collection is a live collection while a Node List is not. A live collection accurately reflects the current state of the DOM, while a non-live collection serves as a snapshot. For example:
<ul>
  <li id="first-li" class="list"> Cheddar </li>
  <li class="list"> Manchego </li>
  <li class="list"> Gruyere </li>
</ul>
let htmlCollection = document.getElementsByClassName("list");
let nodeList = document.querySelectorAll(".list");
htmlCollection.length // => 3
nodeList.length // => 3
// Remove the first li
document.getElementById("first-li").remove();
// Re-check lengths
htmlCollection.length // => 2
nodeList.length // => 3
As we can see, the HTML Collection made with getElementsByClassName was updated simply by updating the DOM, while our Node List remained static.
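One more practical note while we're here: since both collections are only array-like, they don't come with array methods such as map or filter. A small sketch (continuing with the cheese list above) of how you'd typically index into them, or convert them when you need those methods:

// Both support .length and numbered indices
nodeList[0];               // the first <li> in the snapshot
htmlCollection.length;     // reflects whatever is in the DOM right now

// Neither has map/filter, so convert to a real array when you need them
const cheeses = Array.from(nodeList).map(li => li.textContent.trim());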
Now let's see how our functions measure up on speed.
<div id="div1"></div>
// Make a div to hold newly created elements
const div = document.createElement("div");
let p;

// Create 5,000 new <p></p> elements with class="p" and append them to the div
for (let i = 0; i < 5000; i++) {
  p = document.createElement("p");
  p.className = "p";
  div.appendChild(p);
}

// Append our div of 5,000 new p elements to our existing div on the DOM
const oldDiv = document.getElementById("div1");
oldDiv.appendChild(div);
// Time getElementsByClassName creating an HTML Collection w/ 5,000 elements
function getByClass() {
  console.time("Class");
  for (let i = 0; i < 5000; i++) {
    document.getElementsByClassName("p");
  }
  console.timeEnd("Class");
}

// Time getElementsByTagName creating an HTML Collection w/ 5,000 elements
function getByTagName() {
  console.time("Tag");
  for (let i = 0; i < 5000; i++) {
    document.getElementsByTagName("p");
  }
  console.timeEnd("Tag");
}

// Time querySelectorAll creating a Node List w/ 5,000 elements
function getByQuery() {
  console.time("Query");
  for (let i = 0; i < 5000; i++) {
    document.querySelectorAll("p");
  }
  console.timeEnd("Query");
}
// Now run each function
getByQuery(); // => Query: 458.64697265625 ms
getByTagName(); // => Tag: 1.398193359375 ms
getByClass(); // => Class: 2.048095703125 ms
Now there's a performance difference!
So what's going on here? It all has to do with the difference between Node Lists and HTML Collections. When a Node List is made, each element is collected and stored, in order, in the Node List; this involves creating the Node List and then filling it up within a loop. The live HTML Collections, on the other hand, are made by simply registering the collection in a cache. In short, it's a trade-off: getElementsByTagName and getElementsByClassName have very low overhead to generate, but have to do all of the heavy lifting of querying the DOM for changes every time an element is accessed (more detailed info about how this is actually done here). Let's run a quick experiment to see this. This is pretty simple to do if we modify our code above to have return values.
// Modifying the above functions to return collections like so...
...
return document.getElementsByClassName("p");
...
return document.getElementsByTagName("p");
...
return document.querySelectorAll("p");
...
// Assigning the returns to variables
const queryP = getByQuery();
const tagP = getByTagName();
const classP = getByClass();
// See how long it takes to access the 3206th element of each collection
console.time("query");
queryP[3206];
console.timeEnd("query");// => query: 0.005126953125 ms
console.time("tag");
tagP[3206];
console.timeEnd("tag");// => tag: 0.12109375 ms
console.time("class");
classP[3206];
console.timeEnd("class");// => class: 0.18994140625 ms
As expected, accessing an element from querySelectorAll is much faster; accessing an element from getElementsByTagName and getElementsByClassName is roughly 25 to 40 times slower in this test! However, being dozens of times slower than something really fast isn't necessarily slow; a fraction of a millisecond is hardly something to complain about.
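That said, if those repeated live lookups ever did add up in a hot loop, one option (a sketch, not something measured in this post) is to take a one-time snapshot of the live collection before iterating:

// Live collection: every access can involve checking the DOM for changes
const livePs = document.getElementsByClassName("p");

// Snapshot it into a plain array once, then loop over simple array reads
const frozenPs = Array.from(livePs);
let count = 0;
for (let i = 0; i < frozenPs.length; i++) {
  if (frozenPs[i].className === "p") count++;
}

After the Array.from call the elements are plain array entries, so reading them no longer goes through the live-collection machinery at all.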
Wrapping It Up
querySelector and querySelectorAll are both slower than the other functions for accessing the DOM when they are first called, although querySelector is still not slow. querySelectorAll is much faster than getElementsByTagName and getElementsByClassName when accessing a member of the collection, because of the differences in how live and non-live collections are stored. But again, getElementsByTagName and getElementsByClassName are not slow.
So which selectors should you use? That will depend on your particular use case. The querySelector functions are much more versatile and can be far more precise, but that may come with a performance cost, and some situations are more suited to live collections than others.
Top comments (3)
Wow -- thanks for this! The difference between querySelectorAll and the other multi-selectors is kinda crazy. I usually want to select for elements with a particular attribute rather than by tag or class name, so I often default to querySelectorAll -- good to know I should be a bit more nuanced if performance is critical. It usually isn't, but right now I'm working on some code that registers event listeners and syncing logic periodically (usually on page load), so a 500ms delay loading a large page into an SPA could cause some hitching, or exacerbate the on-load processing time of other parts of my code.
5,000 <p> tags is a lot, as is 5,000 elements in general, but it's not inconceivable that a page might be so large. I'm going to have to do some testing myself to see what the difference is selecting only a few of 5,000 elements vs. all 5,000. I assume the time will be comparable, since the function needs to traverse the same number of elements to run its checks?
I'd also like to look into query selector complexity. I've been using querySelectorAll multiple times in a row checking for different attributes, but I'm guessing that's not going to scale well. I imagine running it once with a more complex query selector and then branching my logic from there would be more performant, but I wonder if there may be other tradeoffs?

When there is a parent element containing all the queried elements, then
const parentElement = document.querySelector('#parentElementId');
const childElements = parentElement.querySelectorAll('.selector');
is about ten times faster than using
const childElements = document.querySelectorAll('.selector');
Thanks for the tip!