DEV Community

Discussion on: Challenge: Write your worst program

RevanProdigalKnight • Edited

Once, about a year ago (maybe less), I was writing a GreaseMonkey/TamperMonkey script to replace all occurrences of a given string in a web page, and I was treating the list of as-yet-unscanned nodes as a queue. This was before I learned that `Array.prototype.shift` is about as inefficient as you can get for removing elements from a large array in JS. Suffice it to say, I learned the hard way just how bad it gets:

```javascript
// `replacers`, `ignoreTags`, and `ignoreClasses` are defined elsewhere in the script.
const
  nodes = [document.body],
  res = [];

let
  scanned = 0,
  n;

// Treating `nodes` as a queue: shift() re-indexes the whole array on every call.
while (n = nodes.shift()) {
  scanned++;
  if (!n.scanned) {
    if (n.nodeType === Node.TEXT_NODE && !/^\s*$/.test(n.nodeValue)) {
      n.scanned = true;
      const repl = replacers.filter(r => r.willRun(n.data));
      if (repl.length) {
        res.push({ node: n, replacers: repl });
      }
    } else if (
      n.hasChildNodes() &&
      ignoreTags.indexOf(n.tagName.toLowerCase()) === -1 &&
      ignoreClasses.every(c => !n.classList.contains(c))
    ) {
      nodes.push(...n.childNodes);
    }
  }
}
```

With the amount of work that things like `replacers.filter(...)` were doing, even after deferring the actual replacement work until after the loop, scanning kangax.github.io/compat-table/es6 (which at the time had roughly 194,000 nodes, and has only grown since) took anywhere between 60 and 90 seconds on page load.

After switching from a queue-based traversal to a stack-based one (by replacing `nodes.shift()` with `nodes.pop()`), the same page took about 4 seconds to scan.
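To illustrate why the one-method swap mattered: `shift()` removes from the front and re-indexes every remaining element, so draining a queue this way is O(n²) overall, while `pop()` removes from the back in O(1). If FIFO order actually matters, a read-index "pointer queue" gets O(1) dequeues without `shift()`. This is a hypothetical micro-sketch, not code from the original script; `drainShift`, `drainPop`, and `drainPointer` are made-up names.

```javascript
// Hypothetical sketch (not from the original userscript) comparing three
// ways to drain an array. All return the number of elements processed.
const N = 50_000;
const make = () => Array.from({ length: N }, (_, i) => i + 1);

// Queue via shift(): each call re-indexes the rest of the array -> O(n^2) total.
function drainShift(arr) {
  let count = 0;
  while (arr.shift() !== undefined) count++;
  return count;
}

// Stack via pop(): each call is O(1) -> O(n) total, but LIFO order.
function drainPop(arr) {
  let count = 0;
  while (arr.pop() !== undefined) count++;
  return count;
}

// FIFO order preserved without shift(): advance a head index instead of mutating.
function drainPointer(arr) {
  let head = 0, count = 0;
  while (head < arr.length) {
    const n = arr[head++]; // O(1) dequeue
    count++;
  }
  return count;
}
```

For a DOM scan the visit order usually doesn't matter (every node gets visited either way), which is why the queue-to-stack swap was a safe fix here.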

I actually had an even worse-designed algorithm before this, where I attempted to use nothing but `Array.prototype` methods and ended up iterating the result arrays multiple times, so the function to scan the page had something like O(15n) runtime complexity. It was a very readable algorithm, though.
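The multi-pass trap is easy to fall into because chained `Array.prototype` methods read so well: each `filter`/`map` call walks its entire input array again, so one logical scan turns into several full passes. A minimal sketch of the pattern (hypothetical data and steps, not the author's original code):

```javascript
// Hypothetical illustration of the "readable but many-pass" style.
const words = ['foo', '', 'Bar', 'baz', '', 'Qux'];

// Multi-pass: three chained calls = three full traversals of the data.
const multiPass = words
  .filter(w => w.length > 0)       // pass 1: drop empty strings
  .map(w => w.toLowerCase())       // pass 2: normalize case
  .filter(w => w.startsWith('b')); // pass 3: keep matches

// Single pass: one loop applies all three steps per element.
const singlePass = [];
for (const w of words) {
  if (w.length === 0) continue;
  const lower = w.toLowerCase();
  if (lower.startsWith('b')) singlePass.push(lower);
}
// Both produce ['bar', 'baz'].
```

Both versions are O(n), so the constant factor (15 passes vs. 1) is what hurt, not the asymptotic class, which is exactly what "O(15n)" was poking fun at.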