Sean Welsh Brown

In-Place Algorithms: what are they, and why use them?

In the world of computer science, one of the most important aspects of algorithmic thinking to take into consideration is optimization, both of system memory and of runtime complexity.

Every additional loop run within a function, and every additional data structure allocated, increases that function's memory and runtime footprint. This may not be noticeable at a small scale and likely won't affect small applications, but the effect compounds, and it can and will affect large applications making thousands or millions of function calls.

A good example to demonstrate this effect is an in-place algorithm.

An in-place algorithm is an algorithm which transforms its input using no auxiliary data structure. Essentially, this means that if an in-place algorithm is written to take in an array as an argument, it must modify that array in its original place in memory; no extra arrays, objects, or pointers in memory may be used.

The strictest definition of an in-place algorithm requires a constant, predictable amount of extra memory (O(1) auxiliary space), although the limitations this imposes can be more restrictive than they are productive. Looser definitions allow a handful of auxiliary variables and function calls to aid in calculation and incrementation.
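To make the definition concrete, here's a minimal sketch of a classic in-place operation: reversing an array by swapping elements from both ends. (The reverseInPlace name is just for illustration; the only auxiliary storage is a pair of index variables.)

function reverseInPlace(a) {

  // Two index variables are the only extra storage used.
  let left = 0;
  let right = a.length - 1;

  while (left < right) {
    // Swap the elements at both ends, then move the indices inward.
    [a[left], a[right]] = [a[right], a[left]];
    left++;
    right--;
  }

  return a;
}

Calling reverseInPlace([1, 2, 3, 4]) rearranges the same array in memory into [4, 3, 2, 1]; no second array is ever allocated.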

The reason this matters is that allocating and freeing memory, particularly for arrays and objects, are relatively expensive operations. If we can modify an array in-place, without making additional and unnecessary copies, we achieve tangible savings in both memory and runtime.


Let's look at an example in context: a common interview algorithm in which we remove duplicates from a sorted array and then return the number of unique elements.

Say we're given the following array a as input in a JavaScript context:

const a = [0, 0, 1, 1, 2, 2, 3, 3]

We could try to solve this quickly using one of JavaScript's built-in array methods, includes(), along with modern ES6 syntax:

function removeDuplicates(a) {

  // Auxiliary array holding each unique element seen so far.
  const b = [];

  for (const element of a) {
    // includes() scans b linearly, so this approach is O(n^2) overall.
    if (!b.includes(element)) {
      b.push(element);
    }
  }

  return b.length;
}
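As a quick sanity check, using the array a defined above (the expected output is shown in the comment):

console.log(removeDuplicates(a)); // 4 (four unique values: 0, 1, 2, 3)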

This does return the correct count of unique elements: b ends up holding each value exactly once, and we return its length. However, the potential issue here is that we created a second array in memory, pushed elements onto it, and then left the original array in memory as well. It gets us the answer, but in an enterprise-level piece of software it would take up far more memory than we need, since the array being passed in could be hundreds or thousands of elements long.

Instead, we could rewrite this to modify the original passed-in array in-place, like so:

function removeDuplicates(a) {

  // i marks the index of the last unique element at the front of the array.
  let i = 0;

  for (let j = 1; j < a.length; j++) {
    // The array is sorted, so duplicates sit next to each other. When a[j]
    // is a new value, move it forward to sit just after the last unique one.
    if (a[i] !== a[j]) {
      i++;
      a[i] = a[j];
    }
  }

  // i is the index of the last unique element, so the count is i + 1.
  return i + 1;
}
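Running it on a fresh copy of the same input shows both the returned count and the in-place rearrangement (slice() here is only for display; the algorithm itself allocates nothing):

const nums = [0, 0, 1, 1, 2, 2, 3, 3];
const k = removeDuplicates(nums); // the array is modified in place

console.log(k);                // 4
console.log(nums.slice(0, k)); // [0, 1, 2, 3]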

In this case we've gone about the problem differently: instead of allocating a new array, pushing elements onto it, and then checking its length, we're overwriting duplicates in the original array in-place and returning the index of the last unique element plus one. This gives the same result with a smaller memory footprint.

To reiterate, this may seem like an unnecessary refactoring of logic to achieve the same result, but the importance comes from understanding the fundamental computer science at play. The algorithm is now language-agnostic (it can be reproduced without relying on built-in functions) and uses only a constant amount of extra storage, something that matters in a larger-scale application.

Writing in-place algorithms comes up in numerous programming interview questions for this very reason. It's important to understand not just how to get the answer, but also why memory footprint and runtime complexity make a difference.


If you've come this far, thank you for reading! I'll be writing more blog posts like this one exploring Computer Science fundamentals as I learn them myself. :)

Top comments (5)

MxL Devs

Some languages like Ruby come with "in-place" variations of a method. Arrays, for example, have tons of common methods: sort, reverse, etc. I could say array.sort, which returns a new array, or I could just say array.sort!, which simply modifies it in-place. Initially I never understood the difference, but it made more sense after reading about it.

Wonder if Javascript has something similar.

Sean Welsh Brown

This is a really interesting point! It makes sense for higher level languages like Ruby that focus on "developer friendliness" to have options like that built in rather than coding them in yourself, and it's interesting to dig a bit deeper into the concept and see why having the option is important in the first place.

Thanks for the comment!

Hector Minaya

Not everything is as it seems: in Ruby you can add elements dynamically to an array, which in reality just creates a new array (based on the new size) and may keep the old one around until the garbage collector disposes of it.

Even JS has the ability to add/remove elements from an Array, because it's actually not an Array. It's a dictionary with an integer key and an object as its value.

Fredo

In JavaScript, array.sort() and array.reverse() are in-place. The out-of-place variants would be, respectively, toSorted() and toReversed().

Source: MDN
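For example (a small illustration; toSorted() needs a relatively recent runtime, ES2023+):

const values = [3, 1, 2];

const copy = values.toSorted(); // [1, 2, 3]; values is untouched
values.sort();                  // values itself is now [1, 2, 3]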

Hector Minaya

Very practical example of in-place algorithms. 👍