
Removing duplicates in an Array of Objects in JS with Sets

Marina Mosti on February 04, 2019

The other day at work I was faced with what I think is a rather common problem when dealing with data coming from an API. I was getting from my as...
 

Without getting into cryptic one-liners, there's a pretty straightforward linear-time solution as well.


const seen = new Set();
const arr = [
  { id: 1, name: "test1" },
  { id: 2, name: "test2" },
  { id: 2, name: "test3" },
  { id: 3, name: "test4" },
  { id: 4, name: "test5" },
  { id: 5, name: "test6" },
  { id: 5, name: "test7" },
  { id: 6, name: "test8" }
];

const filteredArr = arr.filter(el => {
  const duplicate = seen.has(el.id);
  seen.add(el.id);
  return !duplicate;
});
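
For the sample data above, this keeps the first object seen for each id, so the expected result (a quick check, not part of the original snippet) would be:

console.log(filteredArr);
// [
//   { id: 1, name: "test1" },
//   { id: 2, name: "test2" },
//   { id: 3, name: "test4" },
//   { id: 4, name: "test5" },
//   { id: 5, name: "test6" },
//   { id: 6, name: "test8" }
// ]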
 

Hey Matt, nice solution! Yeah, all roads lead to Rome :)

 

Well... yes and no.

I feel it's important to point out that the OP's solution loops over the whole array twice. I know there are times to make a trade-off between performance and readability, but I don't feel this needs to be one of those times :)
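
For reference, a two-pass version along the lines the article's title suggests might look roughly like this (a hypothetical sketch, not the article's actual code): first collect the unique ids with a Set, then walk the array again to look each object up.

const uniqueIds = [...new Set(arr.map(el => el.id))];             // pass 1: collect unique ids
const unique = uniqueIds.map(id => arr.find(el => el.id === id)); // pass 2: find each object again

The single-pass filter above avoids that second walk entirely.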

 

Interesting challenge. I have three very similar solutions. They are all based on the same principle of reducing the array into a key-value structure and re-creating the array from the values only.

Approach 1: Classical Reducer

(reducer maintains immutability)

/**
 * classic reducer
 **/
const uniqByProp = prop => arr =>
  Object.values(
    arr.reduce(
      (acc, item) =>
        item && item[prop]
          ? { ...acc, [item[prop]]: item } // just include items with the prop
          : acc,
      {}
    )
  );

// usage:

const uniqueById = uniqByProp("id");

const unifiedArray = uniqueById(arrayWithDuplicates);

Depending on your array size, this approach might easily become a bottleneck in your app. A more performant option is to mutate the accumulator object directly in the reducer.

Approach 2: Reducer with object-mutation

/**
 * using object mutation
 **/
const uniqByProp = prop => arr =>
  Object.values(
    arr.reduce(
      (acc, item) => (
        item && item[prop] && (acc[item[prop]] = item), acc
      ), // using object mutation (faster)
      {}
    )
  );

// usage (same as above):

const uniqueById = uniqByProp("id");

const unifiedArray = uniqueById(arrayWithDuplicates);

The larger your input array, the more performance gain you'll have from the second approach. In my benchmark (for an input array of length 500 - with a duplicate element probability of 0.5), the second approach is ~440 x as fast as the first approach.

Approach 3: Using ES6 Map

My favorite approach uses a Map instead of an object to accumulate the elements. This has the advantage of preserving the ordering of the original array:

/**
 * using ES6 Map
 **/
const uniqByProp_map = prop => arr =>
  Array.from(
    arr
      .reduce(
        (acc, item) => (
          item && item[prop] && acc.set(item[prop], item),
          acc
        ), // using map (preserves ordering)
        new Map()
      )
      .values()
  );

// usage (still the same):

const uniqueById = uniqByProp("id");

const unifiedArray = uniqueById(arrayWithDuplicates);

Using the same benchmark conditions as above, this approach is ~2 x as fast as the second approach and ~900 x as fast as the first approach.

Conclusion

Even though all three approaches look quite similar, they have surprisingly different performance footprints.

You'll find the benchmarks I used here: jsperf.com/uniq-by-prop

 

Hi! Another one-line path to Rome, from France:


arr = arr.filter((power, toThe, yellowVests) => yellowVests.map(updateDemocracy => updateDemocracy['id']).indexOf(power['id']) === toThe)

console.log(arr)


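For readers puzzled by the parameter names: with conventional names the same one-liner reads like this (same logic, only renamed). Note that it rebuilds the id array on every filter callback, so it's quadratic rather than linear.

arr = arr.filter((item, index, array) => array.map(el => el.id).indexOf(item.id) === index);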

 
 
 

Here's another possibility using the Map class constructor and values method:

const arr = [
  { id: 1, name: "test1" },
  { id: 2, name: "test2" },
  { id: 2, name: "test3" },
  { id: 3, name: "test4" },
  { id: 4, name: "test5" },
  { id: 5, name: "test6" },
  { id: 5, name: "test7" },
  { id: 6, name: "test8" }
];

const uniqueObjects = [...new Map(arr.map(item => [item.id, item])).values()]
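
With the arr above, the Map keeps the last object written for each id (later pairs overwrite earlier ones), so the expected result is:

console.log(uniqueObjects);
// [
//   { id: 1, name: "test1" },
//   { id: 2, name: "test3" },
//   { id: 3, name: "test4" },
//   { id: 4, name: "test5" },
//   { id: 5, name: "test7" },
//   { id: 6, name: "test8" }
// ]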

 

I also have this solution O.o

const dupAddress = [
    {
        id: 1,
        name: 'Istanbul'
    },
    {
        id: 2,
        name: 'Kocaeli'
    },
    {
        id: 3,
        name: 'Ankara'
    },
    {
        id: 1,
        name: 'Istanbul'
    }
]

let addresses = [...new Set([...dupAddress.map(address => dupAddress[address.id])])]

console.log(addresses)

But this only works with address.id; it doesn't work with address.name.

Really, why doesn't it work like this?

let addresses = [...new Set([...dupAddress.map(address => dupAddress[address.name])])]
 

Well, you're passing address.id as an index into the dupAddress array, and that's just not going to work because the id !== index. Try mapping to address.id or address.name directly, without indexing back into the array.
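
In other words, map straight to the property and let the Set dedupe the primitive values (a sketch of the suggested fix; note this gives you unique ids or names, not the full objects):

let uniqueIds = [...new Set(dupAddress.map(address => address.id))];     // [1, 2, 3]
let uniqueNames = [...new Set(dupAddress.map(address => address.name))]; // ["Istanbul", "Kocaeli", "Ankara"]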

 

Okay, I tried it; it didn't work, and actually I was wondering why that didn't work. Thanks.

 

I found myself with this issue recently, and though I've always used the same code to find distinct primitives (from before we had the Set object), this case required me to adhere to the C# API, where you pass in a comparison function T -> T -> boolean. This solution felt relatively clean, though obviously not linear time.
github.com/jreina/ShittyLINQ.js/bl...
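
A distinct-with-comparator helper in that style might look roughly like this (a hypothetical sketch, not the linked library's actual code):

// keep an element only if no previously kept element compares equal to it
const distinct = (arr, equals) =>
  arr.reduce(
    (kept, item) => (kept.some(other => equals(item, other)) ? kept : [...kept, item]),
    []
  );

// usage: compare objects by id (O(n²), hence "not in linear time")
const unique = distinct(arr, (a, b) => a.id === b.id);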

 

Thanks for the code snippet, Marina... I'm getting two errors when I attempt to use it. The first is "'Set' only refers to a type, but is being used as a value here".

When I use the "REDUCE" example I get the following in the console:

Maximum call stack size exceeded
at Array.reduce

 
 

Thanks! This is exactly what I was looking for

 