Konrad Lisiczyński

Taming network with redux-requests, part 4 - Automatic normalisation

In the previous part of this series we discussed the problem of race conditions and how request aborts can prevent them.

In this part we will cover the concept of normalisation and how it can be automated with redux-requests.

What is normalisation?

Normalisation is a way of storing data so that information is not duplicated. When you edit something, you update it in only one place, and no synchronisation is required. This is, for instance, how data is commonly stored in SQL databases like PostgreSQL.

The opposite concept is denormalisation, which stores data in a form that is already convenient to consume. This can improve read performance, at the cost of duplicating information. It is commonly used in NoSQL databases like MongoDB or Cassandra.

Normalisation in Redux

Normalisation is not only relevant to databases. It can be used in any data context, including the stores of Redux apps. But why would we do that? Imagine you have many API endpoints, like /books, /favourite-books, /books/{id}, /author/{id}/books and so on. Now, imagine you use those endpoints at the same time and they contain books with the same ids. What would you do to update a book title? You would need to update it in all relevant places, which would be time-consuming and error-prone. This is the duplicated-information problem mentioned above, which affects denormalised data.
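To make the duplication concrete, here is a minimal sketch of a denormalised store; the slice names are made up for illustration:

// Hypothetical denormalised state: the same book lives in several slices.
const state = {
  books: [{ id: '1', title: 'old title' }],
  favouriteBooks: [{ id: '1', title: 'old title' }],
  bookDetail: { id: '1', title: 'old title' },
};
// Renaming book '1' means updating all three slices in separate reducers.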

So what could we do? Well, we could normalise our data! How? The most common way in the Redux world is to use normalizr, normalising data before saving it in a reducer and denormalising it back inside selectors. The problem is that this has to be done manually. What if there were another, automated way? It turns out there are already ways to have your data normalised automatically. In the GraphQL world, projects like Apollo Client or Relay support automatic normalisation, utilising the static typing of queries and mutations. But what about REST and other ways of communicating with servers? Why should only GraphQL developers have this luxury? Well, not anymore!
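For reference, here is a minimal normalizr sketch; the entity name 'books' is an arbitrary choice:

import { normalize, schema } from 'normalizr';

// Define an entity identified by its id key (normalizr's default).
const book = new schema.Entity('books');

const { result, entities } = normalize(
  [{ id: '1', title: 'title 1' }, { id: '2', title: 'title 2' }],
  [book],
);
// result is ['1', '2']
// entities.books is { '1': { id: '1', title: 'title 1' }, '2': { id: '2', title: 'title 2' } }

Denormalising inside selectors is then done with normalizr's denormalize function, and all of this is manual work, which is exactly what automatic normalisation removes.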

Automatic normalisation in redux-requests

If it is possible for GraphQL, why not for other ways of communicating with servers? We don't have static types for REST, but why not utilise dynamic types? When you fetch something from a REST endpoint for the first time, you can remember its structure and compute the types yourself! This is the approach used in redux-requests, and the outcome is effectively identical to Apollo or Relay.

Now, imagine we have two queries:

const fetchBooks = () => ({
  type: FETCH_BOOKS,
  request: { url: '/books' },
  meta: { normalize: true },
});

const fetchBook = id => ({
  type: FETCH_BOOK,
  request: { url: `/books/${id}` },
  meta: { normalize: true },
});

and getQuery returns the following data:

import { getQuery } from '@redux-requests/core';

const booksQuery = getQuery(state, { type: 'FETCH_BOOKS' });
// booksQuery.data is [{ id: '1', title: 'title 1'}, { id: '2', title: 'title 2'}]

const bookDetailQuery = getQuery(state, { type: 'FETCH_BOOK' });
// bookDetailQuery.data is { id: '1', title: 'title 1'}

Now, imagine you have a mutation to update a book title. Normally you would need to do something like this:

const updateBookTitle = (id, newTitle) => ({
  type: UPDATE_BOOK_TITLE,
  request: { url: `/books/${id}`, method: 'PATCH', data: { newTitle } },
  meta: {
    mutations: {
      FETCH_BOOKS: (data, mutationData) => data.map(v => (v.id === id ? mutationData : v)),
      FETCH_BOOK: (data, mutationData) => (data.id === id ? mutationData : data),
    },
  },
});

assuming mutationData is equal to the book with the updated title.

Now, because our queries are normalised, we can also use normalisation in the mutation:

const updateBookTitle = (id, newTitle) => ({
  type: UPDATE_BOOK_TITLE,
  request: { url: `/books/${id}`, method: 'PATCH', data: { newTitle } },
  meta: { normalize: true },
});

No manual mutations! How does it work? By default, all objects with an id key are organised by their ids. When you use normalize: true, any object with an id key will be normalised, which simply means stored by its id. If there is already a matching object with the same id, the new one will be deeply merged with the one already in the state. So, if the server response data for UPDATE_BOOK_TITLE is just { id: '1', title: 'new title' }, the library will automatically figure out that the title of the object with id: '1' should be updated.
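Continuing the getQuery example from above, after the mutation succeeds both queries reflect the change, with no manual mutations involved:

const booksQuery = getQuery(state, { type: 'FETCH_BOOKS' });
// booksQuery.data is [{ id: '1', title: 'new title' }, { id: '2', title: 'title 2' }]

const bookDetailQuery = getQuery(state, { type: 'FETCH_BOOK' });
// bookDetailQuery.data is { id: '1', title: 'new title' }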

It also works with nested objects with ids, no matter how deeply they are nested. If an object with an id contains other objects with ids, those will be normalised separately and the parent object will hold just references to them.
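For instance, imagine a book response with a nested author; the author field is a made-up example, not part of the API used above:

const response = {
  id: '1',
  title: 'title 1',
  author: { id: 'a1', name: 'John Doe' },
};
// The author is normalised as a separate object and the book keeps only a
// reference to it, so any later normalised response containing
// { id: 'a1', name: 'Jane Doe' } would update the author everywhere.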

Required conditions

In the GraphQL world, automatic normalisation in Apollo and Relay just works thanks to enforced static types. In order to make automatic normalisation work for REST, for example, the following conditions must be met:

  1. you must have a standardised way to identify your objects; usually this is just the id key
  2. ids must be unique across the whole app, not only within object types; if they are not, you will need to append something to them; the same has to be done in the GraphQL world, usually by adding __typename
  3. objects with the same id should have a consistent structure; if an object like book has a title key in one query, it should be title in the others, not suddenly name

Two functions which can be passed to handleRequests can help you meet those requirements: shouldObjectBeNormalized and getNormalisationObjectKey.

shouldObjectBeNormalized can help you with the 1st point. If, for instance, you identify objects differently, say by an _id key, then you can pass shouldObjectBeNormalized: obj => obj._id !== undefined to handleRequests.
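As a sketch, assuming objects are identified by _id, the setup could look like this; the driver configuration is omitted, and getNormalisationObjectKey is included on the assumption that the default key is obj.id:

import { handleRequests } from '@redux-requests/core';

handleRequests({
  // ...driver and other options
  shouldObjectBeNormalized: obj => obj._id !== undefined,
  getNormalisationObjectKey: obj => obj._id,
});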

getNormalisationObjectKey allows you to meet the 2nd requirement. For example, if your ids are unique within object types but not across the whole app, you could use getNormalisationObjectKey: obj => obj.id + obj.type or something similar. If that is not possible, you could compute a suffix yourself, for example:

const getType = obj => {
  if (obj.bookTitle) {
    return 'book';
  }

  if (obj.surname) {
    return 'user';
  }

  throw new Error('only book and user objects are supported');
};

handleRequests({
  // ...other options
  getNormalisationObjectKey: obj => obj.id + getType(obj),
});

Point 3 should always be met; if not, you really should ask your backend developers to keep things standardised and consistent. As a last resort, you can amend the response with meta.getData.
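For example, here is a sketch assuming a single endpoint returns name where the others return title; getData amends the response before it is stored:

const fetchBook = id => ({
  type: FETCH_BOOK,
  request: { url: `/books/${id}` },
  meta: {
    normalize: true,
    // align the inconsistent key before normalisation kicks in
    getData: data => ({ id: data.id, title: data.name }),
  },
});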

Normalisation of arrays

Unfortunately this does not mean you will never use meta.mutations again. Some updates still need to be done manually in the usual way, namely adding items to and removing items from arrays. Why? Imagine a REMOVE_BOOK mutation. This book could be present in many queries, and the library cannot know from which queries you would like to remove it. The same applies to ADD_BOOK: the library cannot know which query a book should be added to, or even at which array index. The same goes for an action like SORT_BOOKS. This problem affects only top-level arrays though. For instance, if you have a book with some id and another key like likedByUsers, then if you return a new book with an updated likedByUsers list, this will work automatically again.
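As a sketch, a REMOVE_BOOK mutation could update the FETCH_BOOKS array manually; the endpoint and method here are assumptions:

const removeBook = id => ({
  type: 'REMOVE_BOOK',
  request: { url: `/books/${id}`, method: 'DELETE' },
  meta: {
    mutations: {
      // top-level arrays must still be updated by hand
      FETCH_BOOKS: data => data.filter(book => book.id !== id),
    },
  },
});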

Should we normalise all data?

Of course, this doesn't mean we should normalise all data; it just depends. For example, if you have objects which will never be updated, normalisation won't give you anything, so perhaps it would be better to keep them denormalised.

What next?

In the next part of this tutorial we will cover using GraphQL together with redux-requests. We will also check how normalisation works for GraphQL, and you will see that it is indeed used just like in Apollo.
