
Srikanth


🔄 Tradeoff between Normalisation and Denormalisation when Scaling Your API 📈🔁

This is something I have managed to crystallise for myself over time. As a Backend Engineer, I often design relational database models, and modelling the database well is a crucial step in getting the API right in any modern application.

There is a delicate trade-off between normalisation and denormalisation while modelling a relational database. It's a crucial consideration that can significantly impact the performance and efficiency of a system as it grows. 🌱💡

This trade-off took me some time to build an intuition for, and it was rarely discussed in the early part of my career.

Normalisation aims to eliminate data redundancy and ensure data integrity by breaking down data into separate tables and establishing relationships through keys. On the other hand, denormalisation involves combining related data into a single table to improve query performance and reduce complexity.

When scaling your API, finding the right balance between these two approaches becomes critical. Let's take a look at how normalisation and denormalisation can impact query performance in the context of a messaging application:

In a normalised database design for a messaging app, you might have separate tables for users, conversations, and messages. Each message would reference the conversation it belongs to and the user who sent it. This design ensures data integrity and reduces redundancy, making it easier to update and modify user or conversation details. However, when retrieving a conversation with all its messages, multiple table joins are required, potentially impacting query performance as the dataset grows.
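To make the join cost concrete, here is a minimal sketch of that normalised schema using SQLite. The table and column names (`users`, `conversations`, `messages`, `sender_id`, etc.) are illustrative assumptions, not a prescribed design:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE conversations (
    id    INTEGER PRIMARY KEY,
    title TEXT NOT NULL
);
CREATE TABLE messages (
    id              INTEGER PRIMARY KEY,
    conversation_id INTEGER NOT NULL REFERENCES conversations(id),
    sender_id       INTEGER NOT NULL REFERENCES users(id),
    body            TEXT NOT NULL
);
""")

conn.execute("INSERT INTO users VALUES (1, 'Alice'), (2, 'Bob')")
conn.execute("INSERT INTO conversations VALUES (1, 'Project chat')")
conn.executemany(
    "INSERT INTO messages (conversation_id, sender_id, body) VALUES (?, ?, ?)",
    [(1, 1, "Hi Bob"), (1, 2, "Hi Alice")],
)

# Fetching one conversation with its messages needs a three-way join.
rows = conn.execute("""
    SELECT c.title, u.name, m.body
    FROM messages m
    JOIN conversations c ON c.id = m.conversation_id
    JOIN users u         ON u.id = m.sender_id
    WHERE m.conversation_id = ?
    ORDER BY m.id
""", (1,)).fetchall()
print(rows)
# [('Project chat', 'Alice', 'Hi Bob'), ('Project chat', 'Bob', 'Hi Alice')]
```

Note how updating a user's name touches exactly one row in `users`; the joins are the price paid at read time.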

On the other hand, denormalisation in the same messaging application could involve duplicating relevant data. For example, you might store denormalised conversations with embedded messages in a single table. This design eliminates the need for complex joins, allowing for faster retrieval of conversations and their associated messages. However, it increases the redundancy of data, potentially leading to data inconsistencies if not managed carefully.
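A sketch of the denormalised variant, again with illustrative names: each conversation row embeds its messages (sender names duplicated) as a JSON document, so a single-table lookup replaces the three-way join.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE conversations_denorm (
    id       INTEGER PRIMARY KEY,
    title    TEXT NOT NULL,
    messages TEXT NOT NULL  -- JSON array of {sender, body} objects
)
""")
conn.execute(
    "INSERT INTO conversations_denorm VALUES (?, ?, ?)",
    (1, "Project chat", json.dumps([
        {"sender": "Alice", "body": "Hi Bob"},
        {"sender": "Bob",   "body": "Hi Alice"},
    ])),
)

# Retrieval is a single primary-key lookup, with no joins.
title, raw = conn.execute(
    "SELECT title, messages FROM conversations_denorm WHERE id = ?", (1,)
).fetchone()
messages = json.loads(raw)
print(title, messages[0]["sender"])
# Project chat Alice

# The cost: if Alice renames herself, every embedded copy of her name
# must be found and rewritten, or the data drifts out of sync.
```

The read path is trivially fast, but the write path now owns the consistency problem the normalised design solved for free.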

The choice for the messaging application depends on various factors. If data consistency is critical and frequent updates are expected, a normalised approach ensures data integrity. However, if query performance and retrieval speed are paramount, denormalisation can provide a performance boost by reducing the need for joins and simplifying queries.

Each approach comes with trade-offs, and finding the right balance for your messaging application depends on the specific needs and priorities of your system.

What are your thoughts on normalisation and denormalisation in the context of API scaling? Share your experiences and insights in the comments below. Let's learn from each other and navigate the trade-offs together! 🚀💬
