Franck Pachot


Isolation Levels - part I: Introduction

Isolation level in SQL databases is a complex topic that developers often overlook, opting instead to stick with their database default settings. Although this choice is valid, it is crucial to know and understand the isolation level you are using because it defines the measures that you need to implement in your application to prevent race condition anomalies in a multi-user system.

Failure to understand isolation levels can lead to your code not delivering the expected performance and potentially corrupting the database.

Moreover, race condition anomalies are nearly impossible to cover in integration testing and can be challenging to detect and troubleshoot in production. Therefore, it is essential to address this concern from the design phase.

The issue that isolation levels aim to address is not specific to databases. When a transaction involves multiple reads and writes on the same data, there is a risk of corruption if another session changes the database state between your operations.

An example

When designing a booking system for flights, hotels, or meeting rooms, the system must check the current state of bookings before making a reservation, to verify that the seat you want is still available. If the state of the bookings changes between that check and the completion of your transaction, you may end up with a double booking, where two people have reserved the same seat. The booking process must therefore be designed to prevent this from happening.
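The racy read-check-write sequence described above can be sketched as follows. This is a minimal illustration using Python with SQLite; the `seats` table and `book_seat` function are hypothetical, not part of any real booking system:

```python
import sqlite3

# Illustrative schema: one row per seat, booked_by is NULL while free.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE seats (seat_no TEXT PRIMARY KEY, booked_by TEXT)")
conn.execute("INSERT INTO seats VALUES ('12A', NULL)")

def book_seat(conn, seat_no, passenger):
    # Step 1: read the current state -- is the seat free?
    row = conn.execute(
        "SELECT booked_by FROM seats WHERE seat_no = ?", (seat_no,)
    ).fetchone()
    if row is None or row[0] is not None:
        return False  # seat unknown or already taken
    # Step 2: write, based on what was read in step 1.
    # If another session books the seat between steps 1 and 2,
    # this blindly overwrites it: a double booking.
    conn.execute(
        "UPDATE seats SET booked_by = ? WHERE seat_no = ?",
        (passenger, seat_no),
    )
    conn.commit()
    return True

print(book_seat(conn, "12A", "Alice"))  # True
print(book_seat(conn, "12A", "Bob"))    # False: the check sees Alice's booking
```

Run serially, the check works; the anomaly appears only when two sessions interleave between the `SELECT` and the `UPDATE`, which is exactly what isolation levels govern.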

Preventing anomalies

There are two ways to prevent all anomalies and ensure that writes won't corrupt the database.

Serialized transactions

The first and most straightforward (but not practical) form of anomaly protection would be to have the application issue transactions to the database serially. By ensuring that one transaction starts only after the previous one ends, each transaction reads and writes a consistent state.

However, this approach limits the scalability of your system because throughput can never exceed one transaction at a time. No database does this, except some embedded databases used by a single thread. It is mentioned here only to acknowledge that this brute-force approach would solve the problem, and because it helps to understand the term "serializable".
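As a toy illustration of this brute-force approach (not how any real database implements it), a single global lock in the application forces every transaction to run alone; the `deposit` transaction and its names are invented for the sketch:

```python
import threading

# One global lock: at most one "transaction" runs at a time.
db_lock = threading.Lock()
balance = {"value": 0}

def deposit(amount):
    # A read-modify-write transaction, executed serially under the lock.
    with db_lock:
        current = balance["value"]           # read a consistent state
        balance["value"] = current + amount  # write based on that read

threads = [
    threading.Thread(target=lambda: [deposit(1) for _ in range(1000)])
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance["value"])  # 4000: no lost updates, but also no concurrency
```

Every transaction queues behind `db_lock`, so no read-modify-write can interleave with another; correctness is guaranteed at the cost of all parallelism.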

Serializable transactions

The other method to maintain database consistency is to permit multiple transactions to read and write separate pieces of data simultaneously, but to disallow concurrent reads and writes that conflict on the same data. Tracking reads and writes in order to reject concurrent transactions whose operations conflict (i.e., those that cannot behave as serialized transactions) incurs a slight performance overhead. This is the serializable isolation level: the gold standard for isolation, which behaves the same as serialized execution, i.e., as a lay user would intuitively expect, without any anomalies. Along the same lines, as the conditions on which operations may run concurrently are relaxed, we get more performant isolation levels, but at the cost of some well-defined anomalies (i.e., behavior that is unintuitive to the layperson).
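A small demonstration of conflicting operations being disallowed rather than interleaved: SQLite takes the simple route of permitting only one write transaction at a time, so a second session's conflicting write transaction is refused while the first is open. The file path and `seats` table below are illustrative:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "bookings.db")
# Two connections act as two concurrent sessions; isolation_level=None
# gives us explicit BEGIN/COMMIT, and timeout bounds the lock wait.
c1 = sqlite3.connect(path, timeout=0.1, isolation_level=None)
c2 = sqlite3.connect(path, timeout=0.1, isolation_level=None)
c1.execute("CREATE TABLE seats (seat_no TEXT PRIMARY KEY, booked_by TEXT)")
c1.execute("INSERT INTO seats VALUES ('12A', NULL)")

# Session 1 starts a write transaction and holds it open...
c1.execute("BEGIN IMMEDIATE")
c1.execute("UPDATE seats SET booked_by = 'Alice' WHERE seat_no = '12A'")

# ...so session 2's conflicting write transaction is rejected,
# not allowed to interleave with session 1's read-write sequence.
conflict = False
try:
    c2.execute("BEGIN IMMEDIATE")
except sqlite3.OperationalError:  # "database is locked"
    conflict = True
c1.execute("COMMIT")

print(conflict)  # True
```

Databases with finer-grained concurrency control track conflicts per row or per predicate instead of locking the whole database, but the principle is the same: operations that cannot behave as serialized transactions are not allowed to proceed together.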

Race conditions

Proper design is crucial, especially because most Continuous Integration/Continuous Deployment (CI/CD) tests run on distinct datasets and at different times, so they rarely detect race condition anomalies. Once your system is in production with multiple users and multi-threaded applications, race conditions become a significant concern, and their consequences may not be detected immediately if they corrupt data.

The next part will give a summary of isolation levels in MVCC (Multi-Version Concurrency Control) databases.
