In addition to built-in synchronous replication to a Raft quorum, which is mandatory for data resiliency and high availability, YugabyteDB also provides built-in asynchronous replication for purposes such as disaster recovery, remote read replicas, and change data capture.
Traditional databases often combine synchronous, quorum-based, asynchronous, and eventually consistent replication to complement their crash recovery methods. These mechanisms were added over the years on top of a monolithic design. Transaction logs, which protect changes made in memory from being lost in a crash, were extended to cover media recovery by archiving them. The same log stream is also shipped to another server to update a standby database asynchronously, reducing the recovery time objective (RTO) in case of a disaster. Some synchronous commits are added on top to minimize the recovery point objective (RPO), but only if there are enough standby databases that the primary is not blocked when one becomes inaccessible.
Cloud-native databases are a step forward from traditional databases, which depend on disaster recovery methods to achieve high availability. They are tailored for modern infrastructure and distinguish between two types of failures. First, they must be resilient to transient failures, such as network partitions or instance crashes, without requiring failover or recovery. Second, they should be protected from regional disasters that do necessitate recovery, potentially involving human decisions, with minimal data loss.
YugabyteDB guarantees resilience with synchronous replication to a quorum and fault tolerance to zone failures, or even region failures when the latency between regions is not too high and at least three regions are available within that distance to form a quorum. This was discussed in the previous posts of this series. Additional asynchronous replication can be set up between two clusters to protect data across two data centers or distant regions.
One cluster operates as the active database to ensure transaction consistency, while the other serves as a standby and is only used for reads until a switchover or failover reverses their roles. This setup resembles traditional database streaming replication but is more scalable because each tablet has its own replication stream. Consistency is maintained through a safe-time mechanism that ensures timeline consistency.
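The safe-time idea can be illustrated with a minimal sketch: since each tablet replicates independently, the standby computes a read time at which every tablet has applied all earlier changes, and serves reads as of that time. This is only an illustration of the general mechanism, not YugabyteDB's actual implementation; all names below are hypothetical.

```python
# Minimal sketch of a safe-time mechanism for timeline-consistent reads
# on an asynchronous standby. Illustrative only; names are hypothetical.

class TabletStream:
    """Tracks the latest change time applied on the standby for one tablet."""
    def __init__(self, tablet_id):
        self.tablet_id = tablet_id
        self.last_applied_time = 0  # hybrid-time-like logical counter

    def apply(self, change_time):
        # Changes for a single tablet arrive in order on its own stream.
        assert change_time > self.last_applied_time
        self.last_applied_time = change_time

def safe_read_time(streams):
    """Reads at or below this time see a consistent snapshot:
    every tablet has applied all changes up to it."""
    return min(s.last_applied_time for s in streams)

# Two tablets replicating independently can be at different points.
a, b = TabletStream("tablet-a"), TabletStream("tablet-b")
a.apply(100); a.apply(120)
b.apply(110)
print(safe_read_time([a, b]))  # 110: tablet-a is ahead, reads stop at 110
```

Reads on the standby then see the database as of time 110, a consistent point on the primary's timeline, even though tablet-a has already applied later changes.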
This configuration enables disaster recovery between two distant regions whose latency is too high to extend a synchronous cluster across them. Each region is designed to withstand common failures and has its own Raft replication. Asynchronous replication between the two clusters protects against fires, floods, or earthquakes, and is necessary when distance and latency rule out synchronous replication.
Cross-cluster asynchronous replication can also be used for Active-Active deployments with Last-Write-Wins conflict resolution. All these features are built on the Raft logs, between the query layer and the distributed storage layer.
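Last-Write-Wins resolution can be sketched as follows: when the same key is written on both active clusters, the write with the higher timestamp wins. This is a hypothetical illustration of the technique in general, not YugabyteDB's internal code.

```python
# Hypothetical sketch of Last-Write-Wins conflict resolution between two
# active clusters. Illustrative only; not YugabyteDB's implementation.

def lww_merge(local, remote):
    """Merge two {key: (timestamp, value)} maps, keeping the
    highest-timestamp write for each key."""
    merged = dict(local)
    for key, (ts, value) in remote.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, value)
    return merged

# row1 was written on both sides; the later write (timestamp 150) wins.
cluster_a = {"row1": (100, "A-v1"), "row2": (200, "A-v2")}
cluster_b = {"row1": (150, "B-v1"), "row3": (120, "B-v3")}
print(lww_merge(cluster_a, cluster_b))
```

Only row1 is actually in conflict; row2 and row3 pass through unchanged, which is why this scheme keeps Active-Active replication simple at the cost of silently discarding the losing write.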
Raft replication can be extended in a cluster with Read Replicas, essentially Raft observers. These read replicas stay synchronized using Raft logs but do not participate in the write quorum, so they do not affect write latency. However, they cannot be used for disaster recovery. Their primary purpose is to offload reporting activity to remote regions, where the local replicas can provide timeline-consistent reads.
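Why observers do not affect write latency can be shown with a small sketch of Raft's commit rule: an entry is committed once a majority of voters have replicated it, and observers are simply not counted. This is a simplified illustration with hypothetical names, not YugabyteDB's code.

```python
# Sketch: Raft observers (read replicas) receive log entries but do not
# count toward the write quorum. Illustrative only; hypothetical names.

def commit_index(voter_match_indexes, observer_match_indexes):
    """An entry is committed once a majority of VOTERS have replicated it.
    Observers are ignored, so a lagging observer never delays writes."""
    votes = sorted(voter_match_indexes, reverse=True)
    majority = len(votes) // 2 + 1
    return votes[majority - 1]

# Three voters and one remote observer lagging far behind at index 3:
# the commit index is 9, unaffected by the observer.
print(commit_index([10, 9, 7], observer_match_indexes=[3]))  # 9
```

The flip side, as noted above, is that an observer cannot be promoted for disaster recovery: it never participates in the quorum, so it carries no guarantee of having the latest committed data.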
To complete the picture, the built-in replication can also generate Change Data Capture (CDC) records for auditing or for replicating to other systems.
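A consumer of such change records might look like the following sketch. The record fields here are illustrative assumptions for the example, not YugabyteDB's actual CDC record schema.

```python
# Hypothetical shape of change records a CDC consumer might process,
# e.g. to feed an audit trail. Field names are illustrative assumptions.

def handle_change(record, audit_log):
    op = record["op"]  # e.g. "INSERT", "UPDATE", or "DELETE"
    audit_log.append(
        f'{record["time"]} {op} {record["table"]} key={record["key"]}'
    )

audit = []
for rec in [
    {"time": 1, "op": "INSERT", "table": "orders", "key": 42},
    {"time": 2, "op": "UPDATE", "table": "orders", "key": 42},
]:
    handle_change(rec, audit)
print(audit)  # ['1 INSERT orders key=42', '2 UPDATE orders key=42']
```

The same stream could instead be forwarded to a message queue or a search index; the point is that the change feed comes from the replication machinery itself rather than from triggers in the query layer.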
This concludes the series of ten facts about YugabyteDB, aimed at debunking common misconceptions and clarifying capabilities that are often misrepresented by competitors. We’ve highlighted its PostgreSQL compatibility, advanced isolation levels, fault-tolerant Raft architecture, linear scalability, auto-sharding, performance optimizations, RocksDB-powered flexibility, and fast recovery features, demonstrating why YugabyteDB is a leading choice among distributed SQL databases. YugabyteDB is open source.