For applications that run in a single region but need a safety net, you can adopt the Active-Active Single-Master pattern, where you set up two clusters in different regions. One cluster actively handles all reads and writes and replicates the data to the other cluster asynchronously. The second cluster can be promoted to primary in case of a failure. This setup is useful when you have only two regions and want to deploy the database in one region for low latency, while keeping another copy of the database in the other region for failover.
Setup

To set up a local universe, refer to Set up a local YugabyteDB universe. To set up a universe in YugabyteDB Anywhere, refer to Set up a YugabyteDB Anywhere universe.
Suppose you have a cluster with a replication factor of 3, and applications deployed in us-west. This ensures that reads and writes happen in the same region, with the expected low latencies. But because the entire cluster is in a single region, a region failure would mean losing all the data.
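As a minimal sketch, you could bring up such a cluster locally with yugabyted; the IP addresses, base directories, and the aws.us-west-2.* cloud locations below are illustrative placeholders, not values from this pattern.

```sh
# Start the first node of the primary (us-west) cluster.
./bin/yugabyted start --advertise_address=127.0.0.1 \
    --cloud_location=aws.us-west-2.us-west-2a \
    --base_dir=/tmp/yb-west-1

# Join two more nodes so the cluster reaches replication factor 3.
./bin/yugabyted start --advertise_address=127.0.0.2 --join=127.0.0.1 \
    --cloud_location=aws.us-west-2.us-west-2b \
    --base_dir=/tmp/yb-west-2

./bin/yugabyted start --advertise_address=127.0.0.3 --join=127.0.0.1 \
    --cloud_location=aws.us-west-2.us-west-2c \
    --base_dir=/tmp/yb-west-3
```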
Secondary replica cluster
You can set up a secondary cluster in a different region, say us-east, using xCluster. The us-east cluster (sink) is independent of the primary cluster in us-west, and its data is populated by asynchronous replication from the primary cluster (source). This means that the read and write latencies of the primary cluster are not affected, but at the same time, the data in the second cluster is not immediately consistent with the primary. The sink cluster acts as a replica cluster and can take over as primary in case of a failure. It can also be used for blue-green deployment testing.
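As a rough sketch of how this is wired up, xCluster replication is configured with yb-admin setup_universe_replication, run against the target (us-east) masters; the universe UUID, replication name, addresses, and table ID below are all illustrative placeholders.

```sh
# Run against the target (us-east) universe. The first argument combines the
# source universe UUID with a replication stream name; the last argument is a
# comma-separated list of table IDs to replicate.
./bin/yb-admin \
    -master_addresses 127.0.1.1:7100,127.0.1.2:7100,127.0.1.3:7100 \
    setup_universe_replication \
    00000000-1111-2222-3333-444444444444_east-replication \
    127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \
    000033e8000030008000000000004000
```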
Because the second cluster has the same schema and the data (with a short lag), it can serve stale reads for local applications.
Writes still have to go to the primary cluster in us-west.
You can preserve and guarantee transactional atomicity and global ordering when propagating change data from one universe to another by adding the transactional flag when setting up the xCluster replication; this is the default behavior. You can relax the transactional atomicity guarantee in exchange for lower replication lag.
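Assuming the same placeholder values as above, requesting transactional atomicity amounts to appending the transactional flag to the setup command, roughly as follows.

```sh
# Same illustrative placeholders as above, with the transactional flag added.
./bin/yb-admin \
    -master_addresses 127.0.1.1:7100,127.0.1.2:7100,127.0.1.3:7100 \
    setup_universe_replication \
    00000000-1111-2222-3333-444444444444_east-replication \
    127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \
    000033e8000030008000000000004000 \
    transactional
```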
When the primary cluster in us-west fails, the secondary cluster in us-east can be promoted to primary and start serving both reads and writes.
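A failover sketch, assuming a transactional stream and the placeholder addresses used above: promote the standby, then remove the replication stream that pointed at the failed source.

```sh
# Promote the us-east universe so it starts accepting writes.
./bin/yb-admin \
    -master_addresses 127.0.1.1:7100,127.0.1.2:7100,127.0.1.3:7100 \
    change_xcluster_role ACTIVE

# Drop the now-defunct replication stream from the failed us-west universe.
./bin/yb-admin \
    -master_addresses 127.0.1.1:7100,127.0.1.2:7100,127.0.1.3:7100 \
    delete_universe_replication \
    00000000-1111-2222-3333-444444444444_east-replication
```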
The replication happens at the DocDB layer, bypassing the query layer, so some standard functionality doesn't work:
- `UNIQUE` indexes and constraints, as there is no way to check uniqueness across the two clusters.
- `TRIGGERS`, as the query layer is bypassed and triggers are not fired.
- `SERIAL` columns, as both clusters would generate the same sequence values (use `UUID` instead; see the sketch after this list).
- Schema changes, which are not automatically replicated and currently have to be applied to each cluster manually.
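To illustrate the `SERIAL` caveat, a table can be keyed on a generated UUID instead; this YSQL sketch assumes the pgcrypto extension provides gen_random_uuid(), and the orders table is hypothetical.

```sql
-- gen_random_uuid() comes from pgcrypto in PostgreSQL-compatible YSQL.
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- A UUID key avoids the colliding values that a SERIAL column would
-- generate independently on each cluster.
CREATE TABLE orders (
    id uuid DEFAULT gen_random_uuid() PRIMARY KEY,
    item text NOT NULL,
    created_at timestamptz DEFAULT now()
);
```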
Another thing to note with xCluster is that when transactional atomicity is relaxed, transaction updates are not committed atomically from the source to the sink, so the second cluster can be transactionally inconsistent.