Synchronous multi-region (3+ regions)
For protection in the event of the failure of an entire cloud region, you can deploy YugabyteDB across multiple regions as a synchronously replicated multi-region universe. In a synchronized multi-region universe, a minimum of three nodes are deployed across three regions with a replication factor (RF) of 3. If a region fails, the universe continues to serve data requests from the remaining regions: YugabyteDB automatically fails over to the nodes in the other two regions, and the tablets being failed over are distributed evenly across those two regions.
This deployment provides the following advantages:
- Resilience - putting the universe nodes in different regions provides a higher degree of failure independence.
- Consistency - all writes are synchronously replicated. Transactions are globally consistent.
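In YugabyteDB Anywhere, this placement is configured through the UI when you create the universe. For reference, the equivalent RF 3, three-region placement on a self-managed universe can be sketched with the yb-admin CLI; the master addresses and `cloud.region.zone` names below are placeholders, not values from this setup:

```shell
# Sketch only: declare an RF 3 placement spanning three AWS regions.
# Master addresses and cloud.region.zone identifiers are illustrative.
yb-admin \
  --master_addresses 10.0.1.1:7100,10.0.2.1:7100,10.0.3.1:7100 \
  modify_placement_info \
  aws.us-west-2.us-west-2a,aws.us-east-1.us-east-1a,aws.eu-west-1.eu-west-1a \
  3
```

With this placement, each tablet keeps one replica in each of the three regions, which is what allows the universe to survive the loss of any single region.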
Create a synchronized multi-region universe
Before you can create a multi-region universe in YugabyteDB Anywhere, you need to install YugabyteDB Anywhere and configure it to run in AWS.
Start a workload
To verify that the application is running correctly, navigate to the application UI at http://localhost:8080/ to view the universe network diagram, as well as latency and throughput charts for the running workload.
View the universe activity
You can use YugabyteDB Anywhere to view per-node statistics for the universe, as follows:
1. Navigate to Universes and select your universe.
2. Select Nodes to view the total read and write operations for each node. Note that both the reads and the writes are approximately the same across all the nodes, indicating uniform load.
3. Select Metrics to view charts such as YSQL operations per second and latency.
Latency in a multi-region universe depends on the distance and network packet transfer times between the nodes of the universe as well as between the universe and the client. Because the tablet leader replicates write operations across a majority of tablet peers before sending a response to the client, all writes involve cross-region communication between tablet peers.
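To build intuition for the write path, the following sketch estimates write latency as the round trip to the nearest majority of replicas: the leader must hear acknowledgements from a quorum before responding to the client. The round-trip times used here are hypothetical, not measured values:

```python
# Estimate Raft write latency for a tablet: the leader waits for
# acknowledgements from a majority of replicas (counting itself),
# so the critical path is the round trip to the nearest follower(s).

def quorum_write_latency(rtts_to_followers, rf):
    """Round-trip time (ms) to the slowest member of the nearest majority.

    rtts_to_followers: leader-to-follower RTTs in ms (the leader itself is 0 ms).
    rf: replication factor.
    """
    majority = rf // 2 + 1       # e.g. 2 for RF 3, 3 for RF 5
    acks_needed = majority - 1   # the leader counts as one vote
    return sorted(rtts_to_followers)[acks_needed - 1]

# Leader in us-west-2; followers in us-east-1 (65 ms) and eu-west-1 (140 ms):
# the nearest follower gates the write, not the farthest one.
print(quorum_write_latency([65, 140], rf=3))

# With RF 5, two follower acks are needed, so the 2nd-nearest RTT gates it.
print(quorum_write_latency([12, 65, 140, 150], rf=5))
```

This is why write latency tracks the round trip to the closest remote region rather than to the most distant one.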
For best performance and lower data transfer costs, you want to minimize transfers between providers and between provider regions. You do this by placing your universe as close to your applications as possible, as follows:
- Use the same cloud provider as your application.
- Place your universe in the same region as your application.
- Peer your universe with the Virtual Private Cloud (VPC) hosting your application.
YugabyteDB offers tunable global reads that allow read requests to trade off some consistency for lower read latency. By default, read requests in a YugabyteDB universe are handled by the leader of the Raft group associated with the target tablet to ensure strong consistency. If you are willing to sacrifice some consistency in favor of lower latency, you can choose to read from a tablet follower that is closer to the client rather than from the leader. YugabyteDB also allows you to specify the maximum staleness of data when reading from tablet followers.
For more information, see Follower reads examples.
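As a sketch, follower reads can be enabled per YSQL session with the following settings; they apply only to read-only transactions, and the staleness value and table name below are illustrative:

```sql
-- Follower reads apply only to read-only transactions.
SET default_transaction_read_only = true;
-- Allow reads to be served by a nearby replica instead of the leader.
SET yb_read_from_followers = true;
-- Maximum acceptable staleness of follower data; 10 s is illustrative.
SET yb_follower_read_staleness_ms = 10000;

-- my_table is a placeholder; this read may now be served by a close follower.
SELECT * FROM my_table;
```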
If application reads and writes are known to be originating primarily from a single region, you can designate a preferred region, which pins the tablet leaders to that single region. As a result, the preferred region handles all read and write requests from clients. Non-preferred regions are used only for hosting tablet follower replicas.
For multi-row or multi-table transactional operations, colocating the leaders in a single zone or region can help reduce the number of cross-region network hops involved in executing a transaction.
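Outside of the YugabyteDB Anywhere UI, a preferred placement can also be designated on a self-managed cluster with yb-admin; the master addresses and zone names below are placeholders:

```shell
# Sketch only: prefer us-west-2 zones for tablet leaders.
# Master addresses and cloud.region.zone identifiers are illustrative.
yb-admin \
  --master_addresses 10.0.1.1:7100,10.0.2.1:7100,10.0.3.1:7100 \
  set_preferred_zones aws.us-west-2.us-west-2a aws.us-west-2.us-west-2b
```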
Set a particular zone in the region to which you are connected as preferred, as follows:
1. Navigate to your universe's Overview and click Actions > Edit Universe.
2. Under Availability Zones, find the zone and select its corresponding Preferred option.
3. To verify that the load is moving to the preferred zone in the region, select Nodes.
When complete, the load is handled exclusively by the preferred region.
With the tablet leaders now all located in the region to which the application is connected, latencies decrease and throughput increases.
Note that cross-region latencies are unavoidable in the write path, given the need to ensure region-level automatic failover and repair.