Synchronous replication (3+ regions)

Distribute data across regions

YugabyteDB can be deployed in a globally distributed manner, serving application queries with low latency from the region closest to end users while surviving outages to ensure high availability.

This page simulates AWS regions on a local machine. First, you deploy YugabyteDB in the us-west-2 region across multiple availability zones (a, b, c) and start a key-value workload against this cluster. Next, you change this setup to run across multiple geographic regions in US East (us-east-1) and Tokyo (ap-northeast-1), with the workload running uninterrupted during the entire transition.

This page uses the yb-ctl local cluster management utility.

Create a multi-zone cluster in US West

If you have a previously running local cluster, destroy it using the following command:

./bin/yb-ctl destroy

Start a new local cluster with a replication factor (RF) of 3, with each replica placed in a different zone (us-west-2a, us-west-2b, us-west-2c) of the us-west-2 (Oregon) region of AWS. This can be done by running the following:

./bin/yb-ctl --rf 3 create --placement_info "aws.us-west-2.us-west-2a,aws.us-west-2.us-west-2b,aws.us-west-2.us-west-2c"
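
The `--placement_info` flag takes a comma-separated list of `cloud.region.zone` triples, one per replica. As a minimal sketch (plain shell, no cluster required), you can split the string used above to see the three components of each placement:

```sh
# Placement string as passed to yb-ctl: one cloud.region.zone triple per replica.
placement="aws.us-west-2.us-west-2a,aws.us-west-2.us-west-2b,aws.us-west-2.us-west-2c"

# Split on commas (between replicas), then on dots (between fields),
# and print the cloud, region, and zone of each replica.
for triple in $(echo "$placement" | tr ',' ' '); do
  cloud=$(echo "$triple" | cut -d. -f1)
  region=$(echo "$triple" | cut -d. -f2)
  zone=$(echo "$triple" | cut -d. -f3)
  echo "cloud=$cloud region=$region zone=$zone"
done
```

Note that dots separate only the three fields; the hyphens inside region and zone names are left untouched.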

In this deployment, the YB-Masters are each placed in a separate zone so that the cluster can survive the loss of a zone. You can view the masters on the dashboard, as per the following illustration:

Multi-zone cluster YB-Masters

You can view the tablet servers on the tablet servers page, as per the following illustration:

Multi-zone cluster YB-TServers

Start a workload

Download the YugabyteDB workload generator JAR file (yb-sample-apps.jar) by running the following command:

wget https://github.com/yugabyte/yb-sample-apps/releases/download/1.3.9/yb-sample-apps.jar

Run a SqlInserts workload in a separate shell, as follows:

java -jar ./yb-sample-apps.jar --workload SqlInserts \
                                    --nodes 127.0.0.1:5433 \
                                    --num_threads_write 1 \
                                    --num_threads_read 4

You should now see some read and write load on the tablet servers page, as per the following illustration:

Multi-zone cluster load

Add nodes

Add new nodes in US East and Tokyo regions

Add a node in the zone us-east-1a of region us-east-1, as follows:

./bin/yb-ctl add_node --placement_info "aws.us-east-1.us-east-1a"

Add another node in the zone ap-northeast-1a of region ap-northeast-1, as follows:

./bin/yb-ctl add_node --placement_info "aws.ap-northeast-1.ap-northeast-1a"
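
The same `add_node` pattern extends to any number of zones. The following sketch only prints the invocations (via `echo`) rather than running them, over a hypothetical list of placements; adjust the list as needed:

```sh
# Hypothetical list of cloud.region.zone placements to expand into.
placements="aws.us-east-1.us-east-1a aws.ap-northeast-1.ap-northeast-1a"

# Print (rather than run) one add_node invocation per placement.
for p in $placements; do
  echo ./bin/yb-ctl add_node --placement_info "$p"
done
```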

These two new nodes are added to the cluster but are not yet serving any read or write I/O. This is because the YB-Master's initial placement policy, which stores data across the zones of the us-west-2 region, still applies, as per the following illustration:

Add node in a new region

Update placement policy

Update the placement policy, instructing the YB-Master to place data in the new regions, as follows:

./bin/yb-admin --master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \
    modify_placement_info aws.us-west-2.us-west-2a,aws.us-east-1.us-east-1a,aws.ap-northeast-1.ap-northeast-1a 3
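
The `modify_placement_info` command takes the new placement list followed by the replication factor; with one replica per listed placement, as in this example, the list needs at least as many entries as the replication factor. A sketch that assembles and prints (rather than runs) the command from its parts, with a sanity check on the list length:

```sh
# Build the modify_placement_info invocation from its parts (printed, not run).
master_addresses="127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100"
new_placement="aws.us-west-2.us-west-2a,aws.us-east-1.us-east-1a,aws.ap-northeast-1.ap-northeast-1a"
rf=3   # replication factor; here, one replica per listed placement

# Sanity check: the placement list must have at least $rf entries.
count=$(echo "$new_placement" | tr ',' '\n' | wc -l | tr -d ' ')
[ "$count" -ge "$rf" ] || { echo "need at least $rf placements"; exit 1; }

echo ./bin/yb-admin --master_addresses "$master_addresses" \
    modify_placement_info "$new_placement" "$rf"
```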

You should see the data, as well as the I/O, gradually move from the nodes in us-west-2b and us-west-2c to the newly added nodes. The tablet servers page should soon look similar to the following illustration:

Multi region workload

Retire old nodes

Start new masters

You need to move the YB-Masters from the old nodes to the new nodes. To do so, first start new masters on the new nodes, as follows:

./bin/yb-ctl add_node --master --placement_info "aws.us-east-1.us-east-1a"
./bin/yb-ctl add_node --master --placement_info "aws.ap-northeast-1.ap-northeast-1a"

Add master

Remove old masters

Remove the old masters from the master Raft group. Assuming nodes with addresses 127.0.0.2 and 127.0.0.3 were the two old nodes, run the following commands:

./bin/yb-admin --master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100,127.0.0.4:7100,127.0.0.5:7100 change_master_config REMOVE_SERVER 127.0.0.2 7100
./bin/yb-admin --master_addresses 127.0.0.1:7100,127.0.0.3:7100,127.0.0.4:7100,127.0.0.5:7100 change_master_config REMOVE_SERVER 127.0.0.3 7100
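
Each `REMOVE_SERVER` call shrinks the master Raft group, so the `--master_addresses` list passed to the next call must drop the address just removed (as the two commands above do). A small helper sketch, with a hypothetical `drop_master` function, showing how the list evolves:

```sh
# Drop one host:port entry from a comma-separated master address list.
# drop_master is a hypothetical helper, not part of yb-admin.
drop_master() {
  echo "$1" | tr ',' '\n' | grep -Fxv "$2" | tr '\n' ',' | sed 's/,$//'
}

masters="127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100,127.0.0.4:7100,127.0.0.5:7100"
masters=$(drop_master "$masters" "127.0.0.2:7100")
echo "$masters"   # list to pass to the next REMOVE_SERVER call
masters=$(drop_master "$masters" "127.0.0.3:7100")
echo "$masters"   # final master group: nodes 1, 4, and 5
```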


Remove old nodes

Remove the old nodes, as follows:

./bin/yb-ctl remove_node 2
./bin/yb-ctl remove_node 3
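
`remove_node` takes a node index rather than an address; in a local cluster, node N listens on loopback address 127.0.0.N (hence indexes 2 and 3 for the nodes at 127.0.0.2 and 127.0.0.3 above). A trivial sketch of that mapping:

```sh
# yb-ctl refers to local nodes by index; node N listens on 127.0.0.N.
for idx in 2 3; do
  ip="127.0.0.$idx"
  echo "remove_node $idx -> stops the node at $ip"
done
```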


Clean up

Optionally, you can shut down the local cluster, as follows:

./bin/yb-ctl destroy