1. Create a multi-zone universe in US West
If you have a previously running local universe, destroy it using the following command.
$ ./bin/yb-ctl destroy
Start a new local universe with replication factor 3, with each replica placed in a different zone (us-west-2a, us-west-2b, us-west-2c) of the us-west-2 (Oregon) region of AWS. This can be done by running the following:
$ ./bin/yb-ctl --rf 3 create --placement_info "aws.us-west-2.us-west-2a,aws.us-west-2.us-west-2b,aws.us-west-2.us-west-2c"
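The `--placement_info` flag takes a comma-separated list of `cloud.region.zone` entries, one per replica. As a sanity check before passing such a string to yb-ctl, the entries can be split apart like this (a standalone sketch that only inspects the string; it does not talk to a cluster):

```shell
# Split each cloud.region.zone entry of the placement string into its parts.
placement="aws.us-west-2.us-west-2a,aws.us-west-2.us-west-2b,aws.us-west-2.us-west-2c"
for entry in $(echo "$placement" | tr ',' ' '); do
  cloud=${entry%%.*}       # text before the first dot
  zone=${entry##*.}        # text after the last dot
  rest=${entry#*.}
  region=${rest%.*}        # the middle component
  echo "cloud=$cloud region=$region zone=$zone"
done
```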
In this deployment, the YB-Masters are each placed in a separate zone, allowing them to survive the loss of a zone. You can view the masters on the dashboard.
You can view the tablet servers on the tablet servers page.
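Spreading the three masters across zones works because a Raft group only needs a majority of voters to stay available. The arithmetic can be sketched as follows (general Raft majority math, not a yb-ctl command):

```shell
# A Raft group of n masters needs floor(n/2)+1 votes for a majority,
# so it tolerates n - (floor(n/2)+1) simultaneous failures.
masters=3
majority=$(( masters / 2 + 1 ))
tolerated=$(( masters - majority ))
echo "masters=$masters majority=$majority tolerated_failures=$tolerated"
# → masters=3 majority=2 tolerated_failures=1
```

With one master per zone, losing a zone costs exactly one master, which the remaining majority of two can absorb.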
2. Start a workload
Download the sample app jar.
$ wget https://github.com/yugabyte/yb-sample-apps/releases/download/v1.2.0/yb-sample-apps.jar?raw=true -O yb-sample-apps.jar
Run a simple key-value workload in a separate shell.
$ java -jar ./yb-sample-apps.jar --workload SqlInserts \
    --nodes 127.0.0.1:5433 \
    --num_threads_write 1 \
    --num_threads_read 4
You should now see some read and write load on the tablet servers page.
3. Add nodes in US East and Tokyo regions
Add new nodes
Add a node in the zone us-east-1a of region us-east-1.
$ ./bin/yb-ctl add_node --placement_info "aws.us-east-1.us-east-1a"
Add another node in the zone ap-northeast-1a of region ap-northeast-1 (Tokyo).
$ ./bin/yb-ctl add_node --placement_info "aws.ap-northeast-1.ap-northeast-1a"
At this point, these two new nodes have been added to the cluster but are not taking any read or write IO. This is because the YB-Master's initial placement policy, which stores data across the zones of the us-west-2 region, still applies.
Update placement policy
Let us now update the placement policy, instructing the YB-Master to place data in the new regions.
$ ./bin/yb-admin --master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \
    modify_placement_info aws.us-west-2.us-west-2a,aws.us-east-1.us-east-1a,aws.ap-northeast-1.ap-northeast-1a 3
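The `modify_placement_info` subcommand takes the comma-separated placement list followed by the replication factor. A simple pre-check (illustrative only, assuming one replica per listed placement block) is that the replication factor should not exceed the number of blocks:

```shell
# Count placement blocks and compare against the replication factor.
# This only inspects the strings; it does not issue any admin command.
placement="aws.us-west-2.us-west-2a,aws.us-east-1.us-east-1a,aws.ap-northeast-1.ap-northeast-1a"
rf=3
blocks=$(echo "$placement" | tr ',' '\n' | wc -l)
if [ "$rf" -le "$blocks" ]; then
  echo "ok: rf=$rf fits $blocks placement blocks"
else
  echo "error: rf=$rf exceeds $blocks placement blocks"
fi
```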
You should see the data, as well as the IO, gradually move from the nodes in us-west-2b and us-west-2c to the newly added nodes. The tablet servers page should soon look something like the screenshot below.
4. Retire old nodes
Start new masters
Next, we need to move the YB-Masters from the old nodes to the new nodes. To do so, first start new masters on the new nodes.
$ ./bin/yb-ctl add_node --master --placement_info "aws.us-east-1.us-east-1a"
$ ./bin/yb-ctl add_node --master --placement_info "aws.ap-northeast-1.ap-northeast-1a"
Remove old masters
Remove the old masters from the master Raft group. Assuming the nodes with IPs 127.0.0.2 and 127.0.0.3 were the two old nodes, run the following commands.
$ ./bin/yb-admin --master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100,127.0.0.4:7100,127.0.0.5:7100 \
    change_master_config REMOVE_SERVER 127.0.0.2 7100
$ ./bin/yb-admin --master_addresses 127.0.0.1:7100,127.0.0.3:7100,127.0.0.4:7100,127.0.0.5:7100 \
    change_master_config REMOVE_SERVER 127.0.0.3 7100
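Note that the second command's `--master_addresses` list already omits 127.0.0.2:7100, which the first command removed. Deriving the updated list is pure string handling, sketched here (not a yb-admin feature):

```shell
# Drop a removed master's address from the comma-separated list that gets
# passed via --master_addresses to subsequent yb-admin calls.
addresses="127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100,127.0.0.4:7100,127.0.0.5:7100"
removed="127.0.0.2:7100"
new_list=$(echo "$addresses" | tr ',' '\n' | grep -F -x -v "$removed" | tr '\n' ',')
new_list=${new_list%,}   # strip the trailing comma left by tr
echo "$new_list"
# → 127.0.0.1:7100,127.0.0.3:7100,127.0.0.4:7100,127.0.0.5:7100
```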
Remove old nodes
Now it's safe to remove the old nodes.
$ ./bin/yb-ctl remove_node 2
$ ./bin/yb-ctl remove_node 3
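The node indices passed to `remove_node` line up with the IPs used above because, in a local yb-ctl cluster, node N listens on loopback address 127.0.0.N. The mapping can be sketched as:

```shell
# Node index N in a local yb-ctl cluster corresponds to loopback 127.0.0.N,
# so remove_node 2 and remove_node 3 retire 127.0.0.2 and 127.0.0.3.
for node in 2 3; do
  echo "node $node -> 127.0.0.$node"
done
# → node 2 -> 127.0.0.2
# → node 3 -> 127.0.0.3
```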
5. Clean up (optional)
Optionally, you can shut down the local cluster created in Step 1.
$ ./bin/yb-ctl destroy