
Linear Scalability

With YugaByte DB, you can add nodes to scale your cluster out efficiently and reliably in order to achieve more read and write IOPS. In this tutorial, we will look at how YugaByte DB scales while a workload is running. We will run a read-write workload using a pre-packaged sample application against a 3-node local cluster with a replication factor of 3, and add nodes to it while the workload is running. We will then observe how the cluster scales out by verifying that the read and write IOPS are evenly distributed across all the nodes at all times.

If you haven’t installed YugaByte DB yet, do so first by following the Quick Start guide.

1. Setup - create universe

If you have a previously running local universe, destroy it using the following.

$ ./yb-docker-ctl destroy

Start a new local cluster. By default, this creates a 3-node universe with a replication factor of 3.

$ ./yb-docker-ctl create
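Once the command completes, you can confirm that the master and tserver containers are up.

$ docker ps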

2. Run sample key-value app

Run the Cassandra sample key-value app against the local universe by typing the following command.

$ docker cp yb-master-n1:/home/yugabyte/java/yb-sample-apps.jar .
$ java -jar ./yb-sample-apps.jar --workload CassandraKeyValue \
                                    --nodes localhost:9042 \
                                    --num_threads_write 1 \
                                    --num_threads_read 4 \
                                    --value_size 4096

The sample application prints some stats while running, as shown below. You can read more details about the output of the sample applications here.

2017-11-20 14:02:48,114 [INFO|...] Read: 9893.73 ops/sec (0.40 ms/op), 233458 total ops  |
                                   Write: 1155.83 ops/sec (0.86 ms/op), 28072 total ops  |  ...

2017-11-20 14:02:53,118 [INFO|...] Read: 9639.85 ops/sec (0.41 ms/op), 281696 total ops  |
                                   Write: 1078.74 ops/sec (0.93 ms/op), 33470 total ops  |  ...

3. Observe IOPS per node

You can check the per-node stats by browsing to the tablet-servers page. The total read and write IOPS per node are highlighted in the screenshot below. Note that both the reads and the writes are roughly the same across all the nodes, indicating uniform usage.

Read and write IOPS with 3 nodes
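If you prefer the command line, you can fetch the same stats page (as raw HTML) with curl. This assumes the yb-master Admin UI port 7000 is mapped to localhost, which yb-docker-ctl does by default.

$ curl -s http://localhost:7000/tablet-servers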

4. Add node and observe linear scale out

Add a node to the universe.

$ ./yb-docker-ctl add_node

Now we should have 4 nodes. Refresh the tablet-servers page to see the stats update. In a short time, you should see the new node serving a comparable number of reads and writes to the other nodes.

Read and write IOPS with 4 nodes
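You can also confirm the fourth node from the command line, assuming your version of yb-docker-ctl supports the status command.

$ ./yb-docker-ctl status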

5. Add another node and observe linear scale out

Add yet another node to the universe.

$ ./yb-docker-ctl add_node

Now we should have 5 nodes. Refresh the tablet-servers page to see the stats update. In a short time, you should see the new node serving a comparable number of reads and writes to the other nodes.

Read and write IOPS with 5 nodes

YugaByte DB automatically lets the client know to use the newly added nodes for serving queries. This scaling out of client queries is completely transparent to the application logic, allowing the application to scale linearly for both reads and writes.

6. Clean up (optional)

Optionally, you can shut down the local cluster created in Step 1.

$ ./yb-docker-ctl destroy

1. Setup - create universe

If you have a previously running local universe, destroy it using the following.

$ kubectl delete -f yugabyte-statefulset.yaml

Start a new local cluster. By default, this creates a 3-node universe with a replication factor of 3.

$ kubectl apply -f yugabyte-statefulset.yaml

Check the Kubernetes dashboard to see the 3 yb-tserver and 3 yb-master pods representing the 3 nodes of the cluster.

$ minikube dashboard

Kubernetes Dashboard
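Alternatively, you can list the pods from the command line and confirm that 3 yb-master and 3 yb-tserver pods are in the Running state.

$ kubectl get pods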

2. Check cluster status with Admin UI

We need to access the [yb-master Admin UI](/admin/yb-master/#admin-ui) on port 7000 exposed by any of the pods in the yb-master service (one of yb-master-0, yb-master-1 or yb-master-2). Let us set up a network route to access yb-master-0 on port 7000 from our localhost by running the following command.

$ kubectl port-forward yb-master-0 7000

The yb-master-0 Admin UI is now available at http://localhost:7000.
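From another terminal, you can verify that the port-forward is working by fetching the tablet-servers page.

$ curl -s http://localhost:7000/tablet-servers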

3. Add node and observe linear scale out

Add a node to the universe.

$ kubectl scale statefulset yb-tserver --replicas=4

Now we should have 4 nodes. Refresh the tablet-servers page to see the stats update. YugaByte DB automatically updates application clients to use the newly added node for serving queries. This scaling out of client queries is completely transparent to the application logic, allowing the application to scale linearly for both reads and writes.

You can also observe the newly added node using the following command.

$ kubectl get pods
NAME           READY     STATUS    RESTARTS   AGE
yb-master-0    1/1       Running   0          5m
yb-master-1    1/1       Running   0          5m
yb-master-2    1/1       Running   0          5m
yb-tserver-0   1/1       Running   1          5m
yb-tserver-1   1/1       Running   1          5m
yb-tserver-2   1/1       Running   0          5m
yb-tserver-3   1/1       Running   0          4m

4. Scale back down to 3 nodes

The cluster can now be scaled back to only 3 nodes.

$ kubectl scale statefulset yb-tserver --replicas=3
$ kubectl get pods
NAME           READY     STATUS        RESTARTS   AGE
yb-master-0    1/1       Running       0          6m
yb-master-1    1/1       Running       0          6m
yb-master-2    1/1       Running       0          6m
yb-tserver-0   1/1       Running       1          6m
yb-tserver-1   1/1       Running       1          6m
yb-tserver-2   1/1       Running       0          6m
yb-tserver-3   1/1       Terminating   0          5m
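If you want to follow the scale-down as it happens, you can stream pod status changes instead of polling.

$ kubectl get pods -w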

5. Clean up (optional)

Optionally, you can shut down the local cluster created in Step 1.

$ kubectl delete -f yugabyte-statefulset.yaml

1. Setup - create universe

If you have a previously running local universe, destroy it using the following.

$ ./bin/yb-ctl destroy

Start a new local cluster. By default, this creates a 3-node universe with a replication factor of 3.

$ ./bin/yb-ctl create
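You can verify that all 3 nodes are up, assuming your version of yb-ctl supports the status command.

$ ./bin/yb-ctl status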

2. Run sample key-value app

Run the Cassandra sample key-value app against the local universe by typing the following command.

$ java -jar ./java/yb-sample-apps.jar --workload CassandraKeyValue \
                                    --nodes 127.0.0.1:9042 \
                                    --num_threads_write 1 \
                                    --num_threads_read 4 \
                                    --value_size 4096

The sample application prints some stats while running, as shown below. You can read more details about the output of the sample applications here.

2017-11-20 14:02:48,114 [INFO|...] Read: 9893.73 ops/sec (0.40 ms/op), 233458 total ops  |
                                   Write: 1155.83 ops/sec (0.86 ms/op), 28072 total ops  |  ...

2017-11-20 14:02:53,118 [INFO|...] Read: 9639.85 ops/sec (0.41 ms/op), 281696 total ops  |
                                   Write: 1078.74 ops/sec (0.93 ms/op), 33470 total ops  |  ...
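If you want to peek at the data the sample app is writing, you can connect with the bundled cqlsh shell. The keyspace and table names below are what the CassandraKeyValue workload creates by default, but they may differ across versions.

$ ./bin/cqlsh 127.0.0.1
cqlsh> SELECT * FROM ybdemo.cassandrakeyvalue LIMIT 1;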

3. Observe IOPS per node

You can check the per-node stats by browsing to the tablet-servers page. The total read and write IOPS per node are highlighted in the screenshot below. Note that both the reads and the writes are roughly the same across all the nodes, indicating uniform usage.

Read and write IOPS with 3 nodes

4. Add node and observe linear scale out

Add a node to the universe.

$ ./bin/yb-ctl add_node

Now we should have 4 nodes. Refresh the tablet-servers page to see the stats update. In a short time, you should see the new node serving a comparable number of reads and writes to the other nodes.

Read and write IOPS with 4 nodes

5. Add another node and observe linear scale out

Add yet another node to the universe.

$ ./bin/yb-ctl add_node

Now we should have 5 nodes. Refresh the tablet-servers page to see the stats update. In a short time, you should see the new node serving a comparable number of reads and writes to the other nodes.

Read and write IOPS with 5 nodes

The YugaByte universe automatically lets the client know to use the newly added nodes for serving queries. This scaling out of client queries is completely transparent to the application logic, allowing the application to scale linearly for both reads and writes.

6. Clean up (optional)

Optionally, you can shut down the local cluster created in Step 1.

$ ./bin/yb-ctl destroy
