Tunable Read Latency

With YugaByte DB, you can choose the consistency level at which reads are performed. Relaxed consistency levels lower read latency because the database has less work to do at read time; in particular, it can serve reads from the tablet followers instead of only the leader. Reading from a follower is similar to reading from a cache, which can deliver more read IOPS at lower latency. In this tutorial, we will update a single key-value pair over and over and read it from the tablet leader. While that workload is running, we will start a second workload that reads from the followers and verify that the reads are indeed served by a tablet follower.
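
For context, the sketch below shows how an application could make the same per-statement choice with the DataStax Java driver 3.x, the driver the YugaByte sample apps are built on. This is a minimal illustration, not part of the tutorial; in particular, treating consistency level ONE as the follower-read setting is an assumption made here.

// A minimal sketch, assuming the DataStax Java driver 3.x. The use of
// ConsistencyLevel.ONE to permit follower reads is an assumption for
// illustration.
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class TunableReadsSketch {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {

            // Default behavior: a strongly consistent read, served by the tablet leader.
            Row strong = session.execute(new SimpleStatement(
                "SELECT v FROM ybdemo_keyspace.cassandrakeyvalue WHERE k = 'key:0'")).one();
            System.out.println("leader read:   " + strong);

            // Relaxed behavior: consistency level ONE lets the read be served by a
            // tablet follower, trading read freshness for latency and throughput.
            SimpleStatement relaxed = new SimpleStatement(
                "SELECT v FROM ybdemo_keyspace.cassandrakeyvalue WHERE k = 'key:0'");
            relaxed.setConsistencyLevel(ConsistencyLevel.ONE);
            Row follower = session.execute(relaxed).one();
            System.out.println("follower read: " + follower);
        }
    }
}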

If you haven’t installed YugaByte DB yet, do so first by following the Quick Start guide.

  • Docker
  • macOS / Linux

Docker

1. Setup - create universe

If you have a previously running local universe, destroy it using the following.

$ ./yb-docker-ctl destroy

Start a new local universe with replication factor 5. This will create 5 nodes by default.

$ ./yb-docker-ctl create --rf 5 

Add 2 more nodes.

$ ./yb-docker-ctl add_node
$ ./yb-docker-ctl add_node

2. Write some data

By default, the key-value sample application runs with strong read consistency, where all data is read from the tablet leader. We are going to write exactly one key with a 10KB value into the system. Since the replication factor is 5, this key will be replicated to 5 of the 7 nodes in the universe.

Let us run the CQL sample key-value app to constantly update this key-value pair, as well as perform strongly consistent reads against the local universe.

$ docker cp yb-master-n1:/home/yugabyte/java/yb-sample-apps.jar .
$ java -jar ./yb-sample-apps.jar --workload CassandraKeyValue \
                                    --nodes localhost:9042 \
                                    --nouuid \
                                    --num_unique_keys 1 \
                                    --num_threads_write 1 \
                                    --num_threads_read 1 \
                                    --value_size 10240

In the above command, we have set num_unique_keys to 1, which means we are overwriting a single key, key:0. We can verify this using cqlsh:

$ docker exec -it yb-tserver-n1 /home/yugabyte/bin/cqlsh
Connected to local cluster at localhost:9042.
[cqlsh 5.0.1 | Cassandra 3.9-SNAPSHOT | CQL spec 3.4.2 | Native protocol v4]
Use HELP for help.
cqlsh> SELECT k FROM ybdemo_keyspace.cassandrakeyvalue;
 k
-------
 key:0

(1 rows)

3. Strongly consistent reads from tablet leaders

When performing strongly consistent reads as part of the above command, all reads are served by the leader of the tablet that contains the key key:0. If we browse to the tablet-servers page, we will see that all the requests are indeed being served by one YB-TServer:

Reads from the tablet leader

4. Timeline consistent reads from tablet replicas

Let us stop the above sample app and run the following variant. This command performs updates to the same key key:0 through the tablet leader, but reads from the replicas. These follower reads are timeline-consistent: a read may return a slightly stale value, but values never appear out of order.

$ java -jar ./yb-sample-apps.jar --workload CassandraKeyValue \
                                    --nodes localhost:9042 \
                                    --nouuid \
                                    --num_unique_keys 1 \
                                    --num_threads_write 1 \
                                    --num_threads_read 1 \
                                    --value_size 10240 \
                                    --local_reads

This can be seen by refreshing the tablet-servers page: the writes are still served by the single TServer that is the leader of the tablet for the key key:0, while the reads are served by multiple TServers that host replicas of that tablet.

Reads from the tablet follower
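
To observe the latency side of this trade-off from application code, the following rough sketch times a batch of reads at a strong consistency level against the same reads at consistency level ONE. It assumes the DataStax Java driver 3.x; using QUORUM to stand in for the strong default and ONE for follower reads is an illustrative assumption, and single-run timings like this are noisy.

// A rough latency-comparison sketch, assuming the DataStax Java driver 3.x.
// The consistency-level mapping (QUORUM = leader reads, ONE = follower reads)
// is an assumption for illustration; timings here are unscientific.
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class ReadLatencyCompare {
    static long avgReadNanos(Session session, ConsistencyLevel cl, int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            SimpleStatement stmt = new SimpleStatement(
                "SELECT v FROM ybdemo_keyspace.cassandrakeyvalue WHERE k = 'key:0'");
            stmt.setConsistencyLevel(cl);
            session.execute(stmt);
        }
        return (System.nanoTime() - start) / n;  // average nanoseconds per read
    }

    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            System.out.println("strong reads, avg ns:   " + avgReadNanos(session, ConsistencyLevel.QUORUM, 1000));
            System.out.println("follower reads, avg ns: " + avgReadNanos(session, ConsistencyLevel.ONE, 1000));
        }
    }
}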

5. Clean up (optional)

Optionally, you can shut down the local cluster created in Step 1.

$ ./yb-docker-ctl destroy

macOS / Linux

1. Setup - create universe

If you have a previously running local universe, destroy it using the following.

$ ./bin/yb-ctl destroy

Start a new local universe with the default 3 nodes and default replication factor 3.

$ ./bin/yb-ctl create

Add 1 more node.

$ ./bin/yb-ctl add_node

2. Write some data

By default, the key-value sample application runs with strong read consistency, where all data is read from the tablet leader. We are going to write exactly one key with a 10KB value into the system. Since the replication factor is 3, this key will be replicated to only 3 of the 4 nodes in the universe.

Let us run the Cassandra sample key-value app to constantly update this key-value pair, as well as perform strongly consistent reads against the local universe.

$ java -jar ./java/yb-sample-apps.jar --workload CassandraKeyValue \
                                    --nodes 127.0.0.1:9042 \
                                    --nouuid \
                                    --num_unique_keys 1 \
                                    --num_threads_write 1 \
                                    --num_threads_read 1 \
                                    --value_size 10240

In the above command, we have set num_unique_keys to 1, which means we are overwriting a single key, key:0. We can verify this using cqlsh:

$ ./bin/cqlsh 127.0.0.1
Connected to local cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.9-SNAPSHOT | CQL spec 3.4.2 | Native protocol v4]
Use HELP for help.
cqlsh> SELECT k FROM ybdemo_keyspace.cassandrakeyvalue;
 k
-------
 key:0

(1 rows)

3. Strongly consistent reads from tablet leaders

When performing strongly consistent reads as part of the above command, all reads are served by the leader of the tablet that contains the key key:0. If we browse to the tablet-servers page, we will see that all the requests are indeed being served by one YB-TServer:

Reads from the tablet leader

4. Timeline consistent reads from tablet replicas

Let us stop the above sample app and run the following variant. This command performs updates to the same key key:0 through the tablet leader, but reads from the replicas.

$ java -jar ./java/yb-sample-apps.jar --workload CassandraKeyValue \
                                    --nodes 127.0.0.1:9042 \
                                    --nouuid \
                                    --num_unique_keys 1 \
                                    --num_threads_write 1 \
                                    --num_threads_read 1 \
                                    --value_size 10240 \
                                    --local_reads

This can be seen by refreshing the tablet-servers page: the writes are still served by the single TServer that is the leader of the tablet for the key key:0, while the reads are served by multiple TServers that host replicas of that tablet.

Reads from the tablet follower

5. Clean up (optional)

Optionally, you can shut down the local cluster created in Step 1.

$ ./bin/yb-ctl destroy
