Quick Start

Create Local Cluster

After installing YugaByte DB, follow the instructions for your platform below to create a local cluster.

  • Docker
  • Kubernetes
  • macOS / Linux

Docker

1. Create a 3-node cluster with replication factor 3

We will use the yb-docker-ctl utility downloaded in the previous step to create and administer a containerized local cluster. Detailed output for the create command is available in yb-docker-ctl Reference.

$ ./yb-docker-ctl create

Clients can now connect to YugaByte DB’s Cassandra API at localhost:9042 and to its Redis API at localhost:6379.
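
If you have the standard Cassandra and Redis command-line clients installed locally (cqlsh and redis-cli; neither is required for this quick start and this is only an illustrative check), you can verify connectivity right away. Note that cqlsh opens an interactive shell on success.

$ redis-cli -h localhost -p 6379 ping
PONG
$ cqlsh localhost 9042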

2. Check cluster status with yb-docker-ctl

Run the command below to see that we now have 3 yb-master containers (yb-master-n1, yb-master-n2, yb-master-n3) and 3 yb-tserver containers (yb-tserver-n1, yb-tserver-n2, yb-tserver-n3) running on this localhost. The roles played by these containers in a YugaByte DB cluster (aka Universe) are explained in detail here.

$ ./yb-docker-ctl status
PID        Type       Node       URL                       Status          Started At          
26132      tserver    n3         http://172.18.0.7:9000    Running         2017-10-20T17:54:54.99459154Z
25965      tserver    n2         http://172.18.0.6:9000    Running         2017-10-20T17:54:54.412377451Z
25846      tserver    n1         http://172.18.0.5:9000    Running         2017-10-20T17:54:53.806993683Z
25660      master     n3         http://172.18.0.4:7000    Running         2017-10-20T17:54:53.197652566Z
25549      master     n2         http://172.18.0.3:7000    Running         2017-10-20T17:54:52.640188158Z
25438      master     n1         http://172.18.0.2:7000    Running         2017-10-20T17:54:52.084772289Z
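
Since yb-docker-ctl manages ordinary Docker containers, you can also cross-check them with standard Docker tooling (generic docker usage, not part of the original steps):

$ docker ps --filter "name=yb-"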

3. Check cluster status with Admin UI

The yb-master-n1 Admin UI is available at http://localhost:7000 and the yb-tserver-n1 Admin UI is available at http://localhost:9000. Other masters and tservers do not have their admin ports mapped to localhost to avoid port conflicts.
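
If you prefer a command-line check that the Admin UIs are reachable, a plain HTTP request against either port works; this is just standard curl, and an HTTP 200 response indicates the UI is serving:

$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:7000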

3.1 Overview and Master status

The yb-master-n1 home page shows that we have a cluster (aka a Universe) with a Replication Factor of 3 and Num Nodes (TServers) of 3. The Num User Tables is 0 since there are no user tables created yet. The YugaByte DB version number is also shown for your reference.


The Masters section highlights the 3 masters along with their corresponding cloud, region and zone placement.

3.2 TServer status

Clicking See all nodes takes us to the Tablet Servers page, where we can observe the 3 tservers along with the time since they last connected to this master via their regular heartbeats. Additionally, we can see that the Load (Num Tablets) is balanced across all 3 tservers. These tablets are the shards of the tables currently managed by the cluster (in this case, the system_redis.redis table). As new tables get added, new tablets will be created automatically and distributed evenly across all the available tablet servers.


Kubernetes

1. Create a 3-node cluster with replication factor 3

Run the following command to create the cluster.

$ kubectl apply -f yugabyte-statefulset.yaml
service "yb-masters" created
statefulset "yb-master" created
service "yb-tservers" created
statefulset "yb-tserver" created
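
You can confirm that the two StatefulSets were created using standard kubectl (not a step from the original instructions, just a cross-check):

$ kubectl get statefulsets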

2. Check cluster status

Run the command below to see that we now have two services with 3 pods each - 3 yb-master pods (yb-master-0, yb-master-1, yb-master-2) and 3 yb-tserver pods (yb-tserver-0, yb-tserver-1, yb-tserver-2) running. The roles played by these pods in a YugaByte DB cluster (aka Universe) are explained in detail here.

$ kubectl get pods
NAME           READY     STATUS              RESTARTS   AGE
yb-master-0    0/1       ContainerCreating   0          5s
yb-master-1    0/1       ContainerCreating   0          5s
yb-master-2    1/1       Running             0          5s
yb-tserver-0   0/1       ContainerCreating   0          4s
yb-tserver-1   0/1       ContainerCreating   0          4s
yb-tserver-2   0/1       ContainerCreating   0          4s

Eventually, all the pods will reach the Running state.

$ kubectl get pods
NAME           READY     STATUS    RESTARTS   AGE
yb-master-0    1/1       Running   0          13s
yb-master-1    1/1       Running   0          13s
yb-master-2    1/1       Running   0          13s
yb-tserver-0   1/1       Running   1          12s
yb-tserver-1   1/1       Running   1          12s
yb-tserver-2   1/1       Running   1          12s
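
Rather than re-running the command until everything is Running, you can also watch the pods as they come up using kubectl's standard watch flag:

$ kubectl get pods -w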

3. Initialize the Redis API

Initialize the Redis API by running the following yb-admin command. This initializes the Redis API and database in the YugaByte DB Kubernetes universe we just set up.

$ kubectl exec -it yb-master-0 /home/yugabyte/bin/yb-admin -- --master_addresses yb-master-0.yb-masters.default.svc.cluster.local:7100,yb-master-1.yb-masters.default.svc.cluster.local:7100,yb-master-2.yb-masters.default.svc.cluster.local:7100 setup_redis_table
...
I0127 19:38:10.358551   115 client.cc:1292] Created table system_redis.redis of type REDIS_TABLE_TYPE
I0127 19:38:10.358872   115 yb-admin_client.cc:400] Table 'system_redis.redis' created.

Clients can now connect to this YugaByte DB universe using the Cassandra and Redis APIs on ports 9042 and 6379 respectively.
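
Those ports are only reachable from inside the Kubernetes cluster. To try them from your workstation, one option (a sketch using generic kubectl plus the standard cqlsh and redis-cli clients, not a step from the original instructions) is to port-forward one of the tserver pods in a separate terminal and then connect locally:

$ kubectl port-forward yb-tserver-0 9042 6379
$ cqlsh localhost 9042
$ redis-cli -p 6379 ping
PONG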

4. Check cluster status via Kubernetes

You can see the status of the two services by running the following command.

$ kubectl get services
NAME          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                               AGE
kubernetes    ClusterIP   10.96.0.1    <none>        443/TCP                               3d
yb-masters    ClusterIP   None         <none>        7000/TCP,7100/TCP                     1m
yb-tservers   ClusterIP   None         <none>        9042/TCP,6379/TCP,9000/TCP,9100/TCP   1m

5. Check cluster status with Admin UI

To do this, we need to access the UI on port 7000 exposed by any of the pods in the yb-master service (yb-master-0, yb-master-1 or yb-master-2). Let us set up a network route to access yb-master-0 on port 7000 from our localhost by running the following command.

$ kubectl port-forward yb-master-0 7000

Now the yb-master-0 Admin UI is available at http://localhost:7000.

5.1 Overview and Master status

The yb-master-0 home page shows that we have a cluster (aka a Universe) with a Replication Factor of 3 and Num Nodes (TServers) of 3. The Num User Tables is 0 since there are no user tables created yet. The YugaByte DB version is also shown for your reference.


The Masters section highlights the 3 masters along with their corresponding cloud, region and zone placement.

5.2 TServer status

Clicking See all nodes takes us to the Tablet Servers page, where we can observe the 3 tservers along with the time since they last connected to this master via their regular heartbeats. Additionally, we can see that the Load (Num Tablets) is balanced across all 3 tservers. These tablets are the shards of the tables currently managed by the cluster (in this case, the system_redis.redis table). As new tables get added, new tablets will be created automatically and distributed evenly across all the available tablet servers.


macOS / Linux

1. Create a 3-node cluster with replication factor 3

The steps below are identical on macOS and Linux. We will use the yb-ctl utility located in the bin directory of the YugaByte DB package to create and administer a local cluster. The default data directory is /tmp/yugabyte-local-cluster; you can change it with the --data_dir option. Detailed output for the create command is available in the yb-ctl Reference.

$ ./bin/yb-ctl create

You can now check /tmp/yugabyte-local-cluster to see the node-i directories, where i represents the node_id of each node. Inside each such directory there are two disk directories, disk1 and disk2, to highlight the fact that YugaByte DB can work with multiple disks at the same time. Note that the IP address of node-i is set to 127.0.0.i by default.
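
For example, you can take a quick look at one node's directory with standard ls (the exact contents may vary by version):

$ ls /tmp/yugabyte-local-cluster/node-1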

2. Check cluster status with yb-ctl

Run the command below to see that we now have 3 yb-master processes and 3 yb-tserver processes running on this localhost. The roles played by these processes in a YugaByte DB cluster (aka Universe) are explained in detail here.

$ ./bin/yb-ctl status
2017-10-16 22:19:52,363 INFO: Server is running: type=master, node_id=1, PID=31926, admin service=127.0.0.1:7000
2017-10-16 22:19:52,438 INFO: Server is running: type=master, node_id=2, PID=31929, admin service=127.0.0.2:7000
2017-10-16 22:19:52,448 INFO: Server is running: type=master, node_id=3, PID=31932, admin service=127.0.0.3:7000
2017-10-16 22:19:52,462 INFO: Server is running: type=tserver, node_id=1, PID=31935, admin service=127.0.0.1:9000, cql service=127.0.0.1:9042, redis service=127.0.0.1:6379
2017-10-16 22:19:52,795 INFO: Server is running: type=tserver, node_id=2, PID=31938, admin service=127.0.0.2:9000, cql service=127.0.0.2:9042, redis service=127.0.0.2:6379
2017-10-16 22:19:53,476 INFO: Server is running: type=tserver, node_id=3, PID=31941, admin service=127.0.0.3:9000, cql service=127.0.0.3:9042, redis service=127.0.0.3:6379
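
The status output above lists the CQL and Redis service addresses for each tserver. If cqlsh and redis-cli are available on your PATH (shown only as a quick connectivity check; the next Quick Start steps exercise these APIs in depth), you can connect to node 1 like this:

$ cqlsh 127.0.0.1 9042
$ redis-cli -h 127.0.0.1 -p 6379 ping
PONG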

3. Check cluster status with Admin UI

Node 1’s master Admin UI is available at http://127.0.0.1:7000 and the tserver Admin UI is available at http://127.0.0.1:9000. You can visit the other nodes’ Admin UIs by using their corresponding IP addresses.

3.1 Overview and Master status

Node 1’s master Admin UI home page shows that we have a cluster (aka a Universe) with a Replication Factor of 3 and Num Nodes (TServers) of 3. The Num User Tables is 0 since there are no user tables created yet. The YugaByte DB version number is also shown for your reference.


The Masters section highlights the 3 masters along with their corresponding cloud, region and zone placement.

3.2 TServer status

Clicking See all nodes takes us to the Tablet Servers page, where we can observe the 3 tservers along with the time since they last connected to this master via their regular heartbeats. Since there are no user tables created yet, the Load (Num Tablets) is 0 across all 3 tservers. As new tables get added, new tablets (aka shards) will be created automatically and distributed evenly across all the available tablet servers.


