
2. Create a local cluster

Attention

This page documents an earlier version. Go to the latest (v2.3) version.

After installing YugabyteDB, follow the instructions below to create a local cluster.

Note

The local cluster setup on a single host is intended for development and learning. For production deployments or performance benchmarking, deploy a true multi-node, multi-host cluster; see Deploy YugabyteDB.
The steps below cover each supported environment; macOS and Linux share the same instructions.
  • macOS / Linux
  • Docker
  • Kubernetes

macOS / Linux

1. Create a local cluster

You can use the yb-ctl utility, located in the bin directory of the YugabyteDB package, to create and administer a local cluster. The default data directory is $HOME/yugabyte-data. You can change the location of the data directory by using the --data_dir option.
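
For example, a minimal sketch (using a hypothetical /tmp/ybd directory) that creates a cluster with a custom data directory:

# Keep the cluster data under /tmp/ybd instead of the default $HOME/yugabyte-data.
$ ./bin/yb-ctl --data_dir /tmp/ybd create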

To quickly create a 1-node or 3-node local cluster, follow the steps below. For details on using the yb-ctl create command and the cluster configuration, see Create a local cluster in the utility reference.

Create a 1-node cluster with RF=1

To create a 1-node cluster with a replication factor (RF) of 1, run the following yb-ctl create command.

$ ./bin/yb-ctl create

The initial cluster creation may take a minute or so, during which there is no output on the prompt.

Create a 3-node cluster with RF=3

To run a distributed SQL cluster locally, you can quickly create a 3-node cluster with RF of 3 by running the following command.

$ ./bin/yb-ctl --rf 3 create

You can now check $HOME/yugabyte-data to see the node-i directories, where i is the node_id of each node. Each such directory contains two disk directories, disk1 and disk2, illustrating that YugabyteDB can work with multiple disks at the same time. Note that the IP address of node-i defaults to 127.0.0.i.
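
For instance, you can confirm this layout from the shell; the directory names below follow the defaults described above:

# List the per-node directories created by yb-ctl; for a 3-node cluster you
# should see node-1, node-2, and node-3.
$ ls $HOME/yugabyte-data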

Clients can now connect to the YSQL and YCQL APIs at localhost:5433 and localhost:9042, respectively.
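
For example, you can connect using the bundled shells (the same bin/ysqlsh and bin/cqlsh listed in the status output below), run from the YugabyteDB installation directory:

# Connect to the YSQL API (defaults to 127.0.0.1:5433).
$ ./bin/ysqlsh
# Connect to the YCQL API (defaults to 127.0.0.1:9042).
$ ./bin/cqlsh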

2. Check cluster status with yb-ctl

Run the yb-ctl status command to see the yb-master and yb-tserver processes running locally.

Example

For a 1-node cluster, the yb-ctl status command shows one yb-master process and one yb-tserver process running on localhost. For details about the roles these processes play in a YugabyteDB cluster (aka universe), see Universe.

$ ./bin/yb-ctl status
----------------------------------------------------------------------------------------------------
| Node Count: 1 | Replication Factor: 1                                                            |
----------------------------------------------------------------------------------------------------
| JDBC                : jdbc:postgresql://127.0.0.1:5433/postgres                                  |
| YSQL Shell          : bin/ysqlsh                                                                 |
| YCQL Shell          : bin/cqlsh                                                                  |
| YEDIS Shell         : bin/redis-cli                                                              |
| Web UI              : http://127.0.0.1:7000/                                                     |
| Cluster Data        : /Users/yugabyte/yugabyte-data                                             |
----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
| Node 1: yb-tserver (pid 20696), yb-master (pid 20693)                                            |
----------------------------------------------------------------------------------------------------
| JDBC                : jdbc:postgresql://127.0.0.1:5433/postgres                                  |
| YSQL Shell          : bin/ysqlsh                                                                 |
| YCQL Shell          : bin/cqlsh                                                                  |
| YEDIS Shell         : bin/redis-cli                                                              |
| data-dir[0]         : /Users/yugabyte/yugabyte-data/node-1/disk-1/yb-data                       |
| yb-tserver Logs     : /Users/yugabyte/yugabyte-data/node-1/disk-1/yb-data/tserver/logs          |
| yb-master Logs      : /Users/yugabyte/yugabyte-data/node-1/disk-1/yb-data/master/logs           |
----------------------------------------------------------------------------------------------------

3. Check cluster status with Admin UI

Node 1's master Admin UI is available at http://127.0.0.1:7000 and the tserver Admin UI is available at http://127.0.0.1:9000. If you created a multi-node cluster, you can visit the other nodes' Admin UIs by using their corresponding IP addresses.
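
If you prefer the command line, a quick way to confirm that the UIs are serving (assuming curl is installed) is:

# Print the HTTP status code returned by each Admin UI; expect 200.
$ curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:7000
$ curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:9000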

3.1 Overview and Master status

Node 1's master Admin UI home page shows that we have a cluster (aka a universe) with a Replication Factor of 1 and Num Nodes (TServers) of 1. Num User Tables is 0 because no user tables have been created yet. The YugabyteDB version number is also shown for reference.

[Screenshot: master-home]

The Masters section highlights the 1 yb-master along with its corresponding cloud, region and zone placement.

3.2 TServer status

Clicking See all nodes takes us to the Tablet Servers page, where we can observe the 1 tserver along with the time since it last heartbeated to this master. Because no user tables have been created yet, the Load (Num Tablets) is 0. As new tables are added, new tablets (aka shards) are automatically created and distributed evenly across all available tablet servers.

[Screenshot: master-home]

Docker

1. Create a local cluster

You can use the yb-docker-ctl utility, downloaded in the previous step, to create and administer a containerized local cluster.

To quickly create a 1-node or 3-node local cluster using Docker, follow the steps below. For details on using the yb-docker-ctl create command and the cluster configuration, see Create a local cluster in the utility reference.

Create a 1-node cluster with RF=1

To create a 1-node cluster with a replication factor (RF) of 1, run the default yb-docker-ctl create command.

$ ./yb-docker-ctl create

Create a 3-node cluster with RF=3

To run a distributed SQL cluster locally, run the following yb-docker-ctl command to create a 3-node YugabyteDB cluster with a replication factor (RF) of 3.

$ ./yb-docker-ctl create --rf 3

Clients can now connect to the YSQL and YCQL APIs at localhost:5433 and localhost:9042 respectively.
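
Alternatively, you can open the bundled shells inside the yb-tserver-n1 container; the binary paths below are an assumption about the image layout:

# Open the YSQL shell inside the tserver container (path assumed to be /home/yugabyte/bin).
$ docker exec -it yb-tserver-n1 /home/yugabyte/bin/ysqlsh -h yb-tserver-n1
# Open the YCQL shell the same way.
$ docker exec -it yb-tserver-n1 /home/yugabyte/bin/cqlsh yb-tserver-n1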

2. Check cluster status with yb-docker-ctl

Run the command below to see that we now have one yb-master container (yb-master-n1) and one yb-tserver container (yb-tserver-n1) running on localhost. The roles these containers play in a YugabyteDB cluster are explained in detail here.

$ ./yb-docker-ctl status
ID             PID        Type       Node                 URL                       Status          Started At
921494a8058d   5547       tserver    yb-tserver-n1        http://192.168.64.5:9000  Running         2018-10-18T22:02:50.187976253Z
feea0823209a   5039       master     yb-master-n1         http://192.168.64.2:7000  Running         2018-10-18T22:02:47.163244578Z

3. Check cluster status with Admin UI

The yb-master-n1 Admin UI is available at http://localhost:7000 and the yb-tserver-n1 Admin UI is available at http://localhost:9000. Other masters and tservers do not have their admin ports mapped to localhost to avoid port conflicts.

NOTE: Clients connecting to the cluster connect only to yb-tserver-n1, even if you used yb-docker-ctl to create a multi-node local cluster. With Docker for Mac, routing traffic directly to containers is not currently possible. Because only one node receives incoming client traffic, the throughput of Docker-based local clusters can be significantly lower than that of binary-based local clusters.

3.1 Overview and Master status

The yb-master-n1 home page shows that we have a cluster (aka a universe) with a Replication Factor of 1 and Num Nodes (TServers) of 1. Num User Tables is 0 because no user tables have been created yet. The YugabyteDB version number is also shown for reference.

[Screenshot: master-home]

The Masters section highlights the yb-master along with its corresponding cloud, region, and zone placement.

3.2 TServer status

Clicking See all nodes takes us to the Tablet Servers page, where we can observe the 1 tserver along with the time since it last heartbeated to this master. Additionally, the Load (Num Tablets) is balanced across all available tservers. These tablets are the shards of the tables currently managed by the cluster (in this case, the system_redis.redis table). As new tables are added, new tablets are automatically created and distributed evenly across all available tservers.

[Screenshot: master-home]

Kubernetes

1. Create a 1-node cluster with RF=1

$ kubectl apply -f yugabyte-statefulset-rf-1.yaml
service/yb-masters created
service/yb-master-ui created
statefulset.apps/yb-master created
service/yb-tservers created
statefulset.apps/yb-tserver created

By default, the above command creates a 1-node cluster with a Replication Factor (RF) of 1: one yb-master pod and one yb-tserver pod. If you want a 3-node local cluster with RF=3, simply change the replica count of both yb-master and yb-tserver in the YAML file to 3, as in the sketch below.
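
A minimal sketch of that edit, assuming the manifest spells the field as the literal replicas: 1 for both StatefulSets (GNU sed shown; on macOS use sed -i ''):

# Bump both replica counts from 1 to 3, then re-apply the manifest.
$ sed -i 's/replicas: 1/replicas: 3/g' yugabyte-statefulset-rf-1.yaml
$ kubectl apply -f yugabyte-statefulset-rf-1.yaml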

2. Check cluster status

Run the command below to see that we now have two StatefulSets with one pod each: one yb-master pod (yb-master-0) and one yb-tserver pod (yb-tserver-0). The roles these pods play in a YugabyteDB cluster (aka universe) are explained in detail here.

$ kubectl get pods
NAME           READY     STATUS              RESTARTS   AGE
yb-master-0    0/1       ContainerCreating   0          5s
yb-tserver-0   0/1       ContainerCreating   0          4s

Eventually, all of the pods will be in the Running state.
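
If you would rather block until the pods are ready than poll, kubectl wait offers one way to do so (a sketch; adjust the timeout as needed):

# Wait until both pods report Ready.
$ kubectl wait --for=condition=ready pod/yb-master-0 pod/yb-tserver-0 --timeout=300s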

Run the command again to verify that the pods have reached the Running state.

$ kubectl get pods
NAME           READY     STATUS    RESTARTS   AGE
yb-master-0    1/1       Running   0          13s
yb-tserver-0   1/1       Running   0          12s

3. Initialize the YSQL API

Run the following command to initialize the YSQL API. Note that this step can take a few minutes, depending on the resource utilization of your Kubernetes environment.

$ kubectl exec -it yb-master-0 -- bash -c "YB_ENABLED_IN_POSTGRES=1 FLAGS_pggate_master_addresses=yb-master-0.yb-masters.default.svc.cluster.local:7100 /home/yugabyte/postgres/bin/initdb -D /tmp/yb_pg_initdb_tmp_data_dir -U postgres"

Clients can now connect to this YugabyteDB universe using the YSQL and YCQL APIs on ports 5433 and 9042, respectively.
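
For example, you can run the bundled shells from inside the tserver pod; the binary paths below are an assumption about the image layout:

# Open the YSQL shell from the tserver pod (path assumed to be /home/yugabyte/bin).
$ kubectl exec -it yb-tserver-0 -- /home/yugabyte/bin/ysqlsh -h yb-tserver-0
# Open the YCQL shell the same way.
$ kubectl exec -it yb-tserver-0 -- /home/yugabyte/bin/cqlsh yb-tserver-0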

4. Check cluster status via Kubernetes

You can see the status of the three YugabyteDB services by running the following command.

$ kubectl get services
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                        AGE
kubernetes     ClusterIP      10.96.0.1       <none>        443/TCP                                        13m
yb-master-ui   LoadBalancer   10.110.45.247   <pending>     7000:32291/TCP                                 11m
yb-masters     ClusterIP      None            <none>        7000/TCP,7100/TCP                              11m
yb-tservers    ClusterIP      None            <none>        9000/TCP,9100/TCP,9042/TCP,6379/TCP,5433/TCP   11m

5. Check cluster status with Admin UI

To do this, access the UI on port 7000, which is exposed by the pods in the yb-master service. Find the URL of the yb-master-ui LoadBalancer service:

$ minikube service yb-master-ui --url
http://192.168.99.100:31283

The yb-master-0 Admin UI is now available at the above URL.
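
If you are not running on minikube, kubectl port-forward is a generic alternative (a sketch):

# Forward local port 7000 to the master pod, then browse http://localhost:7000.
$ kubectl port-forward pod/yb-master-0 7000:7000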

5.1 Overview and Master status

The yb-master-0 home page shows that we have a cluster (aka a universe) with a Replication Factor of 1 and Num Nodes (TServers) of 1. Num User Tables is 0 because no user tables have been created yet. The YugabyteDB version is also shown for reference.

[Screenshot: master-home]

The Masters section highlights the 1 yb-master along with its corresponding cloud, region, and zone placement information.

5.2 TServer status

Clicking See all nodes takes us to the Tablet Servers page, where we can observe the 1 tserver along with the time since it last heartbeated to this master. Additionally, the Load (Num Tablets) is balanced across all available tservers. These tablets are the shards of the tables currently managed by the cluster (in this case, the system_redis.redis table). As new tables are added, new tablets are automatically created and distributed evenly across all available tservers.

[Screenshot: tserver-list]
