2. Create a local cluster
After installing YugabyteDB, follow the instructions below to create a local cluster.
Note
The local cluster setup on a single host is intended for development and learning. For production deployment or performance benchmarking, deploy a true multi-node, multi-host setup; see Deploy YugabyteDB.
1. Create a local cluster
You can use the yb-ctl utility, located in the bin directory of the YugabyteDB package, to create and administer a local cluster. The default data directory is $HOME/yugabyte-data. You can change the location of the data directory by using the --data_dir option.
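For example, a minimal sketch of creating the cluster under a custom data directory (the /tmp/ybd path is just an illustrative choice):
$ ./bin/yb-ctl --data_dir /tmp/ybd create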
To quickly create a 1-node or 3-node local cluster, follow the steps below. For details on using the yb-ctl create command and the cluster configuration, see Create a local cluster in the utility reference.
Create a 1-node cluster with RF=1
To create a 1-node cluster with a replication factor (RF) of 1, run the following yb-ctl create command.
$ ./bin/yb-ctl create
The initial cluster creation may take a minute or so without any output on the prompt.
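If you want to start over before creating a cluster with a different configuration, a minimal sketch, assuming the yb-ctl destroy command in the same bin directory, is to remove the existing cluster and its data first:
$ ./bin/yb-ctl destroy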
Create a 3-node cluster with RF=3
To run a distributed SQL cluster locally, you can quickly create a 3-node cluster with RF of 3 by running the following command.
$ ./bin/yb-ctl --rf 3 create
You can now check $HOME/yugabyte-data to see the node-i directories created, where i represents the node_id of the node. Inside each such directory, there will be 2 disks, disk1 and disk2, to highlight the fact that YugabyteDB can work with multiple disks at the same time. Note that the IP address of node-i is by default set to 127.0.0.i.
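For example, a quick way to inspect this layout (assuming the default data directory) is:
$ ls $HOME/yugabyte-data
$ ls $HOME/yugabyte-data/node-1
You should see one node-i directory per node, each containing the disk directories described above.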
Clients can now connect to the YSQL and YCQL APIs at localhost:5433 and localhost:9042 respectively.
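For example, assuming the default ports above, you can open the bundled shells against the local cluster:
$ ./bin/ysqlsh -h 127.0.0.1
$ ./bin/cqlsh 127.0.0.1 9042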
2. Check cluster status with yb-ctl
Run the yb-ctl status command to see the yb-master and yb-tserver processes running locally.
Example
For a 1-node cluster, the yb-ctl status command will show that you have 1 yb-master process and 1 yb-tserver process running on the localhost. For details about the roles of these processes in a YugabyteDB cluster (aka Universe), see Universe.
$ ./bin/yb-ctl status
----------------------------------------------------------------------------------------------------
| Node Count: 1 | Replication Factor: 1 |
----------------------------------------------------------------------------------------------------
| JDBC : jdbc:postgresql://127.0.0.1:5433/postgres |
| YSQL Shell : bin/ysqlsh |
| YCQL Shell : bin/cqlsh |
| YEDIS Shell : bin/redis-cli |
| Web UI : http://127.0.0.1:7000/ |
| Cluster Data : /Users/yugabyte/yugabyte-data |
----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
| Node 1: yb-tserver (pid 20696), yb-master (pid 20693) |
----------------------------------------------------------------------------------------------------
| JDBC : jdbc:postgresql://127.0.0.1:5433/postgres |
| YSQL Shell : bin/ysqlsh |
| YCQL Shell : bin/cqlsh |
| YEDIS Shell : bin/redis-cli |
| data-dir[0] : /Users/yugabyte/yugabyte-data/node-1/disk-1/yb-data |
| yb-tserver Logs : /Users/yugabyte/yugabyte-data/node-1/disk-1/yb-data/tserver/logs |
| yb-master Logs : /Users/yugabyte/yugabyte-data/node-1/disk-1/yb-data/master/logs |
----------------------------------------------------------------------------------------------------
3. Check cluster status with Admin UI
Node 1's master Admin UI is available at http://127.0.0.1:7000 and the tserver Admin UI is available at http://127.0.0.1:9000. If you created a multi-node cluster, you can visit the other nodes' Admin UIs by using their corresponding IP addresses.
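If you prefer the command line, a quick sketch of verifying that both endpoints respond (assuming curl is installed) is:
$ curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:7000
$ curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:9000
Each command prints the HTTP status code; a 200 indicates the UI is up.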
3.1 Overview and Master status
Node 1's master Admin UI home page shows that we have a cluster (aka a Universe) with Replication Factor of 1 and Num Nodes (TServers) as 1. The Num User Tables is 0 since there are no user tables created yet. The YugabyteDB version number is also shown for your reference.
The Masters section highlights the 1 yb-master along with its corresponding cloud, region and zone placement.
3.2 TServer status
Clicking on See all nodes takes us to the Tablet Servers page, where we can observe the 1 tserver along with the time since it last connected to this master via regular heartbeats. Since there are no user tables created yet, we can see that the Load (Num Tablets) is 0. As new tables get added, new tablets (aka shards) will get automatically created and distributed evenly across all the available tablet servers.
1. Create a local cluster
You can use the yb-docker-ctl utility, downloaded in the previous step, to create and administer a containerized local cluster.
To quickly create a 1-node or 3-node local cluster using Docker, follow the steps below. For details on using the yb-docker-ctl create command and the cluster configuration, see Create a local cluster in the utility reference.
Create a 1-node cluster with RF=1
To create a 1-node cluster with a replication factor (RF) of 1, run the default yb-docker-ctl create command.
$ ./yb-docker-ctl create
Create a 3-node cluster with RF=3
To run a distributed SQL cluster locally, run the following yb-docker-ctl command to create a 3-node YugabyteDB cluster with a replication factor (RF) of 3.
$ ./yb-docker-ctl create --rf 3
Clients can now connect to the YSQL and YCQL APIs at localhost:5433 and localhost:9042 respectively.
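For example, a minimal sketch of opening the shells inside the first tserver container (assuming the image ships ysqlsh and cqlsh under /home/yugabyte/bin):
$ docker exec -it yb-tserver-n1 /home/yugabyte/bin/ysqlsh -h yb-tserver-n1
$ docker exec -it yb-tserver-n1 /home/yugabyte/bin/cqlsh yb-tserver-n1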
2. Check cluster status with yb-docker-ctl
Run the command below to see that we now have 1 yb-master (yb-master-n1) container and 1 yb-tserver (yb-tserver-n1) container running on this localhost. The roles played by these containers in a YugabyteDB cluster are explained in detail here.
$ ./yb-docker-ctl status
ID PID Type Node URL Status Started At
921494a8058d 5547 tserver yb-tserver-n1 http://192.168.64.5:9000 Running 2018-10-18T22:02:50.187976253Z
feea0823209a 5039 master yb-master-n1 http://192.168.64.2:7000 Running 2018-10-18T22:02:47.163244578Z
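As a quick cross-check, you can also list the same containers directly with Docker by filtering on the yb- name prefix shown above:
$ docker ps --filter "name=yb-"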
3. Check cluster status with Admin UI
The yb-master-n1 Admin UI is available at http://localhost:7000 and the yb-tserver-n1 Admin UI is available at http://localhost:9000. Other masters and tservers do not have their admin ports mapped to localhost to avoid port conflicts.
NOTE: Clients connecting to the cluster will connect only to yb-tserver-n1, even if you used yb-docker-ctl to create a multi-node local cluster. In the case of Docker for Mac, routing traffic directly to containers is not even possible today. Since only 1 node will receive the incoming client traffic, the throughput expected for Docker-based local clusters can be significantly lower than that of binary-based local clusters.
3.1 Overview and Master status
The yb-master-n1 home page shows that we have a cluster (aka a Universe) with Replication Factor of 1 and Num Nodes (TServers) as 1. The Num User Tables is 0 since there are no user tables created yet. The YugabyteDB version number is also shown for your reference.
The Masters section highlights the 3 masters along with their corresponding cloud, region and zone placement.
3.2 TServer status
Clicking on See all nodes takes us to the Tablet Servers page, where we can observe the tservers along with the time since each last connected to this master via regular heartbeats. Additionally, we can see that the Load (Num Tablets) is balanced across all available tservers. These tablets are the shards of the user tables currently managed by the cluster (which in this case is the system_redis.redis table). As new tables get added, new tablets will get automatically created and distributed evenly across all the available tservers.
1. Create a 1-node cluster with replication factor 1
$ kubectl apply -f yugabyte-statefulset-rf-1.yaml
service/yb-masters created
service/yb-master-ui created
statefulset.apps/yb-master created
service/yb-tservers created
statefulset.apps/yb-tserver created
By default, the above command will create a 1-node cluster with a Replication Factor (RF) of 1. This cluster has 1 pod each of yb-master and yb-tserver. If you want to create a 3-node local cluster with RF 3, simply change the replica count of yb-master and yb-tserver in the YAML file to 3.
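As a minimal sketch, you can confirm the replica counts that were applied by listing the StatefulSets created above:
$ kubectl get statefulsets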
2. Check cluster status
Run the command below to see that we now have two services with 1 pod each: 1 yb-master pod (yb-master-0) and 1 yb-tserver pod (yb-tserver-0). The roles played by these pods in a YugabyteDB cluster (aka Universe) are explained in detail here.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
yb-master-0 0/1 ContainerCreating 0 5s
yb-tserver-0 0/1 ContainerCreating 0 4s
Eventually all the pods will have the Running state.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
yb-master-0 1/1 Running 0 13s
yb-tserver-0 1/1 Running 0 12s
3. Initialize the YSQL API
Run the following command to initialize the YSQL API. Note that this step can take a few minutes depending on the resource utilization of your Kubernetes environment.
$ kubectl exec -it yb-master-0 -- bash -c "YB_ENABLED_IN_POSTGRES=1 FLAGS_pggate_master_addresses=yb-master-0.yb-masters.default.svc.cluster.local:7100 /home/yugabyte/postgres/bin/initdb -D /tmp/yb_pg_initdb_tmp_data_dir -U postgres"
Clients can now connect to this YugabyteDB universe using the YSQL and YCQL APIs on ports 5433 and 9042 respectively.
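For example, a minimal sketch of opening a YSQL shell from inside the cluster (assuming the pod image ships ysqlsh under /home/yugabyte/bin):
$ kubectl exec -it yb-tserver-0 -- /home/yugabyte/bin/ysqlsh -h yb-tserver-0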
4. Check cluster status via Kubernetes
You can see the status of the three YugabyteDB services by running the following command.
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 13m
yb-master-ui LoadBalancer 10.110.45.247 <pending> 7000:32291/TCP 11m
yb-masters ClusterIP None <none> 7000/TCP,7100/TCP 11m
yb-tservers ClusterIP None <none> 9000/TCP,9100/TCP,9042/TCP,6379/TCP,5433/TCP 11m
5. Check cluster status with Admin UI
To do this, we need to access the UI on port 7000 exposed by any of the pods in the yb-master service. To do so, we find the URL for the yb-master-ui LoadBalancer service.
$ minikube service yb-master-ui --url
http://192.168.99.100:31283
The yb-master-0 Admin UI is now available at the above URL.
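If you are not using minikube, one alternative sketch is to port-forward the yb-master-ui service to your workstation and then browse to http://localhost:7000:
$ kubectl port-forward svc/yb-master-ui 7000:7000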
5.1 Overview and master status
The yb-master-0 home page shows that we have a cluster (aka a Universe) with Replication Factor of 1 and Num Nodes (TServers) as 1. The Num User Tables is 0 since there are no user tables created yet. The YugabyteDB version is also shown for your reference.
The Masters section highlights the 1 yb-master along with its corresponding cloud, region and zone placement information.
5.2 TServer status
Clicking on See all nodes takes us to the Tablet Servers page, where we can observe the 1 tserver along with the time since it last connected to this master via regular heartbeats. Additionally, we can see that the Load (Num Tablets) is balanced across all available tservers. These tablets are the shards of the user tables currently managed by the cluster (which in this case is the system_redis.redis table). As new tables get added, new tablets will get automatically created and distributed evenly across all the available tservers.