Google Cloud Platform
Prerequisites
- Download and install the Google Cloud SDK.
NOTE: If you install gcloud using a package manager (as opposed to downloading and installing it manually), some of the commands below are not supported.
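You can verify the installation by printing the SDK version; the exact components and version numbers in the output depend on your install:
$ gcloud version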
- Install kubectl
After installing Cloud SDK, install the kubectl command line tool by running the following command:
$ gcloud components install kubectl
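To confirm kubectl is now available on your PATH, you can print its client version (the version string will vary with the release installed by Cloud SDK):
$ kubectl version --client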
- Configure defaults for gcloud
Set the project ID to yugabyte. You can change this to suit your needs.
$ gcloud config set project yugabyte
Set the default compute zone to us-west1-b. You can change this to suit your needs.
$ gcloud config set compute/zone us-west1-b
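You can double-check both defaults by listing the active gcloud configuration:
$ gcloud config list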
1. Create a GKE cluster
If you have not already done so, create a Kubernetes cluster by running the following command.
$ gcloud container clusters create yugabyte
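Cluster creation can take a few minutes. Once it completes, you can confirm the cluster is up by listing your GKE clusters:
$ gcloud container clusters list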
2. Create a YugabyteDB cluster
Create a YugabyteDB cluster by running the following.
$ kubectl create -f https://raw.githubusercontent.com/yugabyte/yugabyte-db/master/cloud/kubernetes/yugabyte-statefulset.yaml
service "yb-masters" created
statefulset "yb-master" created
service "yb-tservers" created
statefulset "yb-tserver" created
3. Check the cluster
You should see the following pods running.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
yb-master-0 1/1 Running 0 3m
yb-master-1 1/1 Running 0 3m
yb-master-2 1/1 Running 0 3m
yb-tserver-0 1/1 Running 0 3m
yb-tserver-1 1/1 Running 0 3m
yb-tserver-2 1/1 Running 0 3m
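If some pods are still in a Pending or ContainerCreating state, you can watch them until they all reach Running (press Ctrl+C to stop watching):
$ kubectl get pods -w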
You can view the persistent volumes.
$ kubectl get persistentvolumes
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-f3301c41-1110-11e8-8231-42010a8a0083 1Gi RWO Delete Bound default/datadir-yb-master-0 standard 5m
pvc-f33f29b3-1110-11e8-8231-42010a8a0083 1Gi RWO Delete Bound default/datadir-yb-master-1 standard 5m
pvc-f35005b6-1110-11e8-8231-42010a8a0083 1Gi RWO Delete Bound default/datadir-yb-master-2 standard 5m
pvc-f36189ab-1110-11e8-8231-42010a8a0083 1Gi RWO Delete Bound default/datadir-yb-tserver-0 standard 5m
pvc-f366a4af-1110-11e8-8231-42010a8a0083 1Gi RWO Delete Bound default/datadir-yb-tserver-1 standard 5m
pvc-f36d2892-1110-11e8-8231-42010a8a0083 1Gi RWO Delete Bound default/datadir-yb-tserver-2 standard 5m
You can view all the services by running the following command.
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP XX.XX.XX.X <none> 443/TCP 23m
yb-masters ClusterIP None <none> 7000/TCP,7100/TCP 17m
yb-tservers ClusterIP None <none> 9000/TCP,9100/TCP,9042/TCP,6379/TCP 14m
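To view the YB-Master admin UI from your local machine, one option is to port-forward to a master pod and then open http://localhost:7000 in your browser:
$ kubectl port-forward yb-master-0 7000:7000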
4. Connect to the cluster
You can connect to the YCQL API by running the following.
$ kubectl exec -it yb-tserver-0 -- bin/cqlsh
Connected to local cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.9-SNAPSHOT | CQL spec 3.4.2 | Native protocol v4]
Use HELP for help.
cqlsh> DESCRIBE KEYSPACES;
system_schema system_auth system
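As a quick smoke test, you can create a keyspace and a table and read a row back; the demo names below are arbitrary:
cqlsh> CREATE KEYSPACE demo;
cqlsh> CREATE TABLE demo.users (id INT PRIMARY KEY, name TEXT);
cqlsh> INSERT INTO demo.users (id, name) VALUES (1, 'test');
cqlsh> SELECT * FROM demo.users;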
5. Destroy cluster (optional)
Destroy the YugabyteDB cluster we created above by running the following.
$ kubectl delete -f https://raw.githubusercontent.com/yugabyte/yugabyte-db/master/cloud/kubernetes/yugabyte-statefulset.yaml
service "yb-masters" deleted
statefulset "yb-master" deleted
service "yb-tservers" deleted
statefulset "yb-tserver" deleted
To destroy the persistent volume claims (you will lose all the data if you do this), run:
$ kubectl delete pvc -l app=yb-master
$ kubectl delete pvc -l app=yb-tserver
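You can verify that the claims are gone; the following should no longer list any datadir volumes:
$ kubectl get pvc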
6. Destroy the GKE cluster (optional)
To destroy the machines created for the GKE cluster, run the following.
$ gcloud container clusters delete yugabyte
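You can confirm the deletion by listing any remaining clusters:
$ gcloud container clusters list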
Advanced Kubernetes Deployment
More advanced scenarios for deploying in Kubernetes are covered in the Kubernetes Deployments section.
Prerequisites
- Download and install terraform.
- Verify the installation by running the terraform command. It should print a help message similar to the one shown below.
$ terraform
Usage: terraform [--version] [--help] <command> [args]
...
Common commands:
apply Builds or changes infrastructure
console Interactive console for Terraform interpolations
destroy Destroy Terraform-managed infrastructure
env Workspace management
fmt Rewrites config files to canonical format
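You can also check the installed version with:
$ terraform --version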
1. Create a terraform config file
- First, create a Terraform file with the provider details.
provider "google" {
  # Provide your credentials
  credentials = "${file("yugabyte-pcf-bc8114281026.json")}"

  # The name of your GCP project
  project = "<Your-GCP-Project-Name>"
}
NOTE: You can get the credentials file by following the steps given here.
- Now add the yugabyte terraform module to your file.
module "yugabyte-db-cluster" {
  source = "github.com/Yugabyte/terraform-gcp-yugabyte.git"

  # The name of the cluster to be created.
  cluster_name = "test-cluster"

  # The SSH key pair and user name.
  ssh_private_key = "SSH_PRIVATE_KEY_HERE"
  ssh_public_key  = "SSH_PUBLIC_KEY_HERE"
  ssh_user        = "SSH_USER_NAME_HERE"

  # The region name where the nodes should be spawned.
  region_name = "YOUR_VPC_REGION"

  # Replication factor.
  replication_factor = "3"

  # The number of nodes in the cluster; this cannot be lower than the replication factor.
  node_count = "3"
}
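If you do not yet have an SSH key pair to fill in the placeholders above, you could generate one; the key path here is purely illustrative:
$ ssh-keygen -t rsa -f ~/.ssh/yugabyte-key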
2. Create a cluster
Initialize Terraform first if you have not already done so.
$ terraform init
To preview the changes that will be made to your environment, run the following.
$ terraform plan
Now run the following to create the instances and bring up the cluster.
$ terraform apply
Once the cluster is created, you can go to the URL http://<node ip or dns name>:7000 to view the UI. You can find the node's IP or DNS name by running the following:
$ terraform state show google_compute_instance.yugabyte_node[0]
Any of the node addresses returned above can be used to access the cluster UI.
You can check the state of the nodes at any point by running the following command.
$ terraform show
3. Verify resources created
The following resources are created by this module:
module.terraform-gcp-yugabyte.google_compute_instance.yugabyte_node
The GCP VM instances. For a cluster named test-cluster, the instances will be named yugabyte-test-cluster-n1, yugabyte-test-cluster-n2, and yugabyte-test-cluster-n3.
module.terraform-gcp-yugabyte.google_compute_firewall.Yugabyte-Firewall
The firewall rule that allows the various clients to access the YugabyteDB cluster. For a cluster named test-cluster, this firewall rule will be named default-yugabyte-test-cluster-firewall, with ports 7000, 9000, 9042, and 6379 open to all.
module.terraform-gcp-yugabyte.google_compute_firewall.Yugabyte-Intra-Firewall
The firewall rule that allows communication internal to the cluster. For a cluster named test-cluster, this firewall rule will be named default-yugabyte-test-cluster-intra-firewall, with ports 7100 and 9100 open to all other VM instances in the same network.
module.terraform-gcp-yugabyte.null_resource.create_yugabyte_universe
A local script that configures the newly created instances to form a new YugabyteDB universe.
4. Destroy the cluster (optional)
To destroy what we just created, you can run the following command.
$ terraform destroy