
Local SSD


Local vs remote SSDs

Kubernetes gives users the option of using remote disks with dynamic provisioning, or local storage, which has to be pre-provisioned.

Local storage delivers great performance, but the data is not replicated and can be lost if the node fails. This option is ideal for databases, such as YugabyteDB, that manage their own replication and can guarantee high availability (HA).

Remote storage has slightly lower performance, but the data is resilient to failures. This type of storage is essential for databases that do not offer HA themselves (for example, traditional RDBMSs such as PostgreSQL and MySQL).
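For context, remote SSDs on GKE are typically provisioned dynamically through a Kubernetes StorageClass backed by the GCE persistent disk provisioner. Below is a minimal sketch of such a StorageClass (the ssd-remote name is illustrative and not used elsewhere in this tutorial):

  # Dynamically provisions remote SSD-backed persistent disks on GKE (illustrative).
  kind: StorageClass
  apiVersion: storage.k8s.io/v1
  metadata:
    name: ssd-remote
  provisioner: kubernetes.io/gce-pd
  parameters:
    type: pd-ssd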

Below is a table that summarizes these characteristics and when to use local versus remote storage.

| Feature | Local SSD storage | Remote SSD storage |
| ------- | ----------------- | ------------------ |
| Provision large disk capacity per node | Depends on cloud provider | Yes |
| Ideal deployment strategy with YugabyteDB | Use for latency-sensitive apps; add remote storage to increase capacity / for cost-efficient tiering | Use for large disk capacity per node |
| Disk storage resilient to failures | No | Yes |
| Performance - latency | Lower | Higher |
| Performance - throughput | Higher | Lower |
| Typical cost characteristics | Lower | Higher |
| Kubernetes provisioning scheme | Pre-provisioned | Dynamic provisioning |

Thus, it is generally preferable to use local storage where possible for higher performance and lower costs. The following section explains how to deploy YugabyteDB on Kubernetes using local SSDs.

Google Kubernetes Engine (GKE)

1. Create a gcloud cluster

Each cluster brings up 3 nodes, each of type n1-standard-1, for the Kubernetes masters. You can directly create a cluster with the desired machine type using the --machine-type option. In this example, we are going to create a node pool with n1-standard-8 nodes for the YugabyteDB universe.

  • Choose the zone

First, choose the zone in which you want to run the cluster. In this tutorial, we are going to deploy the Kubernetes masters using the default machine type n1-standard-1 in the zone us-west1-b, and then add a node pool with the desired node type and node count in order to deploy the YugabyteDB universe. You can view the list of zones by running the following command:

$ gcloud compute zones list
NAME                       REGION                   STATUS
...
us-west1-b                 us-west1                 UP
us-west1-c                 us-west1                 UP
us-west1-a                 us-west1                 UP
...
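Optionally, you can set a default compute zone so that the --zone flag can be omitted from subsequent gcloud commands (the commands in this tutorial pass --zone explicitly, so this step is not required):

$ gcloud config set compute/zone us-west1-b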
  • Create the gcloud container cluster

Create a Kubernetes cluster on GKE in the desired zone by running the following command.

$ gcloud container clusters create yugabyte --zone us-west1-b
Created [https://container.googleapis.com/v1/projects/yugabyte/zones/us-west1-b/clusters/yugabyte].
  • List gcloud container clusters

You can list the available clusters by running the following command.

$ gcloud container clusters list
NAME      LOCATION    MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
yugabyte  us-west1-b  1.8.7-gke.1     35.199.164.253  n1-standard-1  1.8.7-gke.1   3          RUNNING

2. Create a node pool

Create a node pool with 3 nodes, each with 8 CPUs and 2 local SSDs.

$ gcloud container node-pools create node-pool-8cpu-2ssd \
      --cluster=yugabyte \
      --local-ssd-count=2 \
      --machine-type=n1-standard-8 \
      --num-nodes=3 \
      --zone=us-west1-b
Created
NAME                 MACHINE_TYPE   DISK_SIZE_GB  NODE_VERSION
node-pool-8cpu-2ssd  n1-standard-8  100           1.8.7-gke.1

Note the --local-ssd-count option above, which tells gcloud to provision each node with 2 local SSDs.

We can list all the node pools by doing the following.

$ gcloud container node-pools list --cluster yugabyte --zone=us-west1-b
NAME                 MACHINE_TYPE   DISK_SIZE_GB  NODE_VERSION
default-pool         n1-standard-1  100           1.8.7-gke.1
node-pool-8cpu-2ssd  n1-standard-8  100           1.8.7-gke.1

You can view details of the node-pool just created by running the following command:

$ gcloud container node-pools describe node-pool-8cpu-2ssd --cluster yugabyte --zone=us-west1-b
config:
  diskSizeGb: 100
  imageType: COS
  localSsdCount: 2
  machineType: n1-standard-8
initialNodeCount: 3
name: node-pool-8cpu-2ssd

3. Create a YugabyteDB universe

If this is your only container cluster, kubectl automatically points to this cluster. However, if you have multiple clusters, you should switch kubectl to point to this cluster by running the following command:

$ gcloud container clusters get-credentials yugabyte --zone us-west1-b
Fetching cluster endpoint and auth data.
kubeconfig entry generated for yugabyte.
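
To confirm that kubectl now points at the new cluster, you can print the current context; GKE context names follow the pattern gke_<project>_<zone>_<cluster>.

$ kubectl config current-context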

You can launch a universe on this node pool to run on local SSDs by running the following command.

$ kubectl apply -f https://raw.githubusercontent.com/yugabyte/yugabyte-db/master/cloud/kubernetes/yugabyte-statefulset-local-ssd-gke.yaml
service "yb-masters" created
service "yb-master-ui" created
statefulset "yb-master" created
service "yb-tservers" created
statefulset "yb-tserver" created

You can inspect the yaml file used above to see how a YugabyteDB Kubernetes universe is configured to launch on nodes with local disks.

Note the following nodeSelector snippet in the yaml file, which instructs the Kubernetes scheduler to place the YugabyteDB pods on nodes that have local disks:

  nodeSelector:
    cloud.google.com/gke-local-ssd: "true"
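
This works because GKE automatically attaches the cloud.google.com/gke-local-ssd=true label to nodes that have local SSDs. You can verify which nodes carry this label, and therefore qualify for the selector above, by running:

$ kubectl get nodes -l cloud.google.com/gke-local-ssd=true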

Also, note that we instruct the scheduler to place the various pods of the yb-master and yb-tserver services on different physical nodes using the podAntiAffinity hint:

  spec:
    affinity:
      # Set the anti-affinity selector scope to YB masters.
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - yb-master
            topologyKey: kubernetes.io/hostname
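
Once the pods are up (see the next step), you can check that the anti-affinity hint took effect by listing the pods together with the nodes they were scheduled on; each yb-master pod should appear on a different node in the NODE column.

$ kubectl get pods -l app=yb-master -o wide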

4. View the universe

You can verify that the YugabyteDB pods are running by doing the following:

$ kubectl get pods
NAME           READY     STATUS    RESTARTS   AGE
yb-master-0    1/1       Running   0          49s
yb-master-1    1/1       Running   0          49s
yb-master-2    1/1       Running   0          49s
yb-tserver-0   1/1       Running   0          48s
yb-tserver-1   1/1       Running   0          48s
yb-tserver-2   1/1       Running   0          48s

You can check all the services that are running by doing the following:

$ kubectl get services
NAME           TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)                               AGE
kubernetes     ClusterIP      10.7.240.1    <none>          443/TCP                               11m
yb-master-ui   LoadBalancer   10.7.246.86   XX.XX.XX.XX     7000:30707/TCP                        1m
yb-masters     ClusterIP      None          <none>          7000/TCP,7100/TCP                     1m
yb-tservers    ClusterIP      None          <none>          9000/TCP,9100/TCP,9042/TCP,6379/TCP   1m

Note the yb-master-ui service above. It is a LoadBalancer service that exposes the YugabyteDB universe UI. You can view the UI by browsing to the URL http://XX.XX.XX.XX:7000. It should look as follows.

GKE YugabyteDB dashboard
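
If you prefer to retrieve the external IP of the yb-master-ui service from the command line instead of copying it from the table above, you can use a jsonpath query:

$ kubectl get svc yb-master-ui -o jsonpath='{.status.loadBalancer.ingress[0].ip}'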

5. Connect to the universe

You can connect to one of the yb-tserver pods and verify that the local disks are mounted into the pod.

$ kubectl exec -it yb-tserver-0 bash

We can observe the local disks by running the following command.

[root@yb-tserver-0 yugabyte]# df -kh
Filesystem      Size  Used Avail Use% Mounted on
...
/dev/sdb        369G   70M  350G   1% /mnt/disk0
/dev/sdc        369G   69M  350G   1% /mnt/disk1
...

You can connect to the cqlsh shell on this universe by running the following command.

$ kubectl exec -it yb-tserver-0 bin/cqlsh
Connected to local cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.9-SNAPSHOT | CQL spec 3.4.2 | Native protocol v4]
Use HELP for help.
cqlsh> DESCRIBE KEYSPACES;

system_schema system_auth system
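
As a quick smoke test, you can create a keyspace and a table, write a row, and read it back (the demo keyspace and test table are illustrative, not part of the tutorial):

cqlsh> CREATE KEYSPACE demo;
cqlsh> CREATE TABLE demo.test (k TEXT PRIMARY KEY, v TEXT);
cqlsh> INSERT INTO demo.test (k, v) VALUES ('key', 'value');
cqlsh> SELECT * FROM demo.test;

 k   | v
-----+-------
 key | value

(1 rows)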

6. [Optional] Destroy the cluster

You can destroy the YugabyteDB universe by running the following command. Note that this does not destroy the data, and you may not be able to respawn the cluster because data is left behind on the persistent disks.

$ kubectl delete -f https://raw.githubusercontent.com/yugabyte/yugabyte-db/master/cloud/kubernetes/yugabyte-statefulset-local-ssd-gke.yaml
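
If you also want to remove the leftover data, check whether the universe created any persistent volume claims and delete them explicitly. Whether PVCs exist depends on how the yaml file provisions storage (local SSDs are often mounted as hostPath volumes instead), and the app labels below are assumptions based on the pod labels:

$ kubectl get pvc
$ kubectl delete pvc -l app=yb-master
$ kubectl delete pvc -l app=yb-tserver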

You can destroy the node-pool we created by running the following command:

$ gcloud container node-pools delete node-pool-8cpu-2ssd --cluster yugabyte --zone=us-west1-b

Finally, you can destroy the entire gcloud container cluster by running the following:

$ gcloud beta container clusters delete yugabyte --zone us-west1-b