
Docker

Attention

This page documents an earlier version. Go to the latest (v2.3) version.
  • Prerequisites
    • Linux
    • macOS
    • Windows
  • 1. Create swarm nodes
  • 2. Create overlay network
  • 3. Create yb-master services
  • 4. Create yb-tserver service
  • 5. Test the APIs
    • YCQL API
    • YEDIS API
    • YSQL API
  • 6. Test fault-tolerance with node failure
  • 7. Test auto-scaling with node addition
  • 8. Remove services and destroy nodes

Docker includes swarm mode for natively managing a cluster of Docker Engines, called a swarm. The Docker CLI can be used to create a swarm, deploy application services to a swarm, and manage swarm behavior, without any additional orchestration software. Details on how swarm mode works are available in the Docker documentation.

This tutorial uses Docker Machine to create multiple nodes on your desktop. The same steps also work when the nodes are separate machines on the cloud platform of your choice.

Prerequisites

Linux

  • Docker Engine 1.12 or later installed using Docker for Linux.
  • Docker Machine.

macOS

  • Docker Engine 1.12 or later installed using Docker for Mac. Docker Machine is already included with Docker for Mac.

  • VirtualBox 5.2 or later for creating the swarm nodes.

Windows

  • Docker Engine 1.12 or later installed using Docker for Windows. Docker Machine is already included with Docker for Windows.

  • Microsoft Hyper-V driver for creating the swarm nodes.

As noted in Docker docs, the host on which Docker for Mac or Docker for Windows is installed does not itself participate in the swarm. The included version of Docker Machine is used to create the swarm nodes using VirtualBox (for macOS) and Hyper-V (for Windows).
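Before creating any nodes, it can save time to confirm the prerequisites are in place. A minimal check, assuming the tools are on your PATH:

```shell
# Docker Engine must be 1.12 or later for swarm mode.
docker --version

# Docker Machine is used below to create the swarm nodes.
docker-machine version

# macOS only: confirm VirtualBox is installed (5.2 or later).
VBoxManage --version
```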

1. Create swarm nodes

The following bash script is a simplified form of Docker's own swarm beginner tutorial script. You can use it on Linux and macOS. On Windows, download and adapt the PowerShell Hyper-V version of the same script.

  • The script first instantiates 3 nodes using Docker Machine and VirtualBox. It then initializes the swarm by creating a swarm manager on the first node, and adds the remaining nodes to the cluster as workers. It also pulls the yugabytedb/yugabyte container image onto each node to expedite the subsequent steps.

Note

In more fault-tolerant setups, there are multiple manager nodes and they are independent of the worker nodes. A 3-manager, 3-worker setup is used in the Docker tutorial script referenced above.
#!/bin/bash

# Swarm mode using Docker Machine

workers=3

# create worker machines
echo "======> Creating $workers worker machines ...";
for node in $(seq 1 $workers);
do
    echo "======> Creating worker$node machine ...";
    docker-machine create -d virtualbox worker$node;
done

# list all machines
docker-machine ls

# initialize swarm mode and create a manager on worker1
echo "======> Initializing the swarm manager on worker1 ..."
docker-machine ssh worker1 "docker swarm init --listen-addr $(docker-machine ip worker1) --advertise-addr $(docker-machine ip worker1)"

# get worker tokens
export worker_token=`docker-machine ssh worker1 "docker swarm join-token worker -q"`
echo "worker_token: $worker_token"

# show members of swarm
docker-machine ssh worker1 "docker node ls"

# other workers join swarm, worker1 is already a member
for node in $(seq 2 $workers);
do
    echo "======> worker$node joining swarm as worker ..."
    docker-machine ssh worker$node \
    "docker swarm join \
    --token $worker_token \
    --listen-addr $(docker-machine ip worker$node) \
    --advertise-addr $(docker-machine ip worker$node) \
    $(docker-machine ip worker1)"
done

# pull the yugabytedb container
for node in $(seq 1 $workers);
do
    echo "======> pulling yugabytedb/yugabyte container on worker$node ..."
    docker-machine ssh worker$node \
    "docker pull yugabytedb/yugabyte"
done

# show members of swarm
docker-machine ssh worker1 "docker node ls"
  • Review all the nodes created.
$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
worker1   -        virtualbox   Running   tcp://192.168.99.100:2376           v18.05.0-ce
worker2   -        virtualbox   Running   tcp://192.168.99.101:2376           v18.05.0-ce
worker3   -        virtualbox   Running   tcp://192.168.99.102:2376           v18.05.0-ce  
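As the note above mentions, production swarms run multiple managers. Purely as an illustration of that setup, the two remaining workers could be promoted to managers (a real deployment would use dedicated manager nodes):

```shell
# Promote worker2 and worker3 so the swarm has 3 managers and can
# tolerate the loss of any one of them (Raft majority of 2 out of 3).
docker-machine ssh worker1 "docker node promote worker2 worker3"

# MANAGER STATUS should now show Leader plus two Reachable entries.
docker-machine ssh worker1 "docker node ls"
```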

2. Create overlay network

  • SSH into the worker1 node where the swarm manager is running.
$ docker-machine ssh worker1
  • Create an overlay network that the swarm services can use to communicate with each other. The attachable option allows standalone containers to connect to swarm services on the network.
$ docker network create --driver overlay --attachable yugabytedb
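To confirm the network was created correctly, you can inspect it; a quick sketch:

```shell
# The network should be listed with the overlay driver and swarm scope.
docker network ls --filter name=yugabytedb

# Print just the driver; expected output is "overlay".
docker network inspect yugabytedb --format '{{ .Driver }}'
```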

3. Create yb-master services

  • Create 3 yb-master replicated services, each with replicas set to 1. This is currently the only way in Docker Swarm to get a stable network identity for each yb-master container, which is required as input when creating the yb-tserver service in the next step.

Note for Kubernetes Users

Docker Swarm lacks an equivalent of Kubernetes StatefulSets. The concept of replicated services is similar to Kubernetes Deployments.
$ docker service create \
--replicas 1 \
--name yb-master1 \
--network yugabytedb \
--mount type=volume,source=yb-master1,target=/mnt/data0 \
--publish 7000:7000 \
yugabytedb/yugabyte:latest /home/yugabyte/bin/yb-master \
--fs_data_dirs=/mnt/data0 \
--master_addresses=yb-master1:7100,yb-master2:7100,yb-master3:7100 \
--replication_factor=3
$ docker service create \
--replicas 1 \
--name yb-master2 \
--network yugabytedb \
--mount type=volume,source=yb-master2,target=/mnt/data0 \
yugabytedb/yugabyte:latest /home/yugabyte/bin/yb-master \
--fs_data_dirs=/mnt/data0 \
--master_addresses=yb-master1:7100,yb-master2:7100,yb-master3:7100 \
--replication_factor=3
$ docker service create \
--replicas 1 \
--name yb-master3 \
--network yugabytedb \
--mount type=volume,source=yb-master3,target=/mnt/data0 \
yugabytedb/yugabyte:latest /home/yugabyte/bin/yb-master \
--fs_data_dirs=/mnt/data0 \
--master_addresses=yb-master1:7100,yb-master2:7100,yb-master3:7100 \
--replication_factor=3
  • Run the command below to see the services that are now live.
$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                        PORTS
jfnrqfvnrc5b        yb-master1          replicated          1/1                 yugabytedb/yugabyte:latest   *:7000->7000/tcp
kqp6eju3kq88        yb-master2          replicated          1/1                 yugabytedb/yugabyte:latest
ah6wfodd4noh        yb-master3          replicated          1/1                 yugabytedb/yugabyte:latest  
  • View the yb-master Admin UI by going to port 7000 of any node, courtesy of the publish option used when the yb-master1 service was created. For example, Step 1 shows that worker2's IP address is 192.168.99.101, so http://192.168.99.101:7000 opens the yb-master Admin UI.
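If a master service does not reach 1/1, its logs are the first place to look. A sketch (docker service logs is available in Docker 17.05 and later):

```shell
# Tail the most recent log lines of the first master service.
docker service logs --tail 50 yb-master1

# Show which node the task was scheduled on and its current state.
docker service ps yb-master1
```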

4. Create yb-tserver service

  • Create a single yb-tserver global service so that swarm can then automatically spawn 1 container/task on each worker node. Each time we add a node to the swarm, the swarm orchestrator creates a task and the scheduler assigns the task to the new node.

Note for Kubernetes Users

The global services concept in Docker Swarm is similar to Kubernetes DaemonSets.
$ docker service create \
--mode global \
--name yb-tserver \
--network yugabytedb \
--mount type=volume,source=yb-tserver,target=/mnt/data0 \
--publish 9000:9000 \
yugabytedb/yugabyte:latest /home/yugabyte/bin/yb-tserver \
--fs_data_dirs=/mnt/data0 \
--tserver_master_addrs=yb-master1:7100,yb-master2:7100,yb-master3:7100

Tip

Use remote volumes instead of the local volumes used above when you want to scale your swarm cluster out or in.
  • Run the command below to see the services that are now live.
$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                        PORTS
jfnrqfvnrc5b        yb-master1          replicated          1/1                 yugabytedb/yugabyte:latest   *:7000->7000/tcp
kqp6eju3kq88        yb-master2          replicated          1/1                 yugabytedb/yugabyte:latest
ah6wfodd4noh        yb-master3          replicated          1/1                 yugabytedb/yugabyte:latest
n6padh2oqjk7        yb-tserver          global              3/3                 yugabytedb/yugabyte:latest   *:9000->9000/tcp
  • Now we can go to http://192.168.99.101:9000 to see the yb-tserver admin UI.
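You can also verify that the global service placed exactly one yb-tserver task on each of the 3 nodes; a sketch:

```shell
# One line per task; a healthy global service shows one Running task per node.
docker service ps yb-tserver --format '{{ .Node }}: {{ .CurrentState }}'
```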

5. Test the APIs

YCQL API

  • Find the container ID of the yb-tserver container running on worker1. It is in the first column of the docker ps output.

  • Connect to the container using that ID.

$ docker exec -it <ybtserver_container_id> /home/yugabyte/bin/cqlsh
Connected to local cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.9-SNAPSHOT | CQL spec 3.4.2 | Native protocol v4]
Use HELP for help.
cqlsh>
  • Follow the test instructions as noted in Quick Start.
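As a quick non-interactive smoke test, a few statements can be passed to cqlsh with -e. The demo keyspace and users table here are hypothetical examples, not part of the Quick Start:

```shell
docker exec -it <ybtserver_container_id> /home/yugabyte/bin/cqlsh -e "
  CREATE KEYSPACE IF NOT EXISTS demo;
  CREATE TABLE IF NOT EXISTS demo.users (id INT PRIMARY KEY, name TEXT);
  INSERT INTO demo.users (id, name) VALUES (1, 'alice');
  SELECT * FROM demo.users;"
```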

YEDIS API

  • Find the container ID of the yb-master container running on worker1. It is in the first column of the docker ps output.

  • Initialize the YEDIS API.

$ docker exec -it <ybmaster_container_id> /home/yugabyte/bin/yb-admin -- --master_addresses yb-master1:7100,yb-master2:7100,yb-master3:7100 setup_redis_table
I0515 19:54:48.952378    39 client.cc:1208] Created table system_redis.redis of type REDIS_TABLE_TYPE
I0515 19:54:48.953572    39 yb-admin_client.cc:440] Table 'system_redis.redis' created.
  • Follow the test instructions as noted in Quick Start.
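For a quick check, a few Redis commands can be run against a yb-tserver container (use the yb-tserver container ID from the YCQL step). This assumes the image ships redis-cli under /home/yugabyte/bin:

```shell
# PING should return PONG once the redis table has been set up.
docker exec -it <ybtserver_container_id> /home/yugabyte/bin/redis-cli PING

# Write and read back a hypothetical key.
docker exec -it <ybtserver_container_id> /home/yugabyte/bin/redis-cli SET mykey hello
docker exec -it <ybtserver_container_id> /home/yugabyte/bin/redis-cli GET mykey
```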

YSQL API

  • Install the postgresql client in the yb-tserver container.
$ docker exec -it <ybtserver_container_id> yum install postgresql
  • Connect to the ysqlsh client in yb-tserver.
$ docker exec -it <ybtserver_container_id> ysqlsh
...
ysqlsh (11.2)
Type "help" for help.

postgres=#
  • Follow the test instructions as noted in Quick Start.
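A quick non-interactive smoke test can be run with ysqlsh -c; the users table here is a hypothetical example:

```shell
docker exec -it <ybtserver_container_id> ysqlsh -c "
  CREATE TABLE IF NOT EXISTS users (id INT PRIMARY KEY, name TEXT);
  INSERT INTO users VALUES (1, 'alice');
  SELECT * FROM users;"
```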

6. Test fault-tolerance with node failure

Docker Swarm ensures that the yb-tserver global service always has 1 yb-tserver container running on every node. If the yb-tserver container on any node dies, Docker Swarm brings it back up.

$ docker kill <ybtserver_container_id>

Observe the output of docker ps every few seconds until you see that the yb-tserver container has been re-spawned by Docker Swarm.
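Rather than re-running docker ps by hand, a small loop can poll for the replacement container; a sketch:

```shell
# Poll every 2 seconds; a new yb-tserver container ID appears once
# Docker Swarm has re-spawned the killed task.
while true; do
    docker ps --filter "name=yb-tserver" --format "{{ .ID }}  {{ .Status }}"
    sleep 2
done
```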

7. Test auto-scaling with node addition

  • On the host machine, get the worker token that new worker nodes will use to join the existing swarm.
$ docker-machine ssh worker1 "docker swarm join-token worker -q"
SWMTKN-1-aadasdsadas-2ja2q2esqsivlfx2ygi8u62yq
  • Create a new node worker4.
$ docker-machine create -d virtualbox worker4
  • Pull the YugabyteDB container.
$ docker-machine ssh worker4 "docker pull yugabytedb/yugabyte"
  • Join worker4 to the existing swarm.
$ docker-machine ssh worker4 \
    "docker swarm join \
    --token SWMTKN-1-aadasdsadas-2ja2q2esqsivlfx2ygi8u62yq \
    --listen-addr $(docker-machine ip worker4) \
    --advertise-addr $(docker-machine ip worker4) \
    $(docker-machine ip worker1)"
  • Observe that Docker Swarm adds a new yb-tserver instance to the newly added worker4 node, changing the replica status from 3/3 to 4/4.
$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                        PORTS
jfnrqfvnrc5b        yb-master1          replicated          1/1                 yugabytedb/yugabyte:latest   *:7000->7000/tcp
kqp6eju3kq88        yb-master2          replicated          1/1                 yugabytedb/yugabyte:latest
ah6wfodd4noh        yb-master3          replicated          1/1                 yugabytedb/yugabyte:latest
n6padh2oqjk7        yb-tserver          global              4/4                 yugabytedb/yugabyte:latest   *:9000->9000/tcp 
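The reverse operation, scaling in, can be tested by draining a node so that swarm stops its tasks before the node is removed; a sketch (data on the node's local volume is lost with it):

```shell
# Stop scheduling tasks on worker4 and shut down its running task.
docker-machine ssh worker1 "docker node update --availability drain worker4"

# Remove the node from the swarm, then destroy the machine (-y skips the prompt).
docker-machine ssh worker4 "docker swarm leave"
docker-machine ssh worker1 "docker node rm worker4"
docker-machine rm -y worker4
```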

8. Remove services and destroy nodes

  • Stop the machines.
$ docker-machine stop $(docker-machine ls -q)
  • Remove the machines.
$ docker-machine rm $(docker-machine ls -q)
Copyright © 2017-2020 Yugabyte, Inc. All rights reserved.