Configure the on-premises cloud provider
You can configure the on-premises cloud provider for YugabyteDB using the Yugabyte Platform console. If no cloud providers are configured, the main Dashboard prompts you to configure at least one cloud provider, as per the following illustration:
Configure the on-premises provider
Provider name is an internal tag used for organizing your providers, so you know where you want to deploy your YugabyteDB universes.
To provision on-premises nodes with YugabyteDB, Yugabyte Platform requires SSH access to these nodes. Unless you plan to provision the database nodes manually, the SSH user needs passwordless sudo permissions to complete a few tasks.
If the SSH user requires a password for sudo access or the SSH user does not have sudo access, follow the steps in the Manually provision nodes section.
The port number to use for SSH client connections.
Manually Provision Nodes
If you choose to manually set up your database nodes, set this flag to true. Otherwise, Yugabyte Platform uses the sudo user to set up DB nodes. For manual provisioning, you are prompted at a later stage to run a Python script or a set of commands on the database nodes.
If any of the following items is true, you need to provision the nodes manually:
- Sudo user requires a password
- The SSH user is not a sudo user
Ensure that the SSH key is pasted correctly (the supported format is RSA).
Air Gap Install
If enabled, the installation runs in an air-gapped mode without expecting any internet access.
Desired Home Directory (Optional)
Specifies the home directory of the yugabyte user. The default value is /home/yugabyte.
Node Exporter Port
The port number (default value 9300) for the Node Exporter. You can override this to specify a different port.
Install Node Exporter
Defines whether to install Node Exporter or skip the installation. You can skip this step if Node Exporter is already installed on the nodes. If you skip the installation, ensure that you have provided the correct port number.
Node Exporter User
You can override the default prometheus user. This is useful when a user is pre-provisioned on the nodes (for example, when user creation is disabled). If overridden, the installer checks whether the user exists and creates it if it does not.
Configure hardware for YugabyteDB nodes
Complete the fields shown in the following illustration to provide node hardware configuration (CPU, memory, and volume information):
This is an internal user-defined tag used as an identifier in the Instance Type universe field.
Number of Cores
This is the number of cores assigned to a node.
Mem Size (GB)
This is the memory allocation of a node.
Vol size (GB)
This is the disk volume of a node.
For mount paths, use a mount point with enough space to contain your node density. Use /data. If you have multiple drives, add these as a comma-separated list, such as: /data1,/data2.
Region and Zones
Complete the fields shown in the following illustration to provide the location of DB nodes. All these fields are user-defined and will be used later during universe creation:
Provision YugabyteDB nodes
After finishing the cloud provider configuration, click Manage Instances to provision as many nodes as your application requires.
For each node you want to add, click Add Instances to add a YugabyteDB node. You can use DNS names or IP addresses when adding instances (instance ID is an optional user-defined identifier).
Provision nodes manually
To provision your nodes manually, you have the following two options:
If the SSH user you provided has sudo privileges but requires a password, you can run the pre-provisioning script.
If the SSH user doesn't have sudo privileges at all, you need to set the database nodes up manually.
How to run the pre-provisioning script
This step is only required if you set Manually Provision Nodes to true and the SSH user has sudo privileges which require a password; otherwise, skip this step.
Follow these steps to manually provision each node using the pre-provisioning Python script:
Log in to the Yugabyte Platform virtual machine via SSH.
Access the yugaware docker container, as follows:
$ sudo docker exec -it yugaware bash
Copy and paste the Python script prompted via the UI, substituting your node IP address and mount points. Optionally, use the --ask_password flag if the sudo user requires password authentication, as follows:
bash-4.4# /opt/yugabyte/yugaware/data/provision/9cf26f3b-4c7c-451a-880d-593f2f76efce/provision_instance.py --ip 10.9.116.65 --mount_points /data --ask_password
Expect the following output and prompt:
Executing provision now for instance with IP 10.9.116.65... SUDO password:
Wait for the script to finish successfully.
Repeat step 3 for every node that will participate in the universe.
How to set up database nodes manually
This step is only required if you set Manually Provision Nodes to true and the SSH user does not have sudo privileges at all; otherwise, skip this step.
If the SSH user configured in the Onprem Provider does not have sudo privileges, then set up each of the database nodes manually by following the steps in this section. Note that you need access to a user with sudo privileges in order to complete these steps.
For each node, perform the following:
- Set up time synchronization
- Open incoming TCP ports
- Pre-provision the node
- Install Prometheus node exporter
- Install backup utilities
- Set crontab permissions
Set up time synchronization
A local NTP server or equivalent must be available.
Ensure an NTP-compatible time service client is installed in the node OS (chrony is installed by default in the standard CentOS 7 instance used in this example). Then, configure the time service client to use the available time server. The following procedure includes this step and assumes chrony is the installed client.
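As a quick optional check that time synchronization is working (assuming chrony is the client in use), query the chrony daemon:

$ chronyc tracking # (shows the current reference server and clock offset)
$ chronyc sources # (lists configured time sources and their reachability)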
Open incoming TCP/IP ports
Database servers need incoming TCP/IP access enabled to the following ports, for communications between themselves and the Platform server:
| Protocol | Port | Description |
| --- | --- | --- |
| TCP | 22 | SSH (for automatic administration) |
| TCP | 7000 | YB master webserver |
| TCP | 7100 | YB master RPC |
| TCP | 9000 | YB tablet server webserver |
| TCP | 9100 | YB tablet server RPC |
| TCP | 9300 | Prometheus node exporter |
| TCP | 12000 | YCQL HTTP (for DB statistics gathering) |
| TCP | 13000 | YSQL HTTP (for DB statistics gathering) |
This table is based on the information on the default ports page.
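As an example, on a CentOS 7 node running firewalld (a sketch under that assumption; if your environment uses a different firewall or cloud security groups, apply the equivalent rules there), these ports could be opened as follows:

$ sudo firewall-cmd --permanent --add-port=22/tcp --add-port=7000/tcp --add-port=7100/tcp
$ sudo firewall-cmd --permanent --add-port=9000/tcp --add-port=9100/tcp --add-port=9300/tcp
$ sudo firewall-cmd --permanent --add-port=12000/tcp --add-port=13000/tcp
$ sudo firewall-cmd --reload # (apply the permanent rules to the running firewall)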
Pre-provision nodes manually
This process carries out all provisioning tasks on the database nodes which require elevated privileges. Once the database nodes have been prepared in this way, the universe creation process from the Yugabyte Platform server will connect with the nodes only via the yugabyte user, and will not require any elevation of privileges to deploy and operate the YugabyteDB universe.
Physical nodes (or cloud instances) are installed with a standard CentOS 7 server image. The following steps are to be performed on each physical node, prior to universe creation:
Log in to each database node as a user with sudo enabled (the centos user in CentOS 7 images).
Add the following line to /etc/chrony.conf (sudo is required):
server <your-time-server-IP-address> prefer iburst
Then, run the following command:
$ sudo chronyc makestep # (force instant sync to NTP server)
Add a new yugabyte:yugabyte user and group (sudo is required):
$ sudo useradd yugabyte # (add group yugabyte + create /home/yugabyte)
$ sudo passwd yugabyte # (add a password to the yugabyte user)
$ sudo su - yugabyte # (change to yugabyte user for convenient execution of next steps)
Copy the SSH public key to each DB node.
This public key should correspond to the private key entered into the Platform Provider, as outlined elsewhere in this document.
Run the following commands as the yugabyte user, after copying the SSH public key file to the user home directory:
$ cd ~yugabyte
$ mkdir .ssh
$ chmod 700 .ssh
$ cat <pubkey file> >> .ssh/authorized_keys
$ chmod 400 .ssh/authorized_keys
$ exit # (exit from the yugabyte user back to previous user)
Add the following lines to /etc/security/limits.conf (sudo is required):
* - core unlimited
* - data unlimited
* - fsize unlimited
* - sigpending 119934
* - memlock 64
* - rss unlimited
* - nofile 1048576
* - msgqueue 819200
* - stack 8192
* - cpu unlimited
* - nproc 12000
* - locks unlimited
Modify the following line in /etc/security/limits.d/20-nproc.conf (sudo is required):
* soft nproc 12000
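To confirm the new limits take effect for the yugabyte user (an optional check; limits.conf settings are applied by PAM at login), start a fresh login session and inspect them:

$ sudo su - yugabyte -c 'ulimit -n -u -c' # (prints the open files, max user processes, and core size limits)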
Install the rsync and OpenSSL packages (sudo is required).
Most Linux distributions include rsync and openssl. If your distribution is missing these packages, install them using the following commands:
$ sudo yum install openssl
$ sudo yum install rsync
For airgapped environments, make sure your yum repository mirror contains these packages.
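To verify that both packages are present (a simple check on RPM-based systems such as CentOS 7):

$ rpm -q openssl rsync # (prints the installed version of each package, or reports it as not installed)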
Tune kernel settings (only if running on a virtual machine; sudo is required):
$ sudo bash -c 'sysctl vm.swappiness=0 >> /etc/sysctl.conf'
$ sudo bash -c 'sysctl kernel.core_pattern=/home/yugabyte/cores/core_%e.%p >> /etc/sysctl.conf'
Prepare and mount the data volume (separate partition for database data) (sudo is required):
List the available storage volumes:
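For example, lsblk (part of util-linux, included in standard CentOS 7 images) lists block devices with their sizes and current mount points:

$ lsblk # (volumes without a mount point are candidates for data volumes)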
Perform the following steps for each available volume (all listed volumes other than the root volume):
$ sudo mkdir /data # (or /data1, /data2 etc)
$ sudo mkfs -t xfs /dev/nvme1n1 # (create xfs filesystem over entire volume)
$ sudo vi /etc/fstab
Add the following line to /etc/fstab:
/dev/nvme1n1 /data xfs noatime 0 0
Exit from vi, and continue:
$ sudo mount -av # (mounts the new volume using the fstab entry, to validate)
$ sudo chown yugabyte:yugabyte /data
$ sudo chmod 755 /data
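To confirm the volume is mounted with the expected size (an optional check):

$ df -h /data # (shows the filesystem, capacity, and mount point for /data)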
Install Prometheus node exporter
For Yugabyte Platform versions 2.8 and later, download the 1.2.2 version of the Prometheus node exporter, as follows:
$ wget https://github.com/prometheus/node_exporter/releases/download/v1.2.2/node_exporter-1.2.2.linux-amd64.tar.gz
For Yugabyte Platform versions prior to 2.8, download the 0.13.0 version of the exporter, as follows:
$ wget https://github.com/prometheus/node_exporter/releases/download/v0.13.0/node_exporter-0.13.0.linux-amd64.tar.gz
If you are doing an airgapped installation, download the node exporter using a computer connected to the internet and copy it over to the database nodes.
Note that the instructions here are for the 0.13.0 version. The same instructions work with the 1.2.2 version, but make sure to use the correct filename.
On each node, do the following as a user with sudo access:
Copy the node_exporter-....tar.gz package file you downloaded into the /tmp directory on each of the DB nodes. Ensure this file is readable by the centos user on each node (or another user with sudo privileges).
Run the following commands (sudo required):
$ sudo mkdir /opt/prometheus
$ sudo mkdir /etc/prometheus
$ sudo mkdir /var/log/prometheus
$ sudo mkdir /var/run/prometheus
$ sudo mv /tmp/node_exporter-0.13.0.linux-amd64.tar.gz /opt/prometheus
$ sudo adduser prometheus # (also adds group "prometheus")
$ sudo chown -R prometheus:prometheus /opt/prometheus
$ sudo chown -R prometheus:prometheus /etc/prometheus
$ sudo chown -R prometheus:prometheus /var/log/prometheus
$ sudo chown -R prometheus:prometheus /var/run/prometheus
$ sudo chmod +r /opt/prometheus/node_exporter-0.13.0.linux-amd64.tar.gz
$ sudo su - prometheus # (user session is now as user "prometheus")
Run the following commands as the prometheus user:
$ cd /opt/prometheus
$ tar zxf node_exporter-0.13.0.linux-amd64.tar.gz
$ exit # (exit from prometheus user back to previous user)
Edit the following file (sudo required):
$ sudo vi /etc/systemd/system/node_exporter.service
Add the following to the node_exporter.service file:

[Unit]
Description=node_exporter - Exporter for machine metrics.
Documentation=https://github.com/William-Yeh/ansible-prometheus
After=network.target

[Install]
WantedBy=multi-user.target

[Service]
Type=simple
#ExecStartPre=/bin/sh -c " mkdir -p '/var/run/prometheus' '/var/log/prometheus' "
#ExecStartPre=/bin/sh -c " chown -R prometheus '/var/run/prometheus' '/var/log/prometheus' "
#PIDFile=/var/run/prometheus/node_exporter.pid
User=prometheus
Group=prometheus
ExecStart=/opt/prometheus/node_exporter-0.13.0.linux-amd64/node_exporter --web.listen-address=:9300 --collector.textfile.directory=/tmp/yugabyte/metrics
Exit from vi, and continue (sudo required):
$ sudo systemctl daemon-reload
$ sudo systemctl enable node_exporter
$ sudo systemctl start node_exporter
Check the status of the node_exporter service with the following command:
$ sudo systemctl status node_exporter
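As an additional optional check that the exporter is serving metrics on the configured port (assuming curl is available on the node):

$ curl -s http://localhost:9300/metrics | head # (prints the first few exported metric lines)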
Install backup utilities
Platform supports backing up YugabyteDB to AWS S3, Azure Storage, Google Cloud Storage, and NFS. Install the backup utility for the backup storage you plan to use.
NFS: Install rsync. Yugabyte Platform uses rsync, which you installed in an earlier step, for NFS backups.
AWS S3: Install s3cmd. Yugabyte Platform relies on s3cmd to support copying backups to AWS S3. You have the following two options to install:
For a regular install, execute the following:
$ sudo yum install s3cmd
For an airgapped install, copy /opt/third-party/s3cmd-2.0.1.tar.gz from the Yugabyte Platform node to the database node, and extract it into the /usr/local directory on the database node, as follows:
$ cd /usr/local
$ sudo tar xvfz path-to-s3cmd-2.0.1.tar.gz
$ sudo ln -s /usr/local/s3cmd-2.0.1/s3cmd /usr/local/bin/s3cmd
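To verify the installation (an optional check), confirm that s3cmd resolves on the PATH:

$ s3cmd --version # (prints the installed s3cmd version)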
Azure Storage: Install azcopy. You have the following two options:
Download azcopy_linux_amd64_10.4.0.tar.gz using the following command:
$ wget https://azcopyvnext.azureedge.net/release20200410/azcopy_linux_amd64_10.4.0.tar.gz
For airgapped installs, copy /opt/third-party/azcopy_linux_amd64_10.4.0.tar.gz from the Yugabyte Platform node, as follows:
$ cd /usr/local
$ sudo tar xfz path-to-azcopy_linux_amd64_10.4.0.tar.gz -C /usr/local/bin azcopy_linux_amd64_10.4.0/azcopy --strip-components 1
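To verify the installation (an optional check):

$ azcopy --version # (prints the installed azcopy version)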
Google Cloud Storage: Install gsutil. You have the following two options:
Download gsutil_4.60.tar.gz using the following command:
$ wget https://storage.googleapis.com/pub/gsutil_4.60.tar.gz
For airgapped installs, copy /opt/third-party/gsutil_4.60.tar.gz from the Yugabyte Platform node, as follows:
$ cd /usr/local
$ sudo tar xvfz gsutil_4.60.tar.gz
$ sudo ln -s /usr/local/gsutil/gsutil /usr/local/bin/gsutil
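To verify the installation (an optional check):

$ gsutil version # (prints the installed gsutil version)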
Set crontab permissions
Yugabyte Platform supports performing YugabyteDB liveness checks, log file management, and core file management using cron jobs.
Note that sudo is required to set up this service.
If Yugabyte Platform will be using cron jobs, make sure the yugabyte user is allowed to run crontab:
- If you are using the cron.allow file to manage crontab access, add the yugabyte user to this file.
- If you are using the cron.deny file, remove the yugabyte user from this file.
If you are not using either file, no changes are required.
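For example, on a system that manages access with cron.allow (a sketch under that assumption; run as a user with sudo privileges):

$ echo yugabyte | sudo tee -a /etc/cron.allow # (grant the yugabyte user crontab access)
$ sudo -u yugabyte crontab -l # (verify access; prints "no crontab for yugabyte" rather than a permission error)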