Configure the on-premises cloud provider
This page details how to configure the Onprem cloud provider for YugabyteDB using the Yugabyte Platform console. If no cloud providers are configured, the main Dashboard page highlights that you need to configure at least one cloud provider.
Step 1. Configuring the on-premises provider
On-premises Provider Info
This is an internal tag used for organizing your providers, so you know where you want to deploy your YugabyteDB universes.
To provision on-premises nodes with YugabyteDB, Yugabyte Platform requires SSH access to these nodes. The SSH user needs sudo permissions to complete a few tasks, which are explained in the prerequisites section.
This is the port number for SSH client connections.
Manually Provision Nodes
If you choose to manually set up your database nodes, set this flag to true; otherwise, Yugabyte Platform uses the sudo user to set up the DB nodes. For manual provisioning, you will be prompted to execute a Python script at a later stage.
If any of the items from this checklist are true, you need to provision the nodes manually.
- yugabyte:yugabyte user + group
- Sudo user requires a password.
Ensure that the SSH key is pasted correctly (the supported format is RSA).
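If you still need to generate a compatible key pair, a minimal sketch (the file path is illustrative, and the `-m PEM` option is an assumption to produce a classic RSA-format private key):

```shell
# Remove any stale copies so ssh-keygen does not prompt to overwrite.
rm -f /tmp/yb-onprem-key /tmp/yb-onprem-key.pub

# Generate an RSA key pair without a passphrase (path is illustrative).
ssh-keygen -t rsa -b 2048 -m PEM -f /tmp/yb-onprem-key -N "" -q

# The private key (/tmp/yb-onprem-key) is what you paste into the provider
# form; the .pub half goes into each node's authorized_keys file.
head -n 1 /tmp/yb-onprem-key
```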
Air Gap install
If enabled, the installation will run in an air-gapped mode without expecting any internet access.
Indicates if nodes are expected to use DNS or IP addresses. If enabled, then all internal communication will use DNS resolution.
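Before enabling DNS resolution, it is worth confirming that each node's hostname actually resolves from the platform host; a quick check, using localhost as a stand-in for a real node hostname:

```shell
# "localhost" stands in for one of your node hostnames.
node_host="localhost"
if getent hosts "$node_host" >/dev/null; then
  echo "${node_host} resolves"
else
  echo "${node_host} does not resolve -- use IP addresses instead"
fi
```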
Desired Home Directory (Optional)
Specifies the home directory of the yugabyte user. The default value is /home/yugabyte.
Node Exporter Port
This is the port number (default value 9300) for the Node Exporter. You can override this to specify a different port.
Install Node Exporter
Whether to install or skip installing Node Exporter. You can skip this step if Node Exporter is already installed on the nodes. If you skip installation, ensure that you have provided the correct port number above.
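If you plan to skip installation, you can sanity-check that an exporter is already listening on the expected port on each node; a sketch (port 9300 matches the default above):

```shell
# Check whether anything is listening on the Node Exporter port.
port=9300
if ss -ltn 2>/dev/null | grep -q ":${port} "; then
  echo "something is listening on port ${port}"
else
  echo "nothing is listening on port ${port} -- do not skip installation"
fi
```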
Node Exporter User
You can override the default prometheus user. This is useful when a user is pre-provisioned (in case user creation is disabled) on nodes. If overridden, the installer will check if the user exists and will create the user if it doesn't.
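The existence check described above can be approximated as follows; the user name and useradd options are assumptions for illustration, not the installer's exact commands:

```shell
# "prometheus" is the default; override to match your pre-provisioned user.
ne_user="prometheus"
if id -u "$ne_user" >/dev/null 2>&1; then
  echo "user ${ne_user} exists -- nothing to do"
else
  echo "user ${ne_user} is missing -- the installer would create it, e.g.:"
  echo "  sudo useradd --system --shell /bin/false ${ne_user}"
fi
```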
Provision the YugabyteDB nodes
Follow the steps below to provide the node hardware configuration (CPU, memory, and volume information).
This is an internal user-defined tag used as an identifier in the “Instance Type” universe field.
Number of cores
This is the number of cores assigned to a node.
Mem Size (GB)
This is the memory allocation of a node.
Vol Size (GB)
This is the disk volume of a node.
For mount paths, use a mount point with enough space to contain your node density, such as /data. If you have multiple drives, add them as a comma-separated list, for example: /data1,/data2.
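To confirm a candidate mount point has enough free space before assigning it, a quick check ("/" stands in here for your data mount, such as /data):

```shell
# Replace "/" with your actual data mount point, for example /data.
mount_point="/"
df -h "$mount_point"
```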
Region and Zones
Follow the steps below to provide the location of the DB nodes. All of these fields are user-defined and will be used later during universe creation.
Step 2. Provision the YugabyteDB nodes
After finishing the cloud provider configuration, click Manage Instances to provision as many nodes as your application requires:
- Click Add Instances to add YugabyteDB nodes. You can use DNS names or IP addresses when adding instances.
- Instance ID is an optional user-defined identifier.
Run the pre-provisioning script
Note: This step is required only if you set Manually Provision Nodes to true; otherwise, skip this step.
Follow these steps to manually provision each node by executing the pre-provisioning Python script.
1. Log in (via SSH) to the Yugabyte Platform virtual machine.
2. Access the yugaware Docker container:

```sh
sudo docker exec -it yugaware bash
```

3. Copy and paste the Python script prompted in the UI, substituting in the node IP address and mount points.
- (Optional) Use the --ask_password flag if the sudo user requires password authentication.

```sh
bash-4.4# /opt/yugabyte/yugaware/data/provision/9cf26f3b-4c7c-451a-880d-593f2f76efce/provision_instance.py --ip 10.9.116.65 --mount_points /data --ask_password
Executing provision now for instance with IP 10.9.116.65...
SUDO password:
```
Wait for the script to finish with the SUCCESS status.
- Repeat step 3 for every node that will participate in the universe.
You’re finished configuring your on-premises cloud provider, and now you can proceed to universe creation.