Prepare nodes for on-premises deployment

Port settings and VM configuration

YugabyteDB Anywhere needs to be able to access nodes that will be used to create universes, and the nodes that make up universes need to be accessible to each other and to applications.

Prepare ports

The following ports must be opened for intra-cluster communication (they do not need to be exposed to your application, only to other nodes in the cluster and the YugabyteDB Anywhere node):

  • 7100 - YB-Master RPC
  • 9100 - YB-TServer RPC
  • 18018 - YB Controller

The following ports must also be opened for intra-cluster communication. Expose these ports to administrators or users monitoring the system as well, as they provide diagnostic, troubleshooting, and metrics endpoints:

  • 9300 - Prometheus metrics
  • 7000 - YB-Master HTTP endpoint
  • 9000 - YB-TServer HTTP endpoint
  • 11000 - YEDIS API
  • 12000 - YCQL API
  • 13000 - YSQL API
  • 54422 - Custom SSH

The following ports must be exposed for client connections and be available to your application or any user attempting to connect to the YugabyteDB universe:

  • 5433 - YSQL server
  • 9042 - YCQL server
  • 6379 - YEDIS server

For more information on ports used by YugabyteDB, refer to Default ports.
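As a quick sanity check before creating a universe, you can probe these ports from another node. The following sketch uses bash's built-in /dev/tcp redirection, so no extra tools are required; NODE_IP and the port list are placeholders to adjust for the node and ports you are checking:

```shell
#!/usr/bin/env bash
# Hypothetical reachability check for the ports listed above.
NODE_IP="${NODE_IP:-127.0.0.1}"

check_port() {
  # Returns 0 if a TCP connection to host:port succeeds within 2 seconds
  # (uses bash's built-in /dev/tcp, so netcat is not required).
  local host="$1" port="$2"
  timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

# Intra-cluster RPC ports plus the client API ports.
for port in 7100 9100 18018 5433 9042 6379; do
  if check_port "$NODE_IP" "$port"; then
    echo "port ${port}: open"
  else
    echo "port ${port}: closed"
  fi
done
```

A port reported as closed may simply mean the corresponding service is not running yet; the check only confirms reachability once a process is listening.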

Prepare VMs

You can prepare VMs for use as nodes in an on-premises deployment, as follows:

  1. Ensure that the nodes conform to the requirements outlined in the YugabyteDB deployment checklist.

    The checklist also provides recommended instance types for the major public clouds.

  2. Install the prerequisites and verify the system resource limits, as described in system configuration.

  3. Ensure that you have SSH access to the server and root access (or the ability to run sudo). The sudo user can require a password, but passwordless sudo is recommended for simplicity and ease of use.

  4. Execute the following command to verify that you can ssh into the node (from your local machine if the node has a public address):

    ssh -i your_private_key.pem ssh_user@node_ip
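Once logged in, you can also confirm which level of sudo access is available. The following is a hypothetical helper, not part of the YBA tooling:

```shell
# Run on the node itself: report which sudo mode is available.
# Passwordless sudo is preferred, as noted above.
if sudo -n true 2>/dev/null; then
  echo "passwordless sudo available"
elif command -v sudo >/dev/null; then
  echo "sudo present (password required)"
else
  echo "no sudo; root login needed"
fi
```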

The following actions are performed with sudo access:

  • Create the yugabyte:yugabyte user and group.

  • Set the home directory to /home/yugabyte.

  • Create the prometheus:prometheus user and group.


    If you are using an LDAP directory to manage system users, you can pre-provision the Yugabyte and Prometheus users, as follows:

    • Ensure that the yugabyte user belongs to the yugabyte group.

    • Set the home directory for the yugabyte user (default /home/yugabyte) and ensure that the directory is owned by yugabyte:yugabyte. The home directory is used during cloud provider configuration.

    • The Prometheus user and group names can be user-defined; you enter the custom user during cloud provider configuration.

  • Ensure that you can schedule Cron jobs with Crontab. Cron jobs are used for health monitoring, log file rotation, and cleanup of system core files.


    If you use a third-party Cron scheduling tool, you can disable Crontab and add the following Cron entries:

    # Ansible: cleanup core files hourly
    0 * * * * /home/yugabyte/bin/
    # Ansible: cleanup yb log files hourly
    5 * * * * /home/yugabyte/bin/
    # Ansible: Check liveness of master
    */1 * * * * /home/yugabyte/bin/ master cron-check || /home/yugabyte/bin/ master start
    # Ansible: Check liveness of tserver
    */1 * * * * /home/yugabyte/bin/ tserver cron-check || /home/yugabyte/bin/ tserver start

    Disabling Crontab causes alerts to be raised after the universe is created; these can be ignored, but you must ensure that equivalent Cron jobs are scheduled for YBA to function as expected.

  • Verify that Python 3.6 or later is installed.

    If you are using Python v3.11 or later, install the selinux python package as follows:

    python3.11 -m pip install selinux

    If more than one version of Python 3 is installed, ensure that python3 points to the correct version, as follows:

    sudo alternatives --set python3 /usr/bin/python3.9
    sudo alternatives --display python3
    python3 -V
  • Enable core dumps and set ulimits, as follows:

    *       hard        core        unlimited
    *       soft        core        unlimited
  • Configure SSH, as follows:

    • Disable sshguard.
    • Set UseDNS no in /etc/ssh/sshd_config (this disables reverse DNS lookups during authentication; DNS remains usable).
  • Set vm.swappiness to 0.

  • Set mount path permissions to 0755.
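
The sudo actions above can be sketched as a short provisioning script. This is an outline only, assuming a Linux host with the default user names from this guide and a /data mount path (a placeholder; adjust both to your environment):

```shell
# Sketch only: creates the default users and applies the settings described above.
sudo groupadd -f yugabyte
sudo useradd -g yugabyte -d /home/yugabyte -m yugabyte 2>/dev/null || true
sudo groupadd -f prometheus
sudo useradd -g prometheus prometheus 2>/dev/null || true
sudo chown yugabyte:yugabyte /home/yugabyte

# The two core-dump ulimit lines shown above typically go in /etc/security/limits.conf.

# Kernel and filesystem settings:
sudo sysctl -w vm.swappiness=0
[ -d /data ] && sudo chmod 0755 /data   # /data is a placeholder mount path
```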


By default, YBA uses OpenSSH for SSH to remote nodes. YBA also supports the use of Tectia SSH that is based on the latest SSH G3 protocol. For more information, see Enable Tectia SSH.

Enable Tectia SSH

Tectia SSH is used for secure file transfer, secure remote access, and tunneling. YBA ships with a trial version of the Tectia SSH client; a license is required to switch YBA permanently from OpenSSH to Tectia.

To upload the Tectia license, copy it to ${storage_path}/yugaware/data/licenses/<license.txt>, where storage_path is the path provided during the Replicated installation.
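For example, assuming storage_path was set to /opt/yugabyte during installation and the license file is named tectia-license.txt (both hypothetical values):

```shell
# Hypothetical paths; substitute your actual storage path and license file name.
storage_path=/opt/yugabyte
sudo mkdir -p "${storage_path}/yugaware/data/licenses"
sudo cp tectia-license.txt "${storage_path}/yugaware/data/licenses/"
```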

After the license is uploaded, YBA exposes a runtime flag that you need to enable, as in the following example:

curl --location --request PUT 'http://<ip>/api/v1/customers/<customer_uuid>/runtime_config/00000000-0000-0000-0000-000000000000/key/' \
  --header 'Cookie: <Cookie>' \
  --header 'X-AUTH-TOKEN: <token>' \
  --header 'Csrf-Token: <csrf-token>' \
  --header 'Content-Type: text/plain' \
  --data-raw '"true"'