Prepare nodes for on-premises deployment
For on-premises deployments of Yugabyte universes, you need to import nodes that can be managed by Yugabyte Platform. This page outlines the steps required to prepare these YugabyteDB nodes for on-premises deployments.
The following ports must be opened for intra-cluster communication (they do not need to be exposed to your application, only to other nodes in the cluster and the platform node):
- 7100 - Master RPC
- 9100 - TServer RPC
The following ports must be exposed for intra-cluster communication, and you should additionally expose them to administrators or users monitoring the system, as they provide valuable diagnostics and metrics for troubleshooting:
- 9300 - Prometheus metrics
- 7000 - Master HTTP endpoint
- 9000 - TServer HTTP endpoint
- 11000 - YEDIS API
- 12000 - YCQL API
- 13000 - YSQL API
The following ports must be available to your application or any user attempting to connect to YugabyteDB, in addition to being open for intra-node communication:
- 5433 - YSQL server
- 9042 - YCQL server
- 6379 - YEDIS server
For more information on ports used by YugabyteDB, refer to Default ports.
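The port requirements above can be spot-checked from another node in the cluster. The following is a minimal sketch using bash's `/dev/tcp` redirection; the node address is a placeholder you pass as the first argument.

```shell
#!/usr/bin/env bash
# Quick reachability check for the YugabyteDB ports listed above.
# Pass the target node's address as the first argument (placeholder default shown).
check_ports() {
  local node_ip="${1:-127.0.0.1}"
  local port
  for port in 7100 9100 9300 7000 9000 11000 12000 13000 5433 9042 6379; do
    # /dev/tcp open succeeds only if something is listening on the port
    if timeout 2 bash -c "echo > /dev/tcp/${node_ip}/${port}" 2>/dev/null; then
      echo "port ${port}: open"
    else
      echo "port ${port}: closed"
    fi
  done
}

check_ports "$@"
```

Ports reported as closed are either blocked by a firewall or have no service listening yet; the server ports only show open once YugabyteDB processes are running.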
To prepare nodes for on-premises deployment:
Install the prerequisites and verify the system resource limits as described in system configuration.
Ensure you have `ssh` access to the machine, along with root access or the ability to run `sudo` (the sudo user can require a password, but passwordless access is preferable for simplicity and ease of use).

Verify that you can `ssh` into this node (from your local machine if the node has a public address):

```sh
$ ssh -i your_private_key.pem ssh_user@node_ip
```
The following actions are performed with sudo access:

- Create the `yugabyte:yugabyte` user and group. Set the home directory to /home/yugabyte.
- Create the `prometheus:prometheus` user and group.

If you're using the LDAP directory for managing system users, you can pre-provision the Yugabyte and Prometheus users instead:

- The `yugabyte` user should belong to the `yugabyte` group.
- Set the home directory for the `yugabyte` user (default /home/yugabyte) and ensure the directory is owned by `yugabyte:yugabyte`. The home directory is used during cloud provider configuration.
- The Prometheus username and group can be user-defined. You enter the custom user during cloud provider configuration.
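The user and group setup above can be sketched as a small helper, assuming `sudo` and the shadow-utils `useradd`; call it once on each node being prepared.

```shell
#!/usr/bin/env bash
# Sketch of the user/group setup described above (assumes sudo access and
# the shadow-utils useradd). Call create_yb_users on each node.
create_yb_users() {
  # yugabyte user and group, with the home directory the platform expects
  sudo useradd --user-group --create-home --home-dir /home/yugabyte yugabyte
  # prometheus user and group (the name can be customized later during
  # cloud provider configuration)
  sudo useradd --user-group prometheus
  # ensure the home directory is owned by yugabyte:yugabyte
  sudo chown -R yugabyte:yugabyte /home/yugabyte
}
```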
Ensure you can schedule Cron jobs with Crontab. Cron jobs are used for health monitoring, log file rotation, and cleanup of system core files.
If you use a third-party cron scheduling tool, you can disable Crontab and add the following cron entries instead:

```
# Ansible: cleanup core files hourly
0 * * * * /home/yugabyte/bin/clean_cores.sh
# Ansible: cleanup yb log files hourly
5 * * * * /home/yugabyte/bin/zip_purge_yb_logs.sh
# Ansible: Check liveness of master
*/1 * * * * /home/yugabyte/bin/yb-server-ctl.sh master cron-check || /home/yugabyte/bin/yb-server-ctl.sh master start
# Ansible: Check liveness of tserver
*/1 * * * * /home/yugabyte/bin/yb-server-ctl.sh tserver cron-check || /home/yugabyte/bin/yb-server-ctl.sh tserver start
```
Disabling Crontab causes alerts to appear after the universe is created; these can be ignored, but you must ensure the equivalent cron jobs are scheduled for the platform to work as expected.
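If you do keep Crontab, one way to install the entries for the `yugabyte` user without an interactive editor is to pipe them to `crontab` on stdin, as in this sketch (assumes sudo and the standard crontab utility):

```shell
#!/usr/bin/env bash
# Install the health-check and cleanup cron entries for the yugabyte user
# non-interactively (assumes sudo and the standard crontab utility).
install_yb_crontab() {
  sudo crontab -u yugabyte - <<'EOF'
0 * * * * /home/yugabyte/bin/clean_cores.sh
5 * * * * /home/yugabyte/bin/zip_purge_yb_logs.sh
*/1 * * * * /home/yugabyte/bin/yb-server-ctl.sh master cron-check || /home/yugabyte/bin/yb-server-ctl.sh master start
*/1 * * * * /home/yugabyte/bin/yb-server-ctl.sh tserver cron-check || /home/yugabyte/bin/yb-server-ctl.sh tserver start
EOF
  # list the installed entries to confirm
  sudo crontab -u yugabyte -l
}
```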
Verify that Python 2.7 is installed.
Enable core dumps and set ulimits, as follows:

```
*                hard    core            unlimited
*                soft    core            unlimited
```
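After applying the limits (log out and back in first, since limits are read at login), you can verify the effective core-file size limit in the shell:

```shell
# Show the effective core-file size limit for the current shell.
# On a correctly prepared node this should print "unlimited".
ulimit -c
```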
Configure SSH as follows:

- Set `UseDNS no` in /etc/ssh/sshd_config (this disables reverse lookup, which is used for authentication; DNS is still usable).

Set the mount path permissions to 0755.
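These last two steps can be applied with a short helper, assuming a systemd-based host; the `/data` default below is a placeholder for your actual data mount point.

```shell
#!/usr/bin/env bash
# Apply the mount path permissions and reload sshd so an edited
# /etc/ssh/sshd_config takes effect (assumes sudo and systemd).
apply_node_settings() {
  local mount_path="${1:-/data}"   # placeholder; use your data mount point
  sudo chmod 0755 "${mount_path}"
  sudo systemctl reload sshd
}
```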