Hardware requirements for nodes

Prepare a VM for deployment in a universe

Refer to Hardware requirements for database cluster node hardware requirements. In particular, note the following:

  • It is recommended to use separate disks for the Linux OS and for the data.
  • YBA does not support using ephemeral OS disks for Azure DB clusters.

Compute requirements

In general, a Kubernetes node that is running YBDB pods is expected to meet the following requirements:

  • 5 cores (minimum) or 8 cores (recommended)
  • 15 GB RAM (minimum)
  • 100 GB SSD disk (minimum)
  • 64-bit CPU architecture
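
To confirm that the nodes in a cluster meet these minimums, you can inspect their allocatable capacity and CPU architecture with kubectl. The following command is only a quick check; the column names are illustrative:

# List each node's allocatable CPU, memory, and CPU architecture.
kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory,ARCH:.status.nodeInfo.architecture'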

However, to estimate the exact resources needed, see CPU and RAM for the per-pod requirements of each yb-tserver and yb-master pod. For proper fault tolerance, each Kubernetes node should not run more than one yb-tserver pod and one yb-master pod. Use these requirements to estimate the total node capacity required in each of the zones that are part of the Kubernetes cluster.
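
One mechanism for keeping pods spread out this way is Kubernetes pod anti-affinity. The YugabyteDB Helm chart typically configures this for you, so the fragment below is only a sketch of the underlying pod spec, and the app: yb-tserver label is an assumption that must match the labels on your pods:

# Illustrative fragment of a pod template spec: prevents two pods
# carrying the label app: yb-tserver from landing on the same node.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: yb-tserver
        topologyKey: kubernetes.io/hostname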

Storage requirements

An appropriate storage class has to be specified both during YBA installation and when creating the Kubernetes provider configuration. The type of volume provisioned for YugabyteDB depends on the Kubernetes storage class being used. Consider the following recommendations when selecting or creating a storage class:
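
To review the storage classes that already exist in your cluster, including their provisioners, volume binding modes, and whether they allow volume expansion, you can run the following:

# List all storage classes, then show the details of one of them.
kubectl get storageclass
kubectl describe storageclass <storage-class-name>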

  • Use dynamically provisioned volumes for YBA and YBDB cluster pods, and set the volume binding mode on the storage class to WaitForFirstConsumer. This delays provisioning until a pod using the persistent volume claim (PVC) is created, so the pod's topology and scheduling constraints are respected when the volume is placed.

    With the default binding mode of Immediate, the volume is provisioned as soon as the PVC is created. In certain regional cloud deployments where storage volumes are not accessible from all nodes in the cluster, the volume may be created in a location or zone that the pod cannot reach, causing scheduling to fail.

    On Google Cloud Platform (GCP), if you choose not to set the binding mode to WaitForFirstConsumer, you can use regional persistent disks on Google Kubernetes Engine (GKE) to replicate data between two zones in the same region, so the volume remains usable if the pod is rescheduled to a node in the other zone. For more information, see the GKE documentation on regional persistent disks and the example at the end of this section.

  • Use a storage class based on remote volumes (such as cloud provider disks) rather than local storage volumes attached directly to the Kubernetes node or local ephemeral volumes. Local storage offers good performance, but the data is not replicated and can be lost if the node fails or undergoes maintenance, requiring a full remote bootstrap of the YugabyteDB data in the pod. Local storage is therefore not recommended for production use cases.

  • Use an SSD-based storage class and an extent-based file system (XFS), as per recommendations provided in Deployment checklist - Disks.

  • Set allowVolumeExpansion to true. This enables you to expand the volumes later by performing additional steps (as sketched after this list) if you run out of disk space. Note that some storage providers might not support this setting. For more information, see Expanding persistent volume claims.

  • The storage classes of most cloud providers allow further configuration, such as IOPS, throughput, and volume type. It is recommended to set these parameters to match the values listed in Public clouds. For information on these configuration parameters, refer to the AWS, GCP, and Azure documentation.
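
As a sketch of the additional expansion steps mentioned above, a PVC bound to a storage class that has allowVolumeExpansion: true can be grown by raising its storage request. The claim name and the new size below are placeholders:

# Request a larger size on an existing claim (name and size are placeholders).
kubectl patch pvc <pvc-name> -p '{"spec":{"resources":{"requests":{"storage":"200Gi"}}}}'

# Follow the resize status reported in the claim's events and conditions.
kubectl describe pvc <pvc-name>

Because YugabyteDB pods are typically managed by StatefulSets, whose volume claim templates cannot be edited in place, additional steps may apply; see Expanding persistent volume claims for details.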

The following is a sample storage class YAML file for Google Kubernetes Engine (GKE). You are expected to modify it to suit your Kubernetes cluster:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: yb-storage
provisioner: kubernetes.io/gce-pd
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: pd-ssd
  fstype: xfs
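
For the regional persistent disk alternative mentioned under Storage requirements, the following is a sketch of a GKE storage class that replicates each volume across two zones of a region. It assumes the GCE persistent disk CSI provisioner available on current GKE versions (rather than the in-tree kubernetes.io/gce-pd provisioner used in the sample above), and the zone names are placeholders:

# Regional persistent disk storage class sketch for GKE (CSI provisioner).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: yb-storage-regional
provisioner: pd.csi.storage.gke.io
allowVolumeExpansion: true
parameters:
  type: pd-ssd
  replication-type: regional-pd
  csi.storage.k8s.io/fstype: xfs
allowedTopologies:
  # Restrict provisioning to the two zones that should hold the replicas.
  - matchLabelExpressions:
      - key: topology.gke.io/zone
        values:
          - us-central1-a
          - us-central1-b

Apply either manifest with kubectl apply -f <file> and verify the result with kubectl get storageclass before referencing the storage class during YBA installation and in the Kubernetes provider configuration.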