Note

To use automatic-mode transactional xCluster replication, both the Primary and Standby universes must be running v2025.1, v2.25.1, or later.

Automatic transactional xCluster replication (Early Access) handles all aspects of replication for both data and schema changes.

In particular, DDL changes made to the Primary universe are automatically replicated to the Standby universe.

Warning

Not all DDLs can be automatically replicated yet; see XCluster Limitations.

In this mode, xCluster replication operates at the YSQL database granularity. This means you only run xCluster management operations when adding and removing databases from replication, and not when tables in the databases are created or dropped.
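
For example, once a database is part of the replication group, creating a table involves no xCluster commands; you run the DDL on the Primary only. A minimal illustration using ysqlsh (the addresses and table are placeholders):

# Run a DDL on the Primary; it is replicated automatically.
./bin/ysqlsh -h <primary_node_ip> -d yugabyte \
    -c "CREATE TABLE orders (id BIGINT PRIMARY KEY, total NUMERIC);"

# After the change is replicated, the same table exists on the Standby.
./bin/ysqlsh -h <standby_node_ip> -d yugabyte -c "\d orders"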

Set up Automatic mode replication

This feature is in Early Access; to use it, add the xcluster_enable_ddl_replication flag to the allowed_preview_flags_csv list and set it to true on yb-master in both universes.
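
For example, when passing flags to yb-master directly, the two settings look like the following; how you supply flags depends on your deployment, so treat this as a sketch:

--allowed_preview_flags_csv=xcluster_enable_ddl_replication
--xcluster_enable_ddl_replication=true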

Before setting up xCluster replication, ensure you have reviewed the Prerequisites and Best practices.

DDLs must be paused on the Primary universe during the entire setup process. #26053

The following steps use yugabyted, and assume you have set up Primary and Standby universes. Refer to Set up yugabyted universes. The yugabyted nodes must be started with --backup_daemon=true to initialize the backup/restore agent.
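
For example, a yugabyted start command with the backup daemon enabled might look like the following (the address and directory are placeholders):

./bin/yugabyted start \
    --advertise_address <node_ip> \
    --base_dir <base_dir> \
    --backup_daemon=true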

  1. Create a checkpoint on the Primary universe for all the databases that you want to be part of the replication.

    ./bin/yugabyted xcluster create_checkpoint \
        --replication_id <replication_id> \
        --databases <comma_separated_database_names> \
        --automatic_mode
    

    The command output describes how to perform the required bootstrap; in automatic mode, databases must always be bootstrapped. (A worked example of the complete setup, with sample values, follows these steps.) For example:

    +-------------------------------------------------------------------------+
    |                                yugabyted                                |
    +-------------------------------------------------------------------------+
    | Status               : xCluster create checkpoint success.              |
    | Bootstrapping        : Bootstrap is required for database `yugabyte`.   |
    +-------------------------------------------------------------------------+
    For each database which requires bootstrap run the following commands to perform a backup and restore.
     Run on source:
    ./yugabyted backup --cloud_storage_uri <AWS/GCP/local cloud storage uri>  --database <database_name> --base_dir <base_dir of source node>
     Run on target:
    ./yugabyted restore --cloud_storage_uri <AWS/GCP/local cloud storage uri>  --database <database_name> --base_dir <base_dir of target node>
    
  2. Perform a full copy of the database(s) from the Primary to the Standby using distributed backup and restore.

    A full copy is required here: creating the same schema on both sides using DDLs is not sufficient, even if the tables contain no data, because it does not set up internal metadata such as Postgres OIDs.
  3. Enable point in time restore (PITR) on the database(s) on both the Primary and Standby universes:

    ./bin/yugabyted configure point_in_time_recovery \
        --enable \
        --retention <retention_period> \
        --database <database_name>
    

    The retention_period must be greater than the amount of time you expect the Primary universe to be down before it recovers on its own or before you perform a failover to the Standby universe.

  4. Set up the xCluster replication.

    ./bin/yugabyted xcluster set_up \
        --target_address <ip_of_any_target_cluster_node> \
        --replication_id <replication_id> \
        --bootstrap_done
    

    You should see output similar to the following:

    +-----------------------------------------------+
    |                   yugabyted                   |
    +-----------------------------------------------+
    | Status        : xCluster set-up successful.   |
    +-----------------------------------------------+
    
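The following is a filled-in sketch of the complete yugabyted flow, using repl_group1 as the replication ID and the yugabyte database; the storage URI, retention period, and addresses are placeholders:

# 1. Checkpoint the database on the Primary.
./bin/yugabyted xcluster create_checkpoint \
    --replication_id repl_group1 \
    --databases yugabyte \
    --automatic_mode

# 2. Bootstrap: back up from the Primary, restore on the Standby.
./bin/yugabyted backup --cloud_storage_uri <cloud_storage_uri> \
    --database yugabyte --base_dir <primary_base_dir>
./bin/yugabyted restore --cloud_storage_uri <cloud_storage_uri> \
    --database yugabyte --base_dir <standby_base_dir>

# 3. Enable PITR for the database on both universes.
./bin/yugabyted configure point_in_time_recovery \
    --enable --retention <retention_period> --database yugabyte

# 4. Complete the replication setup.
./bin/yugabyted xcluster set_up \
    --target_address <standby_node_ip> \
    --replication_id repl_group1 \
    --bootstrap_done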

The following steps use yb-admin, and assume you have set up Primary and Standby universes. Refer to Set up universes.

  1. Create a checkpoint using the create_xcluster_checkpoint command, providing a name for the replication group, and the names of the databases to replicate as a comma-separated list.

    ./bin/yb-admin \
        -master_addresses <primary_master_addresses> \
        create_xcluster_checkpoint \
        <replication_group_id> \
        <comma_separated_namespace_names> \
        automatic_ddl_mode
    

    Automatic mode always requires bootstrapping, so you will need to back up the databases on the Primary and restore them on the Standby. Sample command output:

    Waiting for checkpointing of database(s) to complete
    Checkpointing of yugabyte completed. Bootstrap is required for setting up xCluster replication
    Successfully checkpointed databases for xCluster replication group repl_group1
    Perform a distributed Backup of database(s) [yugabyte] and Restore them on the target universe
    Once the above step(s) complete run 'setup_xcluster_replication'
    

    You can also manually check the status as follows:

    ./bin/yb-admin \
        -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \
        is_xcluster_bootstrap_required repl_group1 yugabyte
    

    You should see output similar to the following:

    Waiting for checkpointing of database(s) to complete
    Checkpointing of yugabyte completed. Bootstrap is required for setting up xCluster replication
    
  2. Perform a full copy of the database(s) from the Primary to the Standby using distributed backup and restore. See Distributed snapshots for YSQL; an outline of the snapshot flow also follows these steps.

    A full copy is required here: creating the same schema on both sides using DDLs is not sufficient, even if the tables contain no data, because it does not set up internal metadata such as Postgres OIDs.
  3. Enable point in time restore (PITR) on the database(s) on both the Primary and Standby universes:

    ./bin/yb-admin \
        -master_addresses <standby_master_addresses> \
        create_snapshot_schedule \
        <snapshot-interval> \
        <retention-time> \
        <ysql.database_name>
    

    The retention-time must be greater than the amount of time you expect the Primary universe to be down before it recovers on its own or before you perform a failover to the Standby universe.

  4. Set up the xCluster replication group.

    ./bin/yb-admin \
        -master_addresses <primary_master_addresses> \
        setup_xcluster_replication \
        <replication_group_id> \
        <standby_master_addresses>
    

    You should see output similar to the following:

    xCluster Replication group repl_group1 setup successfully
    
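As an outline of the distributed backup and restore in step 2, the yb-admin snapshot flow looks roughly like the following. This sketch omits copying the snapshot files between universes, which is also required; follow Distributed snapshots for YSQL for the complete procedure.

# On the Primary: snapshot the database and export the snapshot metadata.
# Use list_snapshots to find the snapshot ID.
./bin/yb-admin -master_addresses <primary_master_addresses> \
    create_database_snapshot ysql.yugabyte
./bin/yb-admin -master_addresses <primary_master_addresses> \
    list_snapshots
./bin/yb-admin -master_addresses <primary_master_addresses> \
    export_snapshot <snapshot_id> yugabyte.snapshot

# On the Standby: import the metadata, copy the snapshot files across,
# and restore.
./bin/yb-admin -master_addresses <standby_master_addresses> \
    import_snapshot yugabyte.snapshot
./bin/yb-admin -master_addresses <standby_master_addresses> \
    restore_snapshot <new_snapshot_id>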

Monitor replication

For information on monitoring xCluster replication, refer to Monitor xCluster.
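
For a quick command-line check, the yb-admin get_replication_status command reports the status of inbound replication; run it against the Standby (consumer) universe:

./bin/yb-admin \
    -master_addresses <standby_master_addresses> \
    get_replication_status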

Add a database to a replication group

A database must contain at least one table before it can be added to replication. If the database is colocated, it must contain at least one colocated table.
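
For example, if the database you want to add is otherwise empty, create a table first (the table here is an arbitrary placeholder):

./bin/ysqlsh -h <primary_node_ip> -d db2 \
    -c "CREATE TABLE seed (k INT PRIMARY KEY);"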

  1. Create a checkpoint on the Primary universe for all the databases that you want to add to an existing replication group.

    ./bin/yugabyted xcluster add_to_checkpoint \
        --replication_id <replication_id> \
        --databases <comma_separated_database_names>
    

    You should see output similar to the following:

    Waiting for checkpointing of database to complete
    Successfully checkpointed database db2 for xCluster replication group repl_group1
    Bootstrap is not required for adding database to xCluster replication
    Create equivalent YSQL objects (schemas, tables, indexes, ...) for the database in the standby universe
    
  2. If bootstrapping is required, perform a full copy of the database(s) from the Primary to the Standby using distributed backup and restore. If your source database is not empty or you are using automatic mode, it must be bootstrapped.

  3. Enable point in time restore (PITR) on the database(s) on both the Primary and Standby universes:

    ./bin/yugabyted configure point_in_time_recovery \
        --enable \
        --retention <retention_period> \
        --database <database_name>
    

    The retention_period must be greater than the amount of time you expect the Primary universe to be down before it recovers on its own or before you perform a failover to the Standby universe.

  4. Add the databases to the xCluster replication. (A filled-in example with sample values follows these steps.)

    ./bin/yugabyted xcluster add_to_replication \
        --databases <comma_separated_database_names> \
        --replication_id <replication_id> \
        --target_address <IP-of-any-target-node> \
        --bootstrap_done
    
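The following is a filled-in sketch of the yugabyted add flow, using the sample database db2 and replication group repl_group1 from the output above (the address is a placeholder):

./bin/yugabyted xcluster add_to_checkpoint \
    --replication_id repl_group1 \
    --databases db2

# Bootstrap and/or create the schema on the Standby as directed by the
# command output, enable PITR on db2 on both universes, then:
./bin/yugabyted xcluster add_to_replication \
    --databases db2 \
    --replication_id repl_group1 \
    --target_address <standby_node_ip> \
    --bootstrap_done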
Alternatively, using yb-admin:

  1. Create a checkpoint using the add_namespace_to_xcluster_checkpoint command.

    ./bin/yb-admin \
        -master_addresses <primary_master_addresses> \
        add_namespace_to_xcluster_checkpoint <replication_group_id> <namespace_name>
    

    You should see output similar to the following:

    Waiting for checkpointing of database to complete
    Successfully checkpointed database db2 for xCluster replication group repl_group1
    Bootstrap is not required for adding database to xCluster replication
    Create equivalent YSQL objects (schemas, tables, indexes, ...) for the database in the standby universe
    
  2. If bootstrapping is required, perform a full copy of the database(s) from the Primary to the Standby using distributed backup and restore. If your source database is not empty or you are using automatic mode, it must be bootstrapped. If bootstrap is not required, create the equivalent YSQL objects on the Standby as directed by the checkpoint output; one way to do this is sketched after these steps.

  3. Enable point in time restore (PITR) on the database(s) on both the Primary and Standby universes:

    ./bin/yb-admin \
        -master_addresses <standby_master_addresses> \
        create_snapshot_schedule 1 10 ysql.yugabyte

    The snapshot interval (1) and retention time (10) are in minutes. As with the initial setup, the retention time must be greater than the amount of time you expect the Primary universe to be down before it recovers on its own or before you perform a failover.

  4. Add the database to the replication group using the checkpoint.

    ./bin/yb-admin \
        -master_addresses <primary_master_addresses> \
        add_namespace_to_xcluster_replication <replication_group_id> <namespace_name> <standby_master_addresses>
    

    You should see output similar to the following:

    Successfully added db2 to xCluster Replication group repl_group1
    
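When the checkpoint output reports that bootstrap is not required, you still need to create the equivalent YSQL objects on the Standby. One way to do this (a sketch; paths and addresses are placeholders) is to dump only the schema from the Primary with ysql_dump and apply it on the Standby with ysqlsh:

# Dump the schema (no data) of db2 from the Primary...
./postgres/bin/ysql_dump -h <primary_node_ip> --schema-only db2 > db2_schema.sql

# ...and apply it to the same database on the Standby.
./bin/ysqlsh -h <standby_node_ip> -d db2 -f db2_schema.sql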

Remove a database from a replication group

To remove a database from a replication group using yugabyted, use the following command:

./bin/yugabyted xcluster remove_database_from_replication \
    --databases <comma_separated_database_names> \
    --replication_id <replication_id> \
    --target_address <ip_of_any_target_cluster_node>

Alternatively, using yb-admin:

./bin/yb-admin \
    -master_addresses <primary_master_addresses> \
    remove_namespace_from_xcluster_replication <replication_group_id> <namespace_name> <standby_master_addresses>

You should see output similar to the following:

Successfully removed db2 from xCluster Replication group repl_group1

Warning

If you want the databases being removed from replication to remain usable on the target, stop your workload against them (including DDLs) and wait for the replication lag to reach zero before removing them from the replication group.

If you take no precautions, the target databases may be left unusable; in that case, we strongly recommend dropping them rather than attempting to use them.

Drop xCluster replication group

To drop a replication group using yugabyted, use the following command:

./bin/yugabyted xcluster delete_replication \
    --replication_id <replication_id> \
    --target_address <ip_of_any_target_cluster_node>

Alternatively, to drop a replication group using yb-admin:

./bin/yb-admin \
    -master_addresses <primary_master_addresses> \
    drop_xcluster_replication <replication_group_id> <standby_master_addresses>

You should see output similar to the following:

Outbound xCluster Replication group rg1 deleted successfully

Be careful using this outside of the switchover or failover workflows

If you want the replicated databases on the target to be usable after dropping replication, stop your workload (including DDLs) and wait for the replication lag to reach zero before dropping the replication group.

Alternatively, you can follow the failover workflow to ensure the target cuts over to a consistent time.

If you take no precautions, the target databases may be left unusable; in that case, we strongly recommend dropping them rather than attempting to use them.

Making DDL changes

Warning

Not all DDLs can be automatically replicated yet; see XCluster Limitations.

DDL operations must only be performed on the Primary universe. All schema changes are automatically replicated to the Standby universe.
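
For example, a later schema change such as adding a column is run only on the Primary, and replication applies it to the Standby (the address and table are placeholders):

./bin/ysqlsh -h <primary_node_ip> -d yugabyte \
    -c "ALTER TABLE orders ADD COLUMN note TEXT;"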