Use the COPY statement to transfer data between tables and files. COPY TO copies from tables to files. COPY FROM copies from files to tables. COPY outputs the number of rows that were copied.


copy_from ::= COPY table_name [ ( column_name [ , ... ] ) ]  FROM 
              { 'filename' | PROGRAM 'command' | STDIN } 
              [ [ WITH ] ( option [ , ... ] ) ]

copy_to ::= COPY { table_name [ ( column_name [ , ... ] ) ] | ( query ) }  TO 
            { 'filename' | PROGRAM 'command' | STDOUT } 
            [ [ WITH ] ( option [ , ... ] ) ]

copy_option ::= FORMAT format_name
                | OIDS [ boolean ]
                | FREEZE [ boolean ]
                | DELIMITER 'delimiter_character'
                | NULL 'null_string'
                | HEADER [ boolean ]
                | QUOTE 'quote_character'
                | ESCAPE 'escape_character'
                | FORCE_QUOTE { ( column_names ) | * }
                | FORCE_NOT_NULL ( column_names )
                | FORCE_NULL ( column_names )
                | ENCODING 'encoding_name'
                | ROWS_PER_TRANSACTION integer
                | DISABLE_FK_CHECK
                | REPLACE
                | SKIP integer
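As a sketch of how the grammar composes, several options can be combined in a single WITH clause (the table and file names here are illustrative):

```sql
-- Import a CSV file, reading its header row, committing every 5000 rows,
-- and replacing rows whose primary key already exists in the table.
COPY users FROM '/tmp/users.csv' WITH
    (FORMAT csv, HEADER true, ROWS_PER_TRANSACTION 5000, REPLACE);
```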

table_name

Specify the table, optionally schema-qualified, to be copied.


column_names

Specify the list of columns to be copied. If not specified, all columns of the table are copied.


query

Specify a SELECT, VALUES, INSERT, UPDATE, or DELETE statement whose results are to be copied. For INSERT, UPDATE, and DELETE statements, a RETURNING clause must be provided.
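For example, a DML statement with a RETURNING clause can serve as the source of a COPY TO (the table t and its columns are illustrative):

```sql
-- The RETURNING clause supplies the rows that COPY writes out.
COPY (INSERT INTO t(c1, c2) VALUES ('a', 'b') RETURNING c1, c2)
    TO STDOUT WITH (FORMAT csv);
```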


filename

Specify the path of the file to be copied. An input file name can be an absolute or relative path, but an output file name must be an absolute path. Critically, the file must be located server-side on the local filesystem of the YB-TServer that you connect to.
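The grammar also accepts PROGRAM in place of a file name, which pipes the data through a command executed server-side on the YB-TServer (subject to the same server-side caveat, and typically requiring elevated privileges). The command below is a sketch:

```sql
-- Compress the exported data on the server as it is written.
COPY users TO PROGRAM 'gzip > /tmp/users.csv.gz' WITH (FORMAT csv, HEADER true);
```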

To work with files that reside on the client, nominate stdin as the argument for FROM or stdout as the argument for TO.

Alternatively, you can use the \copy metacommand in ysqlsh.
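For example, the following \copy reads a file from the client machine's filesystem and streams it to the server over the connection (the path is illustrative):

```sql
yugabyte=# \copy users FROM 'users.csv' WITH (FORMAT csv, HEADER true);
```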

stdin and stdout

Critically, these input and output channels are defined client-side in the environment of the client where you run ysqlsh or your preferred programming language. These options request that the data transmission goes via the connection between the client and the server.

If you execute the COPY TO or COPY FROM statements from a client program written in a language like Python, then you cannot use ysqlsh features. Rather, you must rely on your chosen language's features to connect stdin and stdout to the file that you nominate.

However, if you execute COPY FROM using ysqlsh, you have the further option of including the COPY invocation at the start of the file that you execute as a .sql script. Create a test table thus:

drop table if exists t cascade;
create table t(c1 text primary key, c2 text, c3 text);

And prepare t.sql thus:

copy t(c1, c2, c3) from stdin with (format 'csv', header true);
c1,c2,c3
v1.1,v1.2,v1.3
v2.1,v2.2,v2.3
\.

Notice the \. terminator on its own line: it marks the end of the inline data. You can simply execute \i t.sql at the ysqlsh prompt to copy in the data.

Some client-side languages have a dedicated exposure of COPY

For example, the "psycopg2" PostgreSQL driver for Python (and of course this works for YugabyteDB) has dedicated cursor methods for COPY. See Using COPY TO and COPY FROM.

Copy options


The ROWS_PER_TRANSACTION option defines the transaction size used by the COPY command.

Default: 20000 for YugabyteDB versions 2.14/2.15, and 1000 for older releases.

For example, if 5000 rows are to be copied and ROWS_PER_TRANSACTION is set to 1000, the database creates 5 transactions, each inserting 1000 rows. If an error occurs during execution of the COPY command, rows from already-committed transactions remain persisted. This implies that if an error occurs after inserting the 3500th row, the first 3000 rows are persisted in the database:

  • 1 to 1000 → Transaction_1
  • 1001 to 2000 → Transaction_2
  • 2001 to 3000 → Transaction_3
  • 3001 to 3500 → Error

The first 3000 rows are persisted to the table, and tuples_processed shows 3000.
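The scenario above corresponds to an invocation like the following (the table and file path are illustrative):

```sql
-- Commit after every 1000 rows rather than at the end of the whole COPY.
COPY t FROM '/tmp/t.csv' WITH (FORMAT csv, ROWS_PER_TRANSACTION 1000);
```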


The examples below assume a table like this:

yugabyte=# CREATE TABLE users(id SERIAL PRIMARY KEY, name TEXT);
yugabyte=# INSERT INTO users(name) VALUES ('John Doe'), ('Jane Doe'), ('Dorian Gray');
yugabyte=# SELECT * FROM users;
 id |    name
----+-------------
  3 | Dorian Gray
  2 | Jane Doe
  1 | John Doe
(3 rows)

Export an entire table

Copy the entire table to a CSV file using an absolute path, with column names in the header.

yugabyte=# COPY users TO '/home/yuga/Desktop/users.txt.sql' DELIMITER ',' CSV HEADER;

Export a partial table using the WHERE clause with column selection

In the following example, a WHERE clause is used to filter the rows, and only the name column is exported.

yugabyte=# COPY (SELECT name FROM users WHERE name='Dorian Gray') TO '/home/yuga/Desktop/users.txt.sql' DELIMITER ',' CSV HEADER;

Import from CSV files

In the following example, the data exported in the previous examples is imported into the users table.

yugabyte=# COPY users FROM '/home/yuga/Desktop/users.txt.sql' DELIMITER ',' CSV HEADER;

Performance tips for large tables

The following copy options may help to speed up copying, or allow for faster recovery from a partial state:

  • DISABLE_FK_CHECK skips the foreign key check when copying new rows to the table.
  • REPLACE replaces the existing row in the table if the new row's primary/unique key conflicts with that of the existing row.
  • SKIP n skips the first n rows of the file. n must be a nonnegative integer.
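As a sketch, these options can be combined when re-running a COPY that failed partway through (the row count and path are illustrative): if the earlier attempt committed the first 3000 rows, SKIP avoids re-reading them, and REPLACE handles any overlap.

```sql
-- Resume a partially completed import of users.csv.
COPY users FROM '/tmp/users.csv' WITH
    (FORMAT csv, DISABLE_FK_CHECK, REPLACE, SKIP 3000);
```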

For COPY operation examples using the pg_stat_progress_copy view, refer to View COPY status with pg_stat_progress_copy.