View terminated queries with yb_terminated_queries
Use the YugabyteDB yb_terminated_queries view to see terminated queries and the reason for their termination.
When a query is terminated unexpectedly, information about the query and the backend that ran it is stored. You can access this information using the yb_terminated_queries view, which is built on top of the yb_pg_stat_get_queries function. The view returns queries that were terminated for any of the following reasons:
- Temporary file size exceeds temp_file_limit.
- Terminated by SIGSEGV - the query terminated due to a crash in the PostgreSQL process.
- Terminated by SIGKILL - the query was killed by the system's out-of-memory killer because the node is running out of memory.
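For example, you can summarize the entries in the view by these termination reasons. The following query is a minimal sketch that assumes the view already contains entries (it is only populated after queries have been terminated):

yugabyte=# SELECT termination_reason, COUNT(*) FROM yb_terminated_queries GROUP BY termination_reason;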
Before you start
The examples will run on any YugabyteDB universe.
To create a universe, see Set up YugabyteDB universe.
Supported fields
At a ysqlsh prompt, run the following meta-commands to return the fields supported by yb_terminated_queries:
yugabyte=# \x
yugabyte=# \d yb_terminated_queries
View "pg_catalog.yb_terminated_queries"
Column | Type | Collation | Nullable | Default
--------------------+--------------------------+-----------+----------+---------
databasename | name | | |
backend_pid | integer | | |
query_text | text | | |
termination_reason | text | | |
query_start_time | timestamp with time zone | | |
query_end_time | timestamp with time zone | | |
The following table describes the fields and their values:
Field | Type | Description |
---|---|---|
databasename | Name | Name of the database to which the backend was connected when the query was terminated. |
backend_pid | Integer | Backend process ID. |
query_text | Text | The query that was executed, up to a maximum of 256 characters. |
termination_reason | Text | An explanation of why the query was terminated. One of: Terminated by SIGKILL, Terminated by SIGSEGV, or temporary file size exceeds temp_file_limit (xxx kB). |
query_start_time | Timestamptz | Time at which the query started. |
query_end_time | Timestamptz | Time at which the query was terminated. |
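Because the view records both the start and end times, you can calculate how long each terminated query ran before it was stopped. The following query is a sketch that uses only the columns described above; the ordering and alias (run_time) are illustrative choices:

yugabyte=# SELECT backend_pid, termination_reason, query_end_time - query_start_time AS run_time FROM yb_terminated_queries ORDER BY query_end_time DESC;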
Examples
PostgreSQL crash
To simulate a crash in the PostgreSQL process, send a SIGSEGV signal to the backend process.
In a ysqlsh session, get the backend PID:
yugabyte=# SELECT pg_backend_pid();
pg_backend_pid
----------------
4650
(1 row)
In the same session, start a long-running query, so that you have time to send a signal while the query is running:
yugabyte=# SELECT * FROM generate_series(1, 123456789);
In another session, send the terminating signal to the backend process:
$ kill -SIGSEGV 4650 # the pid of the backend process
Verify that the query is listed as a terminated query as follows:
yugabyte=# SELECT backend_pid, query_text, termination_reason FROM yb_terminated_queries;
backend_pid | query_text | termination_reason
-------------+----------------------------------------------+-----------------------
4650 | SELECT * FROM generate_series(1, 123456789); | Terminated by SIGSEGV
(1 row)
Exceed the temporary file limit
To simulate a query termination, set temp_file_limit to 0KB:
yugabyte=# SET temp_file_limit TO 0;
Now any query that requires a temporary file will result in an error.
To ensure failure, run a query that generates over a hundred million rows as follows:
yugabyte=# SELECT * FROM generate_series(1, 123456789);
ERROR: temporary file size exceeds temp_file_limit (0kB)
To find the query in yb_terminated_queries, enter the following command:
yugabyte=# SELECT backend_pid, query_text, termination_reason FROM yb_terminated_queries;
backend_pid | query_text | termination_reason
-------------+----------------------------------------------+---------------------------------------------------
23052 | SELECT * FROM generate_series(1, 123456789); | temporary file size exceeds temp_file_limit (0kB)
(1 row)
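If you want to continue using this session without hitting the limit, you can revert the setting to its default using the standard PostgreSQL RESET command:

yugabyte=# RESET temp_file_limit;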
Out of memory
When a system runs critically low on memory, the out-of-memory killer begins forcibly killing processes. To simulate this, send a KILL signal to the backend process.
In a ysqlsh session, get the backend PID:
yugabyte=# SELECT pg_backend_pid();
pg_backend_pid
----------------
4801
(1 row)
In the same session, start a long-running query so that you have time to send a signal while the query is running:
yugabyte=# SELECT * FROM generate_series(1, 123456789);
In another session, send the terminating signal to the backend process:
$ kill -KILL 4801 # the pid of the backend process
Verify that the query is listed as a terminated query as follows:
yugabyte=# SELECT backend_pid, query_text, termination_reason FROM yb_terminated_queries;
backend_pid | query_text | termination_reason
-------------+----------------------------------------------+-----------------------
4801 | SELECT * FROM generate_series(1, 123456789); | Terminated by SIGKILL
(1 row)
Return a query from another database
Create a terminated query by running the following commands:
yugabyte=# SET temp_file_limit TO 0;
yugabyte=# SELECT 'db1' FROM generate_series(1, 123456789);
Create a second database and connect to it as follows:
yugabyte=# CREATE DATABASE new_db;
yugabyte=# \c new_db;
Create a second terminated query by running the following commands:
new_db=# SET temp_file_limit TO 0;
new_db=# SELECT 'db2' FROM generate_series(1, 123456789);
Querying the yb_terminated_queries view without providing a database name returns both queries:
new_db=# SELECT query_text FROM yb_terminated_queries;
query_text
--------------------------------------------------
SELECT 'db1' FROM generate_series(1, 123456789);
SELECT 'db2' FROM generate_series(1, 123456789);
(2 rows)
When you query yb_terminated_queries and filter by a database name, you see only the entries for that database.
Use the database name to get the terminated queries from the yugabyte database as follows:
new_db=# SELECT query_text FROM yb_terminated_queries WHERE databasename='yugabyte';
query_text
--------------------------------------------------
SELECT 'db1' FROM generate_series(1, 123456789);
(1 row)
Use the database name to get the terminated queries from the new database as follows:
new_db=# SELECT query_text FROM yb_terminated_queries WHERE databasename='new_db';
query_text
--------------------------------------------------
SELECT 'db2' FROM generate_series(1, 123456789);
(1 row)
Limitations
- The underlying data returned by the query is refreshed at 500 ms intervals, so if a recently terminated query is not listed, try querying again.
- The backend holds up to 1000 terminated queries; after that, the oldest entries are overwritten.
- If the stat collector process is abruptly terminated, the underlying data may be corrupted and invalid.
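Because older entries can be overwritten once the limit is reached, it can help to focus on the most recent terminations. The following query is a sketch that filters the view by query_end_time; the 10-minute interval is an arbitrary illustrative choice:

yugabyte=# SELECT backend_pid, query_text, termination_reason FROM yb_terminated_queries WHERE query_end_time > now() - interval '10 minutes' ORDER BY query_end_time DESC;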
Learn more
- For information on the temporary file limit, refer to temp_file_limit YB-TServer flag.
- Refer to Get query statistics using pg_stat_statements to track planning and execution of all the SQL statements.
- Refer to View live queries with pg_stat_activity to analyze live queries.