PostgreSQL Deadlock Alert

PostgreSQL Deadlocks

When deadlocks occur in a PostgreSQL instance, they are written to the log file specified by the log_directory and log_filename parameters in the postgresql.conf file. The error looks similar to the following:

2023-06-02 09:15:03.717 CDT [10384] ERROR: deadlock detected
2023-06-02 09:15:03.717 CDT [10384] DETAIL: Process 10384 waits for ShareLock on transaction 3087012926; blocked by process 11352.
Process 11352 waits for ShareLock on transaction 3087012925; blocked by process 10384.
Process 10384: select * from pgbench_accounts where aid = 66 for update;
Process 11352: begin;
select * from pgbench_accounts where aid = 66 for update;
select * from pgbench_accounts where aid = 112 for update;
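
To see how such a deadlock arises, here is a minimal sketch of two sessions acquiring row locks in opposite order. It assumes the pgbench_accounts table that pgbench creates, but any two rows in any table behave the same way:

-- Session 1
BEGIN;
SELECT * FROM pgbench_accounts WHERE aid = 112 FOR UPDATE;  -- session 1 now holds the lock on row 112

-- Session 2
BEGIN;
SELECT * FROM pgbench_accounts WHERE aid = 66 FOR UPDATE;   -- session 2 now holds the lock on row 66
SELECT * FROM pgbench_accounts WHERE aid = 112 FOR UPDATE;  -- blocks, waiting on session 1

-- Session 1
SELECT * FROM pgbench_accounts WHERE aid = 66 FOR UPDATE;   -- blocks, waiting on session 2

-- After deadlock_timeout (1 second by default), PostgreSQL detects the cycle,
-- rolls back one of the transactions with "ERROR: deadlock detected",
-- and writes an entry like the one above to the log.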

Foreign Table to Query Log File

We can configure the instance and DPA to read this file and alert you when deadlocks occur. For DPA to be able to read the file, you can create a FOREIGN TABLE over it. Here are example statements I executed to configure this:
-- make sure the file_fdw extension is present
CREATE EXTENSION IF NOT EXISTS file_fdw;

-- create the server for the file
CREATE SERVER log_file FOREIGN DATA WRAPPER file_fdw;

-- create the foreign table that points to the current log file
CREATE FOREIGN TABLE pg_log_file (text_line text)
SERVER log_file
OPTIONS (filename 'E:/PGData/log/postgresql.log');
We can now use a query similar to the one below to read this text file:
SELECT * FROM pg_log_file;
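
If DPA connects with a dedicated monitoring user rather than a superuser, that user also needs read access to the foreign table. A minimal sketch, assuming a monitoring role named dpa_monitor (the role name is only an example; substitute the user DPA actually connects as):

-- allow the monitoring user to query the foreign table
GRANT SELECT ON pg_log_file TO dpa_monitor;

-- depending on your PostgreSQL version and setup, reading server files through
-- file_fdw may also require membership in the pg_read_server_files role:
-- GRANT pg_read_server_files TO dpa_monitor;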

Configure a DPA Alert to Read the Log File

To configure an alert in DPA to watch for deadlocks, create a Custom SQL Alert - Single Numeric type, and use a query similar to this:
SELECT COUNT(1)
FROM pg_log_file
-- the first part of each line in the log file is the timestamp, use it to filter for the last 5 min of data
WHERE TO_TIMESTAMP(SUBSTRING(text_line, 1, 19), 'YYYY-MM-DD HH24:MI:SS') >= CURRENT_TIMESTAMP - interval '5 minutes'
AND text_line LIKE '%deadlock detected%';
To view the matching log lines themselves, a query like this can be used:
SELECT text_line FROM pg_log_file WHERE text_line LIKE '%deadlock detected%';
The alert query counts deadlocks that occurred within the last 5 minutes (this interval should match the alert's execution frequency). For the alert, configure thresholds as needed based on the number of deadlocks within a 5-minute timeframe.
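
One caveat with the alert query above: lines in the log that do not begin with a timestamp (such as the DETAIL and statement continuation lines) could make the TO_TIMESTAMP conversion error out, since PostgreSQL does not guarantee the order in which WHERE conditions are evaluated. A variant that guards the conversion with a CASE expression is sketched below:

SELECT COUNT(1)
FROM pg_log_file
WHERE text_line LIKE '%deadlock detected%'
  AND CASE
        -- only attempt the conversion when the line actually starts with a timestamp
        WHEN text_line ~ '^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}'
          THEN TO_TIMESTAMP(SUBSTRING(text_line, 1, 19), 'YYYY-MM-DD HH24:MI:SS')
                 >= CURRENT_TIMESTAMP - interval '5 minutes'
        ELSE false
      END;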

PostgreSQL Parameters

Some configuration is required for this to work properly. Here are the parameter values I modified so that PostgreSQL writes errors to a single log file that is rotated daily or after 100MB of output:
log_destination = 'stderr'           # Valid values are combinations of stderr, csvlog, syslog, and eventlog,
                                     # depending on platform. csvlog requires logging_collector to be on.

# This is used when logging to stderr:
logging_collector = on               # Enable capturing of stderr and csvlog into log files.
                                     # Required to be on for csvlogs.

# These are only used if logging_collector is on:
log_directory = 'log'                # directory where log files are written, can be absolute or relative to PGDATA
log_filename = 'postgresql.log'      # log file name pattern
log_rotation_age = 1d                # Automatic rotation of logfiles will happen after that time. 0 disables.
log_rotation_size = 100MB            # Automatic rotation of logfiles will happen after that much log output.

log_truncate_on_rotation = on        # If on, an existing log file with the same name as the new log file will be
                                     # truncated rather than appended to. But such truncation only occurs on
                                     # time-driven rotation, not on restarts or size-driven rotation. Default is
                                     # off, meaning append to existing files in all cases.
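
If you prefer to make these changes from psql instead of editing postgresql.conf by hand, something along these lines should work; note that logging_collector only takes effect after a server restart:

ALTER SYSTEM SET log_destination = 'stderr';
ALTER SYSTEM SET logging_collector = on;           -- requires a restart to take effect
ALTER SYSTEM SET log_directory = 'log';
ALTER SYSTEM SET log_filename = 'postgresql.log';
ALTER SYSTEM SET log_rotation_age = '1d';
ALTER SYSTEM SET log_rotation_size = '100MB';
ALTER SYSTEM SET log_truncate_on_rotation = on;

-- reload the configuration so the reloadable parameters take effect
SELECT pg_reload_conf();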