F.19. lt_profile

F.19.1. Concepts
F.19.2. Extension architecture
F.19.3. Prerequisites
F.19.4. Creating Extensions
F.19.5. Privileges
F.19.6. Using lt_profile
F.19.7. Sections of PWR report
F.19.8. Sections of PSH report

This extension for LightDB helps you find the most resource-consuming activities in your LightDB databases.

F.19.1. Concepts

The extension is based on the statistics views of LightDB and the contrib extensions lt_stat_statements, lt_stat_activity and system_stats. It is written in pure PL/pgSQL and needs no external libraries or software beyond the LightDB database itself.

The extension creates a historic repository in your database. This repository holds statistic "samples" for your LightDB clusters. A sample is taken by calling the take_sample() function. LightDB also provides the lt_cron extension, which can be used to schedule periodic sampling tasks.
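If lt_cron follows the familiar pg_cron-style interface, periodic sampling could be scheduled roughly as follows. This is a sketch only: the cron.schedule function name and the profile schema qualifier are assumptions, so consult the lt_cron documentation for the exact interface.

```sql
-- Hypothetical sketch: take a lt_profile sample every 10 minutes via lt_cron.
-- cron.schedule signature is assumed by analogy with pg_cron.
SELECT cron.schedule('lt_profile sampling', '*/10 * * * *',
                     'SELECT profile.take_sample()');
```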

Periodic samples can help you find the most resource-intensive activities in the past. Suppose a performance degradation was reported several hours ago. To resolve such an issue, you can build a report between the two samples bounding the problem period to see the load profile of your database.

You can also take an explicit sample before running any batch processing, and another one after it is done.

Every time a sample is taken, pg_stat_statements_reset() is called, ensuring you won't lose statements due to reaching lt_stat_statements.max. Also, the report will contain a section informing you if the captured statement count in any sample reached 90% of lt_stat_statements.max.
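For example, bounding a batch job with explicit samples could look like the following sketch. The get_report function name follows the pg_profile convention and is an assumption here; the sample ids are purely illustrative.

```sql
SELECT profile.take_sample();   -- explicit sample before the batch job
-- ... batch processing runs here ...
SELECT profile.take_sample();   -- explicit sample after the batch job

-- Build a report between the two bounding samples,
-- e.g. samples 480 and 482 of the local server:
SELECT profile.get_report('local', 480, 482);
```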

lt_profile installed in one cluster is able to collect statistics from other clusters, called servers. You just need to define some servers, providing names and connection strings, and make sure connections can be established to all databases of all servers. You can then track, for example, statistics of your standbys from the master, or from any other server. Once the extension is installed, a local server is automatically created - this is the server for the cluster where lt_profile resides.
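Registering a remote server might look like the following sketch. create_server is a pg_profile-style administrative function and is an assumption here, as are the host name and schema qualifier; see the administrative functions of your lt_profile version for the exact name.

```sql
-- Hypothetical: register a standby cluster as a monitored server
SELECT profile.create_server('standby1',
       'host=standby1.example.com port=5432 dbname=postgres user=profile_mon');
```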

F.19.2. Extension architecture

The extension consists of four parts:

  • Historic repository is the storage for sampling data. The repository is a set of extension tables.

  • Sample management engine is a set of functions used for taking samples and maintaining the repository by removing obsolete sample data from it.

  • Report engine is a set of functions used for report generation based on data from the historic repository.

  • Administrative functions allow you to create and manage servers and baselines.

F.19.2.1. Extension tables

F.19.2.1.1. servers

Monitored servers (LightDB clusters) list.

Table F.4. servers Columns

Column Type

Description

server_id integer

Server id

server_name name

Server name

server_description text

Server description

server_created timestamp with time zone

Server created timestamp, default now()

db_exclude name[]

Databases that should be excluded

enabled boolean

Enabled or not, default true

connstr text

Connstr for dblink

max_sample_age integer

The global max_sample_age setting is overridden by this value if set

last_sample_id integer

Last sample id, default 0

size_smp_wnd_start time with time zone

Sample window start timestamp

size_smp_wnd_dur interval hour to second

Sample window duration

size_smp_interval interval day to minute

Sample window interval


F.19.2.1.2. samples

Sample times list.

Table F.5. samples Columns

Column Type

Description

server_id integer

Server id

sample_id integer

Sample id

sample_time timestamp (0) with time zone

Timestamp of this sample
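Because the repository consists of ordinary extension tables, recent samples can be inspected directly, for example as below (the profile schema qualifier assumes the extension was installed in a schema named profile):

```sql
-- List the five most recent samples across all monitored servers
SELECT srv.server_name, smp.sample_id, smp.sample_time
  FROM profile.samples smp
  JOIN profile.servers srv USING (server_id)
 ORDER BY smp.sample_time DESC
 LIMIT 5;
```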


F.19.2.1.3. sample_server_hardware

Server hardware information (collected by system_stats). The information includes the number of CPUs, threads, cores, sockets, memory, etc.

Table F.6. sample_server_hardware Columns

Column Type

Description

server_id integer

Server id

sample_id integer

Sample id

architecture text

CPU architecture of the system

byte_order text

Byte order of the architecture

cpus integer

CPU count

threads_per_core integer

Thread count per core

cores_per_socket integer

Core count per socket

sockets integer

Sockets of the system

vendor_id text

Vendor id

cpu_family integer

CPU family

model integer

Model of the system

model_name text

Model name of the system

cpu_mhz text

CPU frequency in MHz

l1dcache text

Level 1 data cache

l1icache text

Level 1 instruction cache

l2cache text

Level 2 cache

l3cache text

Level 3 cache

memory_total integer

Memory in total

swap_total integer

Swap memory in total


F.19.2.1.4. sample_cluster_instance

Cluster configuration information.

Table F.7. sample_cluster_instance Columns

Column Type

Description

server_id integer

Server id

sample_id integer

Sample id

hostname text

Hostname (reference servers.server_name)

cluster_name text

Cluster name (reference current_setting('cluster_name'))

role text

Role: primary if pg_is_in_recovery() = false, otherwise standby

release text

Version of LightDB, for example: LightDB 13.8-23.1

startup_time timestamp

Startup time


F.19.2.1.5. sample_kernel_settings

Sample kernel settings.

Table F.8. sample_kernel_settings Columns

Column Type

Description

server_id integer

Server id

sample_id integer

Sample id

name text

Name

value text

Value

sourcefile text

Source file


F.19.2.1.6. sample_cpu_usage

Sample CPU usage. The data is produced by analyzing statistics from /proc/stat.

Table F.9. sample_cpu_usage Columns

Column Type

Description

server_id integer

Server id

sample_id integer

Sample id

user_cpu float4

User CPU

sys_cpu float4

System CPU

idle_cpu float4

Idle CPU

io_wait float4

I/O wait


F.19.2.1.7. sample_db_memory_usage

Sample memory usage.

Table F.10. sample_db_memory_usage Columns

Column Type

Description

server_id integer

Server id

sample_id integer

Sample id

shared_memory bigint

Shared memory size in bytes

local_memory bigint

Local memory size in bytes


F.19.2.1.8. sample_settings

Sample settings.

Table F.11. sample_settings Columns

Column Type

Description

server_id integer

Server id

first_seen timestamp (0) with time zone

First seen timestamp

setting_scope smallint

Scope of the setting. Currently 1 for pg_settings and 2 for other administrative functions (like version())

name text

Name

setting text

Setting

reset_val text

Reset value

boot_val text

Boot value

unit text

Unit

sourcefile text

Source file

sourceline integer

Source line

pending_restart boolean

Pending restart flag


F.19.2.1.9. sample_timings

Sample timings for various function calls of the sampling system.

Table F.12. sample_timings Columns

Column Type

Description

server_id integer

Server id

sample_id integer

Sample id

event text

Sample event

time_spent interval MINUTE TO SECOND (2)

Spent time for the related event on this sample


F.19.2.1.10. baselines

Baseline settings.

Table F.13. baselines Columns

Column Type

Description

server_id integer

Server id

bl_id integer

Baseline id

bl_name varchar (25)

Baseline name

keep_until timestamp (0) with time zone

Timestamp indicating the end of life for this baseline


F.19.2.1.11. bl_samples

Baseline sample list.

Table F.14. bl_samples Columns

Column Type

Description

server_id integer

Server id

sample_id integer

Sample id

bl_id integer

Baseline id


F.19.2.1.12. stmt_list

Statement list.

Table F.15. stmt_list Columns

Column Type

Description

server_id integer

Server id

queryid_md5 char(32)

MD5 hash of the query text

query text

Text of the SQL statement


F.19.2.1.13. sample_stat_database

Sample statistics for databases.

Table F.16. sample_stat_database Columns

Column Type

Description

server_id integer

Server id

sample_id integer

Sample id

datid oid

Database id

datname name

Database name

xact_commit bigint

Transaction commit count

xact_rollback bigint

Transaction rollback count

blks_read bigint

Blocks read count

blks_hit bigint

Blocks hit count

tup_returned bigint

Tuple returned count

tup_fetched bigint

Tuple fetched count

tup_inserted bigint

Tuple inserted count

tup_updated bigint

Tuple updated count

tup_deleted bigint

Tuple deleted count

conflicts bigint

Total conflicts, including tablespace, lock, snapshot, bufferpin and startup deadlock conflicts

temp_files bigint

Temporary files

temp_bytes bigint

Temporary files size in bytes

deadlocks bigint

Deadlocks

blk_read_time double precision

Block read time

blk_write_time double precision

Block write time

stats_reset timestamp with time zone

Timestamp of reset for the statistics.

datsize bigint

Database size

datsize_delta bigint

Database size change since last sample

datistemplate boolean

True for a template database


F.19.2.1.14. sample_statements

Sample statistics for statements.

Table F.17. sample_statements Columns

Column Type

Description

server_id integer

Server id

sample_id integer

Sample id

userid oid

OID of user who executed the statement

datid oid

OID of database in which the statement was executed

queryid bigint

Internal hash code, computed from the statement's parse tree

queryid_md5 char(32)

MD5 hash of the query text (reference stmt_list.queryid_md5)

plans bigint

Number of times the statement was planned (if lt_stat_statements.track_planning is enabled, otherwise zero)

total_plan_time double precision

Total time spent planning the statement, in milliseconds (if lt_stat_statements.track_planning is enabled, otherwise zero)

min_plan_time double precision

Minimum time spent planning the statement, in milliseconds (if lt_stat_statements.track_planning is enabled, otherwise zero)

max_plan_time double precision

Maximum time spent planning the statement, in milliseconds (if lt_stat_statements.track_planning is enabled, otherwise zero)

mean_plan_time double precision

Mean time spent planning the statement, in milliseconds (if lt_stat_statements.track_planning is enabled, otherwise zero)

stddev_plan_time double precision

Population standard deviation of time spent planning the statement, in milliseconds (if lt_stat_statements.track_planning is enabled, otherwise zero)

calls bigint

Number of times the statement was executed

total_exec_time double precision

Total time spent executing the statement, in milliseconds

min_exec_time double precision

Minimum time spent executing the statement, in milliseconds

max_exec_time double precision

Maximum time spent executing the statement, in milliseconds

mean_exec_time double precision

Mean time spent executing the statement, in milliseconds

stddev_exec_time double precision

Population standard deviation of time spent executing the statement, in milliseconds

rows bigint

Total number of rows retrieved or affected by the statement

shared_blks_hit bigint

Total number of shared block cache hits by the statement

shared_blks_read bigint

Total number of shared blocks read by the statement

shared_blks_dirtied bigint

Total number of shared blocks dirtied by the statement

shared_blks_written bigint

Total number of shared blocks written by the statement

local_blks_hit bigint

Total number of local block cache hits by the statement

local_blks_read bigint

Total number of local blocks read by the statement

local_blks_dirtied bigint

Total number of local blocks dirtied by the statement

local_blks_written bigint

Total number of local blocks written by the statement

temp_blks_read bigint

Total number of temp blocks read by the statement

temp_blks_written bigint

Total number of temp blocks written by the statement

blk_read_time double precision

Total time the statement spent reading blocks, in milliseconds (if track_io_timing is enabled, otherwise zero)

blk_write_time double precision

Total time the statement spent writing blocks, in milliseconds (if track_io_timing is enabled, otherwise zero)

wal_records bigint

Total number of WAL records generated by the statement

wal_fpi bigint

Total number of WAL full page images generated by the statement

wal_bytes numeric

Total amount of WAL generated by the statement in bytes


F.19.2.1.15. sample_statements_total

Sample statistics for statements in total, grouped by database.

Table F.18. sample_statements_total Columns

Column Type

Description

server_id integer

Server id

datid oid

OID of database in which the statement was executed

plans bigint

Number of times the statement was planned (if lt_stat_statements.track_planning is enabled, otherwise zero)

total_plan_time double precision

Total time spent planning the statement, in milliseconds (if lt_stat_statements.track_planning is enabled, otherwise zero)

calls bigint

Number of times the statement was executed

total_exec_time double precision

Total time spent executing the statement, in milliseconds

rows bigint

Total number of rows retrieved or affected by the database

shared_blks_hit bigint

Total number of shared block cache hits by the database

shared_blks_read bigint

Total number of shared blocks read by the database

shared_blks_dirtied bigint

Total number of shared blocks dirtied by the database

shared_blks_written bigint

Total number of shared blocks written by the database

local_blks_hit bigint

Total number of local block cache hits by the database

local_blks_read bigint

Total number of local blocks read by the database

local_blks_dirtied bigint

Total number of local blocks dirtied by the database

local_blks_written bigint

Total number of local blocks written by the database

temp_blks_read bigint

Total number of temp blocks read by the database

temp_blks_written bigint

Total number of temp blocks written by the database

blk_read_time double precision

Total time statements of the database spent reading blocks, in milliseconds (if track_io_timing is enabled, otherwise zero)

blk_write_time double precision

Total time statements of the database spent writing blocks, in milliseconds (if track_io_timing is enabled, otherwise zero)

wal_records bigint

Total number of WAL records generated by the database

wal_fpi bigint

Total number of WAL full page images generated by the database

wal_bytes numeric

Total amount of WAL generated by all statements of the database, in bytes

statements bigint

Total statement count for the database


F.19.2.1.16. sample_wait_event_total

Sample statistics for wait events in total.

Table F.19. sample_wait_event_total Columns

Column Type

Description

server_id integer

Server id

sample_id integer

Sample id

pid integer

Backend process id related to this wait event

queryid bigint

Queryid related to the wait event

wait_event_type text

Wait event type in text

wait_event text

Wait event in text

wait_time double precision

Total wait time of this kind of event since the last sample

state text

Backend state in text


F.19.2.1.17. tablespaces_list

Tablespaces, captured in samples.

Table F.20. tablespaces_list Columns

Column Type

Description

server_id integer

Server id

tablespaceid oid

OID of the tablespace

tablespacename name

Name of the tablespace

tablespacepath text

Path of the tablespace


F.19.2.1.18. sample_stat_tablespaces

Sample statistics for tablespaces. The size data is obtained by calling pg_tablespace_size(oid).

Table F.21. sample_stat_tablespaces Columns

Column Type

Description

server_id integer

Server id

sample_id integer

Sample id

tablespaceid oid

OID of tablespace

size integer

Total disk space used in the tablespace, in bytes

size_delta integer

Disk space change since the last sample, in bytes


F.19.2.1.19. tables_list

Table list related to the server and database.

Table F.22. tables_list Columns

Column Type

Description

server_id integer

Server id

datid oid

OID of the database

relid oid

Relid of the table (reference pg_class.oid)

relkind char(1)

Relkind of the table

reltoastrelid oid

OID of the TOAST table associated with this table (reference pg_class.reltoastrelid)

schemaname name

Schemaname of the table

relname name

Name of the table (reference pg_class.relname)


F.19.2.1.20. sample_stat_tables

Sample statistics for tables.

Table F.23. sample_stat_tables Columns

Column Type

Description

server_id integer

Server id

sample_id integer

Sample id

datid oid

OID of the database

relid oid

Relid of the table (reference pg_class.oid)

tablespaceid oid

OID of tablespace (reference sample_stat_tablespaces.tablespaceid)

seq_scan bigint

Delta of sequential scans since last sample

seq_tup_read bigint

Delta of sequential-scan tuples read since last sample

idx_scan bigint

Delta of index scans since last sample

idx_tup_fetch bigint

Delta of index tuple fetch since last sample

n_tup_ins bigint

Delta of inserted tuple count since last sample

n_tup_upd bigint

Delta of updated tuple count since last sample

n_tup_del bigint

Delta of deleted tuple count since last sample

n_tup_hot_upd bigint

Delta of hot updated tuple count since last sample

n_live_tup bigint

Number of live tuples

n_dead_tup bigint

Number of dead tuples

n_mod_since_analyze bigint

Number of tuples modified since last analyze

n_ins_since_vacuum bigint

Number of tuples inserted since last vacuum

last_vacuum timestamp with time zone

Timestamp of last vacuum

last_autovacuum timestamp with time zone

Timestamp of last autovacuum

last_analyze timestamp with time zone

Timestamp of last analyze

last_autoanalyze timestamp with time zone

Timestamp of last autoanalyze

vacuum_count bigint

Delta vacuum count since last sample

autovacuum_count bigint

Delta autovacuum count since last sample

analyze_count bigint

Delta analyze count since last sample

autoanalyze_count bigint

Delta auto analyze count since last sample

heap_blks_read bigint

Delta heap block read since last sample

heap_blks_hit bigint

Delta heap block hit since last sample

idx_blks_read bigint

Delta index blocks read since last sample

toast_blks_read integer

Delta toast blocks read since last sample

toast_blks_hit integer

Delta toast blocks hit since last sample

tidx_blks_read integer

Delta toast index blocks read since last sample

tidx_blks_hit integer

Delta toast index blocks hit since last sample

relsize integer

Table size

relsize_diff integer

Delta table size since last sample


F.19.2.1.21. sample_stat_tables_total

Sample statistics for tables, grouped by server_id, sample_id, datid, relkind and tablespaceid.

Table F.24. sample_stat_tables_total Columns

Column Type

Description

server_id integer

Server id

sample_id integer

Sample id

datid oid

OID of the database

tablespaceid oid

OID of tablespace

relkind char(1)

Relkind of the table

seq_scan bigint

Total seq_scan since last sample

seq_tup_read bigint

Total seq_tup_read since last sample

idx_scan bigint

Total index scan since last sample

idx_tup_fetch bigint

Total index tuple fetched since last sample

n_tup_ins bigint

Total number of tuple inserted since last sample

n_tup_upd bigint

Total number of tuple updated since last sample

n_tup_del bigint

Total number of tuples deleted since last sample

n_tup_hot_upd bigint

Total number of hot tuple updated since last sample

vacuum_count bigint

Total vacuum count since last sample

autovacuum_count bigint

Total auto vacuum count since last sample

analyze_count bigint

Total analyze count since last sample

autoanalyze_count bigint

Total auto analyze count since last sample

heap_blks_read bigint

Total heap blocks read since last sample

heap_blks_hit bigint

Total heap blocks hit since last sample

idx_blks_read bigint

Total index blocks read since last sample

idx_blks_hit bigint

Total index blocks hit since last sample

toast_blks_read bigint

Total toast blocks read since last sample

toast_blks_hit bigint

Total toast blocks hit since last sample

tidx_blks_read bigint

Total toast index blocks read since last sample

tidx_blks_hit bigint

Total toast index blocks hit since last sample

relsize_diff bigint

Delta disk space for the table since last sample


F.19.2.1.22. indexes_list

Sample index list, including index names and schemas, captured in samples.

Table F.25. indexes_list Columns

Column Type

Description

server_id integer

Server id

datid oid

OID of database

indexrelid oid

The OID of the pg_class entry for this index

relid oid

The OID of the pg_class entry for the table this index is for

schemaname name

Schema of the table

indexrelname name

Name of the index


F.19.2.1.23. sample_stat_indexes

Sample statistics for indexes.

Table F.26. sample_stat_indexes Columns

Column Type

Description

server_id integer

Server id

datid oid

OID of database

tablespaceid oid

OID of tablespace

idx_scan bigint

Delta index scans since last sample

idx_tup_read bigint

Delta index tuple read since last sample

idx_tup_fetch bigint

Delta index tuple fetch since last sample

idx_blks_read bigint

Delta index blocks read since last sample

idx_blks_hit bigint

Delta index blocks hit since last sample

relsize bigint

Current disk space for the index

relsize_diff bigint

Delta disk space of the index since last sample

indisunique boolean

If true, this is a unique index


F.19.2.1.24. sample_stat_indexes_total

Sample statistics for indexes, grouped by server_id, sample_id, datid and tablespaceid.

Table F.27. sample_stat_indexes_total Columns

Column Type

Description

server_id integer

Server id

sample_id integer

Sample id

datid oid

OID of the database

tablespaceid oid

OID of tablespace

idx_scan bigint

Total index scan since last sample

idx_tup_read bigint

Total index tuple read since last sample

idx_tup_fetch bigint

Total index tuple fetch since last sample

idx_blks_read bigint

Total index blocks read since last sample

idx_blks_hit bigint

Total index blocks hit since last sample

relsize_diff bigint

Total delta disk space for the index since last sample


F.19.2.1.25. funcs_list

Sample function list.

Table F.28. funcs_list Columns

Column Type

Description

server_id integer

Server id

datid oid

OID of database

funcid oid

OID of the function (reference pg_proc.oid)

schemaname name

Schema name of the function (reference pg_namespace.nspname)

funcname name

Function name (reference pg_proc.proname)

funcargs text

Function arguments


F.19.2.1.26. sample_stat_user_functions

Sample statistics for user functions.

Table F.29. sample_stat_user_functions Columns

Column Type

Description

server_id integer

Server id

sample_id integer

Sample id

datid oid

OID of the database

funcid oid

OID of the function (reference pg_proc.oid)

calls bigint

Delta number of calls since last sample

total_time double precision

Delta total time since last sample

self_time double precision

Delta self time since last sample

trg_fn boolean

True if the function's return type is trigger


F.19.2.1.27. sample_stat_user_func_total

Sample statistics for user functions, grouped by server_id, sample_id, datid and trg_fn.

Table F.30. sample_stat_user_func_total Columns

Column Type

Description

server_id integer

Server id

sample_id integer

Sample id

datid oid

OID of database

calls bigint

Total number of calls since last sample

total_time double precision

Total function time since last sample

trg_fn boolean

True if the function's return type is trigger


F.19.2.1.28. sample_stat_cluster

Sample statistics for the cluster.

Table F.31. sample_stat_cluster Columns

Column Type

Description

server_id integer

Server id

sample_id integer

Sample id

checkpoints_timed bigint

Delta checkpoints timed since last sample

checkpoints_req bigint

Delta requested checkpoints since last sample

checkpoint_write_time double precision

Delta accumulated checkpoint write time since last sample, in milliseconds

checkpoint_sync_time double precision

Delta accumulated checkpoint sync time since last sample, in milliseconds

buffers_checkpoint bigint

Delta buffer written checkpoints since last sample

buffers_clean bigint

Delta number of dirty buffer written since last sample

maxwritten_clean bigint

Delta number of times the background writer stopped a cleaning scan since last sample (incremented once the number of buffers written >= bgwriter_lru_maxpages)

buffers_backend bigint

Delta number of buffer written requested by backend since last sample

buffers_backend_fsync bigint

Delta number of fsync done by backend since last sample

buffers_alloc bigint

Delta number of buffers allocated since last sample

stats_reset timestamp with time zone

Timestamp of the last statistics reset

wal_size bigint

Delta WAL size since last sample (calculated with pg_wal_lsn_diff)


F.19.2.1.29. sample_stat_archiver

Sample statistics for archiver activity, including success and failure counts and timestamps.

Table F.32. sample_stat_archiver Columns

Column Type

Description

server_id integer

Server id

sample_id integer

Sample id

archived_count integer

Count of WAL files successfully archived

last_archived_wal text

Name of the last successfully archived WAL file

last_archived_time timestamp with time zone

Timestamp of the last successful archive operation

failed_count integer

Count of failed attempts to archive WAL files

last_failed_wal text

Name of the WAL file of the last failed archival operation

last_failed_time timestamp with time zone

Timestamp of the last failed archival operation

stats_reset timestamp with time zone

Timestamp of the last reset of archiver statistics


F.19.2.1.30. sample_stat_tables_failures

Sample statistics for tables whose disk-space calculation failed.

Table F.33. sample_stat_tables_failures Columns

Column Type

Description

server_id integer

Server id

sample_id integer

Sample id

datid oid

OID of database

relid oid

OID of the table (reference pg_class.oid)

size_failed boolean

True indicates a failure in calculating the disk space for the table

toastsize_failed boolean

True indicates a failure in calculating the disk space for the toast table


F.19.2.1.31. sample_stat_indexes_failures

Sample statistics for indexes whose disk-space calculation failed.

Table F.34. sample_stat_indexes_failures Columns

Column Type

Description

server_id integer

Server id

sample_id integer

Sample id

datid oid

OID of the database

indexrelid oid

OID of index (reference pg_index.indexrelid)

size_failed boolean

True indicates a failure in calculating the disk space for the index


F.19.2.1.32. lt_stat_backend_type_id_text

This table is used to pair the id and display text of each backend type. The backend type could be autovacuum launcher, autovacuum worker, client backend, background worker, background writer, etc. Once a new backend type is added to LightDB, it has to be added to this table as well. Available since LightDB 23.2 with lt_profile 0.3.7.

Table F.35. lt_stat_backend_type_id_text Columns

Column Type

Description

backend_type integer

Id of the backend type

backend_type_text text

Display text for the backend type


F.19.2.1.33. lt_stat_cost_type_id_text

This table is used to pair the id and display text of each cost type. The cost type can be DB Time, DB Wait or DB CPU. Available since LightDB 23.2 with lt_profile 0.3.7.

Table F.36. lt_stat_cost_type_id_text Columns

Column Type

Description

cost_type integer

Id of the cost type

cost_type_text text

Display text for the cost type


F.19.2.1.34. lt_stat_wait_evt_id_text

This table is used to pair the id and display text of each wait event type. Available since LightDB 23.2 with lt_profile 0.3.7.

Table F.37. lt_stat_wait_evt_id_text Columns

Column Type

Description

evt_id integer

Id of the wait event type

evt_id_text text

Display text for the wait event type


F.19.2.1.35. lt_stat_wait_evt_sub_id_text

This table is used to pair the id and display text of each wait event. Available since LightDB 23.2 with lt_profile 0.3.7.

Table F.38. lt_stat_wait_evt_sub_id_text Columns

Column Type

Description

evt_id integer

Id of the wait event (reference lt_stat_wait_evt_id_text.evt_id)

evt_sub_id integer

Id of the wait event

evt_sub_id_text text

Display text for the wait event


F.19.2.1.36. lt_stat_cost_1s

Sample statistics of accumulated cost time for DB Time and DB Wait. A new sample is inserted into this table every 1 second. Available since LightDB 23.2 with lt_profile 0.3.7.

Table F.39. lt_stat_cost_1s Columns

Column Type

Description

bg_type INT

Id of backend type of this sample

cost_type INT

Id of cost type of this sample

evt_id INT

Id of wait event type of this sample

evt_sub_id INT

Id of wait event of this sample

cost BIGINT

Total spent time captured on this sample for the event, in microseconds


F.19.2.1.37. lt_stat_cost_60s

Sample statistics for DB Wait related cost time, accumulated over 1 minute by default. The archive action is triggered by the collect_activity_profile function call, which is scheduled every minute by default. Available since LightDB 23.2 with lt_profile 0.3.7.

Table F.40. lt_stat_cost_60s Columns

Column Type

Description

bg_type INT

Id of backend type of this sample

cost_type INT

Id of cost type of this sample

evt_id INT

Id of wait event type of this sample

evt_sub_id INT

Id of wait event of this sample

cost BIGINT

Total accumulated cost time captured on this archive cycle for the event, in microseconds


F.19.2.1.38. lt_stat_cost_600s

Sample statistics for DB Wait related cost time, accumulated over 10 minutes by default. The archive action is triggered by the take_sample function call, which is scheduled every 10 minutes by default. Rows in this table are CASCADE deleted when rows are deleted from the samples table. Available since LightDB 23.2 with lt_profile 0.3.7.

Table F.41. lt_stat_cost_600s Columns

Column Type

Description

server_id integer

Server id

sample_id integer

Sample id

bg_type INT

Id of backend type of this sample

cost_type INT

Id of cost type of this sample

evt_id INT

Id of wait event type of this sample

evt_sub_id INT

Id of wait event of this sample

cost BIGINT

Total accumulated cost time captured on this sample for the event, in microseconds
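The id-to-text mapping tables described above can be joined against lt_stat_cost_600s to obtain readable wait statistics. The following sketch assumes the extension schema is named profile; the server and sample ids are illustrative.

```sql
-- Top wait event types by accumulated cost for one sample
SELECT evt.evt_id_text AS wait_event_type,
       sum(c.cost)     AS cost_us
  FROM profile.lt_stat_cost_600s c
  JOIN profile.lt_stat_wait_evt_id_text evt USING (evt_id)
 WHERE c.server_id = 1
   AND c.sample_id = 482
 GROUP BY evt.evt_id_text
 ORDER BY cost_us DESC;
```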


F.19.3. Prerequisites

Although lt_profile is usually installed in the target cluster, it can also collect performance data from other clusters. Hence, there are prerequisites for the lt_profile database, and for servers.

F.19.3.1. lt_profile database prerequisites

The lt_profile extension depends on the plpgsql, dblink and ltfce extensions.

F.19.3.2. Servers prerequisites

Consider setting the following statistics collector parameters:

 track_activities = on
 track_counts = on
 track_io_timing = on
 track_functions = all/pl

To obtain statement statistics in reports, the database mentioned in the server connection string must have the lt_stat_statements extension configured. Set lt_stat_statements parameters to meet your needs (see the LightDB documentation):

  • lt_stat_statements.max - a low setting for this parameter may cause some statement statistics to be wiped out before a sample is taken. The report will warn you if your lt_stat_statements.max seems to be undersized.

  • lt_stat_statements.track = 'top' - the 'all' value will affect the accuracy of %Total fields in statements-related sections of a report.
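A lightdb.conf fragment reflecting the advice above might look like this. The concrete values are illustrative, not recommendations, and the assumption that lt_stat_statements must be preloaded mirrors pg_stat_statements - verify against the LightDB documentation.

```
shared_preload_libraries = 'lt_stat_statements'
lt_stat_statements.max = 10000
lt_stat_statements.track = 'top'
lt_stat_statements.track_planning = on
```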

F.19.4. Creating Extensions

The easiest way is to install everything in the public schema of a database:

 
 lightdb@postgres=# CREATE EXTENSION dblink;
 lightdb@postgres=# CREATE EXTENSION lt_stat_statements;
 lightdb@postgres=# CREATE EXTENSION lt_stat_activity;
 lightdb@postgres=# CREATE EXTENSION system_stats;
 lightdb@postgres=# CREATE SCHEMA IF NOT EXISTS lt_catalog;
 lightdb@postgres=# CREATE EXTENSION ltfce SCHEMA lt_catalog;
 lightdb@postgres=# CREATE EXTENSION lt_profile;

If you want to install lt_profile in another schema, just create it and install the extension in that schema:

 
 lightdb@postgres=# CREATE EXTENSION dblink;
 lightdb@postgres=# CREATE EXTENSION lt_stat_statements;
 lightdb@postgres=# CREATE EXTENSION lt_stat_activity;
 lightdb@postgres=# CREATE EXTENSION system_stats;
 lightdb@postgres=# CREATE SCHEMA IF NOT EXISTS lt_catalog;
 lightdb@postgres=# CREATE EXTENSION ltfce SCHEMA lt_catalog;
 lightdb@postgres=# CREATE SCHEMA profile;
 lightdb@postgres=# CREATE EXTENSION lt_profile SCHEMA profile;

All objects will be created in the schema defined by the SCHEMA clause. Installation in a dedicated schema is the recommended way - the extension will create its own tables, views, sequences and functions, and it is a good idea to keep them separate. If you don't want to specify a schema qualifier when using the module, consider changing the search_path setting.
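For example, an installation in a dedicated profile schema can be made usable without qualifiers by adjusting search_path for the relevant role (the role and schema names below are illustrative):

```sql
-- Put the extension schema on the search path for a role
ALTER ROLE profile_usr SET search_path = profile, public;
```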

F.19.5. Privileges

Using lt_profile with superuser privileges does not cause any issues, but if you want to avoid using superuser permissions, here is the guide:

F.19.5.1. On lt_profile database

Create an unprivileged user:

 create role profile_usr login password 'pwd';

Create a schema for lt_profile installation:

 create schema profile authorization profile_usr;

Grant usage permission on schema, where dblink extension resides:

 grant usage on schema public to profile_usr;

Create the extension using profile_usr account:

lightdb@postgres=> create extension lt_profile schema profile;

F.19.5.2. On server database

Create a user for lt_profile to connect:

create role profile_mon login password 'pwd_mon';

Make sure this user has permission to connect to any database in the cluster (by default, it does), and that lt_hba.conf permits such connections from the lt_profile database host. Also, the user needs the pg_read_all_stats privilege and execute privilege on pg_stat_statements_reset:

grant pg_read_all_stats to profile_mon;
grant execute on function pg_stat_statements_reset TO profile_mon;

F.19.5.3. Server setup at lt_profile database

Non-superusers can't establish a connection without a password. The best way to provide a password is a password file.

N.B. lt_profile will connect to all databases on a server, thus the password file must use a wildcard as the database name.
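For example, assuming the standard libpq password file format (~/.pgpass on the lt_profile host), an entry for the profile_mon user on a hypothetical host db1 could look like this - note the wildcard in the database field:

 db1:5432:*:profile_mon:pwd_mon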

As an insecure alternative, you can provide a password in the connection string:

select set_server_connstr('local','dbname=postgres port=5432 user=profile_mon password=pwd_mon');

F.19.6. Using lt_profile

F.19.6.1. Setting extension parameters

You can define extension parameters in lightdb.conf. Default values:

  • lt_profile.topn = 10 - Number of top objects (statements, relations, etc.) to be reported in each sorted report table. This parameter also affects the size of a sample - the more objects you want to appear in your report, the more objects must be kept in a sample.

  • lt_profile.max_sample_age = 7 - Retention time of samples in days. Samples aged lt_profile.max_sample_age days or more will be automatically deleted on the next take_sample() call.

  • lt_profile.track_sample_timings = on - when this parameter is on, lt_profile will track detailed sample taking timings.
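For example, the defaults above correspond to the following lightdb.conf fragment (shown for illustration; adjust the values to your needs):

 lt_profile.topn = 10
 lt_profile.max_sample_age = 7
 lt_profile.track_sample_timings = on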

F.19.6.2. Managing servers

Once installed, the extension creates one enabled local server - this is for the cluster where the extension is installed.

Server management functions:

  • create_server(server name, server_connstr text, server_enabled boolean = TRUE, max_sample_age integer = NULL, description text = NULL) - creates a new server description. Function arguments:

    • server - server name (must be unique)

    • server_connstr - server connection string

    • server_enabled - server enable flag. When set, the server will be included in the common take_sample() call

    • max_sample_age - server sample retention parameter; overrides the global lt_profile.max_sample_age setting for this server

    • description - server description text. Will be included in reports

  • drop_server(server name) Drops a server and all its samples.

  • enable_server(server name) Includes server in common take_sample() call.

  • disable_server(server name) Excludes server from common take_sample() call.

  • rename_server(server name, new_name name) Renames a server.

  • set_server_max_sample_age(server name, max_sample_age integer) Sets a new retention period (in days) for a server. To reset a server's max_sample_age setting, set it to NULL.

  • set_server_db_exclude(server name, exclude_db name[]) Sets the list of databases to exclude for a server. Used when you are unable to connect to some databases in the cluster (for example, in Amazon RDS instances).

  • set_server_connstr(server name, new_connstr text) Set new connection string for a server.

  • set_server_description(server name, description text) Set new server description.

  • show_servers() Display existing servers.

Server creation example:

SELECT create_server('omega','host=name_or_ip dbname=postgres port=5432');
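A few more management calls, for illustration (the server name omega and the excluded database name are assumptions carried over from the example above):

 SELECT set_server_db_exclude('omega', ARRAY['rdsadmin']::name[]);
 SELECT disable_server('omega');
 SELECT enable_server('omega');
 SELECT * FROM show_servers();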

F.19.6.3. Rare relation sizes collection

LightDB relation size functions may take a considerable amount of time to collect the sizes of all relations in a database. Also, those functions require AccessExclusiveLock on a relation. However, daily relation size collection may be quite enough for you. lt_profile can skip relation size collection while taking samples, guided by a server size collection policy. The policy is defined as a daily window when relation size collection is permitted, and a minimal gap between two samples with relation sizes collected. Thus, when a size collection policy is defined, the sample-taking function will collect relation sizes only when the sample is taken within the window and the previous sample with sizes is older than the gap. The function set_server_size_sampling defines this policy:

  • set_server_size_sampling(server name, window_start time with time zone = NULL, window_duration interval hour to second = NULL, sample_interval interval day to minute = NULL)

    • server - server name

    • window_start - size collection window start time

    • window_duration - size collection window duration

    • sample_interval - minimum time gap between two samples with relation sizes collected

    The size collection policy is defined only when all three parameters are set.

Example:

SELECT set_server_size_sampling('local','23:00+03',interval '2 hour',interval '8 hour');

The function show_servers_size_sampling shows the defined size collection policy for all servers:

lightdb@postgres=# SELECT * FROM show_servers_size_sampling();
 server_name | window_start | window_end  | window_duration | sample_interval
-------------+--------------+-------------+-----------------+-----------------
 local       | 23:00:00+03  | 01:00:00+03 | 02:00:00        | 08:00:00

When you build a report between samples either of which lacks relation size data, relation growth sections will be excluded from the report. However, the with_growth parameter of report generation functions will expand the report bounds to the nearest samples with relation size data collected. Relation sizes are needed for calculating sequentially scanned volume for tables and explicit vacuum load for indexes. When rare relation size collection is used, the data in the corresponding report sections is based on linear interpolation.

F.19.6.4. Samples

Every sample contains statistical information about the database workload since the previous sample.

F.19.6.4.1. Sample functions
  • take_sample() Function take_sample() will collect a sample for all enabled servers. Server samples will be taken serially one by one. Function returns a table:

    server      name,
    result      text,
    elapsed     interval

    Where:

    • server is a server name

    • result is the result of the taken sample. It will be 'OK' if the sample was taken successfully, and will contain the error text in case of an exception

    • elapsed is the time elapsed taking a sample for the server

    Such a return value makes it easy to control sample creation using an SQL query.

  • take_sample_subset([sets_cnt integer], [current_set integer]) Due to serial sample processing, the take_sample() function can take a considerable amount of time. The take_sample_subset() function will take samples for a subset of enabled servers. It is convenient for parallel sample collection. sets_cnt is the number of server subsets. current_set is the subset to process, taking values between 0 and sets_cnt - 1 inclusive. The function returns a table:

    server      name,
    result      text,
    elapsed     interval

    Where:

    • server is a server name

    • result is the result of the taken sample. It will be 'OK' if the sample was taken successfully, and will contain the error text in case of an exception

    • elapsed is the time elapsed taking a sample for the server

  • take_sample(server name [, skip_sizes boolean]) Collects a sample for the specified server. Use it, for example, when you want to use different sampling frequencies, or when you want to take an explicit sample on a server.

    • server - name of a server to take sample

    • skip_sizes - override of the server relation size collection policy. The policy applies only when the skip_sizes argument is omitted or set to null. A false value of the skip_sizes argument causes the sample to be taken with relation sizes, while true causes relation size collection to be skipped during the sample.

  • show_samples([server name,] [days integer]) Returns a table containing existing samples of a server (the local server is assumed if server is omitted) for the last days days (all existing samples if omitted):

    sample            integer,
    sample_time       timestamp (0) with time zone,
    dbstats_reset     timestamp (0) with time zone,
    clustats_reset    timestamp (0) with time zone,
    archstats_reset   timestamp (0) with time zone

    Where:

    • sample is a sample identifier

    • sample_time is a time when this sample was taken

    • dbstats_reset, clustats_reset and archstats_reset are usually null, but will contain the pg_stat_database, pg_stat_bgwriter and pg_stat_archiver statistics reset timestamps if a reset happened since the previous sample

    Sample-collecting functions also maintain the server repository - they will delete obsolete samples and baselines with respect to the retention policy.
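For example, samples for two subsets of enabled servers could be taken in parallel by two cron entries calling take_sample_subset() (a sketch; connection options and error handling are omitted):

 */30 * * * *   ltsql -c 'SELECT take_sample_subset(2,0)' > /dev/null 2>&1
 */30 * * * *   ltsql -c 'SELECT take_sample_subset(2,1)' > /dev/null 2>&1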

F.19.6.4.2. Taking samples

You must create at least 2 samples to be able to build your first report between the 1st and 2nd samples. Samples for all enabled servers are taken by calling the take_sample() function. There is no need for frequent sample creation - the usual sufficient frequency is one or two samples per hour. You can use a cron-like tool to schedule sample creation. Example with a 30-minute period:

*/30 * * * *   ltsql -c 'SELECT take_sample()' > /dev/null 2>&1

However, such a call has no error checking on the take_sample() function results. Consider smarter usage of the take_sample() function, providing its results to your monitoring system.
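For example, a minimal check reporting only failed servers might look like this (a sketch; how you deliver the result to your monitoring system is up to you):

 SELECT server, result
 FROM take_sample()
 WHERE result != 'OK';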

The function will return 'OK' for all servers with successfully taken samples, and will show the error text for failed servers:

select * from take_sample();
  server   |                                   result                                    |   elapsed
-----------+-----------------------------------------------------------------------------+-------------
 ok_node   | OK                                                                          | 00:00:00.48
 fail_node | could not establish connection                                             +| 00:00:00
           | SQL statement "SELECT dblink_connect('server_connection',server_connstr)"  +|
           | PL/pgSQL function take_sample(integer) line 69 at PERFORM                  +|
           | PL/pgSQL function take_sample_subset(integer,integer) line 27 at assignment+|
           | SQL function "take_sample" statement 1                                     +|
           | FATAL:  database "nodb" does not exist                                      |
(2 rows)

F.19.6.4.3. Sample data retention

We can't store sample data forever, thus there is a retention policy. You can define retention at three levels:

  • Setting the parameter lt_profile.max_sample_age in the lightdb.conf file. This is the common retention; it is effective if none of the others is defined.

  • Define server max_sample_age setting while creating a server, or using set_server_max_sample_age() function for existing server. This setting overrides global lt_profile.max_sample_age setting for a specific server.

  • Create a baseline (see below). Baseline will override retention period for included samples with highest priority.
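For example, to keep two weeks of samples for the local server while leaving the global lt_profile.max_sample_age in effect for all other servers:

 SELECT set_server_max_sample_age('local', 14);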

F.19.6.4.4. Listing samples

Use the show_samples() function to get a list of existing samples in the repository. This function will show detected statistics reset times.

F.19.6.4.5. Sample taking timings

lt_profile will collect detailed sample taking timing statistics when parameter lt_profile.track_sample_timings is on. Results can be obtained from v_sample_timings view. v_sample_timings fields description:

  • server_name - sampled server name

  • sample_id - sample identifier

  • sample_time - when the sample was taken

  • event - sample taking stage

  • time_spent - amount of time spent in the event

Event descriptions:

  • total - Taking the sample (all stages).

  • connect - Making dblink connection to the server.

  • get server environment - Getting server GUC parameters, available extensions, etc.

  • collect database stats - Querying the pg_stat_database view for statistics on databases.

  • calculate database stats - Calculating differential statistics on databases since the previous sample.

  • collect tablespace stats - Querying the pg_tablespace view for statistics on tablespaces.

  • collect statement stats - Collecting statistics on statements using the lt_stat_statements extension.

  • query pg_stat_bgwriter - Collecting cluster statistics using the pg_stat_bgwriter view.

  • query pg_stat_archiver - Collecting cluster statistics using the pg_stat_archiver view.

  • collect object stats - Collecting statistics on database objects. Includes the following events:

    • db:dbname collect tables stats - Collecting statistics on tables for the dbname database.

    • db:dbname collect indexes stats - Collecting statistics on indexes for the dbname database.

    • db:dbname collect functions stats - Collecting statistics on functions for the dbname database.

  • maintain repository - Executing support routines.

  • calculate tablespace stats - Calculating differential statistics on tablespaces.

  • calculate object stats - Calculating differential statistics on database objects. Includes the following events:

    • calculate tables stats - Calculating differential statistics on tables of all databases.

    • calculate indexes stats - Calculating differential statistics on indexes of all databases.

    • calculate functions stats - Calculating differential statistics on functions of all databases.

  • calculate cluster stats - Calculating cluster differential statistics.

  • calculate archiver stats - Calculating archiver differential statistics.

  • delete obsolete samples - Deleting obsolete baselines and samples.

F.19.6.5. Baselines

A baseline is a named sample sequence having its own retention setting. A baseline can be used in report-building functions as a sample interval. Undefined baseline retention means infinite retention. You can use baselines to save information about database workload during a certain time period. For example, you may want to save samples gathered during load testing, or during regular load on your system, for further reference. Baseline management functions:

  • create_baseline([server name,] baseline_name varchar(25), start_id integer, end_id integer [, days integer]) - create a baseline

    • server - server name. local server is assumed if omitted

    • name - baseline name. Each baseline must have a unique name within a server.

    • start_id, end_id - first and last samples, included in baseline.

    • days - baseline retention time. Defined in integer days since now(). This parameter may be omitted (or set to null), meaning infinite retention.

  • create_baseline([server name,] baseline_name varchar(25), time_range tstzrange [, days integer]) - create a baseline

    • server - server name. local server is assumed if omitted

    • name - baseline name. Each baseline must have a unique name within a server.

    • time_range - baseline time interval. Baseline will include all available samples, overlapping this interval.

    • days - baseline retention time. Defined in integer days since now(). This parameter may be omitted (or be set to null), meaning infinite retention.

  • drop_baseline([server name,] name varchar(25)) - drop a baseline

    • server - server name. local server is assumed if omitted

    • name - the name of the baseline to drop. Dropping a baseline does not mean immediate dropping of all its samples; they are just excluded from the baseline and thus are no longer covered by the baseline retention.

  • keep_baseline([server name,] name varchar(25) [, days integer]) - change retention of baselines

    • server - server name. local server is assumed if omitted

    • name - baseline name. This parameter may be omitted (or be set to null) to change retention of all existing baselines.

    • days - retention time of a baseline in days since now(). Also, may be omitted (or set to null) to set infinite retention.

  • show_baselines([server name]) - displays existing baselines. Call this function to get information about existing baselines (names, sample intervals, and retention times)

    • server - server name. local server is assumed if omitted

    lightdb@postgres=# SELECT * FROM show_baselines('local');
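For example, to preserve the samples gathered during a hypothetical load test (sample identifiers 120 through 130 are assumptions) with infinite retention, and to drop the baseline later when it is no longer needed:

 SELECT create_baseline('local', 'load_test', 120, 130);
 SELECT drop_baseline('local', 'load_test');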

F.19.6.6. Data export and import

Collected samples can be exported from an instance of the lt_profile extension and then loaded into another one. This feature is useful when you want to move servers from one instance to another, or when you need to send collected data to your support team.

F.19.6.6.1. Data export

Data is exported as a regular table by the export_data() function. You can use any method to export this table from your database. For example, you can use the \copy command of ltsql to obtain a single .csv file:

lightdb@postgres=# \copy (select * from export_data()) to 'export.csv'

By default, the export_data() function will export all samples of all configured servers. However, you can limit the export to only one server, and further limit the sample range too:

  • export_data([server name, [min_sample_id integer,] [max_sample_id integer]] [, obfuscate_queries boolean]) - export collected data

    • server is a server name. All servers are assumed if omitted

    • min_sample_id and max_sample_id - export bounding sample identifiers (inclusive). A null min_sample_id causes export of all samples up to max_sample_id, and a null max_sample_id causes export of all samples since min_sample_id.

    • obfuscate_queries - use this parameter when you want to hide query texts - they will be exported as MD5 hashes.
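For example, to export an obfuscated bounded sample range for a single server (the server name omega and the sample identifiers are assumptions):

 lightdb@postgres=# \copy (select * from export_data('omega', 100, 200, true)) to 'omega.csv'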

F.19.6.6.2. Data import

Data can be imported from a local table only, thus previously exported data needs to be loaded first. In our case, with the \copy command:

lightdb@postgres=# CREATE TABLE import (section_id bigint, row_data json);
CREATE TABLE
lightdb@postgres=# \copy import from 'export.csv'
COPY 6437

Now we can perform data import, providing this table to import_data() function:

lightdb@postgres=# SELECT * FROM import_data('import');

Although server descriptions are also imported, local lt_profile servers with matching names will prohibit import operations. Consider temporarily renaming those servers. If you need to import new data for previously imported servers, they will be matched by system identifier, so feel free to rename an imported server as you wish. All servers are imported in the disabled state. The import_data() function takes only the imported table:

  • import_data(data regclass)

    • data - table containing the exported data

    This function returns the number of rows actually loaded into extension tables. After a successful import operation, you can drop the import table.
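For example, if a local server named omega conflicts with an imported one, a possible workflow is to rename it before importing (a sketch using names from the examples above):

 SELECT rename_server('omega', 'omega_local');
 SELECT * FROM import_data('import');
 DROP TABLE import;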

F.19.6.7. Reports

Reports are generated in HTML markup by reporting functions and contain statistical information about the instance workload during the report interval.

F.19.6.7.1. PWR report functions
  • get_pwr([server name,] start_id integer, end_id integer [, description text [, with_growth boolean]]) - generate PWR report by sample identifiers

  • get_pwr([server name,] time_range tstzrange [, description text [, with_growth boolean]]) - generate PWR report for the shortest sample interval, covering provided time_range.

  • get_pwr([server name,] baseline varchar(25) [, description text [, with_growth boolean]]) - generate PWR report, using baseline as samples interval

  • get_pwr_latest([server name]) - generate a PWR report for the two latest samples

    Function arguments:

    • server - server name. local server is assumed if omitted

    • start_id - interval begin sample identifier

    • end_id - interval end sample identifier

    • time_range - time range (tstzrange type)

    • baseline - a baseline name

    • with_growth - a flag, requesting interval expansion to nearest bounds with relation growth data available. Default value is false

    • description - a text memo, it will be included in report as a report description

F.19.6.7.2. PSH report functions
  • get_psh([server name,] start_id integer, end_id integer [, description text [, with_growth boolean]]) - generate PSH report by sample identifiers

  • get_psh([server name,] time_range tstzrange [, description text [, with_growth boolean]]) - generate PSH report for the shortest sample interval, covering provided time_range.

  • get_psh([server name,] baseline varchar(25) [, description text [, with_growth boolean]]) - generate PSH report, using baseline as samples interval

  • get_psh_latest([server name]) - generate a PSH report for the two latest samples

    Function arguments:

    • server - server name. local server is assumed if omitted

    • start_id - interval begin sample identifier

    • end_id - interval end sample identifier

    • time_range - time range (tstzrange type)

    • baseline - a baseline name

    • with_growth - a flag, requesting interval expansion to nearest bounds with relation growth data available. Default value is false

    • description - a text memo, it will be included in report as a report description

Report generation example:

$ ltsql -Aqtc "SELECT get_pwr(480,482)" -o report_480_482.html

For any other server, use its name:

$ ltsql -Aqtc "SELECT get_pwr('omega',12,14)" -o report_omega_12_14.html

Report generation using time ranges:

ltsql -Aqtc "select get_psh(tstzrange('2020-05-13 11:51:35+03','2020-05-13 11:52:18+03'))" -o report_range.html

Also, time ranges are useful for generating periodic reports. Let's build a report for the last 24 hours:

ltsql -Aqtc "select get_psh(tstzrange(now() - interval '1 day',now()))" -o last24h_report.html

Now you can view the report file using any web browser.
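Time-range reports combine naturally with a scheduler. For example, a cron entry generating a daily PWR report each morning might look like this (the schedule and output path are illustrative):

 0 7 * * *   ltsql -Aqtc "SELECT get_pwr(tstzrange(now() - interval '1 day', now()))" -o /tmp/daily_report.html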

F.19.7. Sections of PWR report

Report tables and their columns are described in this section.

F.19.7.1. Server statistics

F.19.7.1.1. CPU usage statistics

Contains CPU usage statistics such as user CPU, system CPU, idle CPU and I/O wait at the beginning and end of the report interval.

  • %User - user cpu percent

  • %System - system cpu percent

  • %Idle - idle cpu percent

  • %IO Wait - io wait cpu percent

F.19.7.1.2. Database memory usage statistics

Contains memory usage statistics of LightDB at the beginning and end of the report interval.

  • Host memory (MB) - total physical memory of the host

  • Shared memory use (MB) - shared memory used in LightDB, which is used by all processes of a LightDB server

  • Local memory use (MB) - local memory used in LightDB, which is allocated by each backend process for its own use

  • Host memory used for shared memory+local memory (%) - shared memory and local memory as a percentage of total host memory

F.19.7.1.3. Database statistics

Contains per-database statistics during report interval, based on pg_stat_database view.

  • Database - database name

  • Transactions - database transaction statistics

    • Commits - number of committed transactions (xact_commit)

    • Rollbacks - number of rolled back transactions (xact_rollback)

    • Deadlocks - number of deadlocks detected (deadlocks)

  • Block statistics - database blocks read and hit statistics

    • Hit(%) - buffer cache hit ratio

    • Read - number of disk blocks read in this database (blks_read)

    • Hit - number of times disk blocks were found already in the buffer cache (blks_hit)

  • Tuples - tuples statistics section

    • Returned - number of returned tuples (tup_returned)

    • Fetched - number of fetched tuples (tup_fetched)

    • Inserted - inserted tuples count (tup_inserted)

    • Updated - number of updated tuples (tup_updated)

    • Deleted - number of deleted tuples (tup_deleted)

  • Temp files - temporary files statistics

    • Size - total amount of data written to temporary files by queries in this database (temp_bytes)

    • Files - number of temporary files created by queries in this database (temp_files)

  • Size - database size at the end of report interval (pg_database_size())

  • Growth - database growth during report interval (pg_database_size() difference)

F.19.7.1.4. Statement statistics by database

Contains per-database aggregated total statistics of pg_stat_statements data (if lt_stat_statements extension was available during report interval)

  • Database - database name

  • Calls - total count of all statements executions (sum of calls)

  • Time (s) - time spent in seconds

    • Plan - time spent planning (sum of total_plan_time) - available since lt_stat_statements 1.8

    • Exec - time spent executing (sum of total_time or total_exec_time)

    • Read - time spent reading blocks (sum of blk_read_time)

    • Write - time spent writing blocks (sum of blk_write_time)

    • Trigger - time spent executing trigger functions

  • Fetched blocks - total blocks fetched from disk and buffer cache

    • Shared - total fetched shared blocks count (sum of shared_blks_read + shared_blks_hit)

    • Local - total fetched local blocks count (sum of local_blks_read + local_blks_hit)

  • Dirtied blocks - total blocks dirtied in database

    • Shared - total number of shared blocks dirtied in the database (sum of shared_blks_dirtied)

    • Local - total number of local blocks dirtied in the database (sum of local_blks_dirtied)

  • Temp blocks - blocks used for operations (like joins and sorts)

    • Read - blocks read (sum of temp_blks_read)

    • Write - blocks written (sum of temp_blks_written)

  • Local blocks - blocks used for temporary tables

    • Read - blocks read (sum of local_blks_read)

    • Write - blocks written (sum of local_blks_written)

  • Statements - total count of captured statements

  • WAL size - total amount of WAL generated by statements (sum of wal_bytes)

F.19.7.1.5. Cluster statistics

This table contains data from pg_stat_bgwriter view

  • Scheduled checkpoints - total number of checkpoints, completed on schedule due to checkpoint_timeout parameter (checkpoints_timed field)

  • Requested checkpoints - total number of other checkpoints: due to values of max_wal_size, archive_timeout and CHECKPOINT commands (checkpoints_req field)

  • Checkpoint write time (s) - total time spent writing checkpoints in seconds (checkpoint_write_time field)

  • Checkpoint sync time (s) - total time spent syncing checkpoints in seconds (checkpoint_sync_time field)

  • Checkpoints buffers written - total number of buffers, written by checkpointer (buffers_checkpoint field)

  • Background buffers written - total number of buffers, written by background writer process (buffers_clean field)

  • Backend buffers written - total number of buffers, written by backends (buffers_backend field)

  • Backend fsync count - total number of backend fsync calls (buffers_backend_fsync field)

  • Bgwriter interrupts (too many buffers) - total count of background writer interrupts due to reaching value of the bgwriter_lru_maxpages parameter.

  • Number of buffers allocated - total count of buffers allocated (buffers_alloc field)

  • WAL generated - total amount of WAL generated (based on pg_current_wal_lsn())

  • WAL segments archived - archived WAL segments count (based on archived_count of pg_stat_archiver view)

  • WAL segments archive failed - WAL segment archive failures count (based on failed_count of pg_stat_archiver view)

F.19.7.1.6. Tablespace statistics

This table contains information about tablespaces sizes and growth:

  • Tablespace - tablespace name

  • Path - tablespace path

  • Size - tablespace size as it was at time of last sample in report interval

  • Growth - tablespace growth during report interval

F.19.7.2. Wait event statistics

This report section contains tables of top wait events statistics during report interval sorted by wait time.

F.19.7.2.1. Main wait event type categories

  • IO - Various file read/write activities during execution of SQL statements.

  • LW Lock - Waiting for a lightweight (LW) lock on a variable or for LW lock acquisition.

F.19.7.2.2. Main wait event categories

  • WALSync - Fsync XLOG file.

  • WALWrite - Write Pages to XLOG file.

  • LockManager - Used to lock shared hash tables to reduce contention.

  • WALInsert - LW lock, Acquire a WAL insertion lock, for inserting to WAL.

  • DataFileExtend - Add a block to the specified relation.

  • DataFileRead - Read the specified block from a relation.

  • DataFileTruncate - Truncate relation to specified number of blocks.

  • BufFileRead - Load some data into buffer.

  • ProcArray - LW lock for memory access across multi-process, such as ProcGlobal, ShmemVariableCache, MyPgXact, procArray.

  • RelationMapRead - Load data from the shared or local map file.

  • SLRUSync - Physical write of a page from a buffer slot.

  • SLRUWrite - Physical flush of a page from a buffer slot after writing.

  • RegisterSyncRequest - Registering a sync request with the checkpointer.

  • XidGen - LW lock, Set and return the next transaction ID.

  • XactSLRU - LW lock, for xlog, clog related memory data.

F.19.7.2.3. Top wait event type by wait time

This table contains top lt_profile.topn wait event types sorted by wait time.

  • Wait Event Type - Type of wait event, each type may contain several wait events.

  • Wait Time(s) - Total wait time for all wait events of this type.

  • %Total Wait Time - Total wait time of this type as a percentage of total wait time for all types of events in a cluster.

F.19.7.2.4. Top wait event by wait time

This table contains top lt_profile.topn wait events sorted by wait time.

  • Wait Event - The specific event the process is waiting for.

  • Wait Event Type - Type of this event.

  • Wait Time(s) - Total wait time for this event.

  • %Total Wait Time - Total wait time for this event as a percentage of the total wait time for all events in the cluster.

F.19.7.3. SQL Query statistics

This report section contains tables of top statements during report interval sorted by several important statistics. Data is captured from pg_stat_statements view if it was available at the time of samples.

F.19.7.3.1. Top SQL by elapsed time

This table contains top lt_profile.topn statements sorted by elapsed time total_plan_time + total_exec_time of pg_stat_statements view.

  • Query ID - Query identifier as a hash of database, user and query text. Compatible with the ltcenter utility. The native pg_stat_statements field queryid in hexadecimal notation is shown in square brackets.

  • SQL Text - First 100 characters of the complete query.

  • Database - Statement database name (derived from dbid field)

  • %Total - total time of this statement as a percentage of total time of all statements in a cluster

  • Time (s) - time spent in this statement (in seconds)

    • Elapsed - total time, spent in this statement (total_plan_time + total_exec_time)

    • Plan - time spent in planning this statement (total_plan_time field)

    • Exec - time spent executing this query (total_exec_time field)

  • I/O time (s):

    • Read - time spent reading blocks (blk_read_time field)

    • Write - time spent writing blocks (blk_write_time field)

  • Plans - number of times the statement was planned (plans field)

  • Executions - number of times the statement was executed (calls field)

F.19.7.3.2. Top SQL by planning time

Top lt_profile.topn statements sorted by total_plan_time field of pg_stat_statements view

  • Query ID - Query identifier as a hash of database, user and query text. Compatible with the ltcenter utility. The native pg_stat_statements field queryid in hexadecimal notation is shown in square brackets.

  • SQL Text - First 100 characters of the complete query.

  • Database - Statement database name (derived from dbid field)

  • Plan elapsed (s) - time spent in planning this statement (total_plan_time field)

  • %Elapsed - plan time of this statement as a percentage of statement elapsed time

  • Plan times (ms) - detailed plan time statistics of this statement (in milliseconds)

    • Mean - mean time spent planning this statement (mean_plan_time field)

    • Min - minimum time spent planning this statement (min_plan_time field)

    • Max - maximum time spent planning this statement (max_plan_time field)

    • StdErr - population standard deviation of time spent planning this statement (stddev_plan_time field)

  • Plans - number of times this statement was planned (plans field)

  • Executions - number of times this statement was executed (calls field)

F.19.7.3.3. Top SQL by execution time

Top lt_profile.topn statements sorted by the total_time (or total_exec_time) field of the pg_stat_statements view.

  • Query ID - Query identifier as a hash of database, user and query text. Compatible with the ltcenter utility. The native pg_stat_statements queryid field is shown in hexadecimal notation in square brackets.

  • SQL Text - First 100 characters of the complete query.

  • Database - Statement database name (derived from dbid field)

  • Exec (s) - time spent executing this statement (total_exec_time field)

  • %Elapsed - execution time of this statement as a percentage of statement elapsed time

  • %Total - exec time of this statement as a percentage of total elapsed time of all statements in a cluster

  • I/O time (s):

    • Read - time spent reading blocks (blk_read_time field)

    • Write - time spent writing blocks (blk_write_time field)

  • Rows - number of rows retrieved or affected by the statement (rows field)

  • Execution times (ms) - detailed execution time statistics of this statement (in milliseconds)

    • Mean - mean time spent executing this statement (mean_exec_time field)

    • Min - minimum time spent executing this statement (min_exec_time field)

    • Max - maximum time spent executing this statement (max_exec_time field)

    • StdErr - population standard deviation of time spent executing this statement (stddev_exec_time field)

  • Executions - number of times this statement was executed (calls field)

F.19.7.3.4. Top SQL by executions

Top lt_profile.topn statements sorted by the calls field of the pg_stat_statements view.

  • Query ID - Query identifier as a hash of database, user and query text. Compatible with the ltcenter utility. The native pg_stat_statements queryid field is shown in hexadecimal notation in square brackets.

  • SQL Text - First 100 characters of the complete query.

  • Database - Statement database name (derived from dbid field)

  • Executions - count of statement executions (calls field)

  • %Total - calls of this statement as a percentage of total calls of all statements in a cluster

  • Rows - number of rows retrieved or affected by the statement (rows field)

  • Mean(ms) - mean time spent in the statement, in milliseconds (mean_time or mean_exec_time field)

  • Min(ms) - minimum time spent in the statement, in milliseconds (min_time or min_exec_time field)

  • Max(ms) - maximum time spent in the statement, in milliseconds (max_time or max_exec_time field)

  • StdErr(ms) - population standard deviation of time spent in the statement, in milliseconds (stddev_time or stddev_exec_time field)

  • Elapsed(s) - amount of time spent executing this query, in seconds (total_time or total_exec_time field)

F.19.7.3.5. Top SQL by I/O wait time

Top lt_profile.topn statements sorted by read and write time (blk_read_time + blk_write_time).

  • Query ID - Query identifier as a hash of database, user and query text. Compatible with the ltcenter utility. The native pg_stat_statements queryid field is shown in hexadecimal notation in square brackets.

  • SQL Text - First 100 characters of the complete query.

  • Database - Statement database name (derived from dbid field)

  • IO(s) - amount of time spent on reading and writing (I/O time) by this statement in seconds (blk_read_time + blk_write_time)

  • Read(s) - amount of time spent on reading by this statement in seconds (blk_read_time)

  • Write(s) - amount of time spent on writing by this statement in seconds (blk_write_time)

  • %Total - I/O time of this statement as a percentage of total I/O time for all statements in a cluster

  • Reads - number of blocks read by this statement, divided into three sub-columns:

    • Shared - shared reads (shared_blks_read field)

    • Local - local reads (local_blks_read field)

    • Temp - temp reads (temp_blks_read field)

  • Writes - number of blocks written by this statement, divided into three sub-columns:

    • Shared - shared writes (shared_blks_written field)

    • Local - local writes (local_blks_written field)

    • Temp - temp writes (temp_blks_written field)

  • Elapsed(s) - amount of time spent executing this query, in seconds (total_time or total_exec_time field)

  • Executions - number of executions for this statement (calls field)

F.19.7.3.6. Top SQL by shared blocks fetched

Top lt_profile.topn statements sorted by read and hit blocks, helping to detect the statements that process the most data.

  • Query ID - Query identifier as a hash of database, user and query text. Compatible with the ltcenter utility. The native pg_stat_statements queryid field is shown in hexadecimal notation in square brackets.

  • SQL Text - First 100 characters of the complete query.

  • Database - Statement database name (derived from dbid field)

  • Fetched - number of fetched blocks (expression: shared_blks_hit + shared_blks_read)

  • %Total - blocks fetched for this statement as a percentage of total blocks fetched for all statements in a cluster

  • Hits(%) - percentage of blocks obtained from shared buffers out of all blocks fetched

  • Elapsed(s) - amount of time spent in this statement, in seconds (total_time or total_exec_time+total_plan_time field)

  • Rows - number of rows retrieved or affected by the statement (rows field)

  • Executions - number of executions for this statement (calls field)

F.19.7.3.7. Top SQL by shared blocks read

Top lt_profile.topn statements sorted by shared reads, helping to detect the most read-intensive statements.

  • Query ID - Query identifier as a hash of database, user and query text. Compatible with the ltcenter utility. The native pg_stat_statements queryid field is shown in hexadecimal notation in square brackets.

  • SQL Text - First 100 characters of the complete query.

  • Database - Statement database name (derived from dbid field)

  • Reads - number of shared read blocks for this statement (shared_blks_read field)

  • %Total - shared reads for this statement as a percentage of total shared reads for all statements in a cluster

  • Hits(%) - percentage of blocks obtained from shared buffers out of all blocks fetched

  • Elapsed(s) - amount of time spent in this statement, in seconds (total_time or total_exec_time+total_plan_time field)

  • Rows - number of rows retrieved or affected by the statement (rows field)

  • Executions - number of executions for this statement (calls field)

F.19.7.3.8. Top SQL by shared blocks dirtied

Top lt_profile.topn statements sorted by the number of shared buffers dirtied, helping to detect the statements changing the most data.

  • Query ID - Query identifier as a hash of database, user and query text. Compatible with the ltcenter utility. The native pg_stat_statements queryid field is shown in hexadecimal notation in square brackets.

  • SQL Text - First 100 characters of the complete query.

  • Database - Statement database name (derived from dbid field)

  • Dirtied - number of shared blocks dirtied by this statement (shared_blks_dirtied field)

  • %Total - shared blocks dirtied by this statement as a percentage of total shared blocks dirtied by all statements in a cluster

  • Hits(%) - percentage of blocks obtained from shared buffers out of all blocks fetched

  • Elapsed(s) - amount of time spent in this statement, in seconds (total_time or total_exec_time+total_plan_time field)

  • Rows - number of rows retrieved or affected by the statement (rows field)

  • Executions - number of executions for this statement (calls field)

F.19.7.3.9. Top SQL by shared blocks written

Top lt_profile.topn statements that had to perform writes, sorted by the number of blocks written.

  • Query ID - Query identifier as a hash of database, user and query text. Compatible with the ltcenter utility. The native pg_stat_statements queryid field is shown in hexadecimal notation in square brackets.

  • SQL Text - First 100 characters of the complete query.

  • Database - Statement database name (derived from dbid field)

  • Written - number of blocks written by this statement (shared_blks_written field)

  • %Total - number of blocks written by this statement as a percentage of total blocks written by all statements in a cluster

  • %BackendW - number of blocks written by this statement as a percentage of all blocks written in a cluster by backends (buffers_backend field of pg_stat_bgwriter view)

  • Hits(%) - percentage of blocks obtained from shared buffers out of all blocks fetched

  • Elapsed(s) - amount of time spent in this statement, in seconds (total_time or total_exec_time+total_plan_time field)

  • Rows - number of rows retrieved or affected by the statement (rows field)

  • Executions - number of executions for this statement (calls field)

F.19.7.3.10. Top SQL by WAL size

Top lt_profile.topn statements sorted by the amount of WAL generated (available since lt_stat_statements v1.8).

  • Query ID - Query identifier as a hash of database, user and query text. Compatible with the ltcenter utility. The native pg_stat_statements queryid field is shown in hexadecimal notation in square brackets.

  • SQL Text - First 100 characters of the complete query.

  • Database - Statement database name (derived from dbid field)

  • WAL - amount of WAL generated by the statement (wal_bytes field)

  • %Total - amount of WAL generated by the statement as a percentage of total WAL generated in cluster (pg_current_wal_lsn() increment)

  • Dirtied - number of shared blocks dirtied by this statement (shared_blks_dirtied field)

  • WAL full page images - total number of WAL full page images generated by the statement (wal_fpi field)

  • WAL records - total number of WAL records generated by the statement (wal_records field)

F.19.7.3.11. Top SQL by temp usage

Top lt_profile.topn statements sorted by temp I/O, calculated as the sum of the temp_blks_read, temp_blks_written, local_blks_read and local_blks_written fields.

  • Query ID - Query identifier as a hash of database, user and query text. Compatible with the ltcenter utility. The native pg_stat_statements queryid field is shown in hexadecimal notation in square brackets.

  • SQL Text - First 100 characters of the complete query.

  • Database - Statement database name (derived from dbid field)

  • Local fetched - number of local blocks fetched (expression: local_blks_hit + local_blks_read)

  • Hits(%) - percentage of local blocks obtained from temp buffers out of all local blocks fetched

  • Local blocks - I/O statistics of blocks used in temporary tables

    • Write - number of written local blocks (local_blks_written)

    • %Total - local_blks_written of this statement as a percentage of total local_blks_written for all statements in a cluster

    • Read - number of read local blocks (local_blks_read)

    • %Total - local_blks_read of this statement as a percentage of total local_blks_read for all statements in a cluster

  • Temp blocks - I/O statistics of blocks used in operations (like sorts and joins)

    • Write - number of written temp blocks (temp_blks_written)

    • %Total - temp_blks_written of this statement as a percentage of total temp_blks_written for all statements in a cluster

    • Read - number of temp blocks read (temp_blks_read)

    • %Total - temp_blks_read of this statement as a percentage of total temp_blks_read for all statements in a cluster

  • Elapsed(s) - amount of time spent in this statement, in seconds (total_time or total_exec_time+total_plan_time field)

  • Rows - number of rows retrieved or affected by the statement (rows field)

  • Executions - number of executions for this statement (calls field)

F.19.7.3.12. Complete list of SQL texts

Query texts of all statements mentioned in the report. You can use the Query ID link in any statistics table to get here and see the query text.

F.19.7.4. Schema object statistics

This section of the report contains the top database objects, using statistics from the Statistics Collector views.

F.19.7.4.1. Top tables by estimated sequentially scanned volume

Top database tables sorted by the estimated volume read by sequential scans. Based on the pg_stat_all_tables view. Here you can look for tables that are possibly lacking an index.

  • Database - database name of the table

  • Tablespace - tablespace name, where the table is located

  • Schema - schema name of the table

  • Table - table name

  • ~SeqBytes - estimated volume read by sequential scans. Calculated as the sum of the relation size multiplied by seq_scan over all samples of the report.

  • SeqScan - number of sequential scans performed on the table (seq_scan field)

  • IndexScan - number of index scans initiated on this table (idx_scan field)

  • IndexFetch - number of live rows fetched by index scans (idx_tup_fetch field)

  • Inserted - number of rows inserted (n_tup_ins field)

  • Updated - number of rows updated (including HOT) (n_tup_upd field)

  • Deleted - number of rows deleted (n_tup_del field)

  • HOT Updated - number of rows HOT updated (n_tup_hot_upd field)

F.19.7.4.2. Top tables by blocks fetched

A fetched block is a block processed either from disk (read) or from shared buffers (hit). Tables in this list are sorted by the sum of blocks fetched for the table relation, its indexes, the table's TOAST relation (if any), and the TOAST index (if any). This section can focus your attention on tables with excessive block processing. Based on data of the pg_statio_all_tables view.

  • Database - database name of the table

  • Tablespace - tablespace name, where the table is located

  • Schema - schema name of the table

  • Table - table name

  • Heap - statistics for relation blocks fetched (heap_blks_read + heap_blks_hit)

  • Index - statistics for all relation indexes blocks fetched (idx_blks_read + idx_blks_hit)

  • TOAST - statistics for TOAST-table blocks fetched (toast_blks_read + toast_blks_hit)

  • TOAST-Index - statistics for TOAST index blocks fetched (tidx_blks_read + tidx_blks_hit)

Each statistic field in this table is divided into two columns:

  • Blocks - number of blocks fetched for relation heap, index, TOAST or TOAST index

  • %Total - blocks fetched for relation heap, index, TOAST or TOAST index as a percentage of all blocks fetched in a whole cluster

F.19.7.4.3. Top tables by blocks read

Top tables sorted by block reads. Tables in this list are sorted by the sum of block reads for the table, its indexes, the table's TOAST relation (if any), and the TOAST index (if any). This section can focus your attention on tables with excessive block reading. Based on data of the pg_statio_all_tables view.

  • Database - database name of the table

  • Tablespace - tablespace name, where the table is located

  • Schema - schema name of the table

  • Table - table name

  • Heap - statistics for relation block reads (heap_blks_read)

  • Index - statistics for all relation indexes blocks reads (idx_blks_read)

  • TOAST - statistics for TOAST-table block reads (toast_blks_read)

  • TOAST-Index - statistics for TOAST index block reads (tidx_blks_read)

Each read statistic in this table is divided into two columns:

  • Blocks - number of block reads for relation heap, index, TOAST or TOAST index

  • %Total - block reads for relation heap, index, TOAST or TOAST index as a percentage of all block reads in a whole cluster

F.19.7.4.4. Top DML tables

Top tables sorted by the amount of DML-affected rows, i.e. the sum of n_tup_ins, n_tup_upd and n_tup_del (including TOAST tables).

  • Database - database name of the table

  • Tablespace - tablespace name, where the table is located

  • Schema - schema name of the table

  • Table - table name

  • Inserted - number of rows inserted (n_tup_ins field)

  • Updated - number of rows updated (including HOT) (n_tup_upd field)

  • Deleted - number of rows deleted (n_tup_del field)

  • HOT Updated - number of rows HOT updated (n_tup_hot_upd field)

  • SeqScan - number of sequential scans performed on the table (seq_scan field)

  • SeqFetch - number of live rows fetched by sequential scans (seq_tup_read field)

  • IndexScan - number of index scans initiated on this table (idx_scan field)

  • IndexFetch - number of live rows fetched by index scans (idx_tup_fetch field)

F.19.7.4.5. Top tables by updated/deleted tuples

Top tables sorted by the amount of operations causing autovacuum load, i.e. the sum of n_tup_upd and n_tup_del (including TOAST tables). Consider fine-tuning vacuum-related parameters based on the provided vacuum and analyze run statistics.

  • Database - database name of the table

  • Tablespace - tablespace name, where the table is located

  • Schema - schema name of the table

  • Table - table name

  • Updated - number of rows updated (including HOT) (n_tup_upd field)

  • HOT Updated - number of rows HOT updated (n_tup_hot_upd field)

  • Deleted - number of rows deleted (n_tup_del field)

  • Vacuum - number of times this table has been manually vacuumed (not counting VACUUM FULL) (vacuum_count field)

  • AutoVacuum - number of times this table has been vacuumed by the autovacuum daemon (autovacuum_count field)

  • Analyze - number of times this table has been manually analyzed (analyze_count field)

  • AutoAnalyze - number of times this table has been analyzed by the autovacuum daemon (autoanalyze_count field)

F.19.7.4.6. Top growing tables

Top tables sorted by growth.

  • Database - database name of the table

  • Tablespace - tablespace name, where the table is located

  • Schema - schema name of the table

  • Table - table name

  • Size - table size, as it was at the moment of last sample in report interval

  • Growth - table growth

  • Inserted - number of rows inserted (n_tup_ins field)

  • Updated - number of rows updated (including HOT) (n_tup_upd field)

  • Deleted - number of rows deleted (n_tup_del field)

  • HOT Updated - number of rows HOT updated (n_tup_hot_upd field)

F.19.7.4.7. Top indexes by blocks fetched

A fetched block is a block processed either from disk (read) or from shared buffers (hit). Based on data of the pg_statio_all_indexes view.

  • Database - database name of the index

  • Tablespace - tablespace name, where the index is located

  • Schema - schema name of the index

  • Table - table name

  • Index - index name

  • Scans - number of scans performed on this index (idx_scan field)

  • Blocks - blocks fetched from this index (idx_blks_read + idx_blks_hit)

  • %Total - blocks fetched for this index as a percentage of all blocks fetched in a whole cluster

F.19.7.4.8. Top indexes by blocks read

Top indexes sorted by block reads. Based on data of the pg_statio_all_indexes view.

  • Database - database name of the index

  • Tablespace - tablespace name, where the index is located

  • Schema - schema name of the index

  • Table - table name

  • Index - index name

  • Scans - number of scans performed on this index (idx_scan field)

  • Block Reads - number of disk blocks read from this index (idx_blks_read)

  • %Total - block reads from this index as a percentage of all block reads in a whole cluster

  • Hits(%) - percentage of index blocks obtained from the buffer cache out of all index blocks fetched for this index

F.19.7.4.9. Top growing indexes

Top indexes sorted by growth.

  • Database - database name of the index

  • Tablespace - tablespace name, where the index is located

  • Schema - schema name of the index

  • Table - table name

  • Index - index name

  • Index - index statistics

    • Size - index size, as it was at the moment of last sample in report interval

    • Growth - index growth during report interval

  • Table - underlying table statistics

    • Inserted - number of rows inserted into underlying table (n_tup_ins field)

    • Updated - number of rows updated in underlying table (without HOT) (n_tup_upd - n_tup_hot_upd)

    • Deleted - number of rows deleted from underlying table (n_tup_del field)

F.19.7.4.10. Unused indexes

Indexes that were not scanned during the report interval, sorted by DML operations on the underlying tables that cause index maintenance. Constraint indexes are excluded.

  • Database - database name of the index

  • Tablespace - tablespace name, where the index is located

  • Schema - schema name of the index

  • Table - table name

  • Index - index name

  • Index - index statistics

    • Size - index size, as it was at the moment of last sample in report interval

    • Growth - index growth during report interval

  • Table - underlying table statistics

    • Inserted - number of rows inserted into underlying table (n_tup_ins field)

    • Updated - number of rows updated in underlying table (without HOT) (n_tup_upd - n_tup_hot_upd)

    • Deleted - number of rows deleted from underlying table (n_tup_del field)

F.19.7.5. User function statistics

This report section contains the top functions in the cluster, based on the pg_stat_user_functions view.

F.19.7.5.1. Top functions by total time

Top functions sorted by time elapsed.

  • Database - database name of the function

  • Schema - schema name of the function

  • Function - function name

  • Executions - number of times this function has been called (calls field)

  • Time (s) - function timing statistics in seconds

    • Total - total time spent in this function and all other functions called by it (total_time field)

    • Self - total time spent in this function itself, not including other functions called by it (self_time field)

    • Mean - mean time of single function execution

    • Mean self - mean self time of single function execution

F.19.7.5.2. Top functions by executions

Top functions sorted by executions count.

  • Database - database name of the function

  • Schema - schema name of the function

  • Function - function name

  • Executions - number of times this function has been called (calls field)

  • Time (s) - function timing statistics in seconds

    • Total - total time spent in this function and all other functions called by it (total_time field)

    • Self - total time spent in this function itself, not including other functions called by it (self_time field)

    • Mean - mean time of single function execution

    • Mean self - mean self time of single function execution

F.19.7.5.3. Top trigger functions by total time

Top trigger functions sorted by time elapsed.

  • Database - database name of the function

  • Schema - schema name of the function

  • Function - function name

  • Executions - number of times this function has been called (calls field)

  • Time (s) - function timing statistics in seconds

    • Total - total time spent in this function and all other functions called by it (total_time field)

    • Self - total time spent in this function itself, not including other functions called by it (self_time field)

    • Mean - mean time of single function execution

    • Mean self - mean self time of single function execution

F.19.7.6. Vacuum-related statistics

F.19.7.6.1. Top tables by vacuum operations

Top tables sorted by the number of vacuum operations (manual and automatic) performed.

  • Database - database name of the table

  • Tablespace - tablespace name, where the table is located

  • Schema - schema name of the table

  • Table - table name

  • Vacuum count - number of times this table has been manually vacuumed (not counting VACUUM FULL) (vacuum_count field)

  • Autovacuum count - number of times this table has been vacuumed by the autovacuum daemon (autovacuum_count field)

  • Inserted - number of rows inserted (n_tup_ins field)

  • Updated - number of rows updated (including HOT) (n_tup_upd field)

  • Deleted - number of rows deleted (n_tup_del field)

  • HOT Updated - number of rows HOT updated (n_tup_hot_upd field)

F.19.7.6.2. Top tables by analyze operations

Top tables sorted by the number of analyze runs (manual and automatic).

  • Database - database name of the table

  • Tablespace - tablespace name, where the table is located

  • Schema - schema name of the table

  • Table - table name

  • Analyze count - number of times this table has been manually analyzed (analyze_count field)

  • Autoanalyze count - number of times this table has been analyzed by the autovacuum daemon (autoanalyze_count field)

  • Inserted - number of rows inserted (n_tup_ins field)

  • Updated - number of rows updated (including HOT) (n_tup_upd field)

  • Deleted - number of rows deleted (n_tup_del field)

  • HOT Updated - number of rows HOT updated (n_tup_hot_upd field)

F.19.7.6.3. Top indexes by estimated vacuum I/O load

This table provides an estimation of the implicit vacuum load caused by table indexes. It lists the top indexes sorted by the number of vacuums performed on the underlying table multiplied by the index size.

  • Database - database name of the index

  • Tablespace - tablespace name, where the index is located

  • Schema - schema name of the index

  • Table - table name

  • Index - index name

  • ~Vacuum bytes - vacuum load estimation calculated as (vacuum_count + autovacuum_count) * index_size

  • Vacuum count - number of times this table has been manually vacuumed (not counting VACUUM FULL) (vacuum_count field)

  • Autovacuum count - number of times this table has been vacuumed by the autovacuum daemon (autovacuum_count field)

  • Index size - average index size during report interval

  • Relsize - average relation size during report interval

F.19.7.6.4. Top tables by dead tuples ratio

This section contains modified tables along with their last vacuum run. Statistics are valid for the last sample of the report interval. Based on the pg_stat_all_tables view.

Top tables sized 5 MB and more, sorted by dead tuples ratio.

  • Database - database name of the table

  • Schema - schema name of the table

  • Table - table name

  • Live - estimated number of live rows (n_live_tup)

  • Dead - estimated number of dead rows (n_dead_tup)

  • %Dead - dead rows of the table as a percentage of all rows in the table

  • Last AutoVacuum - last time when this table was vacuumed by the autovacuum daemon (last_autovacuum)

  • Size - table size, as it was at the moment of last report sample.

F.19.7.6.5. Top tables by modified tuples ratio

This section contains modified tables along with their last analyze run. Statistics are valid for the last sample of the report interval. Based on the pg_stat_all_tables view.

Top tables sized 5 MB and more, sorted by modified tuples ratio.

  • Database - database name of the table

  • Schema - schema name of the table

  • Table - table name

  • Live - estimated number of live rows (n_live_tup)

  • Dead - estimated number of dead rows (n_dead_tup)

  • Mod - estimated number of rows modified since this table was last analyzed (n_mod_since_analyze)

  • %Mod - modified rows of the table as a percentage of all rows in the table

  • Last AutoAnalyze - last time when this table was analyzed by the autovacuum daemon

  • Size - table size, as it was at the moment of last report sample.

F.19.7.7. Cluster settings during the report interval

This section of the report contains LightDB GUC parameters, the values of the version(), pg_postmaster_start_time() and pg_conf_load_time() functions, and the system_identifier field of the pg_control_system() function during the report interval.

  • Setting - name of a parameter

  • reset_val - the reset_val field of the pg_settings view. Bold font marks settings that changed during the report interval.

  • Unit - setting unit

  • Source - configuration file where this setting was defined, followed by the line number after the semicolon.

  • Notes - timestamp of the sample at which this value was first observed during the report interval.

F.19.7.8. Kernel settings during the report interval

This section of the report contains several important Linux kernel parameters.

  • Setting - name of a parameter

  • Value - value of a parameter.

  • Source - configuration file, where this setting was defined.

F.19.7.9. What you need to remember...

  1. LightDB collects execution statistics after execution is complete. If a single execution of a statement spans several samples, it will affect the statistics of only the last sample (the one taken after it completed). You can't get statistics on still-running statements. Also, maintenance processes such as vacuum and checkpointer update statistics only on completion.

  2. Resetting any LightDB statistics may affect the accuracy of the next sample.

  3. Exclusive locks on relations conflict with calculating relation sizes. A sample won't collect the sizes of relations on which any session holds AccessExclusiveLock. However, a session can acquire AccessExclusiveLock on a relation while a sample is being taken. To work around this problem, lock_timeout is set to 3s: if the take_sample() function is unable to acquire a lock within 3 seconds, it will fail and no sample will be generated.

F.19.8. Sections of PSH report

Report tables and their columns are described in this section.

F.19.8.1. Top Wait Events

F.19.8.1.1. Top User Events

This table contains the top lt_profile.topn user wait events sorted by wait time.

  • Wait Event - The specific event the process is waiting for.

  • Wait Event Type - Type of this event.

  • Wait Time(s) - Total wait time for this event.

  • %Total Wait Time - Total wait time for this event as a percentage of the total wait time for all event types in the cluster.

F.19.8.1.2. Top Background Events

This table contains the top lt_profile.topn background wait events sorted by wait time.

  • Wait Event - The specific event the process is waiting for.

  • Wait Event Type - Type of this event.

  • Wait Time(s) - Total wait time for this event.

  • %Total Wait Time - Total wait time for this event as a percentage of the total wait time for all event types in the cluster.

F.19.8.2. Top SQL

F.19.8.2.1. Top SQL with Top Events

Top wait events sorted by wait time, for the top statements sorted by the total_exec_time field of the pg_stat_statements view.

  • Query ID - Query identifier as a hash of database, user and query text. Compatible with the ltcenter utility. The native pg_stat_statements queryid field is shown in hexadecimal notation in square brackets.

  • SQL Text - First 100 characters of the complete query.

  • Exec (s) - time spent executing this statement (total_exec_time field).

  • %Total - exec time of this statement as a percentage of total elapsed time of all statements in a cluster

  • Wait Event - The specific event the process is waiting for.

  • Wait Time(s) - Total wait time for this event in this query.

F.19.8.2.2. Complete list of SQL texts

Query texts of all statements mentioned in the report. You can use the Query ID link in any statistics table to get here and see the query text.