ltbench — run a benchmark test on LightDB
ltbench -i [option...] [dbname]
ltbench [option...] [dbname]
ltbench is a simple program for running benchmark
tests on LightDB. It runs the same sequence of SQL
commands over and over, possibly in multiple concurrent database sessions,
and then calculates the average transaction rate (transactions per second).
By default, ltbench tests a scenario that is loosely based on TPC-B,
involving five SELECT, UPDATE, and INSERT commands per transaction.
However, it is easy to test other cases by writing your own transaction
script files.
Typical output from ltbench looks like:
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 10
query mode: simple
number of clients: 10
number of threads: 1
number of transactions per client: 1000
number of transactions actually processed: 10000/10000
tps = 85.184871 (including connections establishing)
tps = 85.296346 (excluding connections establishing)
The first six lines report some of the most important parameter
settings. The next line reports the number of transactions completed
and intended (the latter being just the product of number of clients
and number of transactions per client); these will be equal unless the run
failed before completion. (In -T mode, only the actual
number of transactions is printed.)
The last two lines report the number of transactions per second,
figured with and without counting the time to start database sessions.
The default TPC-B-like transaction test requires specific tables to be
set up beforehand. ltbench should be invoked with
the -i
(initialize) option to create and populate these
tables. (When you are testing a custom script, you don't need this
step, but will instead need to do whatever setup your test needs.)
Initialization looks like:
ltbench -i [other-options] dbname
where dbname is the name of the already-created
database to test in. (You may also need -h, -p, and/or -U
options to specify how to connect to the database server.)
ltbench -i creates four tables ltbench_accounts,
ltbench_branches, ltbench_history, and ltbench_tellers,
destroying any existing tables of these names.
Be very careful to use another database if you have tables having these
names!
At the default “scale factor” of 1, the tables initially contain this many rows:
table                   # of rows
---------------------------------
ltbench_branches        1
ltbench_tellers         10
ltbench_accounts        100000
ltbench_history         0
You can (and, for most purposes, probably should) increase the number
of rows by using the -s (scale factor) option. The
-F (fillfactor) option might also be used at this point.
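For example, to initialize at scale factor 100 with a fill factor of 90
(the database name mydb is a placeholder):

ltbench -i -s 100 -F 90 mydb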
Once you have done the necessary setup, you can run your benchmark
with a command that doesn't include -i, that is:

ltbench [options] dbname
In nearly all cases, you'll need some options to make a useful test.
The most important options are -c (number of clients),
-t (number of transactions), -T (time limit),
and -f (specify a custom script file).
See below for a full list.
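For instance, a hypothetical ten-minute run with 10 client sessions
spread over 2 worker threads could be invoked as:

ltbench -c 10 -j 2 -T 600 mydb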
The following is divided into three subsections. Different options are used during database initialization and while running benchmarks, but some options are useful in both cases.
ltbench accepts the following command-line initialization arguments:
dbname
Specifies the name of the database to test in. If this is
not specified, the environment variable
LTDATABASE
is used. If that is not set, the
user name specified for the connection is used.
-i
--initialize
Required to invoke initialization mode.
-I init_steps
--init-steps=init_steps

Perform just a selected set of the normal initialization steps.
init_steps specifies the initialization steps to be performed,
using one character per step. Each step is invoked in the specified
order. The default is dtgvp. The available steps are listed below;
see the example that follows the list.
d (Drop)
Drop any existing ltbench tables.
t (create Tables)
Create the tables used by the standard ltbench scenario, namely
ltbench_accounts, ltbench_branches, ltbench_history, and
ltbench_tellers.
g or G (Generate data, client-side or server-side)
Generate data and load it into the standard tables, replacing any data
already present.

With g (client-side data generation), data is generated in the
ltbench client and then sent to the server. This uses the
client/server bandwidth extensively through a COPY.
Using g causes logging to print one message every 100,000 rows
while generating data for the ltbench_accounts table.

With G (server-side data generation), only small queries are sent
from the ltbench client and the data is actually generated in the
server. No significant bandwidth is required for this variant, but
the server will do more work.
Using G causes logging not to print any progress message while
generating data.

The default initialization behavior uses client-side data
generation (equivalent to g).
v (Vacuum)
Invoke VACUUM on the standard tables.
p (create Primary keys)
Create primary key indexes on the standard tables.
f (create Foreign keys)
Create foreign key constraints between the standard tables.
(Note that this step is not performed by default.)
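As an example of -I (the database name is a placeholder), the following
performs the default steps but generates the data server-side and
additionally creates foreign keys:

ltbench -i -I dtGvpf mydb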
-F fillfactor
--fillfactor=fillfactor

Create the ltbench_accounts, ltbench_tellers and
ltbench_branches tables with the given fillfactor.
Default is 100.
-n
--no-vacuum
Perform no vacuuming during initialization.
(This option suppresses the v initialization step,
even if it was specified in -I.)
-q
--quiet
Switch logging to quiet mode, producing only one progress message per 5 seconds. The default logging prints one message each 100,000 rows, which often outputs many lines per second (especially on good hardware).
This setting has no effect if G is specified in -I.
-s scale_factor
--scale=scale_factor

Multiply the number of rows generated by the scale factor.
For example, -s 100 will create 10,000,000 rows
in the ltbench_accounts table. Default is 1.
When the scale is 20,000 or larger, the columns used to
hold account identifiers (aid columns)
will switch to using larger integers (bigint),
in order to be big enough to hold the range of account
identifiers.
--foreign-keys

Create foreign key constraints between the standard tables.
(This option adds the f step to the initialization
step sequence, if it is not already present.)
--index-tablespace=index_tablespace
Create indexes in the specified tablespace, rather than the default tablespace.
--partition-method=NAME

Create a partitioned ltbench_accounts table with
NAME method.
Expected values are range or hash.
This option requires that --partitions is set to non-zero.
If unspecified, default is range.
--partitions=NUM

Create a partitioned ltbench_accounts table with
NUM partitions of nearly equal size for
the scaled number of accounts.
Default is 0, meaning no partitioning.
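For instance, a sketch of an initialization that splits
ltbench_accounts into 16 hash partitions at scale factor 100
(the database name is a placeholder):

ltbench -i -s 100 --partitions=16 --partition-method=hash mydb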
--tablespace=tablespace
Create tables in the specified tablespace, rather than the default tablespace.
--unlogged-tables
Create all tables as unlogged tables, rather than permanent tables.
ltbench accepts the following command-line benchmarking arguments:
-b scriptname[@weight]
--builtin=scriptname[@weight]

Add the specified built-in script to the list of scripts to be executed.
Available built-in scripts are: tpcb-like,
simple-update and select-only.
Unambiguous prefixes of built-in names are accepted.
With the special name list, show the list of built-in scripts
and exit immediately.

Optionally, write an integer weight after @ to
adjust the probability of selecting this script versus other ones.
The default weight is 1.
See below for details.
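For example, to run a roughly 90% read workload by selecting the built-in
select-only script nine times as often as the TPC-B-like script (the other
options here are illustrative):

ltbench -b tpcb-like@1 -b select-only@9 -c 10 -T 60 mydb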
-c clients
--client=clients
Number of clients simulated, that is, number of concurrent database sessions. Default is 1.
-C
--connect
Establish a new connection for each transaction, rather than doing it just once per client session. This is useful to measure the connection overhead.
-d
--debug
Print debugging output.
-D varname=value
--define=varname=value

Define a variable for use by a custom script (see below).
Multiple -D options are allowed.
-f filename[@weight]
--file=filename[@weight]

Add a transaction script read from filename
to the list of scripts to be executed.

Optionally, write an integer weight after @ to
adjust the probability of selecting this script versus other ones.
The default weight is 1.
(To use a script file name that includes an @
character, append a weight so that there is no ambiguity, for
example filen@me@1.)
See below for details.
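As a sketch, a custom script in a hypothetical file bank.sql:

\set aid random(1, 100000 * :scale)
SELECT abalance FROM ltbench_accounts WHERE aid = :aid;

could be run with a variable defined on the command line, overriding the
automatic :scale preset described below:

ltbench -f bank.sql -D scale=10 -c 5 -T 60 mydb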
-j threads
--jobs=threads
Number of worker threads within ltbench. Using more than one thread can be helpful on multi-CPU machines. Clients are distributed as evenly as possible among available threads. Default is 1.
-l
--log
Write information about each transaction to a log file. See below for details.
-L limit
--latency-limit=limit

Transactions that last more than limit milliseconds
are counted and reported separately, as late.

When throttling is used (--rate=...), transactions that
lag behind schedule by more than limit ms, and thus
have no hope of meeting the latency limit, are not sent to the server
at all. They are counted and reported separately as skipped.
-M querymode
--protocol=querymode

Protocol to use for submitting queries to the server:

simple: use simple query protocol.
extended: use extended query protocol.
prepared: use extended query protocol with prepared statements.

In the prepared mode, ltbench reuses the parse analysis result
starting from the second query iteration, so ltbench runs faster
than in other modes.
The default is simple query protocol. (See Chapter 49 for more information.)
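For instance, to gauge the effect of prepared statements, the same test
could be run once in each mode and the resulting TPS figures compared
(options and database name are illustrative):

ltbench -M simple -c 10 -T 60 mydb
ltbench -M prepared -c 10 -T 60 mydb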
-n
--no-vacuum
Perform no vacuuming before running the test.
This option is necessary
if you are running a custom test scenario that does not include
the standard tables ltbench_accounts,
ltbench_branches, ltbench_history, and
ltbench_tellers.
-N
--skip-some-updates

Run the built-in simple-update script.
Shorthand for -b simple-update.
-P sec
--progress=sec

Show progress report every sec seconds. The report
includes the time since the beginning of the run, the TPS since the
last report, and the transaction latency average and standard
deviation since the last report. Under throttling (-R),
the latency is computed with respect to the transaction scheduled
start time, not the actual transaction beginning time, thus it also
includes the average schedule lag time.
-r
--report-latencies
Report the average per-statement latency (execution time from the perspective of the client) of each command after the benchmark finishes. See below for details.
-R rate
--rate=rate
Execute transactions targeting the specified rate instead of running as fast as possible (the default). The rate is given in transactions per second. If the targeted rate is above the maximum possible rate, the rate limit won't impact the results.
The rate is targeted by starting transactions along a Poisson-distributed schedule time line. The expected start time schedule moves forward based on when the client first started, not when the previous transaction ended. That approach means that when transactions go past their original scheduled end time, it is possible for later ones to catch up again.
When throttling is active, the transaction latency reported at the end of the run is calculated from the scheduled start times, so it includes the time each transaction had to wait for the previous transaction to finish. The wait time is called the schedule lag time, and its average and maximum are also reported separately. The transaction latency with respect to the actual transaction start time, i.e., the time spent executing the transaction in the database, can be computed by subtracting the schedule lag time from the reported latency.
If --latency-limit is used together with --rate,
a transaction can lag behind so much that it is already over the
latency limit when the previous transaction ends, because the latency
is calculated from the scheduled start time. Such transactions are
not sent to the server, but are skipped altogether and counted
separately.
A high schedule lag time is an indication that the system cannot process transactions at the specified rate, with the chosen number of clients and threads. When the average transaction execution time is longer than the scheduled interval between each transaction, each successive transaction will fall further behind, and the schedule lag time will keep increasing the longer the test run is. When that happens, you will have to reduce the specified transaction rate.
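For example, a sketch of a throttled five-minute run targeting 200
transactions per second, with transactions over 10 ms counted as late
and hopeless ones skipped:

ltbench -c 10 -j 2 -T 300 -R 200 -L 10 mydb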
-s scale_factor
--scale=scale_factor

Report the specified scale factor in ltbench's
output. With the built-in tests, this is not necessary; the
correct scale factor will be detected by counting the number of
rows in the ltbench_branches table.
However, when testing only custom benchmarks (-f option),
the scale factor will be reported as 1 unless this option is used.
-S
--select-only

Run the built-in select-only script.
Shorthand for -b select-only.
-t transactions
--transactions=transactions
Number of transactions each client runs. Default is 10.
-T seconds
--time=seconds

Run the test for this many seconds, rather than a fixed number of
transactions per client. -t and -T are mutually exclusive.
-v
--vacuum-all
Vacuum all four standard tables before running the test.
With neither -n nor -v, ltbench will vacuum the
ltbench_tellers and ltbench_branches
tables, and will truncate ltbench_history.
--aggregate-interval=seconds
Length of aggregation interval (in seconds). May be used only
with the -l option. With this option, the log contains
per-interval summary data, as described below.
--log-prefix=prefix

Set the filename prefix for the log files created by
--log. The default is ltbench_log.
--progress-timestamp
When showing progress (option -P), use a timestamp
(Unix epoch) instead of the number of seconds since the
beginning of the run. The unit is in seconds, with millisecond
precision after the dot.
This helps compare logs generated by various tools.
--random-seed=seed

Set random generator seed. Seeds the system random number generator,
which then produces a sequence of initial generator states, one for
each thread.
Values for seed may be:
time (the default, the seed is based on the current time),
rand (use a strong random source, failing if none
is available), or an unsigned decimal integer value.
The random generator is invoked explicitly from an ltbench script
(random... functions) or implicitly (for instance option
--rate uses it to schedule transactions).
When explicitly set, the value used for seeding is shown on the terminal.
Any value allowed for seed may also be
provided through the environment variable
PGBENCH_RANDOM_SEED.
To ensure that the provided seed impacts all possible uses, put this option
first or use the environment variable.
Setting the seed explicitly makes it possible to reproduce an
ltbench run exactly, as far as random numbers are concerned.
As the random state is managed per thread, this means the exact same
ltbench run for an identical invocation, provided there is one
client per thread and there are no external or data dependencies.
From a statistical viewpoint reproducing runs exactly is a bad idea because
it can hide the performance variability or improve performance unduly,
e.g., by hitting the same pages as a previous run.
However, it may also be of great help for debugging, for instance
re-running a tricky case which leads to an error.
Use wisely.
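For example, running the following invocation twice (one client per
thread; the database name is a placeholder) should draw the same sequence
of random values in both runs:

ltbench --random-seed=12345 -c 2 -j 2 -t 1000 mydb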
--sampling-rate=rate
Sampling rate, used when writing data into the log, to reduce the amount of log generated. If this option is given, only the specified fraction of transactions are logged. 1.0 means all transactions will be logged, 0.05 means only 5% of the transactions will be logged.
Remember to take the sampling rate into account when processing the log file. For example, when computing TPS values, you need to multiply the numbers accordingly (e.g., with 0.01 sample rate, you'll only get 1/100 of the actual TPS).
--show-script=scriptname

Show the actual code of built-in script scriptname
on stderr, and exit immediately.
ltbench also accepts the following common command-line arguments for connection parameters:
-h hostname
--host=hostname

The database server's host name.

-p port
--port=port

The database server's port number.

-U login
--username=login

The user name to connect as.
-V
--version
Print the ltbench version and exit.
-?
--help
Show help about ltbench command line arguments, and exit.
A successful run will exit with status 0. Exit status 1 indicates static problems such as invalid command-line options. Errors during the run such as database errors or problems in the script will result in exit status 2. In the latter case, ltbench will print partial results.
LTDATABASE
LTHOST
LTPORT
LTUSER
Default connection parameters.
This utility, like most other LightDB utilities, uses the environment variables supported by libpq (see Section 32.14).
The environment variable LT_COLOR specifies whether to use
color in diagnostic messages. Possible values are
always, auto and never.
ltbench executes test scripts chosen randomly
from a specified list.
The scripts may include built-in scripts specified with -b
and user-provided scripts specified with -f.
Each script may be given a relative weight specified after an
@ so as to change its selection probability.
The default weight is 1.
Scripts with a weight of 0 are ignored.
The default built-in transaction script (also invoked with -b tpcb-like)
issues seven commands per transaction over randomly chosen aid,
tid, bid and delta.
The scenario is inspired by the TPC-B benchmark, but is not actually TPC-B,
hence the name.
BEGIN;
UPDATE ltbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM ltbench_accounts WHERE aid = :aid;
UPDATE ltbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
UPDATE ltbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
INSERT INTO ltbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
END;
If you select the simple-update built-in (also -N),
steps 4 and 5 aren't included in the transaction.
This will avoid update contention on these tables, but
it makes the test case even less like TPC-B.

If you select the select-only built-in (also -S),
only the SELECT is issued.
ltbench has support for running custom
benchmark scenarios by replacing the default transaction script
(described above) with a transaction script read from a file
(-f option). In this case a “transaction”
counts as one execution of a script file.
A script file contains one or more SQL commands terminated by
semicolons. Empty lines and lines beginning with
-- are ignored. Script files can also contain
“meta commands”, which are interpreted by ltbench
itself, as described below.
Before LightDB 9.6, SQL commands in script files were terminated by newlines, and so they could not be continued across lines. Now a semicolon is required to separate consecutive SQL commands (though a SQL command does not need one if it is followed by a meta command). If you need to create a script file that works with both old and new versions of ltbench, be sure to write each SQL command on a single line ending with a semicolon.
There is a simple variable-substitution facility for script files.
Variable names must consist of letters (including non-Latin letters),
digits, and underscores, with the first character not being a digit.
Variables can be set by the command-line -D option,
explained above, or by the meta commands explained below.
In addition to any variables preset by -D command-line options,
there are a few variables that are preset automatically, listed in
Table 275. A value specified for these
variables using -D takes precedence over the automatic presets.
Once set, a variable's value can be inserted into a SQL command by
writing :variablename. When running more than
one client session, each session has its own set of variables.
ltbench supports up to 255 variable uses in one
statement.
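As a minimal illustration of variable substitution (the variable name is
arbitrary):

\set row_id random(1, 100000)
SELECT abalance FROM ltbench_accounts WHERE aid = :row_id;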
Table 275. ltbench Automatic Variables
Variable      Description
------------  ----------------------------------------------------------------
client_id     unique number identifying the client session (starts from zero)
default_seed  seed used in hash functions by default
random_seed   random generator seed (unless overwritten with -D)
scale         current scale factor
Script file meta commands begin with a backslash (\) and
normally extend to the end of the line, although they can be continued
to additional lines by writing backslash-return.
Arguments to a meta command are separated by white space.
These meta commands are supported:
\gset [prefix]
\aset [prefix]

These commands may be used to end SQL queries, taking the place of the
terminating semicolon (;).
When the \gset command is used, the preceding SQL query is
expected to return one row, the columns of which are stored into variables
named after column names, and prefixed with prefix
if provided.

When the \aset command is used, all combined SQL queries
(separated by \;) have their columns stored into variables
named after column names, and prefixed with prefix
if provided. If a query returns no row, no assignment is made and the variable
can be tested for existence to detect this. If a query returns more than one
row, the last value is kept.
The following example puts the final account balance from the first query
into variable abalance, and fills variables
p_two and p_three
with integers from the third query.
The result of the second query is discarded.
The results of the last two combined queries are stored in variables
four and five.
UPDATE ltbench_accounts
  SET abalance = abalance + :delta
  WHERE aid = :aid
  RETURNING abalance \gset
-- compound of two queries
SELECT 1 \;
SELECT 2 AS two, 3 AS three \gset p_
SELECT 4 AS four \; SELECT 5 AS five \aset
\if expression
\elif expression
\else
\endif

This group of commands implements nestable conditional blocks,
similarly to ltsql's \if expression.
Conditional expressions are identical to those with \set,
with non-zero values interpreted as true.
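A minimal sketch of a conditional block (the variable and queries are
purely illustrative):

\set coin random(0, 1)
\if :coin = 0
SELECT 'heads';
\else
SELECT 'tails';
\endif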
\set varname expression

Sets variable varname to a value calculated
from expression.
The expression may contain the NULL constant,
Boolean constants TRUE and FALSE,
integer constants such as 5432,
double constants such as 3.14159,
references to variables :variablename,
operators with their usual SQL precedence and associativity,
function calls, SQL CASE generic conditional
expressions and parentheses.

Functions and most operators return NULL on NULL input.

For conditional purposes, non-zero numerical values are
TRUE, zero numerical values and NULL are FALSE.

Too large or small integer and double constants, as well as
integer arithmetic operators (+, -, * and /)
raise errors on overflows.

When no final ELSE clause is provided to a
CASE, the default value is NULL.
Examples:
\set ntellers 10 * :scale
\set aid (1021 * random(1, 100000 * :scale)) % \
         (100000 * :scale) + 1
\set divx CASE WHEN :x <> 0 THEN :y/:x ELSE NULL END
\sleep number [ us | ms | s ]

Causes script execution to sleep for the specified duration in
microseconds (us), milliseconds (ms) or seconds
(s). If the unit is omitted then seconds are the default.
number can be either an integer constant or a
:variablename reference to a variable
having an integer value.
Example:
\sleep 10 ms
\setshell varname command [ argument ... ]

Sets variable varname to the result of the shell command
command with the given argument(s).
The command must return an integer value through its standard output.

command and each argument can be either
a text constant or a :variablename reference
to a variable. If you want to use an argument starting
with a colon, write an additional colon at the beginning of
argument.
Example:
\setshell variable_to_be_assigned command literal_argument :variable ::literal_starting_with_colon
\shell command [ argument ... ]

Same as \setshell, but the result of the command
is discarded.
Example:
\shell command literal_argument :variable ::literal_starting_with_colon
The arithmetic, bitwise, comparison and logical operators listed in
Table 276 are built into ltbench
and may be used in expressions appearing in \set.
The operators are listed in increasing precedence order.
Except as noted, operators taking two numeric inputs will produce
a double value if either input is double, otherwise they produce
an integer result.
Table 276. ltbench Operators
Operator                     Description
---------------------------  --------------------------------------------------------------------------
OR                           Logical OR
AND                          Logical AND
NOT                          Logical NOT
IS [NOT] (NULL|TRUE|FALSE)   Boolean value tests
ISNULL | NOTNULL             Nullness tests
=                            Equal
<>                           Not equal
!=                           Not equal
<                            Less than
<=                           Less than or equal to
>                            Greater than
>=                           Greater than or equal to
|                            Bitwise OR
#                            Bitwise XOR
&                            Bitwise AND
~                            Bitwise NOT
<<                           Bitwise shift left
>>                           Bitwise shift right
+                            Addition
-                            Subtraction
*                            Multiplication
/                            Division (truncates the result towards zero if both inputs are integers)
%                            Modulo (remainder)
-                            Negation
The functions listed in Table 277 are built
into ltbench and may be used in expressions appearing in \set.
Table 277. ltbench Functions
Function                                   Description
-----------------------------------------  ------------------------------------------------------------
abs ( number )                             Absolute value
debug ( number )                           Prints the argument to stderr, and returns the argument
double ( number )                          Casts to double
exp ( number )                             Exponential (e raised to the given power)
greatest ( number [, ... ] )               Selects the largest value among the arguments
hash ( value [, seed ] )                   This is an alias for hash_murmur2
hash_fnv1a ( value [, seed ] )             Computes FNV-1a hash
hash_murmur2 ( value [, seed ] )           Computes MurmurHash2 hash
int ( number )                             Casts to integer
least ( number [, ... ] )                  Selects the smallest value among the arguments
ln ( number )                              Natural logarithm
mod ( integer, integer )                   Modulo (remainder)
pi ()                                      Approximate value of π
pow ( x, y ), power ( x, y )               x raised to the power of y
random ( lb, ub )                          Computes a uniformly-distributed random integer in [lb, ub]
random_exponential ( lb, ub, parameter )   Computes an exponentially-distributed random integer in [lb, ub]
random_gaussian ( lb, ub, parameter )      Computes a Gaussian-distributed random integer in [lb, ub]
random_zipfian ( lb, ub, parameter )       Computes a Zipfian-distributed random integer in [lb, ub]
sqrt ( number )                            Square root
The random function generates values using a uniform
distribution, that is all the values are drawn within the specified
range with equal probability. The random_exponential,
random_gaussian and random_zipfian
functions require an additional double parameter which determines the precise
shape of the distribution.
For an exponential distribution, parameter
controls the distribution by truncating a quickly-decreasing
exponential distribution at parameter, and then
projecting onto integers between the bounds.
To be precise, with

f(x) = exp(-parameter * (x - min) / (max - min + 1)) / (1 - exp(-parameter))

value i between min and
max inclusive is drawn with probability
f(i) - f(i + 1).
Intuitively, the larger the parameter, the more
frequently values close to min are accessed, and the
less frequently values close to max are accessed.
The closer to 0 parameter is, the flatter (more
uniform) the access distribution.
A crude approximation of the distribution is that the most frequent 1%
values in the range, close to min, are drawn
parameter% of the time.
The parameter value must be strictly positive.
For a Gaussian distribution, the interval is mapped onto a standard
normal distribution (the classical bell-shaped Gaussian curve) truncated
at -parameter on the left and +parameter on the right.
Values in the middle of the interval are more likely to be drawn.
To be precise, if PHI(x) is the cumulative distribution
function of the standard normal distribution, with mean mu
defined as (max + min) / 2.0, and with

f(x) = PHI(2.0 * parameter * (x - mu) / (max - min + 1)) /
       (2.0 * PHI(parameter) - 1)

then value i between min and
max inclusive is drawn with probability
f(i + 0.5) - f(i - 0.5).
Intuitively, the larger the parameter, the more
frequently values close to the middle of the interval are drawn, and the
less frequently values close to the min and
max bounds. About 67% of values are drawn from the
middle 1.0 / parameter, that is a relative
0.5 / parameter around the mean, and 95% in the middle
2.0 / parameter, that is a relative
1.0 / parameter around the mean; for instance, if
parameter is 4.0, 67% of values are drawn from the
middle quarter (1.0 / 4.0) of the interval (i.e., from
3.0 / 8.0 to 5.0 / 8.0) and 95% from
the middle half (2.0 / 4.0) of the interval (second and third
quartiles). The minimum allowed parameter value is 2.0.
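For instance, with the parameter value 4.0 discussed above:

\set aid random_gaussian(1, 100000 * :scale, 4.0)

About 67% of the drawn aid values then fall within the middle quarter
of the range.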
random_zipfian generates a bounded Zipfian
distribution. parameter defines how skewed the distribution
is. The larger the parameter, the more
frequently values closer to the beginning of the interval are drawn.
The distribution is such that, assuming the range starts from 1,
the ratio of the probability of drawing k
versus drawing k+1 is
((k+1)/k)**parameter.
For example, random_zipfian(1, ..., 2.5) produces
the value 1 about (2/1)**2.5 = 5.66
times more frequently than 2, which
itself is produced (3/2)**2.5 = 2.76 times more
frequently than 3, and so on.
ltbench's implementation is based on
"Non-Uniform Random Variate Generation", Luc Devroye, pp. 550-551,
Springer 1986. Due to limitations of that algorithm,
the parameter value is restricted to
the range [1.001, 1000].
Hash functions hash, hash_murmur2 and
hash_fnv1a accept an input value and an optional seed parameter.
In case the seed isn't provided the value of :default_seed
is used, which is initialized randomly unless set by the command-line
-D option. Hash functions can be used to scatter the
distribution of random functions such as random_zipfian or
random_exponential. For instance, the following ltbench
script simulates a possible real-world workload typical of social media and
blogging platforms where a few accounts generate excessive load:
\set r random_zipfian(0, 100000000, 1.07)
\set k abs(hash(:r)) % 1000000
In some cases several distinct distributions which don't correlate with
each other are needed, and this is when the implicit seed parameter comes
in handy:
\set k1 abs(hash(:r, :default_seed + 123)) % 1000000
\set k2 abs(hash(:r, :default_seed + 321)) % 1000000
As an example, the full definition of the built-in TPC-B-like transaction is:
\set aid random(1, 100000 * :scale)
\set bid random(1, 1 * :scale)
\set tid random(1, 10 * :scale)
\set delta random(-5000, 5000)
BEGIN;
UPDATE ltbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM ltbench_accounts WHERE aid = :aid;
UPDATE ltbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
UPDATE ltbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
INSERT INTO ltbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
END;
This script allows each iteration of the transaction to reference different, randomly-chosen rows. (This example also shows why it's important for each client session to have its own variables — otherwise they'd not be independently touching different rows.)
With the -l option (but without
the --aggregate-interval option),
ltbench writes information about each transaction
to a log file. The log file will be named
prefix.nnn,
where prefix defaults to ltbench_log, and
nnn is the PID of the
ltbench process.
The prefix can be changed by using the --log-prefix option.

If the -j option is 2 or higher, so that there are multiple
worker threads, each will have its own log file. The first worker will
use the same name for its log file as in the standard single worker case.
The additional log files for the other workers will be named
prefix.nnn.mmm,
where mmm is a sequential number for each worker starting
with 1.
The format of the log is:

client_id transaction_no time script_no time_epoch time_us [ schedule_lag ]
where
client_id indicates which client session ran the transaction,
transaction_no counts how many transactions have been
run by that session,
time is the total elapsed transaction time in microseconds,
script_no identifies which script file was used (useful when
multiple scripts were specified with -f or -b),
and time_epoch/time_us are a
Unix-epoch time stamp and an offset
in microseconds (suitable for creating an ISO 8601
time stamp with fractional seconds) showing when
the transaction completed.
The schedule_lag field is the difference between the
transaction's scheduled start time and the time it actually started, in
microseconds. It is only present when the --rate option is used.
When both --rate and --latency-limit are used,
the time for a skipped transaction will be reported as
skipped.
Here is a snippet of a log file generated in a single-client run:
0 199 2241 0 1175850568 995598
0 200 2465 0 1175850568 998079
0 201 2513 0 1175850569 608
0 202 2038 0 1175850569 2663
Another example with --rate=100
and --latency-limit=5 (note the additional
schedule_lag column):
0 81 4621 0 1412881037 912698 3005
0 82 6173 0 1412881037 914578 4304
0 83 skipped 0 1412881037 914578 5217
0 83 skipped 0 1412881037 914578 5099
0 83 4722 0 1412881037 916203 3108
0 84 4142 0 1412881037 918023 2333
0 85 2465 0 1412881037 919759 740
In this example, transaction 82 was late, because its latency (6.173 ms) was over the 5 ms limit. The next two transactions were skipped, because they were already late before they were even started.
When running a long test on hardware that can handle a lot of transactions,
the log files can become very large. The --sampling-rate option
can be used to log only a random sample of transactions.
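For example, a sketch of an hour-long run that logs roughly 1% of
transactions (the database name is a placeholder):

ltbench -c 10 -T 3600 -l --sampling-rate=0.01 mydb

Remember that TPS figures computed from such a log must be multiplied
by 100.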
With the --aggregate-interval option, a different
format is used for the log files:

interval_start num_transactions sum_latency sum_latency_2 min_latency max_latency [ sum_lag sum_lag_2 min_lag max_lag [ skipped ] ]
where
interval_start is the start of the interval (as a Unix
epoch time stamp),
num_transactions is the number of transactions
within the interval,
sum_latency is the sum of the transaction
latencies within the interval,
sum_latency_2 is the sum of squares of the
transaction latencies within the interval,
min_latency is the minimum latency within the interval,
and
max_latency is the maximum latency within the interval.
The next fields,
sum_lag, sum_lag_2, min_lag,
and max_lag, are only present if the --rate option is used.
They provide statistics about the time each transaction had to wait for the
previous one to finish, i.e., the difference between each transaction's
scheduled start time and the time it actually started.
The very last field, skipped,
is only present if the --latency-limit option is used, too.
It counts the number of transactions skipped because they would have
started too late.
Each transaction is counted in the interval when it was committed.
Here is some example output:
1345828501 5601 1542744 483552416 61 2573
1345828503 7884 1979812 565806736 60 1479
1345828505 7208 1979422 567277552 59 1391
1345828507 7685 1980268 569784714 60 1398
1345828509 7073 1979779 573489941 236 1411
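The per-interval average latency and an approximate standard deviation can
be derived from these fields as sum_latency / num_transactions and
sqrt(sum_latency_2 / num_transactions - (sum_latency / num_transactions)^2).
For the first interval above, that gives an average of about
1542744 / 5601 ≈ 275 and a standard deviation of about
sqrt(483552416 / 5601 - 275.4^2) ≈ 102, in the log's latency units
(microseconds).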
Notice that while the plain (unaggregated) log file shows which script was used for each transaction, the aggregated log does not. Therefore if you need per-script data, you need to aggregate the data on your own.
With the -r option, ltbench collects
the elapsed transaction time of each statement executed by every
client. It then reports an average of those values, referred to
as the latency for each statement, after the benchmark has finished.
For the default script, the output will look similar to this:
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 1
query mode: simple
number of clients: 10
number of threads: 1
number of transactions per client: 1000
number of transactions actually processed: 10000/10000
latency average = 15.844 ms
latency stddev = 2.715 ms
tps = 618.764555 (including connections establishing)
tps = 622.977698 (excluding connections establishing)
statement latencies in milliseconds:
    0.002  \set aid random(1, 100000 * :scale)
    0.005  \set bid random(1, 1 * :scale)
    0.002  \set tid random(1, 10 * :scale)
    0.001  \set delta random(-5000, 5000)
    0.326  BEGIN;
    0.603  UPDATE ltbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
    0.454  SELECT abalance FROM ltbench_accounts WHERE aid = :aid;
    5.528  UPDATE ltbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
    7.335  UPDATE ltbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
    0.371  INSERT INTO ltbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
    1.212  END;
If multiple script files are specified, the averages are reported separately for each script file.
Note that collecting the additional timing information needed for per-statement latency computation adds some overhead. This will slow average execution speed and lower the computed TPS. The amount of slowdown varies significantly depending on platform and hardware. Comparing average TPS values with and without latency reporting enabled is a good way to measure if the timing overhead is significant.
It is very easy to use ltbench to produce completely meaningless numbers. Here are some guidelines to help you get useful results.
In the first place, never believe any test that runs
for only a few seconds. Use the -t or -T option
to make the run last at least a few minutes, so as to average out noise.
In some cases you could need hours to get numbers that are reproducible.
It's a good idea to try the test run a few times, to find out if your
numbers are reproducible or not.
For the default TPC-B-like test scenario, the initialization scale factor
(-s) should be at least as large as the largest number of
clients you intend to test (-c); else you'll mostly be
measuring update contention. There are only -s rows in
the ltbench_branches table, and every transaction wants to
update one of them, so -c values in excess of -s
will undoubtedly result in lots of transactions blocked waiting for
other transactions.
The default test scenario is also quite sensitive to how long it's been since the tables were initialized: accumulation of dead rows and dead space in the tables changes the results. To understand the results you must keep track of the total number of updates and when vacuuming happens. If autovacuum is enabled it can result in unpredictable changes in measured performance.
A limitation of ltbench is that it can itself become the bottleneck when trying to test a large number of client sessions. This can be alleviated by running ltbench on a different machine from the database server, although low network latency will be essential. It might even be useful to run several ltbench instances concurrently, on several client machines, against the same database server.
If untrusted users have access to a database that has not adopted a secure schema usage pattern, do not run ltbench in that database. ltbench uses unqualified names and does not manipulate the search path.