About Concurrency Control in LightDB-A Database
LightDB-A Database uses the PostgreSQL Multiversion Concurrency Control (MVCC) model to manage concurrent transactions for heap tables.
Concurrency control in a database management system allows concurrent queries to complete with correct results while ensuring the integrity of the database. Traditional databases use a two-phase locking protocol that prevents a transaction from modifying data that has been read by another concurrent transaction and prevents any concurrent transaction from reading or writing data that another transaction has updated. The locks required to coordinate transactions add contention to the database, reducing overall transaction throughput.
LightDB-A Database uses the PostgreSQL Multiversion Concurrency Control (MVCC) model to manage concurrency for heap tables. With MVCC, each query operates on a snapshot of the database when the query starts. While it runs, a query cannot see changes made by other concurrent transactions. This ensures that a query sees a consistent view of the database. Queries that read rows can never block waiting for transactions that write rows. Conversely, queries that write rows cannot be blocked by transactions that read rows. This allows much greater concurrency than traditional database systems that employ locks to coordinate access between transactions that read and write data.
Note Append-optimized tables are managed with a different concurrency control model than the MVCC model discussed in this topic. They are intended for “write-once, read-many” applications that never, or only very rarely, perform row-level updates.
Snapshots
The MVCC model depends on the system’s ability to manage multiple versions of data rows. A query operates on a snapshot of the database at the start of the query. A snapshot is the set of rows that are visible at the beginning of a statement or transaction. The snapshot ensures the query has a consistent and valid view of the database for the duration of its execution.
Each transaction is assigned a unique transaction ID (XID), an incrementing 32-bit value. When a new transaction starts, it is assigned the next XID. An SQL statement that is not enclosed in a transaction is treated as a single-statement transaction: the BEGIN and COMMIT are added implicitly. This is similar to autocommit in some database systems.
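For example, the two forms below behave identically; the accounts table is the hypothetical example table used later in this topic:

UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 12345;

-- behaves as if written:
BEGIN;
UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 12345;
COMMIT;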
Note LightDB-A Database assigns XID values only to transactions that involve DDL or DML operations, which are typically the only transactions that require an XID.
When a transaction inserts a row, the XID is saved with the row in the xmin system column. When a transaction deletes a row, the XID is saved in the xmax system column. Updating a row is treated as a delete and an insert, so the XID is saved to the xmax of the current row and the xmin of the newly inserted row. The xmin and xmax columns, together with the transaction completion status, specify a range of transactions for which the version of the row is visible. A transaction can see the effects of all transactions less than xmin, which are guaranteed to be committed, but it cannot see the effects of any transaction greater than or equal to xmax.
Multi-statement transactions must also record which command within a transaction inserted a row (cmin) or deleted a row (cmax) so that the transaction can see changes made by previous commands in the transaction. The command sequence is only relevant during the transaction, so the sequence is reset to 0 at the beginning of a transaction.
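A minimal sketch of the command sequence, assuming a hypothetical table t with a single integer column id:

BEGIN;                          -- the command counter resets to 0
INSERT INTO t VALUES (1);       -- this command's changes are tagged with the current command ID
UPDATE t SET id = 2;            -- a later command: it sees the row the INSERT above created
SELECT cmin, cmax, id FROM t;   -- cmin and cmax reflect commands within this transaction only
COMMIT;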
The XID is a property of the database. Each segment database has its own XID sequence that cannot be compared to the XIDs of other segment databases. The coordinator coordinates distributed transactions with the segments using a cluster-wide session ID number, called gp_session_id. The segments maintain a mapping of distributed transaction IDs to their local XIDs. The coordinator coordinates distributed transactions across all of the segments with the two-phase commit protocol. If a transaction fails on any one segment, it is rolled back on all segments.
You can see the xmin, xmax, cmin, and cmax columns for any row with a SELECT statement:
SELECT xmin, xmax, cmin, cmax, * FROM <tablename>;
Because you run the SELECT command on the coordinator, the XIDs are the distributed transaction IDs. If you could run the command in an individual segment database, the xmin and xmax values would be the segment's local XIDs.
Note LightDB-A Database distributes all of a replicated table's rows to every segment, so each row is duplicated on every segment. Each segment instance maintains its own values for the system columns xmin, xmax, cmin, and cmax, as well as for the gp_segment_id and ctid system columns. LightDB-A Database does not permit user queries to access these system columns for replicated tables because they have no single, unambiguous value to evaluate in a query.
Transaction ID Wraparound
The MVCC model uses transaction IDs (XIDs) to determine which rows are visible at the beginning of a query or transaction. The XID is a 32-bit value, so a database could theoretically run over four billion transactions before the value overflows and wraps to zero. However, LightDB-A Database uses modulo 2^32 arithmetic with XIDs, which allows the transaction IDs to wrap around, much as a clock wraps at twelve o'clock. For any given XID, there could be about two billion past XIDs and two billion future XIDs. This works until a version of a row persists through about two billion transactions, when it suddenly appears to be a new row. To prevent this, LightDB-A Database has a special XID, called FrozenXID, which is always considered older than any regular XID it is compared with. The xmin of a row must be replaced with FrozenXID within two billion transactions, and this is one of the functions the VACUUM command performs.
Vacuuming the database at least every two billion transactions prevents XID wraparound. LightDB-A Database monitors the transaction ID and warns if a VACUUM operation is required.
A warning is issued when a significant portion of the transaction IDs are no longer available and before transaction ID wraparound occurs:
WARNING: database "<database_name>" must be vacuumed within <number_of_transactions> transactions
When the warning is issued, a VACUUM operation is required. If a VACUUM operation is not performed, LightDB-A Database stops creating transactions when it reaches a limit set prior to the point of transaction ID wraparound, to avoid possible data loss, and issues this error:
FATAL: database is not accepting commands to avoid wraparound data loss in database "<database_name>"
See Recovering from a Transaction ID Limit Error for the procedure to recover from this error.
The server configuration parameters xid_warn_limit and xid_stop_limit control when the warning and error are displayed. The xid_warn_limit parameter specifies the number of transaction IDs before the xid_stop_limit when the warning is issued. The xid_stop_limit parameter specifies the number of transaction IDs before wraparound would occur when the error is issued and new transactions cannot be created.
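Independent of these warnings, you can check how close each database is to the limit with the standard PostgreSQL age() function, which reports the number of transactions since the database's oldest unfrozen XID; databases whose age approaches two billion most urgently need a VACUUM:

SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY age(datfrozenxid) DESC;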
Transaction Isolation Levels
The SQL standard defines four levels of transaction isolation. The strictest is Serializable, which the standard defines as follows: any concurrent execution of a set of Serializable transactions is guaranteed to produce the same effect as running them one at a time in some order. The other three levels are defined in terms of phenomena that must not occur at each level, resulting from interactions between concurrent transactions. The standard notes that, due to the definition of Serializable, none of these phenomena are possible at that level.
The phenomena which are prohibited at various levels are:
- dirty read – A transaction reads data written by a concurrent uncommitted transaction.
- non-repeatable read – A transaction re-reads data that it has previously read and finds that the data has been modified by another transaction (that committed since the initial read).
- phantom read – A transaction re-executes a query returning a set of rows that satisfy a search condition and finds that the set of rows satisfying the condition has changed due to another recently-committed transaction.
- serialization anomaly – The result of successfully committing a group of transactions is inconsistent with all possible orderings of running those transactions one at a time.
The four transaction isolation levels defined in the SQL standard and the corresponding behaviors are described in the table below.
| Isolation Level | Dirty Read | Non-Repeatable Read | Phantom Read | Serialization Anomaly |
|---|---|---|---|---|
| READ UNCOMMITTED | Allowed, but not in LightDB-A Database | Possible | Possible | Possible |
| READ COMMITTED | Impossible | Possible | Possible | Possible |
| REPEATABLE READ | Impossible | Impossible | Allowed, but not in LightDB-A Database | Possible |
| SERIALIZABLE | Impossible | Impossible | Impossible | Impossible |
LightDB-A Database implements only two distinct transaction isolation levels, although you can request any of the four described levels. The LightDB-A Database READ UNCOMMITTED level behaves like READ COMMITTED, and the SERIALIZABLE level falls back to REPEATABLE READ.
The table also shows that the LightDB-A Database REPEATABLE READ implementation does not allow phantom reads. This is acceptable under the SQL standard because the standard specifies which anomalies must not occur at each isolation level; higher guarantees are acceptable.
The following sections detail the behavior of the available isolation levels.
Important: Some LightDB-A Database data types and functions have special rules regarding transactional behavior. In particular, changes made to a sequence (and therefore the counter of a column declared using serial) are immediately visible to all other transactions, and are not rolled back if the transaction that made the changes aborts.
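A brief sketch of this behavior, assuming a sequence named my_seq:

CREATE SEQUENCE my_seq;
BEGIN;
SELECT nextval('my_seq');   -- returns 1
ROLLBACK;
SELECT nextval('my_seq');   -- returns 2, not 1: the increment was not rolled back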
Read Committed Isolation Level
The default isolation level in LightDB-A Database is READ COMMITTED. When a transaction uses this isolation level, a SELECT query (without a FOR UPDATE/SHARE clause) sees only data committed before the query began; it never sees either uncommitted data or changes committed during query execution by concurrent transactions. In effect, a SELECT query sees a snapshot of the database at the instant the query begins to run. However, SELECT does see the effects of previous updates executed within its own transaction, even though they are not yet committed. Also note that two successive SELECT commands can see different data, even though they are within a single transaction, if other transactions commit changes after the first SELECT starts and before the second SELECT starts.
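For example, in the following sketch (using the hypothetical accounts table from the examples later in this topic), the two SELECT commands can return different sums:

-- Session 1:
BEGIN;   -- READ COMMITTED is the default
SELECT sum(balance) FROM accounts;   -- snapshot taken when this statement starts

-- Session 2, concurrently (commits as its own single-statement transaction):
UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 12345;

-- Session 1, after session 2 commits:
SELECT sum(balance) FROM accounts;   -- a new snapshot: the result includes session 2's update
COMMIT;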
UPDATE, DELETE, SELECT FOR UPDATE, and SELECT FOR SHARE commands behave the same as SELECT in terms of searching for target rows: they find only the target rows that were committed as of the command start time. However, such a target row might have already been updated (or deleted or locked) by another concurrent transaction by the time it is found. In this case, the would-be updater waits for the first updating transaction to commit or roll back (if it is still in progress). If the first updater rolls back, then its effects are negated and the second updater can proceed with updating the originally found row. If the first updater commits, the second updater will ignore the row if the first updater deleted it; otherwise it will attempt to apply its operation to the updated version of the row. The search condition of the command (the WHERE clause) is re-evaluated to see if the updated version of the row still matches the search condition. If so, the second updater proceeds with its operation using the updated version of the row. In the case of SELECT FOR UPDATE and SELECT FOR SHARE, this means the updated version of the row is locked and returned to the client.
INSERT with an ON CONFLICT DO UPDATE clause behaves similarly. In READ COMMITTED mode, each row proposed for insertion will either insert or update. Unless there are unrelated errors, one of those two outcomes is guaranteed. If a conflict originates in another transaction whose effects are not yet visible to the INSERT, the UPDATE clause will affect that row, even though possibly no version of that row is conventionally visible to the command.
With an ON CONFLICT DO NOTHING clause, INSERT may skip inserting a row due to the outcome of another transaction whose effects are not visible to the INSERT snapshot. Again, this is only the case in READ COMMITTED mode.
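As an illustration, here is a hedged upsert sketch against a hypothetical visits table that has a unique constraint on its url column:

INSERT INTO visits (url, hits) VALUES ('/index.html', 1)
ON CONFLICT (url) DO UPDATE SET hits = visits.hits + 1;   -- update the existing row on conflict

INSERT INTO visits (url, hits) VALUES ('/index.html', 1)
ON CONFLICT (url) DO NOTHING;                             -- skip the row on conflict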
Because of the above rules, it is possible for an updating command to see an inconsistent snapshot: it can see the effects of concurrent updating commands on the same rows it is trying to update, but it does not see the effects of those commands on other rows in the database. This behavior makes READ COMMITTED mode unsuitable for commands that involve complex search conditions; however, it is just right for simpler cases. For example, consider updating bank balances with transactions like:
BEGIN;
UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 12345;
UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 7534;
COMMIT;
If two such transactions concurrently try to change the balance of account 12345, we clearly want the second transaction to start with the updated version of the account's row. Because each command is affecting only a predetermined row, letting it access the updated version of the row does not create any troublesome inconsistency.
More complex usage may produce undesirable results in READ COMMITTED mode. For example, consider a DELETE command operating on data that is being both added to and removed from its restriction criteria by another command. Assume website is a two-row table with website.hits values of 9 and 10:
BEGIN;
UPDATE website SET hits = hits + 1;
-- run from another session: DELETE FROM website WHERE hits = 10;
COMMIT;
The DELETE will have no effect even though there is a website.hits = 10 row both before and after the UPDATE. This occurs because the pre-update row value 9 is skipped, and when the UPDATE completes and the DELETE obtains a lock, the new row value is no longer 10 but 11, which no longer matches the criteria.
Because READ COMMITTED mode starts each command with a new snapshot that includes all transactions committed up to that instant, subsequent commands in the same transaction will see the effects of the committed concurrent transaction in any case. The point at issue above is whether or not a single command sees an absolutely consistent view of the database.
The partial transaction isolation provided by READ COMMITTED mode is adequate for many applications, and this mode is fast and simple to use; however, it is not sufficient for all cases. Applications that do complex queries and updates might require a more rigorously consistent view of the database than READ COMMITTED mode provides.
Repeatable Read Isolation Level
The REPEATABLE READ isolation level only sees data committed before the transaction began; it never sees either uncommitted data or changes committed during transaction execution by concurrent transactions. (However, the query does see the effects of previous updates executed within its own transaction, even though they are not yet committed.) This is a stronger guarantee than is required by the SQL standard for this isolation level, and prevents all of the phenomena described in the table above. As mentioned previously, this is specifically allowed by the standard, which only describes the minimum protections each isolation level must provide.
The REPEATABLE READ isolation level is different from READ COMMITTED in that a query in a REPEATABLE READ transaction sees a snapshot as of the start of the first non-transaction-control statement in the transaction, not as of the start of the current statement within the transaction. Successive SELECT commands within a single transaction see the same data; they do not see changes made by other transactions that committed after their own transaction started.
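Contrast the READ COMMITTED sketch shown earlier with the same sequence of events under REPEATABLE READ (again using the hypothetical accounts table); here both queries return the same sum:

-- Session 1:
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT sum(balance) FROM accounts;   -- the snapshot is fixed at this first statement

-- Session 2, concurrently (commits as its own transaction):
UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 12345;

-- Session 1:
SELECT sum(balance) FROM accounts;   -- same result as before: the snapshot has not changed
COMMIT;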
Applications using this level must be prepared to retry transactions due to serialization failures.
UPDATE, DELETE, SELECT FOR UPDATE, and SELECT FOR SHARE commands behave the same as SELECT in terms of searching for target rows: they will only find target rows that were committed as of the transaction start time. However, such a target row might have already been updated (or deleted or locked) by another concurrent transaction by the time it is found. In this case, the REPEATABLE READ transaction will wait for the first updating transaction to commit or roll back (if it is still in progress). If the first updater rolls back, then its effects are negated and the REPEATABLE READ transaction can proceed with updating the originally found row. But if the first updater commits (and actually updated or deleted the row, not just locked it), then LightDB-A Database rolls back the REPEATABLE READ transaction with the message:
ERROR: could not serialize access due to concurrent update
because a REPEATABLE READ transaction cannot modify or lock rows changed by other transactions after the REPEATABLE READ transaction began.
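A two-session sketch of this failure, again using the hypothetical accounts table:

-- Session 1:
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
UPDATE accounts SET balance = balance + 10.00 WHERE acctnum = 12345;

-- Session 2, concurrently:
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
UPDATE accounts SET balance = balance - 10.00 WHERE acctnum = 12345;   -- blocks, waiting for session 1

-- Session 1:
COMMIT;   -- session 2 now fails with:
-- ERROR: could not serialize access due to concurrent update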
When an application receives this error message, it should abort the current transaction and retry the whole transaction from the beginning. The second time through, the transaction will see the previously-committed change as part of its initial view of the database, so there is no logical conflict in using the new version of the row as the starting point for the new transaction’s update.
Note that only updating transactions might need to be retried; read-only transactions never have serialization conflicts.
The REPEATABLE READ mode provides a rigorous guarantee that each transaction sees a completely stable view of the database. However, this view will not necessarily always be consistent with some serial (one at a time) execution of concurrent transactions of the same level. For example, even a read-only transaction at this level may see a control record updated to show that a batch has been completed, but not see one of the detail records that is logically part of the batch, because it read an earlier revision of the control record. Attempts to enforce business rules by transactions running at this isolation level are not likely to work correctly without careful use of explicit locks to block conflicting transactions.
Serializable Isolation Level
The SERIALIZABLE level, which LightDB-A Database does not fully support, guarantees that a set of transactions run concurrently produces the same result as if the transactions ran sequentially one after the other. If SERIALIZABLE is specified, LightDB-A Database falls back to REPEATABLE READ. The MVCC Snapshot Isolation (SI) model prevents dirty reads, non-repeatable reads, and phantom reads without expensive locking, but there are other interactions that can occur between some SERIALIZABLE transactions in LightDB-A Database that prevent them from being truly serializable. These anomalies can often be attributed to the fact that LightDB-A Database does not perform predicate locking, which means that a write in one transaction can affect the result of a previous read in another concurrent transaction.
About Setting the Transaction Isolation Level
The default transaction isolation level for LightDB-A Database is specified by the default_transaction_isolation server configuration parameter, and is initially READ COMMITTED.
When you set default_transaction_isolation in a session, you specify the default transaction isolation level for all transactions in the session.
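For example:

SET default_transaction_isolation = 'repeatable read';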
To set the isolation level for the current transaction, use the SET TRANSACTION SQL command. Be sure to set the isolation level before any SELECT, INSERT, DELETE, UPDATE, or COPY statement:
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
...
COMMIT;
You can also specify the isolation mode in a BEGIN statement:
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
Removing Dead Rows from Tables
Updating or deleting a row leaves an expired version of the row in the table. When an expired row is no longer referenced by any active transactions, it can be removed and the space it occupied can be reused. The VACUUM command marks the space used by expired rows for reuse.
When expired rows accumulate in a table, the disk files must be extended to accommodate new rows. Performance degrades due to the increased disk I/O required to run queries. This condition is called bloat and it should be managed by regularly vacuuming tables.
The VACUUM command (without FULL) can run concurrently with other queries. It marks the space previously used by the expired rows as free and updates the free space map. When LightDB-A Database later needs space for new rows, it first consults the table's free space map to find pages with available space. If none are found, new pages are appended to the file.
VACUUM (without FULL) does not consolidate pages or reduce the size of the table on disk. The space it recovers is only available through the free space map. To keep disk files from growing, it is important to run VACUUM often enough. How frequently VACUUM must run depends on the frequency of updates and deletes in the table (inserts only ever add new rows). Heavily updated tables might require several VACUUM runs per day to ensure that the available free space can be found through the free space map. It is also important to run VACUUM after a transaction that updates or deletes a large number of rows.
The VACUUM FULL command rewrites the table without expired rows, reducing the table to its minimum size. Every page in the table is checked, and visible rows are moved up into pages that are not yet fully packed. Empty pages are discarded. The table is locked until VACUUM FULL completes. This is very expensive compared to the regular VACUUM command, and can be avoided or postponed by vacuuming regularly. It is best to run VACUUM FULL during a maintenance period. An alternative to VACUUM FULL is to recreate the table with a CREATE TABLE AS statement and then drop the old table.
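A minimal sketch of that alternative, assuming a table named mytable; note that indexes, constraints, privileges, and the table's distribution policy must be recreated or specified explicitly:

CREATE TABLE mytable_compact AS SELECT * FROM mytable;   -- copies only the visible rows
DROP TABLE mytable;
ALTER TABLE mytable_compact RENAME TO mytable;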
You can run VACUUM VERBOSE tablename to get a report, by segment, of the number of dead rows removed, the number of pages affected, and the number of pages with usable free space.
Query the pg_class system table to find out how many pages a table is using across all segments. Be sure to ANALYZE the table first to get accurate data.
SELECT relname, relpages, reltuples FROM pg_class WHERE relname='<tablename>';
Another useful tool is the gp_bloat_diag view in the gp_toolkit schema, which identifies bloat in tables by comparing the actual number of pages used by a table to the expected number. See "The gp_toolkit Administrative Schema" in the LightDB-A Database Reference Guide for more about gp_bloat_diag.
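For example, to list the tables the view diagnoses as bloated:

SELECT * FROM gp_toolkit.gp_bloat_diag;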