lt_distributed_dump.py

lt_distributed_dump.py — extract a distributed LightDB database into a directory containing a script file or other archive files

Synopsis

lt_distributed_dump.py [connection-option...] [option...]

Description

lt_distributed_dump.py is a utility for backing up a distributed LightDB database. It makes consistent backups only if the database is not being used concurrently. lt_distributed_dump.py does not block other users accessing the database (readers or writers).

lt_distributed_dump.py uses lt_dump to perform the actual dump.

lt_distributed_dump.py dumps only a single distributed database.

Dumps can be output in script or archive file formats. The script format is not recommended, because for a distributed database, after a distributed table is created on the coordinator, the names of the distributed sub-tables in the plain-text file will differ from the names of the sub-tables on the worker nodes.

Script dumps are plain-text files containing the SQL commands required to reconstruct the database to the state it was in at the time it was saved. To restore from such a script, feed it to ltsql. Script files can be used to reconstruct the database even on other machines and other architectures; with some modifications, even on other SQL database products.
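For example, assuming the plain-text dump inside the output directory is named db.sql (the file name here is illustrative), it could be restored with:

$ ltsql -d newdb -f dumpdir/db.sql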

The alternative archive file formats must be used with lt_distributed_restore.py to rebuild the database. They allow lt_distributed_restore.py to be selective about what is restored, or even to reorder the items prior to being restored. The archive file formats are designed to be portable across architectures.

When used with one of the archive file formats and combined with lt_distributed_restore.py, lt_distributed_dump.py provides a flexible archival and transfer mechanism. lt_distributed_dump.py can be used to backup an entire database, then lt_distributed_restore.py can be used to examine the archive and/or select which parts of the database are to be restored. The most flexible output file formats are the custom format (-Fc) and the directory format (-Fd). They allow for selection and reordering of all archived items, support parallel restoration, and are compressed by default. The directory format is the only format that supports parallel dumps.

lt_distributed_dump.py supports dumping plorasql packages and procedures, just as it dumps tables and other objects.

While running lt_distributed_dump.py, one should examine the output for any warnings (printed on standard error), especially in light of the limitations listed below.

Options

The following command-line options control the content and format of the output. Use '--folder' to specify a directory in which to store the dump files; the directory must not already exist. '--create' is not supported, because restoring a distributed database requires the target database to already exist.

-a
--data-only

Dump only the worker node data and schema definitions.

This option is similar to specifying --section=data.

-b
--blobs

Include large objects in the dump. This is the default behavior except when --schema, --table, or --schema-only is specified. The -b switch is therefore only useful to add large objects to dumps where a specific schema or table has been requested. Note that blobs are considered data and therefore will be included when --data-only is used, but not when --schema-only is.
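For example, to add large objects back to a dump that is limited to a single table (the table and database names are illustrative):

$ lt_distributed_dump.py -t mytab -b --folder dumpdir -d mydb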

-B
--no-blobs

Exclude large objects in the dump.

When both -b and -B are given, the behavior is to output large objects when data is being dumped; see the -b documentation.

-K
--recreate-schema

Output commands to drop schemas directly (in CASCADE mode) prior to outputting the commands for creating them; this is faster than the normal mode. (Unless --if-exists is also specified, restore might generate some harmless error messages, if any objects were not present in the destination database.)

This option is ignored when emitting an archive (non-text) output file. For the archive formats, you can specify the option when you call lt_distributed_restore.py.

-E encoding
--encoding=encoding

Create the dump in the specified character set encoding. By default, the dump is created in the database encoding. (Another way to get the same result is to set the LTCLIENTENCODING environment variable to the desired dump encoding.)
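For example, to create the dump in the UTF8 encoding regardless of the database encoding:

$ lt_distributed_dump.py -E UTF8 --folder dumpdir -d mydb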

-f dirname
--folder=dirname

Send output to the specified directory. The directory is created by lt_distributed_dump.py and must not already exist.

-F format
--format=format

Selects the format of the output. format can be one of the following:

p
plain

Output a plain-text SQL script file.

c
custom

Output a custom-format archive suitable for input into lt_restore. Together with the directory output format, this is the most flexible output format in that it allows manual selection and reordering of archived items during restore. This format is also compressed by default.

d
directory

Output a directory-format archive suitable for input into lt_distributed_restore.py (the default). This will create a directory with one file for each table and blob being dumped, plus a so-called Table of Contents file describing the dumped objects in a machine-readable format that lt_distributed_restore.py can read. A directory format archive can be manipulated with standard Unix tools; for example, files in an uncompressed archive can be compressed with the gzip tool (see the example after this list). This format is compressed by default and also supports parallel dumps.

t
tar

Output a tar-format archive suitable for input into lt_distributed_restore.py. The tar format is compatible with the directory format: extracting a tar-format archive produces a valid directory-format archive. However, the tar format does not support compression. Also, when using tar format the relative order of table data items cannot be changed during restore.
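As noted above for the directory format, files in an uncompressed directory-format archive can be compressed after the fact with standard tools; for example (the data file name shown is illustrative):

$ gzip dumpdir/3456.dat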

-j njobs
--jobs=njobs

Run the dump in parallel by dumping njobs tables simultaneously. This option may reduce the time needed to perform the dump but it also increases the load on the database server. You can only use this option with the directory output format because this is the only output format where multiple processes can write their data at the same time.

lt_dump will open njobs + 1 connections to the database, so make sure your max_connections setting is high enough to accommodate all connections.

Requesting exclusive locks on database objects while running a parallel dump could cause the dump to fail. The reason is that the lt_dump master process requests shared locks on the objects that the worker processes are going to dump later in order to make sure that nobody deletes them and makes them go away while the dump is running. If another client then requests an exclusive lock on a table, that lock will not be granted but will be queued waiting for the shared lock of the master process to be released. Consequently any other access to the table will not be granted either and will queue after the exclusive lock request. This includes the worker process trying to dump the table. Without any precautions this would be a classic deadlock situation. To detect this conflict, the lt_dump worker process requests another shared lock using the NOWAIT option. If the worker process is not granted this shared lock, somebody else must have requested an exclusive lock in the meantime and there is no way to continue with the dump, so lt_dump has no choice but to abort the dump.

For a consistent backup, the database server needs to support synchronized snapshots, a feature that was introduced in LightDB 9.2 for primary servers and 10 for standbys. With this feature, database clients can ensure they see the same data set even though they use different connections. lt_distributed_dump.py -j uses multiple database connections; it connects to the database once with the master process and once again for each worker job. Without the synchronized snapshot feature, the different worker jobs wouldn't be guaranteed to see the same data in each connection, which could lead to an inconsistent backup.
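For example, to run a directory-format dump with four parallel jobs (so that each underlying lt_dump opens njobs + 1 = 5 database connections):

$ lt_distributed_dump.py -Fd --folder dumpdir -d mydb -j 4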

-n name
--schema=name

Dump only schemas matching name; this selects both the schema itself, and all its contained objects. When this option is not specified, all non-system schemas in the target database will be dumped. Multiple schemas can be selected by writing multiple -n switches; see Examples below.

Note

When -n is specified, lt_distributed_dump.py makes no attempt to dump any other database objects that the selected schema(s) might depend upon. Therefore, there is no guarantee that the results of a specific-schema dump can be successfully restored by themselves into a clean database.

Note

Non-schema objects such as blobs are not dumped when -n is specified. You can add blobs back to the dump with the --blobs switch.

-N name
--exclude-schema=name

Do not dump any schemas matching name. The pattern is interpreted according to the same rules as for -n. -N can be given more than once to exclude schemas matching any of several patterns.

When both -n and -N are given, the behavior is to dump just the schemas that match at least one -n switch but no -N switches. If -N appears without -n, then schemas matching -N are excluded from what is otherwise a normal dump.
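For example, to dump only two schemas (the schema and database names are illustrative):

$ lt_distributed_dump.py -n sales -n marketing --folder dumpdir -d mydb

To exclude a single schema from an otherwise normal dump:

$ lt_distributed_dump.py -N internal --folder dumpdir -d mydb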

-O
--no-owner

Do not output commands to set ownership of objects to match the original database. By default, lt_distributed_dump.py issues ALTER OWNER or SET SESSION AUTHORIZATION statements to set ownership of created database objects. These statements will fail when the script is run unless it is started by a superuser (or the same user that owns all of the objects in the script). To make a script that can be restored by any user, but will give that user ownership of all the objects, specify -O.

This option is ignored when emitting an archive (non-text) output file. For the archive formats, you can specify the option when you call lt_distributed_restore.py.

-R
--no-reconnect

This option is obsolete but still accepted for backwards compatibility.

-s
--schema-only

Dump only the object definitions (schema) on the coordinator node, not data. The worker nodes are not dumped.

This option is the inverse of --data-only. It is similar to specifying --section=pre-data --section=post-data.

(Do not confuse this with the --schema option, which uses the word schema in a different meaning.)

-S username
--superuser=username

Specify the superuser user name to use when disabling triggers. This is relevant only if --disable-triggers is used. (Usually, it's better to leave this out, and instead start the resulting script as superuser.)

-t name
--table=name

Dump only tables with names matching name. Multiple tables can be selected by writing multiple -t switches; see Examples below.

As well as tables, this option can be used to dump the definition of matching views, materialized views, foreign tables, and sequences. It will not dump the contents of views or materialized views.

The -n and -N switches have no effect when -t is used, because tables selected by -t will be dumped regardless of those switches, and non-table objects will not be dumped.

Note

When -t is specified, lt_distributed_dump.py makes no attempt to dump any other database objects that the selected table(s) might depend upon. Therefore, there is no guarantee that the results of a specific-table dump can be successfully restored by themselves into a clean database.

Note

Writing -t tab dumps whichever table named tab is visible in your default search path. To select a table in a particular schema, write something like -t sch.tab.
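For example, to dump a table from a particular schema (the schema and table names are illustrative):

$ lt_distributed_dump.py -t public.mytab --folder dumpdir -d mydb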

-T name
--exclude-table=name

Do not dump any tables matching name. -T can be given more than once to exclude tables matching any of several patterns.

When both -t and -T are given, the behavior is to dump just the tables that match at least one -t switch but no -T switches. If -T appears without -t, then tables matching -T are excluded from what is otherwise a normal dump.

-v
--verbose

Specifies verbose mode. This will cause lt_dump to output detailed object comments and start/stop times to the dump file, and progress messages to standard error. It also causes lt_distributed_dump.py to log at the debug level.

-V
--version

Print the lt_distributed_dump.py version and exit.

-x
--no-privileges
--no-acl

Prevent dumping of access privileges (grant/revoke commands).

-Z 0..9
--compress=0..9

Specify the compression level to use. Zero means no compression. For the custom and directory archive formats, this specifies compression of individual table-data segments, and the default is to compress at a moderate level. For plain text output, setting a nonzero compression level causes the entire output file to be compressed, as though it had been fed through gzip; but the default is not to compress. The tar archive format currently does not support compression at all.
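For example, to create an uncompressed directory-format archive:

$ lt_distributed_dump.py -Fd -Z 0 --folder dumpdir -d mydb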

--binary-upgrade

Not supported yet.

--column-inserts
--attribute-inserts

Dump data as INSERT commands with explicit column names (INSERT INTO table (column, ...) VALUES ...). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-LightDB databases. Any error during restoring will cause only rows that are part of the problematic INSERT to be lost, rather than the entire table contents.
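With this option the dumped data takes a form similar to the following (the table and columns are illustrative):

INSERT INTO mytab (id, name) VALUES (1, 'one');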

--disable-dollar-quoting

This option disables the use of dollar quoting for function bodies, and forces them to be quoted using SQL standard string syntax.

--disable-triggers

This option is relevant only when creating a data-only dump. It instructs lt_distributed_dump.py to include commands to temporarily disable triggers on the target tables while the data is restored. Use this if you have referential integrity checks or other triggers on the tables that you do not want to invoke during data restore.

Presently, the commands emitted for --disable-triggers must be done as superuser. So, you should also specify a superuser name with -S, or preferably be careful to start the resulting script as a superuser.

This option is ignored when emitting an archive (non-text) output file. For the archive formats, you can specify the option when you call lt_restore.

--enable-row-security

This option is relevant only when dumping the contents of a table which has row security. By default, lt_dump called by lt_distributed_dump.py will set row_security to off, to ensure that all data is dumped from the table. If the user does not have sufficient privileges to bypass row security, then an error is thrown. This parameter instructs lt_dump called by lt_distributed_dump.py to set row_security to on instead, allowing the user to dump the parts of the contents of the table that they have access to.

Note that if you use this option currently, you probably also want the dump to be in INSERT format, as the COPY FROM during restore does not support row security.

--exclude-table-data=pattern

Not supported yet.

--extra-float-digits=ndigits

Use the specified value of extra_float_digits when dumping floating-point data, instead of the maximum available precision. Routine dumps made for backup purposes should not use this option.

--if-exists

Use conditional commands (i.e., add an IF EXISTS clause) when cleaning database objects. This option is not valid unless --clean is also specified.

--include-foreign-data=foreignserver

Not supported yet.

--lt-exclude-lightdb-objects

Exclude all built-in objects created by LightDB in the dump.

Note

For built-in tables or schemas created by LightDB, when -t or -n is given, the behavior is to ignore the matching table or schema when data is being dumped; see the -t and -n documentation.

--lt-dump-lightdb-tables

Not supported yet.

--lt-disable-auto-analyze

Do not automatically collect statistics when dumping indexes. (See CREATE INDEX for more details.)

--inserts

Dump data as INSERT commands (rather than COPY). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-LightDB databases. Any error during restoring will cause only rows that are part of the problematic INSERT to be lost, rather than the entire table contents. Note that the restore might fail altogether if you have rearranged column order. The --column-inserts option is safe against column order changes, though even slower.

--load-via-partition-root

When dumping data for a table partition, make the COPY or INSERT statements target the root of the partitioning hierarchy that contains it, rather than the partition itself. This causes the appropriate partition to be re-determined for each row when the data is loaded. This may be useful when restoring data on a server where rows do not always fall into the same partitions as they did on the original server. That could happen, for example, if the partitioning column is of type text and the two systems have different definitions of the collation used to sort the partitioning column.

It is best not to use parallelism when restoring from an archive made with this option, because lt_restore will not know exactly which partition(s) a given archive data item will load data into. This could result in inefficiency due to lock conflicts between parallel jobs, or perhaps even restore failures due to foreign key constraints being set up before all the relevant data is loaded.

--lock-wait-timeout=timeout

Do not wait forever to acquire shared table locks at the beginning of the dump. Instead fail if unable to lock a table within the specified timeout. The timeout may be specified in any of the formats accepted by SET statement_timeout. (Allowed formats vary depending on the server version you are dumping from, but an integer number of milliseconds is accepted by all versions.)
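For example, to give up rather than wait more than one minute for a table lock (the timeout is given here in milliseconds):

$ lt_distributed_dump.py --lock-wait-timeout=60000 --folder dumpdir -d mydb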

--no-comments

Do not dump comments.

--no-publications

Do not dump publications.

--no-security-labels

Do not dump security labels.

--no-subscriptions

Do not dump subscriptions.

--no-sync

By default, lt_dump called by lt_distributed_dump.py will wait for all files to be written safely to disk. This option causes lt_dump called by lt_distributed_dump.py to return without waiting, which is faster, but means that a subsequent operating system crash can leave the dump corrupt. Generally, this option is useful for testing but should not be used when dumping data from a production installation.

--no-tablespaces

Do not output commands to select tablespaces. With this option, all objects will be created in whichever tablespace is the default during restore.

This option is ignored when emitting an archive (non-text) output file. For the archive formats, you can specify the option when you call lt_distributed_restore.py.

--no-unlogged-table-data

Do not dump the contents of unlogged tables. This option has no effect on whether or not the table definitions (schema) are dumped; it only suppresses dumping the table data. Data in unlogged tables is always excluded when dumping from a standby server.

--on-conflict-do-nothing

Add ON CONFLICT DO NOTHING to INSERT commands. This option is not valid unless --inserts, --column-inserts or --rows-per-insert is also specified.
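For example, combined with --inserts:

$ lt_distributed_dump.py --inserts --on-conflict-do-nothing --folder dumpdir -d mydb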

--quote-all-identifiers

Force quoting of all identifiers. This option is recommended when dumping a database from a server whose LightDB major version is different from lt_dump's called by lt_distributed_dump.py, or when the output is intended to be loaded into a server of a different major version. By default, lt_dump quotes only identifiers that are reserved words in its own major version. This sometimes results in compatibility issues when dealing with servers of other versions that may have slightly different sets of reserved words. Using --quote-all-identifiers prevents such issues, at the price of a harder-to-read dump script.

--rows-per-insert=nrows

Dump data as INSERT commands (rather than COPY). Controls the maximum number of rows per INSERT command. The value specified must be a number greater than zero. Any error during restoring will cause only rows that are part of the problematic INSERT to be lost, rather than the entire table contents.
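For example, to emit at most 1000 rows per INSERT command:

$ lt_distributed_dump.py --rows-per-insert=1000 --folder dumpdir -d mydb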

--section=sectionname

Only dump the named section. The section name can be pre-data, data, or post-data. This option can be specified more than once to select multiple sections. The default is to dump all sections.

The data section contains actual table data, large-object contents, sequence values and data definition items in worker node. Post-data items include definitions of indexes, triggers, rules, and constraints other than validated check constraints in coordinator node. Pre-data items include all other data definition items in coordinator node.
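For example, to dump only the coordinator-side object definitions, skipping the data section:

$ lt_distributed_dump.py --section=pre-data --section=post-data --folder dumpdir -d mydb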

--serializable-deferrable

Not supported yet.

--snapshot=snapshotname

Not supported yet.

--strict-names

Not supported yet.

--use-set-session-authorization

Output SQL-standard SET SESSION AUTHORIZATION commands instead of ALTER OWNER commands to determine object ownership. This makes the dump more standards-compatible, but depending on the history of the objects in the dump, might not restore properly. Also, a dump using SET SESSION AUTHORIZATION will certainly require superuser privileges to restore correctly, whereas ALTER OWNER requires lesser privileges.

--parallel-num=number-of-lt_dump

Execute lt_dump in parallel; number-of-lt_dump specifies how many lt_dump processes run simultaneously.

--help

Show help about lt_distributed_dump.py command line arguments, and exit.

The following command-line options control the database connection parameters.

-d dbname
--dbname=dbname

Specifies the name of the database to connect to and dump. The dbname can't be a connection string.

If this is not specified, the environment variable LTDATABASE is used. If that is not set, the user name specified for the connection is used.

-h host
--host=host

Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket. The default is taken from the LTHOST environment variable, if set, else 'localhost'.

-p port
--port=port

Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections. Defaults to the LTPORT environment variable, if set, or '5432'.

-U username
--username=username

User name to connect as. Defaults to the LTUSER environment variable, if set, or current user.

-w
--no-password

Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.

-W
--password

Force lt_distributed_dump.py to prompt for a password before connecting to a database.

--role=rolename

Specifies a role name to be used to create the dump. This option causes lt_dump called by lt_distributed_dump.py to issue a SET ROLE rolename command after connecting to the database. It is useful when the authenticated user (specified by -U) lacks privileges needed by lt_dump called by lt_distributed_dump.py, but can switch to a role with the required rights. Some installations have a policy against logging in directly as a superuser, and use of this option allows dumps to be made without violating the policy.
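For example, to connect as one user but perform the dump with the privileges of another role (both names are illustrative):

$ lt_distributed_dump.py -U alice --role=dumper --folder dumpdir -d mydb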

The following command-line options control the logging parameters.

-l log-directory
--log-directory=log-directory

Specifies the log directory path. The default is '/tmp/ltAdminLogs'.

--log-level-console=log-level-console

Specifies the console log level.

--log-level-file=log-level-file

Specifies the file log level.

--log-filename=log-filename

Specifies the log file name. Default is 'lt_distributed_dump-%Y-%m-%d.log'.

Environment

LTDATABASE
LTHOST
LTOPTIONS
LTPORT
LTUSER

Default connection parameters.

Diagnostics

lt_dump called by lt_distributed_dump.py internally executes SELECT statements. If you have problems running lt_dump, make sure you are able to select information from the database using, for example, ltsql. Also, any default connection settings and environment variables used by the libpq front-end library will apply.

The database activity of lt_dump is normally collected by the statistics collector. If this is undesirable, you can set parameter track_counts to false via LTOPTIONS or the ALTER USER command.
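For example, assuming a POSIX shell and that LTOPTIONS is interpreted like libpq's options parameter:

$ LTOPTIONS='-c track_counts=off' lt_distributed_dump.py --folder dumpdir -d mydb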

Notes

If your database cluster has any local additions to the template1 database, be careful to dump with '--lt-exclude-lightdb-objects'.

When a data-only dump is chosen and the option --disable-triggers is used, lt_dump emits commands to disable triggers on user tables before inserting the data, and then commands to re-enable them after the data has been inserted. If the restore is stopped in the middle, the system catalogs might be left in the wrong state.

The dump file produced by lt_distributed_dump.py does not contain the statistics used by the optimizer to make query planning decisions. Therefore, it is wise to run ANALYZE after restoring from a dump file to ensure optimal performance; see Section 22.1.3 and Section 22.1.6 for more information.
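For example, after restoring into a database named newdb:

$ ltsql -d newdb -c 'ANALYZE;'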

Because lt_distributed_dump.py is used to transfer data to newer versions of LightDB, the output of lt_distributed_dump.py can be expected to load into LightDB server versions newer than lt_dump's version. lt_distributed_dump.py can also dump from LightDB servers older than its own version. (Currently, servers back to version 8.0 are supported.) However, lt_distributed_dump.py cannot dump from LightDB servers newer than its own major version; it will refuse to even try, rather than risk making an invalid dump. Also, it is not guaranteed that lt_distributed_dump.py's output can be loaded into a server of an older major version — not even if the dump was taken from a server of that version. Loading a dump file into an older server may require manual editing of the dump file to remove syntax not understood by the older server. Use of the --quote-all-identifiers option is recommended in cross-version cases, as it can prevent problems arising from varying reserved-word lists in different LightDB versions.

When dumping logical replication subscriptions, lt_distributed_dump.py will generate CREATE SUBSCRIPTION commands that use the connect = false option, so that restoring the subscription does not make remote connections for creating a replication slot or for initial table copy. That way, the dump can be restored without requiring network access to the remote servers. It is then up to the user to reactivate the subscriptions in a suitable way. If the involved hosts have changed, the connection information might have to be changed. It might also be appropriate to truncate the target tables before initiating a new full table copy.

Examples

To dump a database into a custom-format archive file:

$ lt_distributed_dump.py -Fc --folder dumpdir -d mydb --lt-exclude-lightdb-objects

To dump a database into a directory-format archive:

$ lt_distributed_dump.py -Fd --folder dumpdir -d mydb --lt-exclude-lightdb-objects

To dump a database into a directory-format archive in parallel with 5 worker jobs:

$ lt_distributed_dump.py -Fd --folder dumpdir -d mydb -j 5 --lt-exclude-lightdb-objects

To reload an archive file into a (freshly created) database named newdb:

$ lt_distributed_restore.py -d newdb --folder dumpdir

To reload an archive file into the same database it was dumped from, discarding the current contents of that database:

$ lt_distributed_restore.py -d postgres --clean --create --folder dumpdir

To dump a single table named mytab:

$ lt_distributed_dump.py -t mytab -d mydb --folder dumpdir --lt-exclude-lightdb-objects

See Also

lt_dump, lt_restore, lt_distributed_restore.py, ltsql