ltcluster standby clone — clone a LightDB standby node from another LightDB node
ltcluster standby clone clones a LightDB node from another LightDB node, typically the primary, but optionally from any other node in the cluster. It creates the replication configuration required to attach the cloned node to the primary node (or to another standby, if cascading replication is in use).

ltcluster standby clone does not start the standby; after cloning a standby, the command ltcluster standby register must be executed to notify ltcluster of its existence.
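For example, an initial clone followed by registration might look like the following sketch; the hostname, user and database names and the ltcluster.conf location are illustrative assumptions, and the standby is started with whatever service mechanism is normally used on the system:

    ltcluster -d 'host=node1 user=ltcluster dbname=ltcluster' -f /etc/ltcluster.conf standby clone
    # start the cloned standby using the usual service mechanism, then:
    ltcluster -f /etc/ltcluster.conf standby register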
Note that by default, all configuration files in the source node's data directory will be copied to the cloned node. Typically these will be lightdb.conf, lightdb.auto.conf, lt_hba.conf and lt_ident.conf. These may require modification before the standby is started.
In some cases (e.g. on Debian or Ubuntu Linux installations), LightDB's configuration files are located outside of the data directory and will not be copied by default. ltcluster can copy these files, either to the same location on the standby server (provided appropriate directory and file permissions are available), or into the standby's data directory. This requires passwordless SSH access to the primary server. Add the option --copy-external-config-files to the ltcluster standby clone command; by default files will be copied to the same path as on the upstream server. Note that the user executing ltcluster must have write access to those directories.

To have the configuration files placed in the standby's data directory, specify --copy-external-config-files=pgdata, but note that any include directives in the copied files may need to be updated.
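As an illustrative sketch (the connection string and ltcluster.conf location are assumptions), the clone command with external configuration file copying might be invoked as:

    ltcluster -d 'host=node1 user=ltcluster dbname=ltcluster' -f /etc/ltcluster.conf standby clone --copy-external-config-files

or, to place the copied files in the standby's data directory:

    ltcluster -d 'host=node1 user=ltcluster dbname=ltcluster' -f /etc/ltcluster.conf standby clone --copy-external-config-files=pgdata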
When executing ltcluster standby clone with the --copy-external-config-files and --dry-run options, ltcluster will check the SSH connection to the source node, but will not verify whether the files can actually be copied. During the actual clone operation, a check will be made before the database itself is cloned to determine whether the files can actually be copied; if any problems are encountered, the clone operation will be aborted, enabling the user to fix any issues before retrying the clone.

For reliable configuration file management we recommend using a configuration management tool such as Ansible, Chef, Puppet or Salt.
By default, ltcluster will create a minimal replication configuration containing the following parameters:

    primary_conninfo
    primary_slot_name (if replication slots are in use)
The following additional parameters can be specified in ltcluster.conf for inclusion in the replication configuration:

    restore_command
    archive_cleanup_command
    recovery_min_apply_delay
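For illustration, an ltcluster.conf fragment setting these parameters might look like the following sketch; the archive path and the use of lt_archivecleanup are assumptions about a hypothetical WAL archive setup, not requirements:

    # hypothetical WAL archive layout; adjust to the actual archive location
    restore_command = 'cp /path/to/wal-archive/%f %p'
    archive_cleanup_command = 'lt_archivecleanup /path/to/wal-archive %r'
    recovery_min_apply_delay = '5min'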
When initially cloning a standby, you will need to ensure that all required WAL files remain available while the cloning is taking place. To ensure this happens when using the default lt_basebackup method, ltcluster will set lt_basebackup's --wal-method parameter to stream, which will ensure all WAL files generated during the cloning process are streamed in parallel with the main backup. Note that this requires two replication connections to be available (ltcluster will verify sufficient connections are available before attempting to clone, and this can be checked before performing the clone using the --dry-run option).
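For example, the clone prerequisites, including the availability of sufficient replication connections, can be verified in advance with a command along these lines (the connection string and ltcluster.conf location are assumptions):

    ltcluster -d 'host=node1 user=ltcluster dbname=ltcluster' -f /etc/ltcluster.conf standby clone --dry-run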
To override this behaviour, in ltcluster.conf set lt_basebackup's --wal-method parameter to fetch:

    pg_basebackup_options='--wal-method=fetch'

and ensure that wal_keep_segments (LightDB 21 and later: wal_keep_size) is set to an appropriately high value. Note however that this is not a particularly reliable way of ensuring sufficient WAL is retained and is not recommended.
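If the fetch method is used regardless, the retention setting on the upstream node might look something like the following in lightdb.conf; the values shown are arbitrary illustrations, not recommendations:

    wal_keep_size = '8GB'       # LightDB 21 and later
    wal_keep_segments = 512     # earlier versions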
See the lt_basebackup documentation for details.
To ensure that WAL files are placed in a directory outside of the main data directory (e.g. to keep them on a separate disk for performance reasons), specify the location with --waldir in the ltcluster.conf parameter pg_basebackup_options, e.g.:

    pg_basebackup_options='--waldir=/path/to/wal-directory'

This setting will also be honored by ltcluster when cloning from Barman (ltcluster 5.2 and later).
ltcluster supports standbys cloned by another method (e.g. using barman's barman recover command). To integrate the standby as a ltcluster node, once the standby has been cloned, ensure the ltcluster.conf file is created for the node, and that it has been registered using ltcluster standby register.

To register a standby which is not running, execute ltcluster standby register --force and provide the connection details for the primary. See Registering an inactive node for more details.
Then execute the command ltcluster standby clone --replication-conf-only. This will create the recovery.conf file needed to attach the node to its upstream (in LightDB 21 and later: append the replication configuration to lightdb.auto.conf), and will also create a replication slot on the upstream node if required.

The upstream node must be running so the correct replication configuration can be obtained. If the standby is running, the replication configuration will not be written unless the -F/--force option is provided.
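Put together, integrating a standby cloned outside of ltcluster might look roughly like the following sketch; the connection string and file locations are illustrative assumptions:

    # on the standby, after it has been cloned (e.g. with "barman recover")
    # and an ltcluster.conf file has been created for the node:
    ltcluster -d 'host=node1 user=ltcluster dbname=ltcluster' -f /etc/ltcluster.conf standby register --force
    ltcluster -d 'host=node1 user=ltcluster dbname=ltcluster' -f /etc/ltcluster.conf standby clone --replication-conf-only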
Execute ltcluster standby clone --replication-conf-only --dry-run to check the prerequisites for creating the recovery configuration, and to display the configuration changes which would be made without actually making any changes.

In LightDB 21 and later, the LightDB configuration must be reloaded for replication configuration changes to take effect.
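A minimal sketch of such a reload, assuming LightDB provides an lt_ctl utility analogous to PostgreSQL's pg_ctl and that the data directory is in the location shown:

    lt_ctl -D /var/lib/lightdb/data reload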
-d, --dbname=CONNINFO
Connection string of the upstream node to use for cloning.
--dry-run
Check prerequisites but don't actually clone the standby.
If --replication-conf-only is specified, the contents of the generated recovery configuration will be displayed but not written.
-c, --fast-checkpoint
Force fast checkpoint (not effective when cloning from Barman).
--copy-external-config-files[={samepath|pgdata}]
Copy configuration files located outside the data directory on the source node to the same path on the standby (default) or to the LightDB data directory.
--no-upstream-connection
When using Barman, do not connect to the upstream node.
--recovery-min-apply-delay
Set the LightDB configuration parameter recovery_min_apply_delay to the provided value. This overrides any recovery_min_apply_delay provided via ltcluster.conf. For more details on this parameter, see: recovery_min_apply_delay.
-R, --remote-user=USERNAME
Remote system username for SSH operations (default: current local system username).
--replication-conf-only
Create recovery configuration for a previously cloned instance. In LightDB 21 and later, the replication configuration will be written to lightdb.auto.conf.
--replication-user
User to make replication connections with (optional, not usually required).
--superuser
If the ltcluster user is not a superuser, the name of a valid superuser must be provided with this option.
--upstream-conninfo
primary_conninfo value to include in the recovery configuration when the intended upstream server does not yet exist. Note that ltcluster may modify the provided value, in particular to set the correct application_name.
--upstream-node-id
ID of the upstream node to replicate from (optional, defaults to the primary node).
--verify-backup
Verify a cloned node using the lt_verifybackup utility (LightDB 21 and later).
This option can currently only be used when cloning directly from an upstream node.
--without-barman
Do not use Barman even if configured.
A standby_clone event notification will be generated.
See cloning standbys for details about various aspects of cloning.