ltcluster standby register — add a standby's information to the ltcluster metadata
ltcluster standby register adds a standby's information to the ltcluster metadata. This command must be executed to enable promote/follow operations and to allow ltclusterd to work with the node.
An existing standby can be registered using this command. Execute with the --dry-run option to check what would happen without actually registering the standby.
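For example, a dry run might look like this (the configuration file path is illustrative):

```shell
# Check what "standby register" would do without modifying the metadata
ltcluster -f /etc/ltcluster.conf standby register --dry-run
```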
If providing the configuration file location with -f/--config-file, avoid using a relative path, as ltcluster stores the configuration file location in the ltcluster metadata for use when ltcluster is executed remotely (e.g. during ltcluster standby switchover). ltcluster will attempt to convert a relative path into an absolute one, but this may not be the same as the path you would explicitly provide (e.g. ./ltcluster.conf might be converted to /path/to/./ltcluster.conf, whereas you'd normally write /path/to/ltcluster.conf).
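To avoid this, provide the configuration file location as an absolute path (path shown is illustrative):

```shell
# An absolute path is stored as-is in the ltcluster metadata and can be
# reused when ltcluster is executed remotely (e.g. during a switchover)
ltcluster -f /etc/ltcluster/ltcluster.conf standby register
```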
By default, ltcluster will wait 30 seconds for the standby to become available before aborting with a connection error. This is useful when setting up a standby from a script, as the standby may not have fully started up by the time ltcluster standby register is executed. To change the timeout, pass the desired value with the --wait-start option. A value of 0 disables the timeout. The timeout is ignored if -F/--force was provided.
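For example (timeout values are illustrative):

```shell
# Wait up to 60 seconds for the standby to become available
ltcluster -f /etc/ltcluster.conf standby register --wait-start=60

# Disable the startup timeout entirely
ltcluster -f /etc/ltcluster.conf standby register --wait-start=0
```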
Depending on your environment and workload, it may take some time for the standby's node record to propagate from the primary to the standby. Some actions (such as starting ltclusterd) require that the standby's node record is present and up-to-date to function correctly.
By providing the option --wait-sync to the ltcluster standby register command, ltcluster will wait until the record is synchronised before exiting. An optional timeout (in seconds) can be added to this option (e.g. --wait-sync=60).
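For example, this is useful in provisioning scripts which start ltclusterd immediately after registration (timeout value is illustrative):

```shell
# Wait up to 60 seconds for the node record to synchronise to the standby
ltcluster -f /etc/ltcluster.conf standby register --wait-sync=60
```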
Under some circumstances you may wish to register a standby which is not yet running; this can be the case when using provisioning tools to create a complex replication cluster, or if the node was not cloned by ltcluster.
In this case, by using the -F/--force option and providing the connection parameters to the primary server, the standby can be registered even if it has not yet been started. Connection parameters can be provided either as a conninfo string (e.g. -d 'host=node1 user=ltcluster') or as individual connection parameters (-h/--host, -d/--dbname, -U/--user, -p/--port etc.).
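A forced registration of a not-yet-running standby might look like this (host, port, user and database names are illustrative):

```shell
# Register a standby which is not yet running; connection parameters
# point at the primary server, since the standby cannot be queried
ltcluster -f /etc/ltcluster.conf standby register -F \
  -h node1 -p 5432 -U ltcluster -d ltcluster
```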
Similarly, with cascading replication it may be necessary to register a standby whose upstream node has not yet been registered. In this case, using -F/--force will result in the creation of an inactive placeholder record for the upstream node, which will itself later need to be registered with the -F/--force option.
When used with ltcluster standby register, care should be taken that use of the -F/--force option does not result in an incorrectly configured cluster.
If you've cloned a standby using another method (e.g. barman's barman recover command), register the node as detailed in the section Registering an inactive node, then execute ltcluster standby clone --replication-conf-only to generate the appropriate replication configuration.
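The two steps above can be sketched as follows (configuration file path is illustrative):

```shell
# After cloning with "barman recover": register the inactive node first,
# then generate the appropriate replication configuration
ltcluster -f /etc/ltcluster.conf standby register --force
ltcluster -f /etc/ltcluster.conf standby clone --replication-conf-only
```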
--dry-run
  Check prerequisites but don't actually register the standby.

-F/--force
  Overwrite an existing node record.

--upstream-node-id
  ID of the upstream node to replicate from (optional).

--wait-start
  Wait for the standby to start (timeout in seconds, default 30 seconds).

--wait-sync
  Wait for the node record to synchronise to the standby (optional timeout in seconds).
A standby_register event notification will be generated immediately after the node record is updated on the primary. If the --wait-sync option is provided, a standby_register_sync event notification will be generated immediately after the node record has synchronised to the standby.
If provided, ltcluster will substitute the placeholder %p with the node ID of the primary node, %c with its conninfo string, and %a with its node name.
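As a sketch, assuming an event_notification_command parameter in ltcluster.conf and a hypothetical notify.sh script, the placeholders could be used like this:

```shell
# Hypothetical event notification setting in ltcluster.conf;
# %p (primary node ID), %c (conninfo string) and %a (node name)
# are expanded by ltcluster before the command is executed
event_notification_command='/usr/local/bin/notify.sh %p "%c" %a'
```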