Cascading replication, introduced with LightDB 21, enables a standby server to replicate from another standby server rather than directly from the primary, meaning replication changes "cascade" down through a hierarchy of servers. This can be used to reduce load on the primary and minimize bandwidth usage between sites. For more details, see the LightDB cascading replication documentation.
ltcluster supports cascading replication. When cloning a standby, set the command-line parameter --upstream-node-id to the node_id of the server the standby should connect to, and ltcluster will create recovery.conf to point to it. Note that if --upstream-node-id is not explicitly provided, ltcluster will set the standby's recovery.conf to point to the primary node.
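As a rough illustration (using the example nodes below, with node2 as the upstream and node3 as the cascaded standby), the generated recovery.conf will contain a primary_conninfo entry pointing at the upstream rather than the primary. The exact parameters are generated by ltcluster and will vary, so the following lines are only a sketch:
standby_mode = 'on'
primary_conninfo = 'host=node2 user=ltcluster application_name=node3'
recovery_target_timeline = 'latest'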
To demonstrate cascading replication, first ensure you have a primary and standby set up as shown in the Quick-start guide. Then create an additional standby server with ltcluster.conf looking like this:
node_id=3
node_name=node3
conninfo='host=node3 user=ltcluster dbname=ltcluster'
data_directory='/var/lib/lightdb/data'
Clone this standby (using the connection parameters for the existing standby), ensuring --upstream-node-id is provided with the node_id of the previously created standby (if following the example, this will be 2):
$ ltcluster -h node2 -U ltcluster -d ltcluster -f /etc/ltcluster.conf standby clone --upstream-node-id=2
NOTICE: using configuration file "/etc/ltcluster.conf"
NOTICE: destination directory "/var/lib/lightdb/data" provided
INFO: connecting to upstream node
INFO: connected to source node, checking its state
NOTICE: checking for available walsenders on upstream node (2 required)
INFO: sufficient walsenders available on upstream node (2 required)
INFO: successfully connected to source node
DETAIL: current installation size is 29 MB
INFO: creating directory "/var/lib/lightdb/data"...
NOTICE: starting backup (using lt_basebackup)...
HINT: this may take some time; consider using the -c/--fast-checkpoint option
INFO: executing: 'pg_basebackup -l "ltcluster base backup" -D /var/lib/lightdb/data -h node2 -U ltcluster -X stream '
NOTICE: standby clone (using pg_basebackup) complete
NOTICE: you can now start your LightDB server
HINT: for example: lt_ctl -D /var/lib/lightdb/data start
Then register it (note that --upstream-node-id must be provided here too):
$ ltcluster -f /etc/ltcluster.conf standby register --upstream-node-id=2
NOTICE: standby node "node3" (ID: 3) successfully registered
After starting the standby, the cluster will look like this, showing that node3 is attached to node2, not the primary (node1).
$ ltcluster -f /etc/ltcluster.conf cluster show
 ID | Name  | Role    | Status    | Upstream | Location | Connection string
----+-------+---------+-----------+----------+----------+--------------------------------------------
 1  | node1 | primary | * running |          | default  | host=node1 dbname=ltcluster user=ltcluster
 2  | node2 | standby |   running | node1    | default  | host=node2 dbname=ltcluster user=ltcluster
 3  | node3 | standby |   running | node2    | default  | host=node3 dbname=ltcluster user=ltcluster
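If you wish to confirm the cascade at the streaming replication level, you can query the replication status view on the intermediate standby (node2), where node3 should appear as a connected walsender. This assumes LightDB exposes the standard pg_stat_replication view inherited from PostgreSQL and that a psql-compatible client is available:
$ psql -h node2 -U ltcluster -d ltcluster -c 'SELECT application_name, state FROM pg_stat_replication;'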
Under some circumstances when setting up a cascading replication cluster, you may wish to clone a downstream standby whose upstream node does not yet exist. In this case you can clone from the primary (or another upstream node); provide the parameter --upstream-conninfo to explicitly set the upstream's primary_conninfo string in recovery.conf.
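For example, assuming the node definitions used above, node3 could be cloned from the primary (node1) while its replication configuration is pointed at a node2 that does not yet exist; the hosts and connection string here are illustrative and should be adjusted to your environment:
$ ltcluster -h node1 -U ltcluster -d ltcluster -f /etc/ltcluster.conf standby clone \
    --upstream-node-id=2 \
    --upstream-conninfo='host=node2 user=ltcluster dbname=ltcluster'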