11.1. ltclusterd configuration

11.1.1. Required configuration for automatic failover
11.1.2. Optional configuration for automatic failover
11.1.3. Configuring ltclusterd and pgbouncer to fence a failed primary node
11.1.4. LightDB service configuration
11.1.5. ltclusterd service configuration
11.1.6. Monitoring configuration
11.1.7. Applying configuration changes to ltclusterd

To use ltclusterd, its associated function library must be included via lightdb.conf with:

        shared_preload_libraries = 'ltcluster'

Changing this setting requires a restart of LightDB; for more details see the LightDB documentation.

The following configuration options apply to ltclusterd in all circumstances:

monitor_interval_secs

The interval (in seconds, default: 2) at which to check the availability of the upstream node.

connection_check_type

The option connection_check_type is used to select the method ltclusterd uses to determine whether the upstream node is available.

Possible values are:

  • ping (default) - uses PQping() to determine server availability
  • connection - determines server availability by attempting to make a new connection to the upstream node
  • query - determines server availability by executing an SQL statement on the node via the existing connection
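
For example, to check the upstream node every 5 seconds by opening a new connection, the following could be set in ltcluster.conf (values here are illustrative, not recommendations):

        monitor_interval_secs=5
        connection_check_type=connection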

reconnect_attempts

The number of attempts (default: 6) that will be made to reconnect to an unreachable upstream node before initiating a failover.

There will be an interval of reconnect_interval seconds between each reconnection attempt.

reconnect_interval

Interval (in seconds, default: 10) between attempts to reconnect to an unreachable upstream node.

The number of reconnection attempts is defined by the parameter reconnect_attempts.
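
Taken together, these two parameters determine how long ltclusterd waits before giving up on the upstream node; e.g. with the defaults shown below, roughly 6 × 10 = 60 seconds will elapse between the first failed connection check and the start of failover:

        reconnect_attempts=6
        reconnect_interval=10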

degraded_monitoring_timeout

Interval (in seconds) after which ltclusterd will terminate if either of the servers being monitored (the local node and/or the upstream node) is no longer available (degraded monitoring mode).

-1 (default) disables this timeout completely.
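
For example, to have ltclusterd terminate after one hour in degraded monitoring mode (an illustrative value):

        degraded_monitoring_timeout=3600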

See also ltcluster.conf.sample for an annotated sample configuration file.

11.1.1. Required configuration for automatic failover

The following ltclusterd options must be set in ltcluster.conf:

  • failover
  • promote_command
  • follow_command

Example:

          failover=automatic
          promote_command='/usr/bin/ltcluster standby promote -f /etc/ltcluster.conf --log-to-file'
          follow_command='/usr/bin/ltcluster standby follow -f /etc/ltcluster.conf --log-to-file --upstream-node-id=%n'

Details of each option are as follows:

failover

failover can be one of automatic or manual.

Note

If failover is set to manual, ltclusterd will not take any action if a failover situation is detected, and the node may need to be modified manually (e.g. by executing ltcluster standby follow).
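
For example, to manually attach a standby to a newly promoted primary (the node ID here is illustrative):

        ltcluster standby follow -f /etc/ltcluster.conf --upstream-node-id=2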

promote_command

The program or script defined in promote_command will be executed in a failover situation when ltclusterd determines that the current node is to become the new primary node.

Normally promote_command is set as ltcluster's ltcluster standby promote command.

Note

When invoking ltcluster standby promote (either directly via the promote_command, or in a script called via promote_command), --siblings-follow must not be included as a command line option for ltcluster standby promote.

It is also possible to provide a shell script to e.g. perform user-defined tasks before promoting the current node. In this case the script must at some point execute ltcluster standby promote to promote the node; if this is not done, ltcluster metadata will not be updated and ltcluster will no longer function reliably.

Example:

                promote_command='/usr/bin/ltcluster standby promote -f /etc/ltcluster.conf --log-to-file'

Note that the --log-to-file option will cause output generated by the ltcluster command, when executed by ltclusterd, to be logged to the same destination configured to receive log output for ltclusterd.

Note

ltcluster will not apply pg_bindir when executing promote_command or follow_command; these can be user-defined scripts so must always be specified with the full path.
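
If using a wrapper script as described above, a minimal sketch might look like the following (the script name and the pre-promotion task are hypothetical; the final ltcluster standby promote call is the essential part):

        #!/bin/sh
        # /usr/local/bin/promote-wrapper.sh (hypothetical)
        # perform any user-defined pre-promotion tasks here, e.g.
        # repointing a load balancer at this node

        # this call is mandatory: it promotes the node and updates
        # the ltcluster metadata
        exec /usr/bin/ltcluster standby promote -f /etc/ltcluster.conf --log-to-file

with the corresponding configuration:

        promote_command='/usr/local/bin/promote-wrapper.sh'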

follow_command

The program or script defined in follow_command will be executed in a failover situation when ltclusterd determines that the current node is to follow the new primary node.

Normally follow_command is set as ltcluster's ltcluster standby follow command.

The follow_command parameter should provide the --upstream-node-id=%n option to ltcluster standby follow; the %n will be replaced by ltclusterd with the ID of the new primary node. If this is not provided, ltcluster standby follow will attempt to determine the new primary by itself, but if the original primary comes back online after the new primary is promoted, there is a risk that ltcluster standby follow will result in the node continuing to follow the original primary.

It is also possible to provide a shell script to e.g. perform user-defined tasks before having the current node follow the new primary. In this case the script must at some point execute ltcluster standby follow to attach the node to the new primary; if this is not done, ltcluster metadata will not be updated and ltcluster will no longer function reliably.

Example:

          follow_command='/usr/bin/ltcluster standby follow -f /etc/ltcluster.conf --log-to-file --upstream-node-id=%n'

Note that the --log-to-file option will cause output generated by the ltcluster command, when executed by ltclusterd, to be logged to the same destination configured to receive log output for ltclusterd.

Note

ltcluster will not apply pg_bindir when executing promote_command or follow_command; these can be user-defined scripts so must always be specified with the full path.
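
A wrapper script for follow_command follows the same pattern; note that the %n placeholder must then be passed to the script and forwarded to ltcluster standby follow (script name hypothetical):

        follow_command='/usr/local/bin/follow-wrapper.sh %n'

        #!/bin/sh
        # /usr/local/bin/follow-wrapper.sh (hypothetical)
        # %n in follow_command is replaced by ltclusterd with the new
        # primary's node ID and arrives here as $1
        NEW_PRIMARY_ID="$1"
        # user-defined tasks could go here
        exec /usr/bin/ltcluster standby follow -f /etc/ltcluster.conf --log-to-file --upstream-node-id="$NEW_PRIMARY_ID"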

11.1.2. Optional configuration for automatic failover

The following configuration options can be used to fine-tune automatic failover:

priority

Indicates a preferred priority (default: 100) for promoting nodes; a value of zero prevents the node from being promoted to primary.

Note that the priority setting is only applied if two or more nodes are determined as promotion candidates; in that case the node with the higher priority is selected.

failover_validation_command

User-defined script to execute for an external mechanism to validate the failover decision made by ltclusterd.

Note

This option must be identically configured on all nodes.

One or more of the following parameter placeholders may be provided, which will be replaced by ltclusterd with the appropriate value:

  • %n: node ID
  • %a: node name
  • %v: number of visible nodes
  • %u: number of shared upstream nodes
  • %t: total number of nodes

See also: Failover validation.
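
A hypothetical configuration and validation script might look like this (the script path and the quorum rule are illustrative; a zero exit code tells ltclusterd to proceed, any other value causes the election to be rerun):

        failover_validation_command='/usr/local/bin/validate-failover.sh %n %v %t'

        #!/bin/sh
        # /usr/local/bin/validate-failover.sh (hypothetical)
        # $1 = node ID, $2 = number of visible nodes, $3 = total number of nodes
        VISIBLE="$2"
        TOTAL="$3"
        # example rule: only permit promotion if this node can see
        # more than half of the cluster
        if [ $(( VISIBLE * 2 )) -gt "$TOTAL" ]; then
            exit 0
        fi
        exit 1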

primary_visibility_consensus

If true, only continue with failover if no standbys (or the witness server, if present) have seen the primary node recently.

Note

This option must be identically configured on all nodes.

always_promote

Default: false.

If true, promote the local node even if its ltcluster metadata is not up-to-date.

Normally ltcluster expects its metadata (stored in the ltcluster.nodes table) to be up-to-date so ltclusterd can take the correct action during a failover. However it's possible that updates made on the primary may not have propagated to the standby (promotion candidate). In this case ltclusterd will default to not promoting the standby. This behaviour can be overridden by setting always_promote to true.

standby_disconnect_on_failover

In a failover situation, disconnect the local node's WAL receiver.

This option is available in LightDB 21 and later.

Note

This option must be identically configured on all nodes.

Additionally, the ltcluster user must be a superuser to use this option.

ltclusterd will refuse to start if this option is set but either of these prerequisites is not met.

See also: Standby disconnection on failover.

The following options can be used to further fine-tune failover behaviour. In practice it's unlikely these will need to be changed from their default values, but are available as configuration options should the need arise.

election_rerun_interval

If failover_validation_command is set, and the command returns an error, pause for the specified number of seconds (default: 15) before rerunning the election.

sibling_nodes_disconnect_timeout

If standby_disconnect_on_failover is true, the maximum length of time (in seconds, default: 30) to wait for other standbys to confirm they have disconnected their WAL receivers.
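
As an illustration, the following ltcluster.conf excerpt simply restates the default values of these two parameters:

        election_rerun_interval=15
        sibling_nodes_disconnect_timeout=30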

11.1.3. Configuring ltclusterd and pgbouncer to fence a failed primary node

For further details and a reference implementation, see the separate document Fencing a failed master node with ltclusterd and PgBouncer.

11.1.4. LightDB service configuration

If using automatic failover, ltclusterd currently needs to execute ltcluster standby follow to restart LightDB on standbys so that they follow a new primary.

To ensure this happens smoothly, it's essential to provide the service restart command appropriate to your operating system via service_restart_command in ltcluster.conf. If you don't do this, ltclusterd will default to using lt_ctl, which can result in unexpected problems, particularly on systemd-based systems.
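
For example, on a systemd-based system this might be (the service unit name is illustrative):

        service_restart_command='sudo systemctl restart lightdb'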

For more details, see service command settings.

11.1.5. ltclusterd service configuration

If you are intending to use the ltcluster daemon start and ltcluster daemon stop commands, the following parameters must be set in ltcluster.conf:

  • ltclusterd_service_start_command
  • ltclusterd_service_stop_command

Example (for ltcluster with LightDB 21 on CentOS 7):

        ltclusterd_service_start_command='sudo systemctl start ltcluster12'
        ltclusterd_service_stop_command='sudo systemctl stop ltcluster12'

For more details see the reference page for each command.

11.1.6. Monitoring configuration

To enable monitoring, set:

          monitoring_history=yes

in ltcluster.conf.

Monitoring data is written at the interval defined by the option monitor_interval_secs (see above).

For more details on monitoring, see Storing monitoring data. For information on monitoring standby disconnections, see Monitoring standby disconnections on the primary.

11.1.7. Applying configuration changes to ltclusterd

To apply configuration file changes to a running ltclusterd daemon, execute the operating system's ltclusterd service reload command, or, for instances which were started manually, send SIGHUP, e.g. kill -HUP `cat /tmp/ltclusterd.pid`.
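
For example, on a systemd-based system the reload might be performed with (unit name illustrative, and assuming the unit supports reload):

        sudo systemctl reload ltcluster12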

Tip

Check the ltclusterd log to see what changes were applied, or if any issues were encountered when reloading the configuration.

Note that only the following subset of configuration file parameters can be changed on a running ltclusterd daemon:

  • always_promote
  • async_query_timeout
  • child_nodes_check_interval
  • child_nodes_connected_include_witness
  • child_nodes_connected_min_count
  • child_nodes_disconnect_command
  • child_nodes_disconnect_min_count
  • child_nodes_disconnect_timeout
  • connection_check_type
  • conninfo
  • degraded_monitoring_timeout
  • event_notification_command
  • event_notifications
  • failover
  • failover_validation_command
  • follow_command
  • log_facility
  • log_file
  • log_level
  • log_status_interval
  • ltclusterd_standby_startup_timeout
  • monitor_interval_secs
  • monitoring_history
  • primary_notification_timeout
  • primary_visibility_consensus
  • promote_command
  • reconnect_attempts
  • reconnect_interval
  • retry_promote_interval_secs
  • sibling_nodes_disconnect_timeout
  • standby_disconnect_on_failover

The following set of configuration file parameters must be updated via ltcluster standby register --force, as they require changes to the ltcluster.nodes table so they are visible to all nodes in the replication cluster:

  • node_id
  • node_name
  • data_directory
  • location
  • priority

Note

After executing ltcluster standby register --force, ltclusterd must be restarted for the changes to take effect.
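
For example, after changing priority in a node's ltcluster.conf, the change could be applied and ltclusterd restarted as follows (paths illustrative; ltcluster daemon start/stop require the service commands described in Section 11.1.5 to be configured):

        ltcluster standby register --force -f /etc/ltcluster.conf
        ltcluster daemon stop -f /etc/ltcluster.conf
        ltcluster daemon start -f /etc/ltcluster.conf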