CentOS or Red Hat

This section describes the steps needed to set up a multi-node Canopy cluster on your own Linux machines.

Steps to be executed on all nodes

1. Install LightDB + Canopy and initialize a cluster on all nodes
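As a sketch of this step (the package name and data directory below are assumptions for illustration; substitute the actual LightDB + Canopy distribution package and paths for your environment):

```shell
# Hypothetical package name -- replace with your actual LightDB + Canopy package
sudo yum install -y lightdb-canopy

# Initialize a database cluster as the lightdb user; the data
# directory path is an assumption -- adjust to your installation
sudo -i -u lightdb lt_initdb -D /usr/local/lightdb/data
```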

2. Configure connection and authentication


Before starting the database, let’s change its access permissions. In this step, we configure the client authentication file to allow all incoming connections from the local network.

vim lt_hba.conf
# Allow unrestricted access to nodes in the local network. The following ranges
# correspond to 24, 20, and 16-bit blocks in Private IPv4 address spaces.
host    all             all             10.0.0.0/8              trust
host    all             all             172.16.0.0/12           trust
host    all             all             192.168.0.0/16          trust

# Also allow the host unrestricted access to connect to itself
host    all             all             127.0.0.1/32            trust
host    all             all             ::1/128                 trust
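The nodes must also listen for connections on their network interfaces, not just on localhost. Assuming LightDB follows the standard PostgreSQL configuration layout (the file name lightdb.conf is an assumption; adjust to your installation), this would look like:

```
vim lightdb.conf
# Listen on all network interfaces so other nodes can connect
listen_addresses = '*'
```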

Note

Your DNS settings may differ. Also, these settings are too permissive for some environments; see our notes about Increasing Worker Security. The LightDB manual explains how to make them more restrictive.

3. Start database servers, create Canopy extension

You must add the Canopy extension to every database you would like to use in a cluster.
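A minimal sketch of this step (the lt_ctl invocation and data directory are assumptions; adjust to your installation):

```shell
# Start the database server (data directory path is an assumption)
sudo -i -u lightdb lt_ctl -D /usr/local/lightdb/data start

# Create the Canopy extension in each database you will use in the cluster
ltsql -c "CREATE EXTENSION canopy;"
```

Repeat the CREATE EXTENSION command in every database that will participate in the cluster.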

Steps to be executed on the coordinator node

The steps below must be run only on the coordinator node, after the previous steps have been completed on all nodes.

1. Add worker node information

We need to inform the coordinator about its workers. To add this information, we call a UDF which adds the node information to the pg_dist_node catalog table, which the coordinator uses to get the list of worker nodes. For our example, we assume that there are two workers (named worker-101, worker-102). Add the workers’ DNS names (or IP addresses) and server ports to the table.

# Register the hostname that future workers will use to connect
# to the coordinator node.
#
# You'll need to change the example, 'coord.example.com',
# to match the actual hostname

ltsql -c \
  "SELECT canopy_set_coordinator_host('coord.example.com', 5432);"

# Add the worker nodes.
#
# Similarly, you'll need to change 'worker-101' and 'worker-102' to the
# actual hostnames

ltsql -c "SELECT * FROM canopy_add_node('worker-101', 5432);"
ltsql -c "SELECT * FROM canopy_add_node('worker-102', 5432);"

2. Verify that installation has succeeded

To verify that the installation has succeeded, check that the coordinator node has picked up the desired worker configuration. The following command, when run in the ltsql shell, should output the worker nodes we added to the pg_dist_node table above.

SELECT * FROM canopy_get_active_worker_nodes();

Ready to use Canopy

At this point, you have completed the installation process and are ready to use your Canopy cluster. The new Canopy database is accessible in ltsql through the lightdb user:

sudo -i -u lightdb ltsql