lt_upgrade — upgrade a LightDB server instance
lt_upgrade -b oldbindir [-B newbindir] -d oldconfigdir -D newconfigdir [option...]
lt_upgrade (formerly called lt_migrator) allows data stored in LightDB data files to be upgraded to a later LightDB major version without the data dump/reload typically required for major version upgrades, e.g., from 22.1 to 22.2.
Major LightDB releases regularly add new features that often change the layout of the system tables, but the internal data storage format rarely changes. lt_upgrade uses this fact to perform rapid upgrades by creating new system tables and simply reusing the old user data files. If a future major release ever changes the data storage format in a way that makes the old data format unreadable, lt_upgrade will not be usable for such upgrades. (We will attempt to avoid such situations.)
lt_upgrade does its best to make sure the old and new clusters are binary-compatible, e.g., by checking for compatible compile-time settings, including 32/64-bit binaries. It is important that any external modules are also binary compatible, though this cannot be checked by lt_upgrade.
lt_upgrade supports upgrades from 22.1 and later to the current major release of LightDB, including snapshot and beta releases.
lt_upgrade accepts the following command-line arguments:
-b bindir
--old-bindir=bindir
the old LightDB executable directory; environment variable LTBINOLD

-B bindir
--new-bindir=bindir
the new LightDB executable directory; default is the directory where lt_upgrade resides; environment variable LTBINNEW

-c
--check
check clusters only, don't change any data

-d configdir
--old-datadir=configdir
the old database cluster configuration directory; environment variable LTDATAOLD

-D configdir
--new-datadir=configdir
the new database cluster configuration directory; environment variable LTDATANEW

-j njobs
--jobs=njobs
number of simultaneous processes or threads to use

-k
--link
use hard links instead of copying files to the new cluster

-o options
--old-options options
options to be passed directly to the old lightdb command; multiple option invocations are appended

-O options
--new-options options
options to be passed directly to the new lightdb command; multiple option invocations are appended

-p port
--old-port=port
the old cluster port number; environment variable LTPORTOLD

-P port
--new-port=port
the new cluster port number; environment variable LTPORTNEW

-r
--retain
retain SQL and log files even after successful completion

-s dir
--socketdir=dir
directory to use for postmaster sockets during upgrade; default is current working directory; environment variable LTSOCKETDIR

-U username
--username=username
cluster's install user name; environment variable LTUSER

-v
--verbose
enable verbose internal logging

-V
--version
display version information, then exit

--clone
Use efficient file cloning (also known as “reflinks” on some systems) instead of copying files to the new cluster. This can result in near-instantaneous copying of the data files, giving the speed advantages of -k/--link while leaving the old cluster untouched.
File cloning is only supported on some operating systems and file systems. If it is selected but not supported, the lt_upgrade run will error. At present, it is supported on Linux (kernel 4.5 or later) with Btrfs and XFS (on file systems created with reflink support), and on macOS with APFS.

-?
--help
show help, then exit
These are the steps to perform an upgrade with lt_upgrade:
Optionally move the old cluster
It is necessary to move the current LightDB install
directory so it does not interfere with the new LightDB installation.
Once the current LightDB server is shut down, it is safe to rename the
LightDB installation directory; assuming the old directory is
/usr/local/lightdbx, you can do:
mv /usr/local/lightdbx /usr/local/lightdbx.old
to rename the directory.
Install the new LightDB binaries
Install the new server's binaries and support files. lt_upgrade is included in a default installation.
Initialize the new LightDB cluster
If your LightDB is stand-alone, initialize the new cluster using lt_initdb.
Again, use lt_initdb flags that are compatible with the old cluster, then add the --upgrade-mode option. There is no need to start the new cluster.
lt_initdb -p 5432 -D /home/lightdb/lightdbx/data --upgrade-mode
If your LightDB is deployed in a high-availability or distributed configuration, initialize the new cluster using lightdb-installer.
Before starting install.sh, you must modify the shell script script/8_lightdb_create_extension.sh and delete all executeSQL commands in it.
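For example, assuming each executeSQL invocation in that script occupies a single line (a sketch only; review the script before editing it this way):
sed -i '/executeSQL/d' script/8_lightdb_create_extension.sh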
Install custom shared object files
Install any custom shared object files used by the old cluster into the new cluster, e.g., ltcrypto.so, whether they are from contrib or some other source. Do not install the schema definitions, e.g., CREATE EXTENSION ltcrypto, because these will be upgraded from the old cluster.
You don't need to worry about built-in extensions; LightDB will take care of them.
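A minimal sketch, assuming the old installation was renamed as in the earlier step and that shared objects live directly under each installation's lib directory (adjust both paths to your actual layout):
cp /usr/local/lightdbx.old/lib/ltcrypto.so /usr/local/lightdbx/lib/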
Adjust authentication
lt_upgrade will connect to the old and new servers several times, so you might want to set authentication to peer in lt_hba.conf or use a ~/.pgpass file (see Section 32.15).
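For example, a peer entry for local socket connections might look like this in lt_hba.conf (a sketch assuming the file uses the usual host-based-authentication format; adjust the database and user fields to your setup):
local   all   all   peer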
Stop both servers
Make sure both database servers are stopped using, on Unix, e.g.:
lt_ctl -D /opt/LightDB/13.8-22.3 stop
lt_ctl -D /opt/LightDB/13 stop
Streaming replication and log-shipping standby servers can remain running until a later step.
Prepare for standby server upgrades
If you are upgrading standby servers using the methods outlined in Step 9, verify that the old standby servers are caught up by running lt_controldata against the old primary and standby clusters. Verify that the “Latest checkpoint location” values match in all clusters. (There will be a mismatch if old standby servers were shut down before the old primary or if the old standby servers are still running.)
Also, make sure wal_level is not set to minimal in the lightdb.conf file on the new primary cluster.
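For example, assuming lt_controldata accepts the data directory as an argument in the same way as pg_controldata, the value can be compared on each cluster with something like (the data directory path is illustrative):
lt_controldata /opt/LightDB/22.1/data | grep "Latest checkpoint location"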
Run lt_upgrade
Always run the lt_upgrade binary of the new server, not the old one.
lt_upgrade requires the specification of the old and new cluster's data and executable (bin) directories. You can also specify user and port values, and whether you want the data files linked or cloned instead of the default copy behavior.
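For example, a typical invocation looks like the following (the directory names are illustrative; substitute your actual old and new bin and data/configuration directories):
lt_upgrade --old-bindir=/opt/LightDB/22.1/bin --new-bindir=/opt/LightDB/22.2/bin \
           --old-datadir=/opt/LightDB/22.1/data --new-datadir=/opt/LightDB/22.2/data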
If you use link mode, the upgrade will be much faster (no file
copying) and use less disk space, but you will not be able to access
your old cluster
once you start the new cluster after the upgrade. Link mode also
requires that the old and new cluster data directories be in the
same file system. (Tablespaces and lt_wal
can be on
different file systems.)
Clone mode provides the same speed and disk space advantages but
does not cause the old cluster to be unusable once the new cluster
is started. Clone mode also requires that the old and new data
directories be in the same file system. This mode is only available
on certain operating systems and file systems.
The --jobs
option allows multiple CPU cores to be used
for copying/linking of files and to dump and restore database schemas
in parallel; a good place to start is the maximum of the number of
CPU cores and tablespaces. This option can dramatically reduce the
time to upgrade a multi-database server running on a multiprocessor
machine.
For Windows users, you must be logged into an administrative account, and then start a shell as the postgres user and set the proper path.
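For example (a sketch: the account name follows the sentence above and the installation path is illustrative, so adjust both to your environment):
RUNAS /USER:postgres "CMD.EXE"
SET PATH=%PATH%;C:\LightDB\bin;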
Once started, lt_upgrade will verify the two clusters are compatible and then do the upgrade. You can use lt_upgrade --check to perform only the checks, even if the old server is still running. lt_upgrade --check will also outline any manual adjustments you will need to make after the upgrade. If you are going to be using link or clone mode, you should use the option --link or --clone with --check to enable mode-specific checks.
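For example, a link-mode pre-check against the same illustrative directories as above would be:
lt_upgrade --check --link -b /opt/LightDB/22.1/bin -B /opt/LightDB/22.2/bin -d /opt/LightDB/22.1/data -D /opt/LightDB/22.2/data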
lt_upgrade
requires write permission in the current directory.
Obviously, no one should be accessing the clusters during the upgrade. lt_upgrade defaults to running servers on port 50432 to avoid unintended client connections. You can use the same port number for both clusters when doing an upgrade because the old and new clusters will not be running at the same time. However, when checking an old running server, the old and new port numbers must be different.
If an error occurs while restoring the database schema, lt_upgrade
will
exit and you will have to revert to the old cluster as outlined in Step 15
below. To try lt_upgrade
again, you will need to modify the old
cluster so the lt_upgrade schema restore succeeds. If the problem is a
contrib
module, you might need to uninstall the contrib
module from
the old cluster and install it in the new cluster after the upgrade,
assuming the module is not being used to store user data.
Upgrade streaming replication and log-shipping standby servers
If you used link mode and have Streaming Replication (see Section 24.2.5) or Log-Shipping (see Section 24.2) standby servers, you can follow these steps to quickly upgrade them. You will not be running lt_upgrade on the standby servers, but rather rsync on the primary. Do not start any servers yet.
If you did not use link mode, do not have or do not want to use rsync, or want an easier solution, skip the instructions in this section and simply recreate the standby servers once lt_upgrade completes and the new primary is running.
Install the new LightDB binaries on standby servers
Make sure the new binaries and support files are installed on all standby servers.
Make sure the new standby data directories do not exist
Make sure the new standby data directories do not exist or are empty. If lt_initdb was run, delete the standby servers' new data directories.
Install custom shared object files
Install the same custom shared object files on the new standbys that you installed in the new primary cluster.
Stop standby servers
If the standby servers are still running, stop them now using the above instructions.
Save configuration files
Save any configuration files from the old standbys' configuration directories you need to keep, e.g., lightdb.conf (and any files included by it), lightdb.auto.conf, lt_hba.conf, because these will be overwritten or removed in the next step.
Run rsync
When using link mode, standby servers can be quickly upgraded using rsync. To accomplish this, from a directory on the primary server that is above the old and new database cluster directories, run this on the primary for each standby server:
rsync --archive --delete --hard-links --size-only --no-inc-recursive old_cluster new_cluster remote_dir
where old_cluster and new_cluster are relative to the current directory on the primary, and remote_dir is above the old and new cluster directories on the standby. The directory structure under the specified directories on the primary and standbys must match. Consult the rsync manual page for details on specifying the remote directory, e.g.,
rsync --archive --delete --hard-links --size-only --no-inc-recursive /opt/LightDB/22.1 \
      /opt/LightDB/22.2 standby.example.com:/opt/LightDB
You can verify what the command will do using
rsync's --dry-run
option. While
rsync must be run on the primary for at least one
standby, it is possible to run rsync on an upgraded
standby to upgrade other standbys, as long as the upgraded standby
has not been started.
What this does is to record the links created by lt_upgrade's link mode that connect files in the old and new clusters on the primary server. It then finds matching files in the standby's old cluster and creates links for them in the standby's new cluster. Files that were not linked on the primary are copied from the primary to the standby. (They are usually small.) This provides rapid standby upgrades. Unfortunately, rsync needlessly copies files associated with temporary and unlogged tables because these files don't normally exist on standby servers.
If you have tablespaces, you will need to run a similar rsync command for each tablespace directory, e.g.:
rsync --archive --delete --hard-links --size-only --no-inc-recursive /home/lightdbx/LT_22.1_201510051 \
      /home/lightdbx/LT_22.2_201608131 standby.example.com:/home/lightdbx
If you have relocated lt_wal
outside the data
directories, rsync must be run on those directories
too.
Configure streaming replication and log-shipping standby servers
Configure the servers for log shipping. (You do not need to run
pg_start_backup()
and pg_stop_backup()
or take a file system backup as the standbys are still synchronized
with the primary.) Replication slots are not copied and must
be recreated.
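For example, a physical replication slot can be recreated on the new primary with a statement like the following, assuming ltsql accepts the usual -c switch (the slot name standby1 is illustrative; the pg_* function name follows the same convention as the pg_start_backup()/pg_stop_backup() functions mentioned above):
ltsql --username=lightdb -c "SELECT pg_create_physical_replication_slot('standby1');" postgres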
Restore lt_hba.conf
If you modified lt_hba.conf, restore its original settings. It might also be necessary to adjust other configuration files in the new cluster to match the old cluster, e.g., lightdb.conf (and any files included by it), lightdb.auto.conf.
Start the new server
The new server can now be safely started, and then any rsync'ed standby servers.
Post-upgrade processing
If any post-upgrade processing is required, lt_upgrade will issue warnings as it completes. It will also generate script files that must be run by the administrator. The script files will connect to each database that needs post-upgrade processing. Each script should be run using:
ltsql --username=lightdb --file=script.sql postgres
The scripts can be run in any order and can be deleted once they have been run.
In general it is unsafe to access tables referenced in rebuild scripts until the rebuild scripts have run to completion; doing so could yield incorrect results or poor performance. Tables not referenced in rebuild scripts can be accessed immediately.
Statistics
Because optimizer statistics are not transferred by lt_upgrade, you will be instructed to run a command to regenerate that information at the end of the upgrade. You might need to set connection parameters to match your new cluster.
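If you prefer to regenerate statistics manually instead of using the generated command, a minimal per-database sketch is the following, assuming ltsql accepts the usual -c switch (run the equivalent against each database in the new cluster):
ltsql --username=lightdb -c "ANALYZE;" postgres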
Delete old cluster
Once you are satisfied with the upgrade, you can delete the old
cluster's data directories by running the script mentioned when
lt_upgrade
completes. (Automatic deletion is not
possible if you have user-defined tablespaces inside the old data
directory.) You can also delete the old installation directories
(e.g., bin
, share
).
Reverting to old cluster
If, after running lt_upgrade
, you wish to revert to the old cluster,
there are several options:
If the --check
option was used, the old cluster
was unmodified; it can be restarted.
If the --link
option was not
used, the old cluster was unmodified; it can be restarted.
If the --link
option was used, the data
files might be shared between the old and new cluster:
If lt_upgrade
aborted before linking started,
the old cluster was unmodified; it can be restarted.
If you did not start the new cluster, the old cluster was unmodified except that, when linking started, a .old suffix was appended to $LTDATA/global/lt_control. To reuse the old cluster, remove the .old suffix from $LTDATA/global/lt_control (see the example after this list); you can then restart the old cluster.
If you did start the new cluster, it has written to shared files and it is unsafe to use the old cluster. The old cluster will need to be restored from backup in this case.
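For the case above where only the control file was renamed, restoring it is a single rename (here $LTDATA is the old cluster's data directory), as in:
mv $LTDATA/global/lt_control.old $LTDATA/global/lt_control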
lt_upgrade creates various working files, such as schema dumps, in the current working directory. For security, be sure that that directory is not readable or writable by any other users.
lt_upgrade launches short-lived postmasters in
the old and new data directories. Temporary Unix socket files for
communication with these postmasters are, by default, made in the current
working directory. In some situations the path name for the current
directory might be too long to be a valid socket name. In that case you
can use the -s
option to put the socket files in some
directory with a shorter path name. For security, be sure that that
directory is not readable or writable by any other users.
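For example, to place the sockets under /tmp (any short, private directory path will do):
lt_upgrade -s /tmp [option...]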
All failure, rebuild, and reindex cases will be reported by lt_upgrade if they affect your installation; post-upgrade scripts to rebuild tables and indexes will be generated automatically. If you are trying to automate the upgrade of many clusters, you should find that clusters with identical database schemas require the same post-upgrade steps for all cluster upgrades; this is because the post-upgrade steps are based on the database schemas, and not user data.
LightDB has built-in extensions and built-in scheduled tasks. These are described in lt_extension.dat and lt_cron.dat. You cannot modify these two files, which are maintained by LightDB. lt_upgrade uses them to upgrade and update built-in extensions and scheduled tasks. Starting from LightDB 22.3, lt_extension.dat and lt_cron.dat are merged into one file, lt_builtin.dat, but lt_extension.dat and lt_cron.dat are still retained.
Starting from LightDB 22.3, lt_upgrade will check the consistency of lightdb_syntax_compatible_type. The value in the old cluster must be the same as that in the new cluster.
Starting from LightDB 22.3, lt_upgrade will create the database lt_test in the new cluster after the upgrade.
Starting from LightDB 22.3, an empty string is stored as a null value when the value of lightdb_syntax_compatible_type is 'oracle'. Therefore, if you are upgrading your cluster from 22.2 to 22.3 and both clusters use the 'oracle' compatibility mode, it is not recommended to use lt_upgrade.
For deployment testing, create a schema-only copy of the old cluster, insert dummy data, and upgrade that.
lt_upgrade does not support upgrading of databases containing table columns using these reg* OID-referencing system data types: regcollation, regconfig, regdictionary, regnamespace, regoper, regoperator, regproc, and regprocedure. (regclass, regrole, and regtype can be upgraded.)
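To locate affected columns ahead of time, a query along the following lines can be run in each database via ltsql (a sketch assuming the system catalogs keep the usual pg_catalog names, consistent with the pg_* function references elsewhere on this page, and that the regcollation type exists in your release):
SELECT n.nspname AS schema_name, c.relname AS table_name, a.attname AS column_name
FROM pg_catalog.pg_attribute a
JOIN pg_catalog.pg_class c ON a.attrelid = c.oid
JOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid
WHERE c.relkind IN ('r', 'm', 'p')
  AND a.attnum > 0
  AND NOT a.attisdropped
  AND a.atttypid IN ('regcollation'::regtype, 'regconfig'::regtype, 'regdictionary'::regtype,
                     'regnamespace'::regtype, 'regoper'::regtype, 'regoperator'::regtype,
                     'regproc'::regtype, 'regprocedure'::regtype)
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');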
If you want to use link mode and you do not want your old cluster
to be modified when the new cluster is started, consider using the clone mode.
If that is not available, make a copy of the
old cluster and upgrade that in link mode. To make a valid copy
of the old cluster, use rsync
to create a dirty
copy of the old cluster while the server is running, then shut down
the old server and run rsync --checksum
again to update the
copy with any changes to make it consistent. (--checksum
is necessary because rsync
only has file modification-time
granularity of one second.) You might want to exclude some
files, e.g., lightdb.pid
, as documented in Section 23.3.3. If your file system supports
file system snapshots or copy-on-write file copies, you can use that
to make a backup of the old cluster and tablespaces, though the snapshot
and copies must be created simultaneously or while the database server
is down.
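A minimal sketch of making such a copy (directory names are illustrative; the first rsync runs while the old server is still up, the second after it has been shut down):
rsync --archive /opt/LightDB/22.1/data/ /opt/LightDB/22.1_copy/
lt_ctl -D /opt/LightDB/22.1/data stop
rsync --archive --checksum /opt/LightDB/22.1/data/ /opt/LightDB/22.1_copy/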