Migrating gphdfs External Tables to PXF
You may be using the gphdfs external table protocol in a LightDB-A Database version 4 or 5 cluster to access data stored in Hadoop. LightDB-A Database version 6 removes support for the gphdfs protocol. To maintain access to Hadoop data in LightDB-A 6, you must migrate your gphdfs external tables to use the LightDB-A Platform Extension Framework (PXF). This involves setting up PXF and creating new external tables that use the pxf external table protocol.
To migrate your gphdfs external tables to use the pxf external table protocol, you:
- Prepare for the migration.
- Map configuration properties, and then set up PXF.
- Create a new pxf external table to replace each gphdfs external table.
- Verify access to Hadoop files with the pxf external tables.
- Remove the gphdfs external tables.
- Revoke privileges to the gphdfs protocol.
- Migrate data to LightDB-A 6.
If you are migrating gphdfs external tables from a LightDB-A Database 5 installation, you perform the migration in the order above in your LightDB-A 5 cluster before you migrate data to LightDB-A 6.
If you are migrating gphdfs external tables from a LightDB-A Database 4 installation, you perform the migration in a similar order. However, since PXF is not available in LightDB-A Database 4, you must perform certain actions in the LightDB-A 6 installation before you migrate the data:
- LightDB-A 4: Prepare for the migration.
- LightDB-A 6:
  - Install and configure the LightDB-A 6 software.
  - Map configuration properties, and then install and set up PXF.
  - Create a new pxf external table to replace each gphdfs external table.
  - Verify access to Hadoop files with the pxf external tables.
- LightDB-A 4:
  - Migrate LightDB-A 4 data to LightDB-A 6.
Preparing for the Migration
As you prepare for migrating from gphdfs to PXF:
- Determine which gphdfs tables you want to migrate. You can run the following query to list the gphdfs external tables in a database:

  SELECT n.nspname, d.objid::regclass as tablename
  FROM pg_depend d
    JOIN pg_exttable x ON ( d.objid = x.reloid )
    JOIN pg_extprotocol p ON ( p.oid = d.refobjid )
    JOIN pg_class c ON ( c.oid = d.objid )
    JOIN pg_namespace n ON ( c.relnamespace = n.oid )
  WHERE d.refclassid = 'pg_extprotocol'::regclass AND p.ptcname = 'gphdfs';
- For each table that you choose to migrate, identify the format of the external data and the column definitions. Also identify the options with which the gphdfs table was created. You can use the \d+ SQL meta-command to obtain this information. For example:

  \d+ public.gphdfs_writable_parquet
                External table "public.gphdfs_writable_parquet"
   Column |  Type   | Modifiers | Storage  | Description
  --------+---------+-----------+----------+-------------
   id     | integer |           | plain    |
   msg    | text    |           | extended |
  Type: writable
  Encoding: UTF8
  Format type: parquet
  Format options: formatter 'gphdfs_export'
  External options: {}
  External location: gphdfs://hdfshost:8020/data/dir1/gphdfs_writepq?codec=GZIP
  Execute on: all segments
- Save the information that you gathered above.
Setting Up PXF
PXF does not use the following gphdfs configuration options:
gphdfs Configuration Option | Description | pxf Consideration |
---|---|---|
HADOOP_HOME | Environment variable that identifies the Hadoop installation directory | Not applicable; PXF is bundled with the required dependent Hadoop libraries and JARs |
CLASSPATH | Environment variable that identifies the locations of Hadoop JAR and configuration files | Not applicable; PXF automatically includes the Hadoop libraries, JARs, and configuration files that it bundles in the CLASSPATH. PXF also automatically includes in the CLASSPATH user-registered dependencies found in the $PXF_CONF/lib (PXF 5.x) or $PXF_BASE/lib (PXF 6.x) directory. |
gp_hadoop_target_version | Server configuration parameter that identifies the Hadoop distribution | Not applicable; PXF works out-of-the-box with the different Hadoop distributions |
gp_hadoop_home | Server configuration parameter that identifies the Hadoop installation directory | Not applicable; PXF works out-of-the-box with the different Hadoop distributions |
Configuration properties required by PXF, and the gphdfs equivalent, if applicable, include:
Configuration Item | Description | gphdfs Config | pxf Config |
---|---|---|---|
JAVA_HOME | Environment variable that identifies the Java installation directory | Set JAVA_HOME on each segment host | Set JAVA_HOME in the $PXF_CONF/conf/pxf-env.sh (PXF 5.x) or $PXF_BASE/conf/pxf-env.sh (PXF 6.x) configuration file. |
JVM option settings | Options with which to start the JVM | Set options in the GP_JAVA_OPT environment variable in hadoop_env.sh | Set options in the PXF_JVM_OPTS environment variable in $PXF_CONF/conf/pxf-env.sh (PXF 5.x) or $PXF_BASE/conf/pxf-env.sh (PXF 6.x). |
PXF Server | PXF server configuration for Hadoop | Not applicable | Configure a PXF server for Hadoop |
Privileges | The LightDB-A Database privileges required to create external tables in the given protocol | Grant SELECT and INSERT privileges on the gphdfs protocol to appropriate users | Grant SELECT and INSERT privileges on the pxf protocol to appropriate users |
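For example, to mirror existing gphdfs grants with the pxf protocol, you might run statements such as the following (a sketch; the role names are hypothetical):

GRANT SELECT ON PROTOCOL pxf TO hdfs_reader;
GRANT INSERT ON PROTOCOL pxf TO hdfs_writer;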
After you determine the equivalent PXF configuration properties, you will:
- Update the Java version installed on each LightDB-A Database host, if necessary. PXF supports Java versions 8 and 11. If your LightDB-A Database cluster hosts are running Java 7, upgrade to Java version 8 or 11 as described in Installing Java for PXF. Note the $JAVA_HOME setting.
- If you are migrating from LightDB-A Database 4, or you have not previously used PXF in your LightDB-A 5 installation:
  - Install the newest version of the independent PXF distribution on your LightDB-A Database hosts.
  - Inform PXF of the $JAVA_HOME setting by specifying its value in the pxf-env.sh configuration file:
    - Edit the pxf-env.sh file on the LightDB-A coordinator node:
      gpadmin@gpcoordinator$ vi /usr/local/pxf-gp6/conf/pxf-env.sh
    - Locate the JAVA_HOME setting in the pxf-env.sh file, uncomment it if necessary, and set it to your $JAVA_HOME value. For example:
      export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk/jre/
  - Register the PXF extension with LightDB-A Database:
    gpadmin@gpcoordinator$ pxf cluster register
- Configure the PXF Hadoop Connectors (see the sketch following this list). This procedure creates a PXF server configuration that provides PXF the information that it requires to access Hadoop. This procedure also synchronizes the configuration changes to all hosts in your LightDB-A cluster.
- Start or restart PXF in your LightDB-A Database cluster:
  gpadmin@gpcoordinator$ pxf cluster start
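The connector configuration step might look like the following sketch, assuming a PXF 6.x installation in /usr/local/pxf-gp6 and Hadoop client configuration files in a hypothetical /etc/hadoop/conf directory; you copy the Hadoop configuration into the default PXF server directory and synchronize it to all hosts:

gpadmin@gpcoordinator$ cp /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml /usr/local/pxf-gp6/servers/default/
gpadmin@gpcoordinator$ pxf cluster sync

Refer to the PXF documentation on configuring the Hadoop connectors for the complete procedure.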
Creating a PXF External Table
gphdfs and pxf are both external table protocols, and creating an external table using these protocols is similar. You specify the external table name and its column definitions. You also specify LOCATION and FORMAT clauses. gphdfs and pxf use information in these clauses to determine the location and type of the external data.
Mapping the LOCATION Clause
The LOCATION clause of an external table identifies the external table protocol, the location of the external data, and protocol- and operation-specific custom options.
The format of the gphdfs LOCATION clause is as follows:
LOCATION('gphdfs://<hdfs_host>:<hdfs_port>/<path-to-data>?[&<custom-option>=<value>[...]]')
PXF's LOCATION clause takes the following format when you access data stored on Hadoop:
LOCATION('pxf://<path-to-data>?PROFILE=<profile_name>[&SERVER=<server_name>][&<custom-option>=<value>[...]]')
You are not required to specify the HDFS host and port number when you create a PXF external table. PXF obtains this information from the default server configuration, or from the server configuration name that you specify in <server_name>.
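For example, both of the following LOCATION clauses reference the same HDFS path; the first relies on the default server configuration, while the second names a hypothetical server configuration called hdp3:

LOCATION ('pxf://data/dir1/filename.txt?PROFILE=hdfs:text')
LOCATION ('pxf://data/dir1/filename.txt?PROFILE=hdfs:text&SERVER=hdp3')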
Refer to Creating an External Table in the PXF documentation for more information about the PXF CREATE EXTERNAL TABLE syntax and keywords.
When you create an external table specifying the gphdfs protocol, you identify the format of the external data in the FORMAT clause (discussed in the next section). PXF uses a PROFILE option in the LOCATION clause to identify the source and type of the external data.
Data Format | pxf PROFILE |
---|---|
Avro | hdfs:avro |
Parquet | hdfs:parquet |
Text | hdfs:text |
Refer to Connectors, Data Formats, and Profiles in the PXF documentation for more information about the PXF profiles supported for Hadoop.
Both gphdfs and pxf use custom options in the LOCATION clause to identify data-format-, operation-, or profile-specific options supported by the protocol. For example, both gphdfs and pxf support Parquet and compression options on INSERT operations.
If you need to migrate a gphdfs writable external table that references an HDFS file to PXF, map the gphdfs compression options to the PXF writable external table compression options as follows:
Description | gphdfs LOCATION Option | pxf LOCATION Option |
---|---|---|
Use of Compression | compress | Not applicable; depends on the profile - may be uncompressed by default or specified via COMPRESSION_CODEC |
Type of compression | compression_type | COMPRESSION_TYPE |
Compression codec | codec | COMPRESSION_CODEC |
Level of Compression1 | codec_level | CODEC_LEVEL (supported in PXF 5.14.0 and newer versions) |
1 Avro format deflate codec only.
If the HDFS file is a Parquet-format file, map these additional Parquet options as follows:
Description | gphdfs LOCATION Option | pxf LOCATION Option |
---|---|---|
Parquet schema | schema | SCHEMA |
Page size | pagesize | PAGE_SIZE |
Row group size | rowgroupsize | ROWGROUP_SIZE |
Parquet version | parquetversion or pqversion | PARQUET_VERSION |
Enable a dictionary | dictionaryenable | The dictionary is always enabled when writing Parquet data with PXF |
Dictionary page size | dictionarypagesize | DICTIONARY_PAGE_SIZE |
Mapping the FORMAT Options
The gphdfs protocol uses the FORMAT clause to determine the format of the external data. For Avro- and Parquet-format data, the PXF FORMAT clause must identify the name of a custom formatter.
Data Format | gphdfs FORMAT Option | pxf FORMAT Option |
---|---|---|
Avro | FORMAT 'AVRO' | FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import') (read) FORMAT 'CUSTOM' (FORMATTER='pxfwritable_export') (write) |
Parquet | FORMAT 'PARQUET' | FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import') (read) FORMAT 'CUSTOM' (FORMATTER='pxfwritable_export') (write) |
Text | FORMAT 'TEXT' (DELIMITER ',') | FORMAT 'TEXT' (DELIMITER ',') |
For text data, the FORMAT clause may identify a delimiter or other formatting option as described on the CREATE EXTERNAL TABLE command reference page.
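Putting the LOCATION and FORMAT mappings together, the writable gphdfs_writable_parquet table shown earlier might be recreated with the pxf protocol along these lines (a sketch; the new table name is hypothetical, and it assumes the default PXF server points to the same Hadoop cluster):

CREATE WRITABLE EXTERNAL TABLE pxf_writable_parquet (
    id integer,
    msg text )
LOCATION ('pxf://data/dir1/gphdfs_writepq?PROFILE=hdfs:parquet&COMPRESSION_CODEC=gzip')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_export');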
Example gphdfs to pxf External Table Mapping for an HDFS Text File
An example gphdfs CREATE EXTERNAL TABLE command to read a text file on HDFS:
CREATE EXTERNAL TABLE ext_expenses (
name text,
date date,
amount float4,
category text,
desc1 text )
LOCATION ('gphdfs://hdfshost-1:8081/dir/filename.txt')
FORMAT 'TEXT' (DELIMITER ',');
The equivalent pxf CREATE EXTERNAL TABLE command, provided that the default PXF server contains the Hadoop configuration:
CREATE EXTERNAL TABLE ext_expenses_pxf (
name text,
date date,
amount float4,
category text,
desc1 text )
LOCATION ('pxf://dir/filename.txt?PROFILE=hdfs:text')
FORMAT 'TEXT' (DELIMITER ',');
Verifying Access with PXF
Ensure that you can read from, or write to, each pxf external table that you have created.
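For example, a quick check of the readable table created in the previous section might be (a sketch):

SELECT count(*) FROM ext_expenses_pxf;

For a writable pxf external table, insert a few test rows and confirm that the corresponding files appear in the expected HDFS directory.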
Removing the gphdfs External Tables
You must remove all gphdfs external tables before you can successfully migrate a LightDB-A Database 4 or 5 database to LightDB-A 6.
Drop an external table as follows:
DROP EXTERNAL TABLE <schema_name>.<external_table_name>;
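To confirm that no gphdfs external tables remain, you can re-run the query shown in Preparing for the Migration; it should return zero rows.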
Revoking Privileges to the gphdfs Protocol
Before you migrate, you must revoke privileges to the gphdfs protocol from each LightDB-A Database role to which you assigned the privileges.
Revoke the privilege as follows:
ALTER ROLE <role_name> NOCREATEEXTTABLE(protocol='gphdfs',type='readable');
ALTER ROLE <role_name> NOCREATEEXTTABLE(protocol='gphdfs',type='writable');
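To confirm that no roles retain gphdfs privileges before you migrate, you might query the catalog as follows (a sketch; it assumes the LightDB-A 4/5 catalog exposes the gphdfs privilege flags rolcreaterexthdfs and rolcreatewexthdfs in pg_roles):

SELECT rolname FROM pg_roles WHERE rolcreaterexthdfs OR rolcreatewexthdfs;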