As described in Section 36.2, LightDB can be extended to support new data types. This section describes how to define new base types, which are data types defined below the level of the SQL language. Creating a new base type requires implementing functions to operate on the type in a low-level language, usually C.
The examples in this section can be found in complex.sql and complex.c in the src/tutorial directory of the source distribution. See the README file in that directory for instructions about running the examples.
A user-defined type must always have input and output functions. These functions determine how the type appears in strings (for input by the user and output to the user) and how the type is organized in memory. The input function takes a null-terminated character string as its argument and returns the internal (in memory) representation of the type. The output function takes the internal representation of the type as argument and returns a null-terminated character string. If we want to do anything more with the type than merely store it, we must provide additional functions to implement whatever operations we'd like to have for the type.
Suppose we want to define a type complex that represents complex numbers. A natural way to represent a complex number in memory would be the following C structure:

typedef struct Complex
{
    double      x;
    double      y;
} Complex;

We will need to make this a pass-by-reference type, since it's too large to fit into a single Datum value.

As the external string representation of the type, we choose a string of the form (x,y).
The input and output functions are usually not hard to write, especially the output function. But when defining the external string representation of the type, remember that you must eventually write a complete and robust parser for that representation as your input function. For instance:
PG_FUNCTION_INFO_V1(complex_in);

Datum
complex_in(FunctionCallInfo fcinfo)
{
    char       *str = PG_GETARG_CSTRING(0);
    double      x,
                y;
    Complex    *result;

    if (sscanf(str, " ( %lf , %lf )", &x, &y) != 2)
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
                 errmsg("invalid input syntax for type %s: \"%s\"",
                        "complex", str)));

    result = (Complex *) palloc(sizeof(Complex));
    result->x = x;
    result->y = y;
    LT_RETURN_POINTER(result);
}
The output function can simply be:
PG_FUNCTION_INFO_V1(complex_out);

Datum
complex_out(FunctionCallInfo fcinfo)
{
    Complex    *complex = (Complex *) PG_GETARG_POINTER(0);
    char       *result;

    result = psprintf("(%g,%g)", complex->x, complex->y);
    LT_RETURN_CSTRING(result);
}
You should be careful to make the input and output functions inverses of each other. If you do not, you will have severe problems when you need to dump your data into a file and then read it back in. This is a particularly common problem when floating-point numbers are involved.
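If exact round trips of the double components matter, one option, shown here only as a sketch (it is not part of the tutorial files, and complex_out_precise is a hypothetical name), is to print enough significant digits for a double to survive output followed by re-input:

PG_FUNCTION_INFO_V1(complex_out_precise);

Datum
complex_out_precise(FunctionCallInfo fcinfo)
{
    Complex    *complex = (Complex *) PG_GETARG_POINTER(0);

    /* 17 significant digits are enough to reproduce any double exactly */
    LT_RETURN_CSTRING(psprintf("(%.17g,%.17g)", complex->x, complex->y));
}

The "%g" format used above is more readable but can drop low-order digits, so the value read back by complex_in may differ slightly from the one originally stored.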
Optionally, a user-defined type can provide binary input and output routines. Binary I/O is normally faster but less portable than textual I/O. As with textual I/O, it is up to you to define exactly what the external binary representation is. Most of the built-in data types try to provide a machine-independent binary representation. For complex, we will piggy-back on the binary I/O converters for type float8:
PG_FUNCTION_INFO_V1(complex_recv);

Datum
complex_recv(FunctionCallInfo fcinfo)
{
    StringInfo  buf = (StringInfo) PG_GETARG_POINTER(0);
    Complex    *result;

    result = (Complex *) palloc(sizeof(Complex));
    result->x = pq_getmsgfloat8(buf);
    result->y = pq_getmsgfloat8(buf);
    LT_RETURN_POINTER(result);
}

PG_FUNCTION_INFO_V1(complex_send);

Datum
complex_send(FunctionCallInfo fcinfo)
{
    Complex    *complex = (Complex *) PG_GETARG_POINTER(0);
    StringInfoData buf;

    pq_begintypsend(&buf);
    pq_sendfloat8(&buf, complex->x);
    pq_sendfloat8(&buf, complex->y);
    LT_RETURN_BYTEA_P(pq_endtypsend(&buf));
}
Once we have written the I/O functions and compiled them into a shared library, we can define the complex type in SQL. First we declare it as a shell type:
CREATE TYPE complex;
This serves as a placeholder that allows us to reference the type while defining its I/O functions. Now we can define the I/O functions:
CREATE FUNCTION complex_in(cstring)
    RETURNS complex
    AS 'filename'
    LANGUAGE C IMMUTABLE STRICT;

CREATE FUNCTION complex_out(complex)
    RETURNS cstring
    AS 'filename'
    LANGUAGE C IMMUTABLE STRICT;

CREATE FUNCTION complex_recv(internal)
    RETURNS complex
    AS 'filename'
    LANGUAGE C IMMUTABLE STRICT;

CREATE FUNCTION complex_send(complex)
    RETURNS bytea
    AS 'filename'
    LANGUAGE C IMMUTABLE STRICT;
Finally, we can provide the full definition of the data type:
CREATE TYPE complex (
    internallength = 16,
    input = complex_in,
    output = complex_out,
    receive = complex_recv,
    send = complex_send,
    alignment = double
);
When you define a new base type, LightDB automatically provides support for arrays of that type. The array type typically has the same name as the base type with the underscore character (_) prepended.
Once the data type exists, we can declare additional functions to provide useful operations on the data type. Operators can then be defined atop the functions, and if needed, operator classes can be created to support indexing of the data type. These additional layers are discussed in following sections.
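For instance, a C function implementing complex addition might look like the following sketch; the function itself is illustrative, and registering it with CREATE FUNCTION and attaching an operator to it are covered in the sections that follow:

PG_FUNCTION_INFO_V1(complex_add);

Datum
complex_add(FunctionCallInfo fcinfo)
{
    Complex    *a = (Complex *) PG_GETARG_POINTER(0);
    Complex    *b = (Complex *) PG_GETARG_POINTER(1);
    Complex    *result;

    result = (Complex *) palloc(sizeof(Complex));
    result->x = a->x + b->x;
    result->y = a->y + b->y;
    LT_RETURN_POINTER(result);
}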
If the internal representation of the data type is variable-length, the internal representation must follow the standard layout for variable-length data: the first four bytes must be a char[4] field which is never accessed directly (customarily named vl_len_). You must use the SET_VARSIZE() macro to store the total size of the datum (including the length field itself) in this field and VARSIZE() to retrieve it. (These macros exist because the length field may be encoded depending on platform.) For further details see the description of the CREATE TYPE command.
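As a sketch of what such a layout looks like, consider a hypothetical variable-length type mytype (not part of the tutorial) that stores a sequence of double values; FLEXIBLE_ARRAY_MEMBER is the usual PostgreSQL-style spelling of a C99 flexible array member:

typedef struct mytype
{
    char        vl_len_[4];     /* varlena header; never accessed directly */
    double      data[FLEXIBLE_ARRAY_MEMBER];    /* the payload */
} mytype;

/* Allocate an instance big enough to hold n doubles. */
static mytype *
mytype_make(int n)
{
    Size        size = offsetof(mytype, data) + n * sizeof(double);
    mytype     *result = (mytype *) palloc0(size);

    SET_VARSIZE(result, size);  /* total size, including the length field */
    return result;
}

The payload length can later be recovered as (VARSIZE(ptr) - offsetof(mytype, data)) / sizeof(double).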
If the values of your data type vary in size (in internal form), it's usually desirable to make the data type TOAST-able (see Section 62.2). You should do this even if the values are always too small to be compressed or stored externally, because TOAST can save space on small data too, by reducing header overhead.
To support TOAST storage, the C functions operating on the data type must always be careful to unpack any toasted values they are handed by using LT_DETOAST_DATUM. (This detail is customarily hidden by defining type-specific GETARG_DATATYPE_P macros.) Then, when running the CREATE TYPE command, specify the internal length as variable and select some appropriate storage option other than plain.
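Continuing the hypothetical mytype sketch above, the customary type-specific fetch macros could look like this (the names follow convention only; PG_GETARG_DATUM is the standard fmgr macro for fetching a raw argument Datum):

/* detoast and return a guaranteed-unpacked, 4-byte-header value */
#define DatumGetMytypeP(X)      ((mytype *) LT_DETOAST_DATUM(X))
#define PG_GETARG_MYTYPE_P(n)   DatumGetMytypeP(PG_GETARG_DATUM(n))

With these in place, the type's C functions can simply write PG_GETARG_MYTYPE_P(0) and never see a toasted value.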
If data alignment is unimportant (either just for a specific function or because the data type specifies byte alignment anyway) then it's possible to avoid some of the overhead of LT_DETOAST_DATUM. You can use LT_DETOAST_DATUM_PACKED instead (customarily hidden by defining a GETARG_DATATYPE_PP macro) and use the macros VARSIZE_ANY_EXHDR and VARDATA_ANY to access a potentially-packed datum. Again, the data returned by these macros is not aligned even if the data type definition specifies an alignment. If the alignment is important you must go through the regular LT_DETOAST_DATUM interface.
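For example (again using the hypothetical mytype, with a hypothetical function name), a function that only needs the payload size can avoid a full detoast:

#define PG_GETARG_MYTYPE_PP(n)  ((mytype *) LT_DETOAST_DATUM_PACKED(PG_GETARG_DATUM(n)))

PG_FUNCTION_INFO_V1(mytype_nbytes);

Datum
mytype_nbytes(FunctionCallInfo fcinfo)
{
    mytype     *val = PG_GETARG_MYTYPE_PP(0);   /* possibly packed, possibly unaligned */
    int32       nbytes = (int32) VARSIZE_ANY_EXHDR(val);   /* payload size, header excluded */

    /* VARDATA_ANY(val) would give a pointer to the (possibly unaligned) payload bytes */
    return Int32GetDatum(nbytes);   /* spelled-out equivalent of a RETURN_INT32-style macro */
}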
Older code frequently declares vl_len_ as an int32 field instead of char[4]. This is OK as long as the struct definition has other fields that have at least int32 alignment. But it is dangerous to use such a struct definition when working with a potentially unaligned datum; the compiler may take it as license to assume the datum actually is aligned, leading to core dumps on architectures that are strict about alignment.
Another feature that's enabled by TOAST support is the possibility of having an expanded in-memory data representation that is more convenient to work with than the format that is stored on disk. The regular or “flat” varlena storage format is ultimately just a blob of bytes; it cannot for example contain pointers, since it may get copied to other locations in memory. For complex data types, the flat format may be quite expensive to work with, so LightDB provides a way to “expand” the flat format into a representation that is more suited to computation, and then pass that format in-memory between functions of the data type.
To use expanded storage, a data type must define an expanded format that follows the rules given in src/include/utils/expandeddatum.h, and provide functions to “expand” a flat varlena value into expanded format and “flatten” the expanded format back to the regular varlena representation. Then ensure that all C functions for the data type can accept either representation, possibly by converting one into the other immediately upon receipt. This does not require fixing all existing functions for the data type at once, because the standard LT_DETOAST_DATUM macro is defined to convert expanded inputs into regular flat format. Therefore, existing functions that work with the flat varlena format will continue to work, though slightly inefficiently, with expanded inputs; they need not be converted until and unless better performance is important.
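As a rough sketch of the shape this takes, an expanded format supplies two callbacks that report the flat size and write out the flat form. The callback names and struct members here follow the PostgreSQL-style declarations in expandeddatum.h and may be spelled differently in your source tree; myexpanded is hypothetical and continues the earlier mytype example:

#include "utils/expandeddatum.h"

/* expanded in-memory form: a plain C array owned by the header's context */
typedef struct myexpanded
{
    ExpandedObjectHeader hdr;
    int         nelems;
    double     *elems;
} myexpanded;

static Size
myexpanded_get_flat_size(ExpandedObjectHeader *eohptr)
{
    myexpanded *e = (myexpanded *) eohptr;

    return offsetof(mytype, data) + e->nelems * sizeof(double);
}

static void
myexpanded_flatten_into(ExpandedObjectHeader *eohptr,
                        void *result, Size allocated_size)
{
    myexpanded *e = (myexpanded *) eohptr;
    mytype     *flat = (mytype *) result;

    SET_VARSIZE(flat, allocated_size);
    memcpy(flat->data, e->elems, e->nelems * sizeof(double));
}

static const ExpandedObjectMethods myexpanded_methods =
{
    myexpanded_get_flat_size,   /* how many bytes the flat form will need */
    myexpanded_flatten_into     /* write the flat form into the caller's buffer */
};

A constructor for the expanded form would allocate the struct in its own memory context, initialize the header with these methods (EOH_init_header in the PostgreSQL-style API), and hand back a read-write expanded datum.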
C functions that know how to work with an expanded representation typically fall into two categories: those that can only handle expanded format, and those that can handle either expanded or flat varlena inputs. The former are easier to write but may be less efficient overall, because converting a flat input to expanded form for use by a single function may cost more than is saved by operating on the expanded format. When only expanded format need be handled, conversion of flat inputs to expanded form can be hidden inside an argument-fetching macro, so that the function appears no more complex than one working with traditional varlena input.
To handle both types of input, write an argument-fetching function that will detoast external, short-header, and compressed varlena inputs, but not expanded inputs. Such a function can be defined as returning a pointer to a union of the flat varlena format and the expanded format. Callers can use the VARATT_IS_EXPANDED_HEADER() macro to determine which format they received.
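A sketch of that approach, modeled on the way the standard array code handles it and continuing the hypothetical mytype/myexpanded example (the union and fetch-function names are illustrative):

/* either the flat varlena form or the expanded form */
typedef union myanytype
{
    mytype      flt;
    myexpanded  xpn;
} myanytype;

static myanytype *
DatumGetAnyMytype(Datum d)
{
    /* An expanded value (read-write or read-only): return its header as-is. */
    if (VARATT_IS_EXTERNAL_EXPANDED(DatumGetPointer(d)))
        return (myanytype *) DatumGetEOHP(d);

    /* Otherwise detoast to a plain, 4-byte-header flat value. */
    return (myanytype *) LT_DETOAST_DATUM(d);
}

A caller then tests VARATT_IS_EXPANDED_HEADER(ptr) to decide whether to read ptr->xpn or ptr->flt. (VARATT_IS_EXTERNAL_EXPANDED and DatumGetEOHP are the PostgreSQL-style names for the expanded-pointer test and the header-fetching function; check expandeddatum.h in your source tree for the exact spellings.)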
The TOAST infrastructure not only allows regular varlena values to be distinguished from expanded values, but also distinguishes “read-write” and “read-only” pointers to expanded values. C functions that only need to examine an expanded value, or will only change it in safe and non-semantically-visible ways, need not care which type of pointer they receive. C functions that produce a modified version of an input value are allowed to modify an expanded input value in-place if they receive a read-write pointer, but must not modify the input if they receive a read-only pointer; in that case they have to copy the value first, producing a new value to modify. A C function that has constructed a new expanded value should always return a read-write pointer to it. Also, a C function that is modifying a read-write expanded value in-place should take care to leave the value in a sane state if it fails partway through.
For examples of working with expanded values, see the standard array infrastructure, particularly src/backend/utils/adt/array_expanded.c.