The UDF Python API consists of one file, kinetica_proc.py. This needs to be included in the UDF source code. There are no external dependencies beyond the Python standard library.
To take advantage of GPU processing within a UDF, the CUDA Toolkit must be downloaded & installed from the NVIDIA Developer Zone.
Also, any Python packages the UDF may need to use should be installed on all cluster nodes. The gpudb-pip.sh script is provided to install any given package into the bundled Python installation on all nodes. To run:
```sh
$ /opt/gpudb/udf/api/python/gpudb-pip.sh install <package name>
```
A UDF must first get a handle to the `ProcData` class imported from `kinetica_proc.py`. This will parse the primary control file and set up all the necessary structures, returning a `ProcData` object that is used to access everything else. All configuration information is cached, so repeated calls to get a `ProcData` will not reload any configuration files.
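For example, a minimal sketch of acquiring the handle (assuming `kinetica_proc.py` is on the Python path, as it is in the bundled installation):

```python
from kinetica_proc import ProcData

# Parses the primary control file and caches all configuration
proc_data = ProcData()
```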
Once the UDF has been initialized, the following calls are possible:
Call | Description |
---|---|
`proc_data.request_info` | Returns a map of basic information about the `/execute/proc` request, map values being accessed using `proc_data.request_info[<map_key>]`; the full set of map keys is listed below, under Request Info Keys |
`proc_data.params` | Returns a map of string-valued parameters that were passed into the UDF |
`proc_data.bin_params` | Returns a map of binary-valued parameters that were passed into the UDF |
`proc_data.input_data` | Returns an `InputDataSet` object for accessing input table data that was passed into the UDF |
Call | Description |
---|---|
`proc_data.results` | Returns a map that can be populated with string-valued results to be returned from the UDF |
`proc_data.bin_results` | Returns a map that can be populated with binary-valued results to be returned from the UDF |
`proc_data.output_data` | Returns an `OutputDataSet` object for writing output table data that will be written to the database |
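As a sketch of reading these structures (the parameter, table, and column names here are hypothetical, and the parameter maps are assumed to behave like standard Python dicts):

```python
from kinetica_proc import ProcData

proc_data = ProcData()

# Read a string-valued parameter passed in the /execute/proc request
# ("multiplier" is a hypothetical parameter name)
multiplier = float(proc_data.params.get("multiplier", "1.0"))

# Access an input table and one of its columns by name
# ("customer" and "name" are hypothetical table/column names)
names = proc_data.input_data["customer"]["name"]
```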
The UDF must finish with a call to `proc_data.complete()`. This writes out some final control information to indicate that the UDF completed successfully.

NOTE: If this call is not made, the database will assume that the UDF didn't finish and will return an error to the caller.
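Putting the lifecycle together, a minimal UDF skeleton might look like the following:

```python
from kinetica_proc import ProcData

# Parse the control file and get the handle
proc_data = ProcData()

# ... read parameters, process input data, populate results ...

# Signal successful completion; omitting this causes the database
# to report an error to the caller
proc_data.complete()
```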
The `InputDataSet` & `OutputDataSet` objects contain `InputTable` & `OutputTable` objects, which, in turn, contain `InputColumn` & `OutputColumn` objects holding the actual data sets. Tables and columns can be accessed by index or by name. For example, given a `customer` table at `InputDataSet` index `5` and a `name` column at that `InputTable`'s index `1`, either of the following calls will retrieve the column values associated with `customer.name`:

```python
proc_data.input_data["customer"]["name"]
proc_data.input_data[5][1]
```
Unlike the other Kinetica APIs, the UDF Python API does not process data using records or schemas; instead, it operates on columns of data. The raw column values returned map closely to the data types of the columns in the tables being accessed:
Column Category | Column Type | UDF Value Type |
---|---|---|
Numeric | int | int |
 | int8 | int |
 | int16 | int |
 | long | int |
 | float | float |
 | double | float |
 | decimal | decimal.Decimal |
String | string | str |
 | char[N] | str |
 | ipv4 | int |
Date/Time | date | datetime.date |
 | time | datetime.time |
 | timestamp | int |
Binary | bytes | str |
Column data values can be accessed through array indices:

```python
column[i]
```

For example, to retrieve the value for the 10th record:

```python
column[9]
```
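As a brief illustration of these mappings (the `orders` table and its columns are hypothetical):

```python
import datetime
import decimal

from kinetica_proc import ProcData

proc_data = ProcData()

# Hypothetical "orders" table with a decimal "price" column and a
# date "order_date" column
price = proc_data.input_data["orders"]["price"]
order_date = proc_data.input_data["orders"]["order_date"]

# Raw values come back as the UDF value types listed above
assert isinstance(price[0], decimal.Decimal)
assert isinstance(order_date[0], datetime.date)
```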
To output data to a table, the size of the table must be set in order to allocate enough space in all of the columns to hold the correct number of values. To do this, call:

```python
table.size = <total number of output records>
```

Table column values can then be assigned by index to each `OutputColumn` of each `OutputTable`.
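For instance, a sketch that copies one column of a hypothetical `customer` input table into a hypothetical `customer_copy` output table (reading `size` from an input table is assumed to mirror the output-side attribute):

```python
from kinetica_proc import ProcData

proc_data = ProcData()

in_table = proc_data.input_data["customer"]
out_table = proc_data.output_data["customer_copy"]

# Allocate space for all output records before assigning values;
# reading size from the input table is an assumption here
out_table.size = in_table.size

name_in = in_table["name"]
name_out = out_table["name"]

# Assign each output value by index
for i in range(out_table.size):
    name_out[i] = name_in[i]
```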
Any output from the UDF to stdout is written to the `/opt/gpudb/core/logs/gpudb.log` file.
A variety of details about the executing UDF can be extracted from the request information map made available to each running proc. The full list of keys follows.
Map Key | Description |
---|---|
`proc_name` | The name of the proc (UDF) being executed |
`run_id` | The run ID of the proc being executed; this is also displayed in Gadmin on the UDF page, in the Status section, as a link that can be clicked for more detailed information; note that although this is an integer, it should not be relied upon as such, as its format may change in the future |
`rank_number` | The rank number on which the current proc instance is executing; [1..n] for distributed procs, 0 for non-distributed procs |
`tom_number` | The TOM number within the rank on which the current proc instance is executing; [0..n-1] for distributed procs, where n is the number of TOMs per rank; not provided for non-distributed procs, since they do not run on a TOM |
`<option_name>` | Any options passed in the options map in the `/execute/proc` request will also be in the request info map |
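For example, a proc might log where it is running (the printed output lands in the gpudb.log file noted above; map values are assumed to be strings):

```python
from kinetica_proc import ProcData

proc_data = ProcData()
info = proc_data.request_info

# Basic execution details for this proc instance
print("proc {} (run {}) on rank {}".format(
    info["proc_name"], info["run_id"], info["rank_number"]))
```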
Data is passed into procs in segments. Each segment consists of the entirety of the data on a single TOM and is processed by the proc instance executing on that TOM. Thus, there is a 1-to-1 mapping of data segment and executing proc instance, though this relationship may change in the future.
Running the same proc multiple times should result in the same set of segments, assuming the same environment and system state across runs.
Map Key | Description |
---|---|
`data_segment_id` | A unique identifier for the data segment of the currently executing proc instance; all of the data segment IDs for a given proc execution are displayed in Gadmin when the run ID link is clicked; note that although it is possible to identify rank and TOM numbers from this ID, it should not be relied upon, as its format may change in the future |
`data_segment_count` | The total cluster-wide count of data segments for distributed procs; 1 for non-distributed procs |
`data_segment_number` | The number of the current data segment/executing proc instance, in [0..data_segment_count-1] |
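One common pattern, sketched below, is to have exactly one proc instance perform a cluster-wide, one-time task (the map values are assumed to be strings that need converting):

```python
from kinetica_proc import ProcData

proc_data = ProcData()

segment_number = int(proc_data.request_info["data_segment_number"])
segment_count = int(proc_data.request_info["data_segment_count"])

# Only segment 0 performs the one-time task
if segment_number == 0:
    print("segment 0 of {}: running one-time setup".format(segment_count))
```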
The following keys can be used to connect back to Kinetica using the regular API endpoint calls. Use them with caution in distributed procs, particularly in large clusters, to avoid overwhelming the head node. Also note that multi-head ingest may not work from a proc in some cases without overriding the worker URLs to use internal IP addresses.
Map Key | Description |
---|---|
`head_url` | The URL to connect to |
`username` | The username to connect as (corresponds to the user that called `/execute/proc`) |
`password` | The password for the username |
Note: `username` and `password` are not the actual login credentials of the user; they are randomly generated temporary credentials which, for security reasons, should still not be printed or output to logs, etc., as they are live credentials for a period of time.
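A connection sketch using the standard Kinetica Python API (the exact `GPUdb` constructor arguments may vary by API version):

```python
import gpudb  # the regular Kinetica Python API, installed separately

from kinetica_proc import ProcData

proc_data = ProcData()
info = proc_data.request_info

# Connect back to the database with the temporary credentials;
# never print or log these values
db = gpudb.GPUdb(
    host=info["head_url"],
    username=info["username"],
    password=info["password"])
```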