The information below covers everything needed to begin running Python UDFs. For more information on writing Python UDFs, see Python UDF API; for more information on simulating UDF execution, see UDF Simulator. Example Python UDFs can be found here.
Important
Though any of the native APIs can be used for running UDFs written in any UDF API language, all the examples below are written using the native Python API.
Calling the create_proc() method deploys the specified UDF to the Kinetica execution environment, distributing it to every server in the cluster. The method takes the following parameters:
| Parameter | Description |
|---|---|
| proc_name | A system-wide unique name for the UDF |
| execution_mode | An execution mode; either distributed or nondistributed |
| files | A set of files composing the UDF package, including the names of the files and the binary data for those files; the files specified will be created on the target Kinetica servers |
| command | The name of the command to run, which can be a file within the deployed UDF fileset or any command able to be executed within the host environment, e.g., /opt/gpudb/bin/gpudb_python or python. If a host environment command is specified, the host environment must be properly configured to support that command's execution. |
| args | A list of command-line arguments to pass to the specified command, e.g., ./<file name>.py |
| options | Optional parameters for UDF creation; see create_proc() for details |
For example, to deploy a Python UDF using the native Python API, a local proc file (udf_tc_py_proc.py) and CSV file (rank_tom.csv) need to be read in as bytes and then passed into the create_proc() call as values paired with their file names in the files map.
```python
proc_name = 'udf_tc_py_proc'
proc_file_name = proc_name + '.py'
csv_file_name = 'rank_tom.csv'

# Read each file in as bytes, keyed by file name, to build the files map
print("Reading in the 'udf_tc_py_proc.py' and 'rank_tom.csv' files as bytes...")
file_names = (csv_file_name, proc_file_name)
files = {}
for file_name in file_names:
    with open(file_name, 'rb') as file:
        files[file_name] = file.read()

# Register the UDF for distributed execution, running the proc file with python
print("Registering distributed proc...")
response = h_db.create_proc(
    proc_name=proc_name,
    execution_mode="distributed",
    files=files,
    command="python",
    args=[proc_file_name],
    options={}
)
print("Proc created successfully:")
print(response)
```
The max_concurrency_per_node setting is available in the options map of /create/proc. This option defines a per-Kinetica-host concurrency limit for a UDF, i.e., no more than n OS processes (UDF instances) evaluating the UDF will be permitted to execute concurrently on a single Kinetica host. A concurrency limit is useful when resources (like GPUs) are limited and continually exhausting them is a risk. This setting is particularly useful for distributed UDFs, but it also works for non-distributed UDFs.
Note
You can also set concurrency limits on the Edit Proc screen in the UDF section of GAdmin.
The default value for the setting is 0, which imposes no limit. If the value is set to 4, only 4 instances of the UDF will execute at a time. This holds true across all invocations of the proc: even if /execute/proc is called eight times, only 4 processes will be running, and another instance will be queued as soon as one instance finishes processing. This process repeats, allowing only 4 instances of the UDF to run at a time, until all instances have completed or the UDF is killed.
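For example, the cap could be supplied at registration time. The following is a minimal sketch, reusing the proc_name, files, and proc_file_name variables from the deployment example above; passing the value as a string is an assumption, since options map values are string-typed:

```python
# A sketch of capping per-host concurrency at proc registration time
response = h_db.create_proc(
    proc_name=proc_name,
    execution_mode="distributed",
    files=files,
    command="python",
    args=[proc_file_name],
    options={"max_concurrency_per_node": "4"}  # at most 4 UDF processes per host
)
```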
Calling the execute_proc() method will execute the specified UDF within the targeted Kinetica execution environment. The method takes the following parameters:
| Parameter | Description |
|---|---|
| proc_name | The system-wide unique name for the UDF |
| params | Set of string-to-string key/value paired parameters to pass to the UDF |
| bin_params | Set of string-to-binary key/value paired parameters to pass to the UDF |
| input_table_names | Input data table names, to be processed by the UDF |
| input_column_names | Mapping of input data table names to their respective column names, to be processed as input data by the UDF |
| output_table_names | Output data table names, where processed data is to be appended |
| options | Optional parameters for UDF execution; see execute_proc() for details |
The call is asynchronous and returns immediately with a run_id, a string that can be used in subsequent checks of the execution status.
For example, to execute a proc that's already been created (udf_tc_py_proc) using existing input (udf_tc_py_in_table) and output (udf_tc_py_out_table) tables:
```python
INPUT_TABLE = 'udf_tc_py_in_table'
OUTPUT_TABLE = 'udf_tc_py_out_table'
proc_name = 'udf_tc_py_proc'

# Execute the registered UDF against the input table, appending
# processed records to the output table
print("Executing proc...")
response = h_db.execute_proc(
    proc_name=proc_name,
    params={},
    bin_params={},
    input_table_names=[INPUT_TABLE],
    input_column_names={},
    output_table_names=[OUTPUT_TABLE],
    options={}
)
print("Proc executed successfully:")
print(response)
print("Check the system log or 'gpudb-proc.log' for execution information")
```
UDFs can be managed using either GAdmin or one of the methods below: