The following is a complete example, using the Python UDF API, of a non-CUDA UDF that demonstrates how to create pandas dataframes and insert them into tables in Kinetica. This example (and others) can be found in the Python UDF API repository; this repository can be downloaded/cloned from GitHub.
References
- Python UDF Reference -- detailed description of the entire UDF API
- Running UDFs -- detailed description on running Python UDFs
- Example UDFs -- example UDFs written in Python
Prerequisites
The general prerequisites for using UDFs in Kinetica can be found under UDF Prerequisites.
Important
This example cannot be run on macOS
The following items are also necessary:
- Python 3
- Miniconda
Note
Visit the Conda website to download the Miniconda installer for Python 3.
UDF API Download
This example requires local access to the Python UDF API repository. In the desired directory, run the following command, replacing <kinetica-version> with the name of the installed Kinetica version, e.g., v7.1:
git clone -b release/<kinetica-version> --single-branch https://github.com/kineticadb/kinetica-udf-api-python.git
Relevant Scripts
There are four files associated with the pandas UDF example, all of which can be found in the Python UDF API repo.
- A database setup script (test_environment.py) that is called from the initialization script
- An initialization script (setup_db.py) that creates the output table
- A script (register_execute_UDF.py) to register and execute the UDF
- A UDF (df_to_output_UDF.py) that creates a pandas dataframe and inserts it into the output table
This example runs inside a Conda environment. The environment can be automatically configured using the conda_env_py3.yml file found in the Python UDF API repository.
From the directory into which you cloned the API, change into the root folder of the Python UDF API repository:
cd kinetica-udf-api-python/
Create the Conda environment, replacing <environment name> with the desired name:
conda env create --name <environment name> --file conda_env_py3.yml
Note
It may take a few minutes to create the environment.
Verify the environment was created properly:
conda info --envs
Activate the new environment:
conda activate <environment name>
Install PyGDF:
conda install -c numba -c conda-forge -c gpuopenanalytics/label/dev -c defaults pygdf=0.1.0a2
Install the Kinetica Python API:
pip install gpudb~=7.1.0
Add the Python UDF API repo's root directory to the PYTHONPATH:
export PYTHONPATH=$(pwd):$PYTHONPATH
Edit the util/test_environment.py script, providing the correct database URL, user, and password for your Kinetica instance:
URL = 'https://abcdefg.cloud.kinetica.com/hijklmn/gpudb-0;CombinePrepareAndExecute=1;RowsPerFetch=20000'
USER = 'kadmin'
PASSWORD = 'kadmin123'
UDF Deployment
Change directory into the UDF Pandas directory:
cd examples/UDF_pandas
Run the UDF initialization script:
python setup_db.py
Run the registration and execution script for the UDF:
python register_execute_UDF.py --url <kinetica-url> --username <kinetica-user> --password <kinetica-pass>
Verify the results in the unittest_df_output output table in Workbench.
Execution Detail
This example details using a distributed UDF to create and ingest a pandas dataframe into Kinetica. The df_to_output_UDF proc creates the dataframe and inserts it into the output table, unittest_df_output.
The dataframe has a shape of (3, 3) and will be inserted into the output table n times, where n is the number of processing nodes available across the processing node containers registered in Kinetica (the UDF runs once per processing node).
The output table contains 3 columns:
- id -- an integer column
- value_long -- a long column
- value_float -- a float column
Database Setup
The setup script, setup_db.py, which creates the output table for the UDF, imports the test_environment.py script to access its methods:
The script calls two methods from test_environment.py: one to create the schema used to contain the example tables and one to create the output table, passing in the output table's name and type:
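A minimal sketch of that flow; the method and constant names below are assumptions based on the description above, so consult the repository for the actual ones:

```python
# Sketch of setup_db.py; the helper and constant names are hypothetical.
from util import test_environment as te

te.create_schema()
te.create_test_output_table(te.TEST_OUTPUT_TABLE_NAME, te.TEST_OUTPUT_TABLE_TYPE)
```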
The methods in test_environment.py require a connection to Kinetica. This is done by instantiating an object of the GPUdb class with a provided connection URL. See Connecting via API for details on the URL format and how to look it up.
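A sketch of that connection, assuming URL, USER, and PASSWORD are the constants edited earlier in util/test_environment.py:

```python
import gpudb

# Connect to Kinetica; URL, USER & PASSWORD come from this script's
# configuration section
db = gpudb.GPUdb(host=URL, username=USER, password=PASSWORD)
```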
The create_schema() method creates the schema that will contain the table used in the example:
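A sketch of that call using the Python API's schema-creation wrapper; the schema name here is an assumption:

```python
SCHEMA_NAME = 'examples'  # hypothetical schema name

# Create the schema, ignoring the error if it already exists
db.create_schema(schema_name=SCHEMA_NAME, options={'no_error_if_exists': 'true'})
```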
The create_test_output_table() method creates the type and table for the output table, but the table is removed first if it already exists:
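A sketch of that drop-and-create sequence, assuming the three-column type described under Execution Detail:

```python
TABLE_NAME = 'unittest_df_output'
TABLE_TYPE = [
    ['id', 'int'],
    ['value_long', 'long'],
    ['value_float', 'float'],
]

# Remove any previous copy of the table, then create it from the type above
db.clear_table(table_name=TABLE_NAME, options={'no_error_if_not_exists': 'true'})
gpudb.GPUdbTable(_type=TABLE_TYPE, name=TABLE_NAME, db=db)
```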
UDF (df_to_output_UDF.py)
First, packages are imported to access the Kinetica Python UDF API and pandas:
Next, the file gets a handle to the ProcData() class:
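Together, those two steps likely look like the following (kinetica_proc.py is deployed alongside the UDF, so a plain import works):

```python
from kinetica_proc import ProcData
import pandas as pd

# Handle to the UDF's input/output data and request metadata
proc_data = ProcData()
```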
To show where each instance of the UDF runs, the rank and tom number identifying the current processing node container and processing node are read from the request info map and displayed:
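A sketch of that lookup; 'rank_number' and 'tom_number' are the request info keys identifying the processing node container and processing node:

```python
rank_number = proc_data.request_info['rank_number']
tom_number = proc_data.request_info['tom_number']
print('UDF running on rank {}, tom {}'.format(rank_number, tom_number))
```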
The dataframe is created:
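For instance, a (3, 3) dataframe matching the output table's columns could be built like this (the values are illustrative):

```python
import pandas as pd

# A 3x3 dataframe whose columns mirror the output table's type
df = pd.DataFrame({
    'id': [1, 2, 3],
    'value_long': [10, 20, 30],
    'value_float': [0.1, 0.2, 0.3],
})
```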
Get a handle to the output table from the proc (unittest_df_output). Its size is expanded to match the shape of the dataframe; this allocates enough memory to copy all records in the dataframe to the output table. Then the dataframe is assigned to the output table:
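A sketch of that copy, assuming the proc_data handle and the dataframe df from the preceding steps; the column-wise loop below stands in for whatever bulk-copy helper the actual UDF uses:

```python
out_table = proc_data.output_data[0]
out_table.size = df.shape[0]  # expanding the size allocates room for every row

# Copy the dataframe column by column; output columns are addressed by
# position, matching the (id, value_long, value_float) order
for col_index, col_name in enumerate(df.columns):
    out_column = out_table[col_index]
    for i, value in enumerate(df[col_name]):
        out_column[i] = value
```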
UDF Registration (register_execute_UDF.py)
To interact with Kinetica, an object of the GPUdb class is instantiated, providing the connection URL of the database server:
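A sketch of that connection, using the command-line arguments shown under UDF Deployment:

```python
import argparse
import gpudb

parser = argparse.ArgumentParser()
parser.add_argument('--url', required=True)
parser.add_argument('--username', default='')
parser.add_argument('--password', default='')
args = parser.parse_args()

db = gpudb.GPUdb(host=args.url, username=args.username, password=args.password)
```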
To upload the df_to_output_UDF.py and kinetica_proc.py files to Kinetica, they will first need to be read in as bytes and added to a file data map:
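A sketch of building that map; the helper name is hypothetical, and in the example the paths would point at df_to_output_UDF.py and the API's kinetica_proc.py:

```python
import os

def build_file_map(paths):
    """Read each file as bytes, keyed by its base name."""
    file_map = {}
    for path in paths:
        with open(path, 'rb') as f:
            file_map[os.path.basename(path)] = f.read()
    return file_map
```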
After the files are placed in a data map, the distributed Pandas_df_output proc is created in Kinetica and the files are associated with it:
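A sketch of the registration call, assuming the db handle and a name-to-bytes file map (file_map, a hypothetical name) from the previous steps:

```python
db.create_proc(
    proc_name='Pandas_df_output',
    execution_mode='distributed',
    files=file_map,
    command='python',
    args=['df_to_output_UDF.py'],
    options={}
)
```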
Note
The proc requires the proper command and args in order to be executed. In this case, the assembled command line would be:
python df_to_output_UDF.py
Finally, after the proc is created, it is executed. The output table created in the Database Setup section is passed in here:
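A sketch of the execution call, assuming the db handle from earlier; the UDF takes no input tables, only the output table created during Database Setup:

```python
db.execute_proc(
    proc_name='Pandas_df_output',
    params={},
    bin_params={},
    input_table_names=[],
    input_column_names={},
    output_table_names=['unittest_df_output'],
    options={}
)
```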