The following is a complete example, using the Python UDF API, of a non-CUDA UDF that demonstrates how to create pandas dataframes and insert them into tables in Kinetica. This example (and others) can be found in the Python UDF API repository; this repository comes with Kinetica by default (located in /opt/gpudb/udf/api/python/) or can be downloaded/cloned from GitHub.
The general prerequisites for using UDFs in Kinetica can be found on the User-Defined Function Implementation page.
This example cannot run on macOS.
The following items are also necessary:
- Python 3
Visit the Conda website to download the Miniconda installer for Python 3.
UDF API Download
This example requires local access to the Python UDF API repository. In the desired directory, run the following, replacing <kinetica-version> with the installed Kinetica version, e.g., v7.1:
git clone -b release/<kinetica-version> --single-branch https://github.com/kineticadb/kinetica-udf-api-python.git
There are four files associated with the pandas UDF example, all of which can be found in the Python UDF API repo.
- A database setup script (test_environment.py) that is called from the initialization script
- An initialization script (setup_db.py) that creates the output table
- A script (register_execute_UDF.py) to register and execute the UDF
- A UDF (df_to_output_UDF.py) that creates a pandas dataframe and inserts it into the output table
This example runs inside a Conda environment. The environment can be automatically configured using the conda_env_py3.yml file found in the Python UDF API repository.
From the directory into which you cloned the API, change into the root folder of the Python UDF API repository:
cd kinetica-udf-api-python
Create the Conda environment, replacing <environment name> with the desired name:
conda env create --name <environment name> --file conda_env_py3.yml
It may take a few minutes to create the environment.
Verify the environment was created properly:
conda info --envs
Activate the new environment:
conda activate <environment name>
Install the pygdf package:
conda install -c numba -c conda-forge -c gpuopenanalytics/label/dev -c defaults pygdf=0.1.0a2
Install the Kinetica Python API:
pip install gpudb~=7.1.0
Add the Python UDF API repo's root directory to the PYTHONPATH, e.g.:
export PYTHONPATH=$(pwd):$PYTHONPATH
Edit the util/test_environment.py script for the correct host, port, user, and password for your Kinetica instance:
HOST = 'http://localhost'
PORT = '9191'
USER = 'testuser'
PASSWORD = 'Testuser123!'
Change directory into the UDF Pandas directory:
Run the UDF initialization script:
python setup_db.py
Run the script to register and execute the UDF:
python register_execute_UDF.py [--host <kinetica-host> [--username <kinetica-user> --password <kinetica-pass>]]
Verify the results in the /opt/gpudb/core/logs/gpudb-proc.log file on the head node and/or in the unittest_df_output table in GAdmin.
This example details using a distributed UDF to create and ingest a pandas dataframe into Kinetica. The df_to_output_UDF proc creates the dataframe and inserts it into the output table, unittest_df_output.
The dataframe has a shape of (3, 3) and will be inserted into the output table n times, where n is the number of processing nodes available in each processing node container registered in Kinetica.
The output table contains 3 columns:
- id -- an integer column
- value_long -- a long column
- value_float -- a float column
The setup script, setup_db.py, which creates the output table for the UDF, imports the test_environment.py script to access its methods:
The script calls two methods from test_environment.py: one to create the schema used to contain the example tables and one to create the output table, where the output table's name and type are passed in:
The methods in test_environment.py require a connection to Kinetica. This is done by instantiating an object of the GPUdb class with a provided connection URL (host and port):
The create_schema() method creates the schema that will contain the table used in the example:
The create_test_output_table() method creates the type and table for the output table, but the table is removed first if it already exists:
First, packages are imported to access the Kinetica Python UDF API and pandas:
Next, the file gets a handle to the ProcData() class:
To output the number of values found on each processing node and processing node container, the rank and tom numbers identifying the current instance of the UDF are retrieved from the request info map and displayed:
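As a rough stand-in for that step, the snippet below pulls rank and tom values out of a plain dict simulating the proc's request info map; the key names ("rank_number", "tom_number") follow the Kinetica Python UDF API, and the helper function itself is hypothetical:

```python
# Hypothetical sketch: in the real UDF these values come from
# ProcData().request_info; this dict only stands in for that map.
def instance_label(request_info):
    rank = request_info["rank_number"]
    tom = request_info["tom_number"]
    return "UDF running on rank {} (tom {})".format(rank, tom)

# Simulated request info for one UDF instance:
print(instance_label({"rank_number": "1", "tom_number": "0"}))
```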
The dataframe is created:
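The actual dataframe construction lives in df_to_output_UDF.py; as a minimal sketch, a (3, 3) dataframe whose columns match the output table's schema could look like the following (the specific values are illustrative assumptions):

```python
import pandas as pd

# A (3, 3) dataframe lining up with the output table's columns:
# id (int), value_long (long), value_float (float). The values used by
# df_to_output_UDF.py may differ; this only illustrates shape and dtypes.
df = pd.DataFrame({
    "id": pd.Series([1, 2, 3], dtype="int32"),
    "value_long": pd.Series([10, 20, 30], dtype="int64"),
    "value_float": pd.Series([1.5, 2.5, 3.5], dtype="float32"),
})

print(df.shape)
```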
Get a handle to the output table from the proc (unittest_df_output). Its size is expanded to match the shape of the dataframe; this allocates enough memory to copy all records from the dataframe to the output table. Then the dataframe is assigned to the output table:
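Outside the UDF API, the allocate-then-copy pattern can be imitated with plain NumPy buffers. This sketch only mirrors the idea; in the real proc, the table handle and its size attribute come from ProcData, not from these stand-in buffers:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "id": np.array([1, 2, 3], dtype="int32"),
    "value_long": np.array([10, 20, 30], dtype="int64"),
    "value_float": np.array([1.5, 2.5, 3.5], dtype="float32"),
})

# Stand-in for expanding the output table: preallocate one buffer per column,
# sized to the dataframe's row count, then copy each column into place.
rows = df.shape[0]
output_buffers = {name: np.empty(rows, dtype=df[name].dtype) for name in df.columns}
for name in df.columns:
    output_buffers[name][:] = df[name].to_numpy()
```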
UDF Registration and Execution (register_execute_UDF.py)
To interact with Kinetica, an object of the GPUdb class is instantiated while providing the connection URL, including the host and port of the database server. Ensure the host address and port are correct for your setup:
To upload the df_to_output_UDF.py and kinetica_proc.py files to Kinetica, they will first need to be read in as bytes and added to a file data map:
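That read-and-map step can be sketched as follows; the temp-directory files here are placeholders, since the real script reads df_to_output_UDF.py and kinetica_proc.py from disk alongside register_execute_UDF.py:

```python
import pathlib
import tempfile

# Placeholder copies of the two files the registration script uploads.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "df_to_output_UDF.py").write_text("# UDF body placeholder\n")
(tmp / "kinetica_proc.py").write_text("# UDF API placeholder\n")

# Build the file data map: file name -> file contents as bytes, the shape
# create_proc() expects for its files argument.
file_data = {}
for name in ("df_to_output_UDF.py", "kinetica_proc.py"):
    with open(tmp / name, "rb") as f:
        file_data[name] = f.read()
```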
After the files are placed in a data map, the distributed Pandas_df_output proc is created in Kinetica and the files are associated with it:
The proc requires the proper command and args to be executed. In this case, the assembled command line would be:
python df_to_output_UDF.py
Finally, after the proc is created, it is executed. The output table created in the Database Setup section is passed in here: