Java UDF Guide
Step-by-step guide to creating UDFs with the Java API
The following guide provides step-by-step instructions to get started writing and running UDFs in Java. This example is a simple distributed UDF that copies data from one table to another using a CSV configuration file to determine on which processing node(s) data will be copied.
Standard (non-replicated) tables have their data distributed across all processing nodes, while replicated tables have all of their data on every processing node. In this example, we'll use a standard table and copy only the portions of its data that reside on the nodes named in the CSV file.
Note that copying data from only some processing nodes typically has no real-world application; this exercise is purely to demonstrate the many facets of the UDF API.
The general prerequisites for using UDFs in Kinetica can be found on the User-Defined Function Implementation page.
There are six files associated with the Java UDF tutorial. All the files can be found in the Java Tutorial Git Repo, which is cloned in the API Download and Installation section.
manager - sub-directory that is used to compile the UDF manager JAR
udf - sub-directory that is used to compile the UDF JAR
Java 1.7 (or greater)
Note
The location of java should be placed in the PATH environment variable and JAVA_HOME should be set. If it is not, you'll need to use the full path to the java executable in the relevant instructions below.
Maven
Python 2.7 (or greater) and pip
Note
The locations of python and pip should be placed in the PATH environment variable. If they are not, you'll need to use the full path to the python and pip executables in the relevant instructions below. Also, administrative access will most likely be required when installing the Python packages.
The Java UDF tutorial requires local access to the Java UDF API & tutorial repositories and the Java API. The native Python API must also be installed to use the UDF simulator (details found in Development).
In the desired directory, run the following to download the Kinetica Java UDF tutorial repository:
git clone -b release/v7.1 --single-branch https://github.com/kineticadb/kinetica-tutorial-java-udf-api.git
In the same directory, run the following to download the Kinetica Java UDF API repository:
git clone -b release/v7.1 --single-branch https://github.com/kineticadb/kinetica-udf-api-java.git
In the same directory, run the following to download the Kinetica Python API repository:
git clone -b release/v7.1 --single-branch https://github.com/kineticadb/kinetica-api-python.git
Change directory into the newly downloaded native Python API repository:
cd kinetica-api-python
In the root directory of the repository, install the Python API:
sudo python setup.py install
Change directory into the Java UDF API repository:
cd ../kinetica-udf-api-java/proc-api
Install the Java UDF API:
mvn clean package
mvn install
Change directory into the UDF tutorial root:
cd ../..
The steps below outline using the UDF Simulator, included with the Python API. The UDF Simulator simulates the mechanics of executeProc() without actually calling it in the database; this is useful for developing UDFs piece-by-piece and testing them incrementally, avoiding memory ramifications for the database.
The UDF files must be compiled into a JAR prior to usage; the files will need to be re-compiled after making any changes to the UDF code. Re-compiling this tutorial using the provided main pom.xml file will create two JARs: one for the UDF itself and one for the manager.
To compile the example UDF & manager:
cd kinetica-tutorial-java-udf-api/table-copy
mvn clean package
cd output
Important
When working on your own UDFs, ensure that the Kinetica Java UDF API is not bundled with your UDF JAR; otherwise, there could be a compilation target platform conflict with the UDF API on the Kinetica server.
A UDF can be tested using the UDF Simulator in the native Python API repository without writing anything to the database.
Run the UDF manager JAR with the init option, specifying the database URL and a username & password:
java -jar kinetica-udf-table-copy-manager-7.1.2-jar-with-dependencies.jar init <url> <username> <password>
In the native Python API directory, run the UDF Simulator in execute mode with the following options to simulate running the UDF:
python ../../../kinetica-api-python/examples/udfsim.py execute -d \
    -i [<schema>.]<input-table> -o [<schema>.]<output-table> \
    -K <url> -U <username> -P <password>
Where:

-d - run the UDF in distributed mode
-i - name of the input table (optionally schema-qualified) to copy data from
-o - name of the output table (optionally schema-qualified) to copy data to
-K - URL of the database
-U - username of the database user executing the UDF
-P - password of the database user executing the UDF
For instance:
python ../../../kinetica-api-python/examples/udfsim.py execute -d \
    -i udf_tc_java_in_table -o udf_tc_java_out_table \
    -K http://127.0.0.1:9191 -U admin -P admin123
Copy & execute the export command output by the previous command; this will prepare the execution environment for simulating the UDF:
export KINETICA_PCF=/tmp/udf-sim-control-files/kinetica-udf-sim-icf-xMGW32
Important
The export command shown above is only an example of what the udfsim.py script will output; it should not be copied to the terminal in which this example is being run. Make sure to copy & execute the actual command output by udfsim.py in the previous step.
Run the UDF:
java -jar kinetica-udf-table-copy-proc-7.1.2-jar-with-dependencies.jar
Run the UDF Simulator in output mode to output the results to Kinetica (use the dry run flag -d to avoid writing to Kinetica). The results map will be returned (even if there's nothing in it), as well as the number of records that were added (or would be, in the case of a dry run) to the given output table:
python ../../../kinetica-api-python/examples/udfsim.py output \
    -K <url> -U <username> -P <password>
For instance:
python ../../../kinetica-api-python/examples/udfsim.py output \
    -K http://127.0.0.1:9191 -U admin -P admin123
This should output the following:
No results
Output:
  udf_tc_java_out_table: 10000 records
Clean the control files output by the UDF Simulator:
python ../../../kinetica-api-python/examples/udfsim.py clean
Important
The clean command is only necessary if data was output to Kinetica; otherwise, the UDF Simulator can be re-run as many times as desired without having to clean the output files and enter another export command.
Once you're satisfied with your UDF after testing it with the UDF Simulator, or if you simply want to see it in action, the UDF can be created and executed using the UDF methods createProc() and executeProc(), respectively.
Run the UDF manager JAR with the init option to reset the example tables:
java -jar kinetica-udf-table-copy-manager-7.1.2-jar-with-dependencies.jar init <url> <username> <password>
Run the UDF manager JAR with the exec option to run the example:
java -jar kinetica-udf-table-copy-manager-7.1.2-jar-with-dependencies.jar exec <url> <username> <password>
Verify the results, using a SQL client (KiSQL), Kinetica Workbench, or other:
The udf_tc_java_in_table table is created in the user's default schema (ki_home, unless a different one was assigned during account creation)
A matching udf_tc_java_out_table table is created in the same schema
The udf_tc_java_in_table contains 10,000 records of random data
The udf_tc_java_out_table contains the correct amount of copied data from udf_tc_java_in_table.
On single-node installations, as is the case with Developer Edition, all data should be copied. This is because single-node instances have a default configuration of 2 worker ranks with one TOM each, and the rank_tom.csv configuration file contains a reference to rank 1/TOM 0 and rank 2/TOM 0, effectively naming both data TOMs to copy data from.
In larger cluster configurations, only a fraction of the data in the input table will be stored on those two TOMs; so, the output table will contain that same fraction of the input table's data.
The database logs should also show the portion of the data being copied:
Copying <5071> records of <3> columns on rank/TOM <1/0> from <ki_home.udf_tc_java_in_table> to <ki_home.udf_tc_java_out_table>
Copying <4929> records of <3> columns on rank/TOM <2/0> from <ki_home.udf_tc_java_in_table> to <ki_home.udf_tc_java_out_table>
As mentioned previously, this section details a simple distributed UDF that copies data from one table to another. While the table copy UDF can run against multiple tables, the example run will use a single table, udf_tc_java_in_table, as input and a similar table, udf_tc_java_out_table, for output.
The input table will contain one int16 column (id) and two float columns (x and y). The id column will be an ordered integer field, with the first row containing 1, the second row containing 2, etc. Both float columns will contain 10,000 pairs of randomly-generated numbers:
+------+-----------+-----------+
| id   | x         | y         |
+======+===========+===========+
| 1    | 2.57434   | -3.357401 |
+------+-----------+-----------+
| 2    | 0.0996761 | 5.375546  |
+------+-----------+-----------+
| ...  | ...       | ...       |
+------+-----------+-----------+
The output table will also contain one int16 column (id) and two float columns (a and b). No data is inserted:
+------+-----------+-----------+
| id   | a         | b         |
+======+===========+===========+
|      |           |           |
+------+-----------+-----------+
The UDF will first read from a given CSV file to determine from which processing node container (rank) and processing node (TOM) to copy data:
|
|
The tom_num column values refer to processing nodes that contain the many shards of data inside the database. The rank_num column values refer to processing node containers that hold the processing nodes for the database. For example, the given CSV file determines that the data from udf_tc_java_in_table on processing node container 1, processing node 0 and processing node container 2, processing node 0 will be copied to udf_tc_java_out_table on those same nodes.
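Given the rank 1/TOM 0 and rank 2/TOM 0 pairs described above, the rank_tom.csv file can be sketched roughly as follows (the column order and exact header names are assumptions based on the column descriptions in this section):

```csv
rank_num,tom_num
1,0
2,0
```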
Once the UDF is executed, a UDF instance (OS process) is spun up for each processing node to execute the UDF code against its assigned processing node's data. Each UDF process then determines if its corresponding processing node container/processing node pair matches one of the pairs of values in the CSV file. If there is a match, the UDF process will loop through the given input tables and copy the data contained in that processing node from the input tables to the output tables. If there isn't a match, no data will be copied by that process.
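The per-process decision logic described above can be sketched in plain Java, independent of the Kinetica UDF API. The class and method names here (RankTomFilter, shouldCopy, copyIfMatched) are illustrative only, not part of the API:

```java
import java.util.ArrayList;
import java.util.List;

public class RankTomFilter {

    // Returns true if this process's (rank, tom) pair appears among the
    // pairs parsed from the CSV configuration file.
    public static boolean shouldCopy(int rank, int tom, int[][] csvPairs) {
        for (int[] pair : csvPairs) {
            if (pair[0] == rank && pair[1] == tom) {
                return true;
            }
        }
        return false;
    }

    // Copies this process's local input column to the output only on a
    // match; on no match, the process is effectively a no-op.
    public static List<Double> copyIfMatched(int rank, int tom,
                                             int[][] csvPairs,
                                             List<Double> inputColumn) {
        List<Double> output = new ArrayList<>();
        if (shouldCopy(rank, tom, csvPairs)) {
            output.addAll(inputColumn);  // copy all local records
        }
        return output;
    }

    public static void main(String[] args) {
        int[][] pairs = { {1, 0}, {2, 0} };  // pairs as in rank_tom.csv
        System.out.println(shouldCopy(1, 0, pairs));  // true: data is copied
        System.out.println(shouldCopy(3, 1, pairs));  // false: nothing copied
    }
}
```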
The init option invokes the init() method of the UdfTcManager class. This method will create the input table for the UDF to copy data from and the output table to copy data to. Sample data will also be generated and inserted into the input table.
To create tables using the Java API, a Type needs to be defined in the system first. The type is a class, extended from RecordObject, using annotations to describe which class instance variables are fields (i.e. columns), what type they are, and any special handling they should receive. Each field consists of a name and a data type:
|
|
To interact with Kinetica, you must first instantiate an object of the GPUdb class while providing the connection URL and username & password to use for logging in. This database object is later passed to the init() and exec() methods:
|
|
The InTable type and table are created, but the table is removed first if it already exists. Then the table creation is verified using showTable():
|
|
Next, sample data is generated and inserted into the new input table:
|
|
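As a rough, self-contained illustration of the sample data described earlier — an ordered id column and two columns of random floats — the generation step might look like the following (the value range, seed handling, and representation of id as a float here are assumptions for the sketch; the actual tutorial code may differ):

```java
import java.util.Random;

public class SampleData {

    // Generates rows of [id, x, y]: id is an ordered 1-based counter,
    // x and y are random floats (range here is illustrative).
    public static float[][] generate(int rows, long seed) {
        Random rng = new Random(seed);
        float[][] data = new float[rows][3];
        for (int i = 0; i < rows; i++) {
            data[i][0] = i + 1;                       // id: 1, 2, 3, ...
            data[i][1] = rng.nextFloat() * 20 - 10;   // x
            data[i][2] = rng.nextFloat() * 20 - 10;   // y
        }
        return data;
    }

    public static void main(String[] args) {
        float[][] rows = generate(10000, 42L);
        System.out.println(rows.length);       // 10000
        System.out.println((int) rows[0][0]);  // 1
    }
}
```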
Lastly, an OutTable type and table are created, but the table is removed first if it already exists. Then the table creation is verified using showTable():
|
|
The UdfTcJavaProc class is the UDF itself. It does the work of copying the input table data to the output table, based on the ranks & TOMs specified in the given CSV file.
First, instantiate a handle to the ProcData class:
|
|
Retrieve the rank/TOM pair for this UDF process instance from the request info map:
|
|
Then, the CSV file mentioned in Program Files is read (skipping the header):
|
|
Compare the rank and TOM of the current UDF instance's processing node to each rank/TOM pair in the file to determine if the current UDF instance should copy the data on its corresponding processing node:
|
|
For each input and output table found in the inputData and outputData objects (respectively), set the output tables' size to the input tables' size. This will allocate enough memory to copy all input records to the output table:
|
|
For each input column in the input table(s), copy the input columns' values to the corresponding output table columns:
|
|
Call complete() to tell Kinetica the UDF is finished:
|
|
The exec option invokes the exec() method of the UdfTcManager class. This method will read files in as bytes, create a UDF, and upload the files to the database. The method will then execute the UDF.
To upload the UdfTcManager.jar and rank_tom.csv files to Kinetica, they will first need to be read in as bytes and added to a file data map:
|
|
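The file-map construction described above can be sketched as follows. The helper class and method names are hypothetical, but the Map<String, ByteBuffer> shape mirrors what the database's create-proc call accepts, keyed by the name each file should have on the server:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

public class UdfFileMap {

    // Reads each file fully into memory and keys its bytes by file name.
    public static Map<String, ByteBuffer> buildFileMap(Path... files) throws IOException {
        Map<String, ByteBuffer> fileMap = new HashMap<>();
        for (Path file : files) {
            // Read the whole file as raw bytes and wrap it for upload
            fileMap.put(file.getFileName().toString(),
                        ByteBuffer.wrap(Files.readAllBytes(file)));
        }
        return fileMap;
    }

    public static void main(String[] args) throws IOException {
        // A temp file stands in for rank_tom.csv so the sketch runs anywhere
        Path csv = Files.createTempFile("rank_tom", ".csv");
        Files.write(csv, "rank_num,tom_num\n1,0\n2,0\n".getBytes());
        Map<String, ByteBuffer> files = buildFileMap(csv);
        System.out.println(files.size());  // 1
    }
}
```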
After the files are placed in a data map, the distributed UdfTcJavaProc UDF can be created in Kinetica and the files can be associated with it:
|
|
Note
The Java UDF command line needs to reference:

the UDF JAR itself
the Kinetica UDF API JARs on the classpath
the fully-qualified name of the UDF's main class

In this case, the assembled command line would be:
java -cp kinetica-udf-table-copy-proc-7.1.2.jar:<UDF API JARs> com.kinetica.UdfTcJavaProc
Finally, after the UDF is created, it can be executed. The input & output tables created in the Initialization (UdfTcManager.java init) section are passed in here:
|
|
Details of the main POM file (at the root of the project) follow.
Metadata and parent information that will be accessed by the child modules:
|
|
The list of modules to compile. Modules are based on POM files within sub-directories specified using the <module> tag:
|
|
Maven Antrun plugin configuration that copies the CSV file to an output directory at the top-level directory of the project:
|
|
Details of the manager POM file (contained within the manager sub-directory of the project) follow.
References to the parent POM's metadata:
|
|
Metadata used for naming the package and build component versions:
|
|
Outlining the Kinetica Java API as a dependency as well as which repositories to search for it in:
|
|
Maven compiler and assembly plugin configuration that determines what to compile and how:
|
|
Maven Antrun plugin configuration that copies the compiled module JAR to an output directory at the top-level directory of the project:
|
|
Details of the UDF POM file (contained within the udf sub-directory of the project) follow.
References to the parent POM's metadata:
|
|
Metadata used for naming the package and build component versions:
|
|
Outlining the Kinetica Java UDF API as a dependency:
|
|
Maven compiler plugin configuration that determines what to compile and how:
|
|
Maven Antrun plugin configuration that copies the compiled module JAR to an output directory at the top-level directory of the project:
|
|