Overview
Kinetica's vector search capability is enabled through its use as a vector store database. Once a table has been created with a vector column and a set of embeddings has been loaded into it, a variety of K-nearest neighbor searches can be performed on the data set.
Details on the vector type and its usage are found below. For more complete walkthroughs of the functionality, see the accompanying Jupyter notebook examples.
Vector Type
The vector data type has been added to facilitate managing embeddings and issuing vector search queries. The vector type is effectively an array of floats and can be used as shown in the following examples.
Create Table
A vector column can optionally be configured to normalize the vector data inserted into it, giving each vector a magnitude (L2 norm) of 1. This can improve the performance of some vector operations with minimal overhead.
Vector Column without Normalization
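For example, a table with a 3-dimensional vector column can be created as sketched below; the table and column names (vector_example, vec) are illustrative, and the dimension argument to the VECTOR type is an assumption to verify against the CREATE TABLE reference:

```sql
-- Create a table with a 3-dimensional vector column (no normalization)
CREATE TABLE vector_example
(
    id INTEGER NOT NULL,
    vec VECTOR(3)
);
```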
Vector Column with Normalization
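A sketch of the normalized variant follows; the normalization clause is shown as a hypothetical second argument to the VECTOR type and should be checked against the CREATE TABLE reference:

```sql
-- Create a table whose vector column normalizes inserted vectors to unit length
-- (the second VECTOR argument shown here is a hypothetical normalization flag)
CREATE TABLE vector_example_norm
(
    id INTEGER NOT NULL,
    vec VECTOR(3, 1)
);
```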
Insert Data
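A sketch of inserting vector data into the illustrative table above, assuming vectors can be given as bracketed lists of floats in string form:

```sql
-- Insert two 3-dimensional vectors, given as bracketed float lists
INSERT INTO vector_example (id, vec)
VALUES
    (1, '[1.0, 2.0, 3.0]'),
    (2, '[0.5, 0.5, 0.5]');
```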
Retrieve Data
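A sketch of reading vector data back from the illustrative table above, using the vector column functions described below:

```sql
-- Return each vector along with its magnitude and first component
SELECT
    id,
    vec,
    L2_NORM(vec) AS magnitude,
    NTH(vec, 0)  AS first_value
FROM vector_example;
```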
Vector Indexes
There are two types of indexes available to improve the performance of vector searches:
CAGRA Vector Index
The performance of some vector searches can be improved with the application of a CAGRA index, which must be manually refreshed to account for updates to the data in the corresponding table.
This index can be applied in SQL during table creation, via CREATE TABLE, or afterwards, as shown below:
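A sketch of adding a CAGRA index to an existing vector column; the ALTER TABLE clause shown is an assumption to verify against the index reference:

```sql
-- Add a CAGRA vector index to an existing vector column
-- (exact clause is assumed; see the CREATE TABLE / ALTER TABLE reference)
ALTER TABLE vector_example
ADD CAGRA INDEX (vec);
```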
HNSW Vector Index
The performance of some vector searches can be improved with the application of an HNSW index, which is automatically updated as the data in the corresponding table changes.
This index can be applied in SQL during table creation, via CREATE TABLE, or afterwards, as shown below:
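A sketch of adding an HNSW index to an existing vector column; as above, the ALTER TABLE clause shown is an assumption:

```sql
-- Add an HNSW vector index to an existing vector column
-- (exact clause is assumed; see the CREATE TABLE / ALTER TABLE reference)
ALTER TABLE vector_example
ADD HNSW INDEX (vec);
```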
Vector Functions & Operators
Vector Column Functions
Function | Description |
---|---|
L1_NORM(v) | Calculates the sum of the absolute values of the given vector's values |
L2_NORM(v) | Calculates the square root of the sum of squares of the given vector's values |
LINF_NORM(v) | Returns the maximum of the given vector's values |
LP_NORM(v, p) | Calculates the Lp-space norm of the given vector in the space p |
NTH(v, n) | Returns the given vector's value at 0-based index n |
SIZE(v) | Returns the given vector's number of values |
Vector Search Functions
A number of K-nearest neighbor functions have been implemented to support vector searches. For examples, see Vector Function Examples.
Function | Description |
---|---|
COSINE_DISTANCE(v1, v2) | 1 minus the cosine similarity (equality of angle) of the given vectors |
DOT_PRODUCT(v1, v2) | Calculates the sum of products of the given vectors' values |
EUCLIDEAN_DISTANCE(v1, v2) | Alias for L2_DISTANCE |
L1_DISTANCE(v1, v2) | Calculates the L1-space (taxicab) distance between the given vectors |
L2_DISTANCE(v1, v2) | Calculates the L2-space (Euclidean) distance between the given vectors |
L2_SQUAREDDISTANCE(v1, v2) | Calculates the sum of squares of distances between the given vectors' values |
L2_DISTSQ(v1, v2) | Alias for L2_SQUAREDDISTANCE |
LINF_DISTANCE(v1, v2) | Calculates the maximum of the distances between pairs of values in the given vectors |
LP_DISTANCE(v1, v2, p) | Calculates the Lp-space distance between the given vectors in the space p |
Vector Search Operators
These operators can be used as shorthand to apply vector functions to individual vector column values. For examples, see Vector Operator Examples.
Note
These operators are only available in SQL or in the native API via /execute/sql.
Operator | Equivalent Function |
---|---|
v1 <-> v2 | L2_DISTANCE(v1, v2) |
v1 <=> v2 | COSINE_DISTANCE(v1, v2) |
v1 <#> v2 | DOT_PRODUCT(v1, v2) |
Vector Search Examples
Vector searches can be performed using either the named functions or, for select functions, the corresponding shorthand operators.
Vector Operator Examples
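A sketch of K-nearest neighbor searches using the shorthand operators against the illustrative table from above; the bracketed string form of the query vector is an assumption:

```sql
-- 5 nearest neighbors to the query vector by Euclidean distance (<-> is L2_DISTANCE)
SELECT id, vec
FROM vector_example
ORDER BY vec <-> '[1.0, 2.0, 3.0]'
LIMIT 5;

-- 5 nearest neighbors by cosine distance (<=> is COSINE_DISTANCE)
SELECT id, vec
FROM vector_example
ORDER BY vec <=> '[1.0, 2.0, 3.0]'
LIMIT 5;
```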
Vector Function Examples
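The same searches expressed with the named functions:

```sql
-- 5 nearest neighbors by Euclidean distance
SELECT id, L2_DISTANCE(vec, '[1.0, 2.0, 3.0]') AS dist
FROM vector_example
ORDER BY dist
LIMIT 5;

-- 5 nearest neighbors by cosine distance
SELECT id, COSINE_DISTANCE(vec, '[1.0, 2.0, 3.0]') AS dist
FROM vector_example
ORDER BY dist
LIMIT 5;
```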
Embedding Models
Kinetica supports three different embedding models:
- SQLGPT: the default model, based on the OpenAI model at sqlgpt.io, with a maximum of 8191 tokens and a maximum returned vector size of 1536
- OpenAI: either of these models can be used (see the OpenAI documentation for details):
  - text-embedding-3-small
  - text-embedding-3-large
- Nvidia: a NIM microservice, like embed-qa-4, can be used, with a maximum of 512 tokens and a fixed returned vector size of 1024; an input type of query or passage must be specified for the model to generate the appropriate response text (see Nvidia Preview for a demo)
CREATE MODEL
Creates a new embedding model reference.
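A syntax sketch assembled from the parameters below; the exact punctuation of the option list is an assumption:

```sql
CREATE [OR REPLACE] [REMOTE] MODEL <model name>
WITH OPTIONS
(
    CREDENTIAL = '<credential name>',
    REMOTE_MODEL_NAME = '<remote model name>',
    REMOTE_MODEL_LOCATION = '<model host URL>'
)
```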
Parameter | Description |
---|---|
OR REPLACE | Any existing model with the same name will be dropped before creating this one |
REMOTE | Optional keyword for compatibility |
<model name> | Name of the model to create; must adhere to the supported naming criteria |
WITH OPTIONS | Indicator that a comma-delimited list of model option/value assignments will follow. |
CREDENTIAL | Credential object to use to authenticate to model service. |
REMOTE_MODEL_NAME | Name of the model hosted by the embedding service; e.g., NV-Embed-QA. |
REMOTE_MODEL_LOCATION | URL of the model host service. |
For example, to create a remote model, first create an API key credential:
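A sketch of creating such a credential; the credential name (nim_api_key) is illustrative, the type and IDENTITY/SECRET fields are assumptions to verify against the CREATE CREDENTIAL reference, and the key value is a placeholder:

```sql
-- Credential holding the embedding service API key
-- (type and field names are assumed; see the CREATE CREDENTIAL reference)
CREATE OR REPLACE CREDENTIAL nim_api_key
TYPE = 'other',
IDENTITY = 'user',
SECRET = '<api key>'
```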
Then, create the model:
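A sketch of creating a reference to the Nvidia NV-Embed-QA model, using the credential above; the model name (nvidia_embedding) is illustrative and the host URL is a placeholder:

```sql
-- Reference to a remotely hosted embedding model
CREATE OR REPLACE REMOTE MODEL nvidia_embedding
WITH OPTIONS
(
    CREDENTIAL = 'nim_api_key',
    REMOTE_MODEL_NAME = 'NV-Embed-QA',
    REMOTE_MODEL_LOCATION = '<NIM service URL>'
)
```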
Optionally, the model can be configured as the default embeddings model, so that it does not need to be specified on each call to GENERATE_EMBEDDINGS (see the MODEL_NAME parameter below).
ALTER MODEL
Alters the configuration of an existing embedding model reference. Any of the model configuration parameters can be modified individually.
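A syntax sketch assembled from the parameters below; the SET OPTIONS clause shown is an assumption:

```sql
ALTER MODEL <model name>
SET OPTIONS
(
    [CREDENTIAL = '<credential name>',]
    [REMOTE_MODEL_NAME = '<remote model name>',]
    [REMOTE_MODEL_LOCATION = '<model host URL>']
)
```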
Parameter | Description |
---|---|
<model name> | Name of the model to modify; must adhere to the supported naming criteria |
CREDENTIAL | Credential object to use to authenticate to model service. |
REMOTE_MODEL_NAME | Name of the model hosted by the embedding service; e.g., NV-Embed-QA. |
REMOTE_MODEL_LOCATION | URL of the model host service. |
For example, to change the name of the remote model referenced:
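For instance, pointing the illustrative nvidia_embedding reference from above at a different hosted model (the clause form carries the same assumption as the syntax sketch):

```sql
ALTER MODEL nvidia_embedding
SET OPTIONS
(
    REMOTE_MODEL_NAME = 'NV-Embed-QA'
)
```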
GENERATE_EMBEDDINGS
Generates embeddings for the specified input data using the specified embedding model reference.
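A syntax sketch assembled from the parameters below; the table-function invocation form (TABLE(...) with => named parameters and an INPUT_TABLE-wrapped query) is an assumption modeled on other Kinetica SQL table functions:

```sql
SELECT /* KI_HINT_SAVE_UDF_STATS */ *
FROM TABLE
(
    GENERATE_EMBEDDINGS
    (
        MODEL_NAME => '<model name>',
        EMBEDDING_TABLE => INPUT_TABLE(<query or table name>),
        EMBEDDING_INPUT_COLUMNS => '<input column list>',
        EMBEDDING_OUTPUT_COLUMNS => '<output column list>',
        DIMENSIONS => <vector size>,
        PARAMS => KV_PAIRS(<key/value pairs>)
    )
)
```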
Parameter | Description |
---|---|
KI_HINT_SAVE_UDF_STATS | SQL hint to log the embedding generation results of the UDF responsible for the processing (rag_udf_embed), so the logs can be viewed from the Jobs tab in Workbench. |
MODEL_NAME | Name of the model to use in generating embeddings, adhering to the supported naming criteria; if not specified, the default model, sqlgpt, will be used. A different model can be configured as the system default to avoid having to specify this parameter (see CREATE MODEL above). |
EMBEDDING_TABLE | The query or name of the table to use as input data for the embedding generation process. |
EMBEDDING_INPUT_COLUMNS | Names of the columns in the given EMBEDDING_TABLE for which embeddings will be generated. |
EMBEDDING_OUTPUT_COLUMNS | Names of the columns in which to return the generated embeddings; if none are specified, each input column name will have _embedding appended to it to construct the name of the corresponding output column. |
DIMENSIONS | Size of the vector returned by the embedding generation process; required for the default sqlgpt model. |
PARAMS | Optional indicator that a list of embedding option/value assignments will follow, passed as a comma-delimited list of key/value pairs to the KV_PAIRS function. |
For example, to generate embeddings with various models:
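A sketch of generating embeddings with the default model and with the Nvidia model reference created above; the invocation form carries the same assumption as the syntax sketch, the table/column names (product_reviews, review_text, review_embedding) are illustrative, and the key/value format inside KV_PAIRS is assumed:

```sql
-- Default (sqlgpt) model; DIMENSIONS is required for this model
SELECT * FROM TABLE
(
    GENERATE_EMBEDDINGS
    (
        EMBEDDING_TABLE => INPUT_TABLE(SELECT review_text FROM product_reviews),
        EMBEDDING_INPUT_COLUMNS => 'review_text',
        DIMENSIONS => 1536
    )
);

-- Nvidia NV-Embed-QA model; input_type must be 'query' or 'passage'
SELECT * FROM TABLE
(
    GENERATE_EMBEDDINGS
    (
        MODEL_NAME => 'nvidia_embedding',
        EMBEDDING_TABLE => INPUT_TABLE(SELECT review_text FROM product_reviews),
        EMBEDDING_INPUT_COLUMNS => 'review_text',
        EMBEDDING_OUTPUT_COLUMNS => 'review_embedding',
        PARAMS => KV_PAIRS(input_type = 'passage')
    )
);
```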
Vector Search with Embedding Models
Using embedding models, vector searches can be performed dynamically on a given data set.
For example, to search for healthy food references within a data set of product reviews:
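A sketch of that search, assuming a product_reviews table whose review_embedding vectors were generated with the Nvidia model above; the query embeds the search phrase on the fly and ranks reviews by cosine distance (the invocation form and names carry the same assumptions as the earlier sketches):

```sql
-- Rank product reviews by cosine distance to the embedding of 'healthy food'
SELECT r.review_text
FROM product_reviews AS r
CROSS JOIN TABLE
(
    GENERATE_EMBEDDINGS
    (
        MODEL_NAME => 'nvidia_embedding',
        EMBEDDING_TABLE => INPUT_TABLE(SELECT 'healthy food' AS search_text),
        EMBEDDING_INPUT_COLUMNS => 'search_text',
        EMBEDDING_OUTPUT_COLUMNS => 'search_embedding',
        PARAMS => KV_PAIRS(input_type = 'query')
    )
) AS q
ORDER BY r.review_embedding <=> q.search_embedding
LIMIT 10;
```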