Kinetica supports the basic notion of SQL tables as containers of one or more columns of data.
Table & column names must adhere to the supported naming criteria.
A column definition consists of a column type and optional column size, column properties, and nullability. Column properties are used to optimize data storage & speed.
The format of a defined column is the column name followed by the column definition: the column type, optionally followed by a column size limit or column properties enclosed in parentheses, and then an optional nullability clause.
This format applies to any DDL statement requiring the definition of columns, like CREATE TABLE and ALTER TABLE (when adding/modifying a column).
For example, the following are valid defined columns:
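(The definitions below are an illustrative sketch; the column names and sizes are arbitrary, and the size-plus-property parenthetical form is inferred from the column definition format described above.)

```sql
id          INTEGER NOT NULL
name        VARCHAR(64)
tags        VARCHAR(256, TEXT_SEARCH)
ip_src      IPV4
total_price DECIMAL(10,4)
```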
Column Types
Category | Data Type | Description |
---|---|---|
Number | BOOLEAN | Effective type: boolean |
Number | TINYINT | Effective type: int8 |
Number | BYTE | Alias for TINYINT |
Number | SMALLINT | Effective type: int16 |
Number | INTEGER | Effective type: integer |
Number | INT | Alias for INTEGER |
Number | BIGINT | Effective type: long |
Number | DECIMAL | Alias for BIGINT |
Number | LONG | Alias for BIGINT |
Number | UNSIGNED BIGINT | Effective type: ulong |
Number | UNSIGNED LONG | Alias for UNSIGNED BIGINT |
Number | REAL | Effective type: float |
Number | FLOAT | Alias for REAL |
Number | DOUBLE | Effective type: double |
Number | DECIMAL(P,S) | Effective type: varies by P & S |
Number | NUMERIC | Alias for DECIMAL / DECIMAL(P,S) |
String | VARCHAR | Effective type: string; character limit based on the configured system property |
String | VARCHAR(N) | Effective type: the smallest charN type that is at least size N, or string if N is greater than 256 |
String | CHAR | Alias for VARCHAR / VARCHAR(N) |
String | STRING | Alias for VARCHAR / VARCHAR(N) |
String | TEXT | Alias for VARCHAR / VARCHAR(N) |
String | IPV4 | Shorthand for VARCHAR(IPV4), which applies the IPV4 column property |
String | UUID | Effective type: uuid |
String | JSON | Effective type: json |
Date/Time | DATE | Effective type: date |
Date/Time | DATETIME | Effective type: datetime |
Date/Time | TIME | Effective type: time |
Date/Time | TIMESTAMP | Effective type: timestamp |
Date/Time | TYPE_DATE | Alias for DATE |
Date/Time | TYPE_TIME | Alias for TIME |
Date/Time | TYPE_TIMESTAMP | Alias for TIMESTAMP |
Binary | BLOB | Effective type: bytes |
Binary | BINARY | Alias for BLOB |
Binary | BYTES | Alias for BLOB |
Binary | VARBINARY | Alias for BLOB |
Geospatial | GEOMETRY | Effective type: wkt (string base type) |
Geospatial | ST_GEOMETRY | Alias for GEOMETRY |
Geospatial | WKT | Alias for GEOMETRY |
Geospatial | BLOB(WKT) | Effective type: wkt (bytes base type) |
Geospatial | BINARY(WKT) | Alias for BLOB(WKT) |
Geospatial | BYTES(WKT) | Alias for BLOB(WKT) |
Geospatial | VARBINARY(WKT) | Alias for BLOB(WKT) |
Composite | BOOLEAN[N] | Effective type: array(boolean) (string base type) |
Composite | INTEGER[N] | Effective type: array(int) (string base type) |
Composite | BIGINT[N] | Effective type: array(long) (string base type) |
Composite | VECTOR(N) | Effective type: vector(n) (bytes base type); generally used in vector search |
Composite | REAL[N] | Effective type: array(float) (string base type) |
Composite | DOUBLE[N] | Effective type: array(double) (string base type) |
Composite | VARCHAR[N] | Effective type: array(string) (string base type) |
Column Properties
Property | Description |
---|---|
DICT | Applies dict data handling to a column, enabling dictionary-encoding of its values; see Dictionary Encoding for details |
INIT_WITH_NOW | For DATE, TIME, DATETIME, and TIMESTAMP column types, enables the database to use NOW() as the value when the column is not specified in an INSERT statement; also directs the database to replace empty strings (for DATE, TIME, & DATETIME columns) and invalid timestamps (for TIMESTAMP columns) with NOW() |
INIT_WITH_UUID | For UUID column type, enables the database to use a universally unique identifier (UUID) as the value when the column is not specified in an INSERT statement; also directs the database to replace empty strings (for UUID columns) with UUIDs |
IPV4 | Treats the associated string-based column as an IPv4 address |
NORMALIZE | For VECTOR column type, automatically normalizes each vector to have a magnitude of 1; see Vector Type |
PRIMARY_KEY | Treats the associated column as a primary key, or part of a composite primary key if other columns also have this property |
SHARD_KEY | Treats the associated column as a shard key, or part of a composite shard key if other columns also have this property |
TEXT_SEARCH | Applies text-searchability to a column |
CREATE SCHEMA
Schemas are logical containers for all database objects (tables, views, etc.). In order to place an object in a schema, the schema must be created first; schemas will not be created automatically when specified in CREATE TABLE or similar calls.
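A sketch of the general form, pieced together from the parameters below (bracketed items are optional; exact syntax may vary by version):

```sql
CREATE SCHEMA [IF NOT EXISTS] <schema name>
```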
Parameters | Description |
---|---|
IF NOT EXISTS | Optional error suppression clause, which causes no error to be returned if a schema with the same name already exists |
<schema name> | Name of the schema to create; must adhere to the supported naming criteria |
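For instance, a schema for holding example objects might be created as follows (the schema name is illustrative):

```sql
CREATE SCHEMA example_schema
```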
ALTER SCHEMA
Alters the configuration of a schema. The name of a schema is the only property that can be altered.
Rename Schema
A schema can be renamed.
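A hedged sketch of the rename operation (the RENAME TO form is an assumption based on common SQL dialects; the schema names are illustrative):

```sql
ALTER SCHEMA example_schema
RENAME TO example_schema_new
```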
DROP SCHEMA
Removes an existing schema.
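A sketch of the general form, based on the parameters below (bracketed items are optional):

```sql
DROP SCHEMA [IF EXISTS] <schema name> [CASCADE]
```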
Parameters | Description |
---|---|
IF EXISTS | Optional error-suppression clause; if specified, no error will be returned if the given schema does not exist |
<schema name> | Name of the schema to remove |
CASCADE | Drops all objects contained within the schema; if not specified, an error will be returned if the given schema contains any tables, views, etc. |
For example, to drop a schema, including all contained objects:
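(A sketch; the schema name is illustrative.)

```sql
DROP SCHEMA IF EXISTS example_schema CASCADE
```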
SHOW SCHEMA
Outputs the DDL statement required to reconstruct the given schema.
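A sketch of the general form implied by the parameters below (per the table, * may be used in place of the schema name to output the DDL of all schemas, tables, & views):

```sql
SHOW [CREATE] SCHEMA <schema name>
```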
Note
The response to SHOW SCHEMA is a single-column result set with the DDL statement as the value in the DDL column, shown below with the column separators returned by kisql.
Parameters | Description |
---|---|
CREATE | Optional keyword for clarity |
SCHEMA | Optional clause to avoid ambiguity; if omitted, and a table or view exists with the given name, the command will be interpreted as a SHOW TABLE statement instead |
<schema name> | Name of the schema whose contained tables' & views' DDL will be output; use * instead to output the DDL of all schemas, tables, & views |
For example, to output the DDL for the example schema created in the CREATE SCHEMA section:
DESCRIBE SCHEMA
Lists the contained tables and views of a given schema.
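A sketch of the general form, based on the parameters below:

```sql
DESCRIBE [SCHEMA] <schema name>
```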
Parameters | Description |
---|---|
SCHEMA | Optional clause to avoid ambiguity; if omitted, and a table or view exists with the given name, the command will be interpreted as a DESCRIBE TABLE statement instead |
<schema name> | Name of the schema whose contained tables & views will be listed |
For example, to describe the tables contained in the demo schema, into which demo data is usually downloaded:
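(The statement itself might look like the following; the result listing is omitted here, since it depends on the data loaded.)

```sql
DESCRIBE SCHEMA demo
```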
SET CURRENT SCHEMA
To override the default schema with a different schema, the following syntax can be used:
Note
This command is only available through KiSQL or database clients configured with the Kinetica ODBC/JDBC driver.
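A sketch of the general form (the bracketed schema name is optional, as described below):

```sql
SET CURRENT SCHEMA [<schema name>]
```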
Omitting the schema name switches back to the original default schema.
CREATE TABLE
Creates a new table.
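A sketch of the overall statement shape, assembled from the parameters below (clauses in brackets are optional; the FOREIGN KEY form shown is an assumption based on common SQL dialects, and exact ordering and punctuation may differ by version):

```sql
CREATE [OR REPLACE] [REPLICATED] [TEMP] TABLE [<schema name>.]<table name>
(
    <column name> <column definition>,
    ...
    [PRIMARY KEY (<column list>),]
    [SHARD KEY (<column list>),]
    [FOREIGN KEY (<column list>) REFERENCES <target table> (<column list>), ...]
)
[<partition clause>]
[<tier strategy clause>]
[<index clause>]
[<table property clause>]
```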
Parameters | Description |
---|---|
OR REPLACE | Any existing table or view with the same name will be dropped before creating this one |
REPLICATED | The table will be distributed within the database as a replicated table |
TEMP | The table will be a memory-only table; among other things, it will not be persisted (if the database is restarted, the table will be removed), but it will have increased ingest performance |
<schema name> | Name of the schema that will contain the created table; if no schema is specified, the table will be created in the user's default schema |
<table name> | Name of the table to create; must adhere to the supported naming criteria |
<column name> | Name of a column to create within the table; must adhere to the supported naming criteria |
<column definition> | Definition of the column associated with <column name>; see Data Definition (DDL) for column format |
PRIMARY KEY (<column list>) | Optional primary key specification clause, where <column list> is a comma-separated list of columns to use as the primary key for the table |
SHARD KEY (<column list>) | Optional shard key specification clause, where <column list> is a comma-separated list of columns to use as the shard key for the table |
FOREIGN KEY ... | Optional comma-separated set of foreign key specification clauses, each linking one or more columns in this table to the corresponding primary key columns of a target table |
<partition clause> | Defines a partitioning scheme for the table to create |
<tier strategy clause> | Defines the tier strategy for the table to create |
<index clause> | Applies any number of column indexes, chunk skip indexes, geospatial indexes, CAGRA indexes, or HNSW indexes to the table to create |
<table property clause> | Assigns table properties, from a subset of those available, to the table to create |
For example, to create a table with various column types and properties:
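(A representative sketch; the schema, table, and column names are invented for illustration, and the size-plus-property form follows the column definition format described earlier.)

```sql
CREATE TABLE example_schema.employee
(
    id         INTEGER NOT NULL,
    dept_id    INTEGER NOT NULL,
    manager_id INTEGER,
    first_name VARCHAR(30),
    last_name  VARCHAR(30),
    profile    VARCHAR(1000, TEXT_SEARCH),
    salary     DECIMAL(18,4),
    hire_date  DATE,
    PRIMARY KEY (id, dept_id),
    SHARD KEY (dept_id)
)
```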
Partition Clause
A table can be further segmented into partitions. The supported partition types are range, interval, list (manual or automatic), hash, and series.
See Partitioning for details.
Range Partitioning
The general format for the range partition clause is:
The partition definition clause, PARTITIONS, is optional, though it is recommended to define partitions at table creation time, when feasible.
Warning
Defining (adding) partitions after data has been loaded will result in a performance penalty as the database moves existing records targeted for the new partition from the default partition into it.
For example, to create a range-partitioned table with the following criteria:
- partitioned on the date/time of the order
- partitions for years:
- 2014 - 2016
- 2017
- 2018
- 2019
- records not in that range go to the default partition
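(A hedged sketch; the names are illustrative, and the MIN/MAX bound keywords, with an inclusive minimum and exclusive maximum, are assumptions about the range partition syntax.)

```sql
CREATE TABLE example_schema.customer_order
(
    id          BIGINT NOT NULL,
    customer_id INTEGER NOT NULL,
    total_price DECIMAL(10,2),
    purchase_ts TIMESTAMP NOT NULL
)
PARTITION BY RANGE (YEAR(purchase_ts))
PARTITIONS
(
    order_2014_2016 MIN(2014) MAX(2017),
    order_2017                MAX(2018),
    order_2018                MAX(2019),
    order_2019                MAX(2020)
)
```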
Interval Partitioning
The general format for the interval partition clause is:
For example, to create an interval-partitioned table with the following criteria:
- partitioned on the date/time of the order
- one partition for each year from 2014 on
- later year partitions are added as necessary
- records prior to 2014 go to the default partition
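(A hedged sketch; the STARTING/INTERVAL keywords are assumptions about the interval partition syntax, and the names are illustrative.)

```sql
CREATE TABLE example_schema.customer_order
(
    id          BIGINT NOT NULL,
    purchase_ts TIMESTAMP NOT NULL
)
PARTITION BY INTERVAL (YEAR(purchase_ts))
PARTITIONS (STARTING (2014) INTERVAL (1))
```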
To create an interval-partitioned table with the following criteria:
- partitioned on the date/time of the order
- one partition for each day from January 1st, 2014 on
- later day partitions are added as necessary
- records prior to 2014 go to the default partition
The same interval-partitioned scheme above can be created using the timestamp column directly, with the help of the INTERVAL function (described in the Date/Time Functions section):
This scheme can be easily modified to create an hourly partition instead:
List Partitioning
The list partition clause has two forms:
Manual
The general format for the manual list partition clause is:
The partition definition clause, PARTITIONS, is optional, though it is recommended to define partitions at table creation time, when feasible.
Warning
Defining (adding) partitions after data has been loaded will result in a performance penalty as the database moves existing records targeted for the new partition from the default partition into it.
For example, to create a manual list-partitioned table with the following criteria:
- partitioned on the date/time of the order
- partitions for years:
- 2014 - 2016
- 2017
- 2018
- 2019
- records not in that list go to the default partition
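(A hedged sketch of the partition clause portion of such a CREATE TABLE statement; the VALUES keyword is an assumption about the manual list partition syntax, and the partition names are illustrative.)

```sql
PARTITION BY LIST (YEAR(purchase_ts))
PARTITIONS
(
    order_2014_2016 VALUES (2014, 2015, 2016),
    order_2017      VALUES (2017),
    order_2018      VALUES (2018),
    order_2019      VALUES (2019)
)
```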
To create a manual list-partitioned table with a multi-column key and the following criteria:
- partitioned on the date/time of the order
- each partition corresponds to a unique year & month pair
- partitions for years/months:
- February 2016 & March 2016
- March 2020
- records not in that list go to the default partition
Automatic
The general format for the automatic list partition clause is:
To create an automatic list-partitioned table with the following criteria:
- partitioned on the date/time of the order
- one partition for each unique year & month across all orders
- partitions are added as necessary
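(A hedged sketch of the partition clause portion of such a statement; the AUTOMATIC keyword is an assumption consistent with the clause name.)

```sql
PARTITION BY LIST (YEAR(purchase_ts), MONTH(purchase_ts))
AUTOMATIC
```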
Hash Partitioning
The general format for the hash partition clause is:
To create a hash-partitioned table with the following criteria:
- partitioned on the date/time of the order
- distributed among the fixed set of partitions, based on the hash of the year & month of the order
- 10 partitions
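(A hedged sketch of the partition clause portion of such a statement; the PARTITIONS count form is an assumption.)

```sql
PARTITION BY HASH (YEAR(purchase_ts), MONTH(purchase_ts))
PARTITIONS 10
```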
Series Partitioning
The general format for the series partition clause is:
The PERCENT_FULL value should be an integer between 1 and 100; the default is 50%.
To create a series-partitioned table with the following criteria:
- partitioned on the customer of each order
- partitions with closed key sets will contain all orders from a set of unique customers
- 50% fill threshold
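(A hedged sketch of the partition clause portion of such a statement; the PERCENT_FULL placement is an assumption, and the column name is illustrative.)

```sql
PARTITION BY SERIES (customer_id)
PERCENT_FULL 50
```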
To create a series-partitioned track table with the following criteria:
- partitioned on the track ID
- partitions with closed key sets will contain all points from a unique set of tracks
- 25% fill threshold
Tier Strategy Clause
A table can have a tier strategy specified at creation time. If not assigned a tier strategy upon creation, a default tier strategy will be assigned.
For example, to create a customer_order table with an above-average eviction priority in the RAM Tier:
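(A heavily hedged sketch; the TIER STRATEGY clause keywords, tier names, and priority scale are assumptions, with RAM 7 intended to suggest an above-average eviction priority in the RAM Tier. Names are illustrative.)

```sql
CREATE TABLE example_schema.customer_order
(
    id          BIGINT NOT NULL,
    total_price DECIMAL(10,2)
)
TIER STRATEGY
(
    ( ( VRAM 1, RAM 7, PERSIST 5 ) )
)
```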
If not specified, the default tier strategy will be assigned:
Note
The response to SHOW TABLE is a single-record result set with the DDL statement as the value in the DDL column, shown here with the column separators returned by kisql.
Index Clause
A table can have any number of indexes applied to any of its columns at creation time.
The types of explicit indexes supported are column indexes, chunk skip indexes, geospatial indexes, CAGRA indexes, and HNSW indexes.
For example, to create a table with the following indexes:
- column index on dept_id
- chunk skip index on id
- geospatial index on work_district
- geospatial index on the pair of office_longitude & office_latitude
- CAGRA index on profile
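(A hedged sketch of the index clause portion of such a CREATE TABLE statement; the keywords are assumptions consistent with the index types listed above, and the column names match the example criteria.)

```sql
INDEX (dept_id)
CHUNK SKIP INDEX (id)
GEOSPATIAL INDEX (work_district)
GEOSPATIAL INDEX (office_longitude, office_latitude)
CAGRA INDEX (profile)
```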
Table Property Clause
A subset of table properties can be applied to the table at creation time.
Available table properties include:
Property | Description |
---|---|
CHUNK SIZE | Size of the blocks of memory holding the data, when loaded; specified as the maximum number of records each block of memory should hold |
NO_ERROR_IF_EXISTS | Error suppression option, which causes no error to be returned if a table with the same name already exists; default is FALSE |
TTL | The time-to-live (TTL) for the table; if not set, the table will not expire |
For example, to create a table with up to 1,000,000 records per chunk and that will expire in 15 minutes:
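(A hedged sketch; the USING TABLE PROPERTIES clause keyword is an assumption, the table definition is illustrative, and the TTL is given in minutes per the stated expiration.)

```sql
CREATE TABLE example_schema.customer_order
(
    id BIGINT NOT NULL
)
USING TABLE PROPERTIES (CHUNK SIZE = 1000000, TTL = 15)
```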
CREATE EXTERNAL TABLE
Creates a new external table, which is a database object whose source data is located in one or more files, either internal or external to the database.
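A hedged sketch of a materialized external table reading a delimited text file through a data source, using the FILE PATHS and WITH OPTIONS clauses described below (the table name, file path, and data source name are illustrative):

```sql
CREATE EXTERNAL TABLE example_schema.ext_product
FILE PATHS 'products.csv'
WITH OPTIONS (DATA SOURCE = 'example_s3_source')
```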
Note
For contextualized examples, see Examples. For copy/paste examples, see Loading Data. For an overview of loading data into Kinetica, see Data Loading Concepts.
The source data can be located in either of the following locations:
- in KiFS
- on a remote system, accessible via a data source
A materialized external table (default) that uses a data source can perform a one-time load upon creation and optionally subscribe for updates on an interval, depending on the data source provider:
Provider | Description | One-Time Load | Subscription |
---|---|---|---|
Azure | Microsoft blob storage | Yes | Yes |
CData | CData Software source-specific JDBC driver (see the driver list for supported drivers) | Yes | Yes |
GCS | Google Cloud Storage | Yes | Yes |
HDFS | Apache Hadoop Distributed File System | Yes | No |
JDBC | Java DataBase Connectivity; requires user-supplied driver | Yes | Yes |
S3 | Amazon S3 Bucket | Yes | Yes |
See Manage Subscription for pausing, resuming, & canceling subscriptions on the external table.
Although an external table cannot use a data source configured for Kafka, a standard table can have Kafka data streamed into it via a LOAD INTO command that references such a data source.
The use of external tables with ring resiliency has additional considerations.
Parameters | Description |
---|---|
OR REPLACE | Any existing table or view with the same name will be dropped before creating this one |
REPLICATED | The external table will be distributed within the database as a replicated table |
TEMP | The external table will be a memory-only table; among other things, it will not be persisted (if the database is restarted, the external table will be removed), but it will have increased ingest performance |
LOGICAL | External data will not be loaded into the database; the data will be retrieved from the source upon servicing each query against the external table. This mode ensures queries on the external table always return the most current source data, though there is a performance penalty for reparsing & reloading the data from source files upon each query. |
MATERIALIZED | Loads a copy of the external data into the database, refreshed on demand; this is the default external table type |
<schema name> | Name of the schema that will contain the created external table; if no schema is specified, the external table will be created in the user's default schema |
<table name> | Name of the external table to create; must adhere to the supported naming criteria |
<table definition clause> | Optional clause, defining the structure for the external table associated with the source data |
REMOTE QUERY | Source data specification clause, where <source data query> is a SQL query selecting the data to load. This clause is mutually exclusive with the FILE PATHS clause and is only applicable to CData & JDBC data sources. Any query resulting in more than 10,000 records will be distributed and loaded in parallel, unless directed otherwise. Type inferencing is limited by the available JDBC types; to take advantage of Kinetica-specific types and properties, define the table columns explicitly in the <table definition clause>. |
FILE PATHS | Source file specification clause, where <file paths> is a comma-separated list of single-quoted file paths from which data will be loaded; all files specified are presumed to have the same format and data types. This clause is mutually exclusive with the REMOTE QUERY clause and is not applicable to CData & JDBC data sources. The form of a file path depends on the source referenced. |
FORMAT | Optional indicator of source file type, for file-based data sources; will be inferred from the file extension if not given |
WITH OPTIONS | Optional indicator that a comma-delimited list of connection & global option/value assignments will follow; see Load Options for the complete list of options |
<partition clause> | Optional clause, defining a partitioning scheme for the external table associated with the source data |
<tier strategy clause> | Optional clause, defining the tier strategy for the external table associated with the source data |
<index clause> | Optional clause, applying any number of column indexes, chunk skip indexes, geospatial indexes, CAGRA indexes, or HNSW indexes to the external table associated with the source data |
<table property clause> | Optional clause, assigning table properties, from a subset of those available, to the external table associated with the source data |
Delimited Text Options
The following options can be specified when loading data from delimited text files. When reading from multiple files, options specific to the source file will be applied to each file being read.
Option | Description |
---|---|
COMMENT = '<string>' | Treat lines in the source file(s) that begin with string as comments and skip them. The default comment marker is #. |
DELIMITER = '<char>' | Use char as the source file field delimiter. The default delimiter is a comma, unless a source file has an extension that implies a different delimiter (e.g., .tsv or .psv). See Delimited Text Option Characters for allowed characters. |
ESCAPE = '<char>' | Use char as the source file data escape character. The escape character preceding any other character in the source data will be converted into that other character, except when it precedes a letter denoting an ASCII control character (such as those listed under Delimited Text Option Characters), in which case the pair is converted to that control character. For instance, if the escape character is \, a \t encountered in the data will be converted to a tab character when stored in the database. The escape character can be used to escape the quoting character, and will be treated as an escape character whether it is within a quoted field value or not. There is no default escape character. |
HEADER DELIMITER = '<char>' | Use char as the source file header field name/property delimiter, when the source file header contains both names and properties. This is largely specific to the Kinetica export to delimited text feature, which will, within each field's header, contain the field name and any associated properties, delimited by the pipe character. The default is the | (pipe) character. See Delimited Text Option Characters for allowed characters. Note: the DELIMITER character will still be used to separate field name/property sets from each other in the header row. |
INCLUDES HEADER = <TRUE|FALSE> | Declare that the source file(s) will or will not have a header. The default is TRUE. |
NULL = '<string>' | Treat string as the indicator of a null source field value. The default is the empty string. |
QUOTE = '<char>' | Use char as the source file data quoting character, for enclosing field values. Usually used to wrap field values that contain embedded delimiter characters, though any field may be enclosed in quote characters (for clarity, for instance). The quote character must appear as the first and last character of a field value in order to be interpreted as quoting the value. Within a quoted value, embedded quote characters may be escaped by preceding them with another quote character or with the escape character specified by ESCAPE, if given. The default is the " (double-quote) character. See Delimited Text Option Characters for allowed characters. |
Delimited Text Option Characters
For DELIMITER, HEADER DELIMITER, ESCAPE, & QUOTE, any single character can be used, or any one of the following escaped characters:
Escaped Char | Corresponding Source File Character |
---|---|
'' | Single quote |
\a | ASCII bell |
\b | ASCII backspace |
\f | ASCII form feed |
\t | ASCII horizontal tab |
\v | ASCII vertical tab |
For instance, if two single quotes ('') are specified for a QUOTE character, the parser will interpret single quotes in the source file as quoting characters; specifying \t for DELIMITER will cause the parser to interpret ASCII horizontal tab characters in the source file as delimiter characters.
Load Options
The following options can be specified to modify the way data is loaded (or not loaded) into the target table.
Option | Description | ||||||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
BAD RECORD TABLE | Name of the table containing records that failed to be loaded into the target table. This bad record table will include the following columns:
Note This option is not applicable for an ON ERROR mode of ABORT. In that mode, processing stops at the first error and that error is returned to the user. | ||||||||||||||||||||||||||
BATCH SIZE | Use an ingest batch size of the given number of records. The default batch size is 50,000. | ||||||||||||||||||||||||||
COLUMN FORMATS | Use the given type-specific formatting for the given column when parsing source data being loaded into that column. This should be a map of column names to format specifications, where each format specification is map of column type to data format, all formatted as a JSON string. Supported column types include:
For example, to load dates of the format 2010.10.30 into date column d and times of the 24-hour format 18:36:54.789 into time column t: { "d": {"date": "YYYY.MM.DD"}, "t": {"time": "HH24:MI:SS.MS"} } Note This option is not available for data sources configured for JDBC. | ||||||||||||||||||||||||||
DATA SOURCE | Load data from the given data source. Data source connect privilege is required when loading from a data source. | ||||||||||||||||||||||||||
DEFAULT COLUMN FORMATS | Use the given formats for source data being loaded into target table columns with the corresponding column types. This should be a map of target column type to source format for data being loaded into columns of that type, formatted as a JSON string. Supported column properties and source data formats are the same as those listed in the description of the COLUMN FORMATS option. For example, to make the default format for loading source data dates like 2010.10.30 and 24-hour times like 18:36:54.789: { "date": "YYYY.MM.DD", "time": "HH24:MI:SS.MS", "datetime": "YYYY.MM.DD HH24:MI:SS.MS" } Note This option is not available for data sources configured for JDBC. | ||||||||||||||||||||||||||
FIELDS IGNORED BY | Choose a comma-separated list of fields from the source file(s) to ignore, loading only those fields that are not in the identified list in the order they appear in the file. Fields can be identified by either POSITION or NAME. If ignoring by NAME, the specified names must match the source file field names exactly.
Note
| ||||||||||||||||||||||||||
FIELDS MAPPED BY | Choose a comma-separated list of fields from the source file(s) to load, in the specified order, identifying fields by either POSITION or NAME. If mapping by NAME, the specified names must match the source file field names exactly.
Note
| ||||||||||||||||||||||||||
FLATTEN_COLUMNS | Specify the policy for handling nested columns within JSON data. The default is FALSE.
| ||||||||||||||||||||||||||
IGNORE_EXISTING_PK | Specify the error suppression policy for inserting duplicate primary key values into a table with a primary key. If the specified table does not have a primary key or the UPDATE_ON_EXISTING_PK option is used, then this options has no effect. The default is FALSE.
| ||||||||||||||||||||||||||
INGESTION MODE | Whether to do a full ingest of the data or perform a dry run or type inference instead. The default mode is FULL.
| ||||||||||||||||||||||||||
JDBC_FETCH_SIZE | Retrieve this many records at a time from the remote database. Lowering this number will help tables with large record sizes fit into available memory during ingest. The default is 50,000. Note This option is only available for data sources configured for JDBC. | ||||||||||||||||||||||||||
NUM_SPLITS_PER_RANK | The number of remote query partitions to assign each Kinetica worker process. The queries assigned to a worker process will be executed by the tasks allotted to the process. To decrease memory pressure, increase the number of splits per rank. The default is 8 splits per rank. Note This option is only available for data sources configured for JDBC. | ||||||||||||||||||||||||||
NUM_TASKS_PER_RANK | The number of tasks to use on each Kinetica worker process to process remote queries. The tasks assigned to a worker process will execute any remote query partitions assigned to it. To decrease memory pressure, decrease the number of tasks per rank. The default is 8 tasks per rank. Note This option is only available for data sources configured for JDBC. | ||||||||||||||||||||||||||
JDBC_SESSION_INIT_STATEMENT | Run the single given statement before the initial load is performed and also before each subsequent reload, if REFRESH ON START or SUBSCRIBE is TRUE. For example, to set the time zone to UTC before running each load, use: JDBC_SESSION_INIT_STATEMENT = 'SET TIME ZONE ''UTC''' Note This option is only available for data sources configured for JDBC. | ||||||||||||||||||||||||||
ON ERROR | When an error is encountered loading a record, handle it using either of the following modes. The default mode is ABORT.
| ||||||||||||||||||||||||||
POLL_INTERVAL | Interval, in seconds, at which a data source is polled for updates. The number of seconds must be passed as a single-quoted string. The default interval is 60 seconds. This option is only applicable when SUBSCRIBE is TRUE. | ||||||||||||||||||||||||||
REFRESH ON START | Whether to refresh the external table data upon restart of the database. Only relevant for materialized external tables. The default is FALSE. This option is ignored if SUBSCRIBE is TRUE.
| ||||||||||||||||||||||||||
REMOTE_QUERY_INCREASING_COLUMN | For a JDBC query change data capture loading scheme, the remote query column that will be used to determine whether a record is new and should be loaded or not. This column should have an ever-increasing value and be of an integral or date/timestamp type. Often, this column will be a sequence-based ID or create/modify timestamp. This option is only applicable when SUBSCRIBE is TRUE. Note This option is only available for data sources configured for JDBC. | ||||||||||||||||||||||||||
REMOTE_QUERY_NO_SPLIT | Whether to not distribute the retrieval of remote data and issue queries for blocks of data at time in parallel. The default is FALSE. Note This option is only available for data sources configured for JDBC
| ||||||||||||||||||||||||||
REMOTE_QUERY_ORDER_BY | Ordering expression to use in partitioning remote data for retrieval. The remote data will be ordered according to this expression and then retrieved in sequential blocks from the first record. This is potentially less performant than using REMOTE_QUERY_PARTITION_COLUMN. If REMOTE_QUERY_NO_SPLIT is TRUE, a valid REMOTE_QUERY_PARTITION_COLUMN is specified, or the column given is invalid, this option is ignored. Note This option is only available for data sources configured for JDBC | ||||||||||||||||||||||||||
REMOTE_QUERY_PARTITION_COLUMN | Column to use to partition remote data for retrieval. The column must be numeric and should be relatively evenly distributed so that queries using values of this column to partition data will retrieve relatively consistently-sized result sets. If REMOTE_QUERY_NO_SPLIT is TRUE or the column given is invalid, this option is ignored. Note This option is only available for data sources configured for JDBC | ||||||||||||||||||||||||||
SUBSCRIBE | Whether to subscribe to the data source specified in the DATA SOURCE option. Only relevant for materialized external tables using data sources configured to allow streaming. The default is FALSE. If TRUE, the REFRESH ON START option is ignored. Note This option is not available for data sources configured for HDFS.
| ||||||||||||||||||||||||||
TRUNCATE_STRINGS | Specify the string truncation policy for inserting text into VARCHAR columns that are not large enough to hold the entire text value. The default is FALSE.
| ||||||||||||||||||||||||||
TYPE_INFERENCE_MODE | When making a type inference of the data values in order to define column types for the target table, use one of the following modes. The default mode is SPEED.
| ||||||||||||||||||||||||||
UPDATE_ON_EXISTING_PK | Specify the record collision policy for inserting into a table with a primary key. If the specified table does not have a primary key, this option has no effect. The default is FALSE.
|
Table Definition Clause
The table definition clause allows for an explicit local table structure to be defined, irrespective of the source data type. This specification mirrors that of CREATE TABLE.
|
|
See Data Definition (DDL) for column format.
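As an illustrative sketch only (the table name, column list, and the FILE PATHS clause with its KiFS path are assumptions modeled on the file-based examples later in this section), an external table with an explicitly defined local structure might look like:

```sql
-- Hypothetical sketch: define the local table structure explicitly,
-- overriding any type inference on the source file
CREATE EXTERNAL TABLE example.ext_product
(
    id    INTEGER NOT NULL,
    name  VARCHAR(64),
    stock INTEGER
)
FILE PATHS 'kifs://data/products.csv'
```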
Partition Clause
An external table can be further segmented into partitions.
The supported Partition Clause syntax & features are the same as those in the CREATE TABLE Partition Clause.
Tier Strategy Clause
An external table can have a tier strategy specified at creation time. If not assigned a tier strategy upon creation, a default tier strategy will be assigned.
The supported Tier Strategy Clause syntax & features are the same as those in the CREATE TABLE Tier Strategy Clause.
Index Clause
An external table can have any number of indexes applied to any of its columns at creation time.
The supported Index Clause syntax & features are the same as those in the CREATE TABLE Index Clause.
Table Property Clause
A subset of table properties can be applied to the external table associated with the external data at creation time.
The supported Table Property Clause syntax & features are the same as those in the CREATE TABLE Table Property Clause.
Examples
To create a logical external table with the following features, using a query as the source of data:
- External table named ext_employee_dept2 in the example schema
- Source is department 2 employees from the example.employee table, queried through the example.jdbc_ds data source
- Data is re-queried from the source each time the external table is queried
|
|
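A sketch of such a statement, assuming the REMOTE QUERY clause and DATA SOURCE option described in the options table above (a logical external table re-queries the source on each access):

```sql
-- Hypothetical sketch: logical external table backed by a remote query
CREATE LOGICAL EXTERNAL TABLE example.ext_employee_dept2
REMOTE QUERY 'SELECT * FROM example.employee WHERE dept_id = 2'
WITH OPTIONS (DATA SOURCE = 'example.jdbc_ds')
```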
To create an external table with the following features, using KiFS as the source of data:
- External table named ext_product in the example schema
- External source is a KiFS file named product.csv located in the data directory
- Data is not refreshed on database startup
|
|
To create an external table with the following features, using KiFS as the source of data:
- External table named ext_employee in the example schema
- External source is a Parquet file named employee.parquet located in the KiFS directory data
- External table has a primary key on the id column
- Data is not refreshed on database startup
|
|
To create an external table with the following features, using KiFS as the source of data:
- External table named ext_employee in the example schema
- External source is a file named employee.csv located in the KiFS directory data
- Apply a date format to the hire_date column
|
|
To create an external table with the following features, using a data source as the source of data:
- External table named ext_product in the example schema
- External source is a data source named product_ds in the example schema
- Source is a file named products.csv
- Data is refreshed on database startup
|
|
To create an external table with the following features, subscribing to a data source:
- External table named ext_product in the example schema
- External source is a data source named product_ds in the example schema
- Source is a file named products.csv
- Data updates are streamed continuously
|
|
To create an external table with the following features, using a remote query through a JDBC data source as the source of data:
- External table named ext_employee_dept2 in the example schema
- External source is a data source named jdbc_ds in the example schema
- Source data is a remote query of employees in department 2 from that database's example.ext_employee table
- Data is refreshed on database startup
|
|
Data Sources
File-Based
To create an external table that loads a CSV file, products.csv, from the data source example.product_ds, into a table named example.ext_product:
|
|
Query-Based
To create an external table that is the result of a remote query of employees in department 2 from the JDBC data source example.jdbc_ds, into a local table named example.ext_employee_dept2:
|
|
Change Data Capture
File-Based
To create an external table loaded by a set of order data in a change data capture scheme with the following conditions:
- data pulled through a data source, example.order_ds
- data files contained within an orders directory
- initially, all files in the directory will be loaded; subsequently, only those files that have been updated since the last check will be reloaded
- files will be polled for updates every 60 seconds
- target table named example.ext_order
|
|
Query-Based
To create an external table loaded from a remote query of orders in a change data capture scheme with the following conditions:
- data pulled through a data source, example.jdbc_ds
- data contained within an example.orders table, where only orders for product with ID 42 will be loaded into the target table
- initially, all orders will be loaded; subsequently, only those orders with an order_id column value higher than the highest one on the previous poll cycle will be loaded
- remote table will be polled for updates every 60 seconds
- target table named example.ext_order_product42
|
|
CREATE TABLE ... AS
Creates a new table from the given query in the specified schema.
|
|
Any column aliases used must adhere to the supported naming criteria.
While primary keys & foreign keys are not transferred to the new table, shard keys will be, if the column(s) composing them are part of the SELECT list.
Parameters/Keys | Description |
---|---|
OR REPLACE | Any existing table or view with the same name will be dropped before creating this one |
REPLICATED | The table will be distributed within the database as a replicated table |
TEMP | The table will be a memory-only table, which, among other things, means it will not be persisted (if the database is restarted, the table will be removed), but it will have increased ingest performance |
<schema name> | Name of the schema that will contain the created table; if no schema is specified, the table will be created in the user's default schema |
<table name> | Name of the table to create; must adhere to supported naming criteria |
<select statement> | The query that will define both the initial structure and content of the created table |
The following can be applied to <select statement> to affect the resulting table:
Keyword | Type | Description |
---|---|---|
KI_HINT_GROUP_BY_PK | hint | Creates a primary key on the columns in the GROUP BY clause if the outermost SELECT statement contains a GROUP BY |
KI_HINT_INDEX(column list) | hint | Indexes each of the columns specified in the column list |
KI_SHARD_KEY(column list) | pseudo-function | Shards the result table with a shard key composed of all columns in the specified column list |
For example, to create a replicated temporary table that is a copy of an existing table, failing if a table with the same name as the target table already exists:
|
|
To create a permanent table with columns a, b, c, & d, a new shard key on columns a & b, and an index on column d, replacing a table with the same name as the target table, if it exists:
|
|
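A sketch of such a statement (table names are hypothetical, and the comment-style placement of the index hint is an assumption; the KI_SHARD_KEY pseudo-function appears in the SELECT list per the table above):

```sql
-- Hypothetical sketch: CTAS with a shard key and an index hint
CREATE OR REPLACE TABLE example.new_table AS
SELECT /* KI_HINT_INDEX(d) */ a, b, c, d, KI_SHARD_KEY(a, b)
FROM example.old_table;
```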
To copy a table with columns a, b, c, & d, preserving the primary key on a, b, & c, and the foreign key from d; a new table must be created to match the schema of the old one and then records can be copied from the old one to the new one:
|
|
|
|
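The create/insert sequence described above can be sketched as follows (column types and the foreign key target are hypothetical):

```sql
-- Recreate the structure, declaring the keys explicitly
CREATE TABLE example.new_table
(
    a INTEGER NOT NULL,
    b INTEGER NOT NULL,
    c INTEGER NOT NULL,
    d INTEGER,
    PRIMARY KEY (a, b, c),
    FOREIGN KEY (d) REFERENCES example.lookup (id)
);

-- Copy the records from the old table into the new one
INSERT INTO example.new_table
SELECT a, b, c, d
FROM example.old_table;
```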
Note
This create/insert process is necessary, as neither primary keys nor foreign keys can be preserved through hints.
See Limitations for other restrictions.
ALTER TABLE
Alters the configuration of a table.
Any of the following facets of a table can be altered:
- Name
- Schema
- Access Mode
- TTL
- Columns
- Column Indexes
- Chunk Skip Indexes
- Geospatial Indexes
- CAGRA Indexes
- HNSW Indexes
- Foreign Keys
- Partitions
- Tier Strategy
- External Data Source Subscription
Rename Table
A table can be renamed, following the supported naming criteria.
|
|
All dependent views, materialized views, streams, and SQL procedures will be dropped.
Note
Any tables with foreign keys that target this table must be dropped before it can be renamed.
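A sketch of a rename, assuming the RENAME TO form (table names hypothetical):

```sql
ALTER TABLE example.employee
RENAME TO employee_archive;
```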
Move Table
A table can be moved from one schema to another.
|
|
All dependent views, materialized views, streams, and SQL procedures will be dropped.
Note
Any tables with foreign keys that target this table must be dropped before it can be moved.
For example, to move the sales_2017 table from the example_olap schema to the example_archive schema:
|
|
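A sketch of that move, assuming a MOVE TO clause naming the destination schema:

```sql
ALTER TABLE example_olap.sales_2017
MOVE TO example_archive;
```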
Set Access Mode
A table can have its global accessibility modified for all users in the system, independently from and further restricting any role-based access controls in place. Note that changing the access mode cannot widen access for users not already granted access; it can only narrow access for those who already have access. This setting will also trump administrative access to a table.
|
|
Set TTL
A table's time-to-live (TTL) can be altered.
|
|
For example, to set a TTL of 7 minutes on a table:
|
|
To set a table to never expire by TTL timeout:
|
|
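Sketches of the two TTL settings described above (assuming the TTL is given in minutes and a negative value disables expiration; the table name is hypothetical):

```sql
-- Expire the table 7 minutes after it was last accessed
ALTER TABLE example.employee
SET TTL 7;

-- Never expire by TTL timeout
ALTER TABLE example.employee
SET TTL -1;
```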
Add Column
A column can be added, specifying a column definition.
|
|
A new column can have its values initially populated through the use of the DEFAULT keyword. These values can either be a string/numeric constant or the name of an existing column in the table from which values can be copied into the new column. This default value is only in effect for the column creation; the new column will have no default value after that.
Examples
To add, to the employee table, a salary column that is a non-nullable, 10-digit number field containing 2 decimal places with a default value of 0:
|
|
To add, to the employee table, a category column that is a nullable, dictionary-encoded, 32-character text field:
|
|
To add, to the employee table, a bio column that is a nullable, text-searchable, unrestricted-width text field:
|
|
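Sketches of the three column additions described above (the DICT and TEXT_SEARCH column properties are assumptions for dictionary encoding and text search, respectively; exact property syntax may vary):

```sql
-- Non-nullable 10-digit, 2-decimal-place salary, initially populated with 0
ALTER TABLE example.employee
ADD salary DECIMAL(10,2) NOT NULL DEFAULT 0;

-- Nullable, dictionary-encoded, 32-character category
ALTER TABLE example.employee
ADD category VARCHAR(32, DICT);

-- Nullable, text-searchable, unrestricted-width bio
ALTER TABLE example.employee
ADD bio VARCHAR(TEXT_SEARCH);
```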
Rename Column
An existing column can be renamed.
|
|
All dependent views, materialized views, streams, and SQL procedures will be dropped.
Note
Any tables with foreign keys that target the column being renamed must be dropped before it can be renamed.
|
|
Modify Column
A column can have its column definition modified, affecting column type, column size, column properties, and nullability.
|
|
|
|
All dependent views, materialized views, streams, and SQL procedures will be dropped.
Note
Any tables with foreign keys that target the column being modified must be dropped before it can be modified.
If a column is modified to be non-nullable, it will be populated with default values--empty string for string fields and 0 for numeric fields.
Examples
To change, in the employee table, the first_name column to one that is a non-nullable, dictionary-encoded, 50-character text field:
|
|
Drop Column
An existing column can be removed from a table.
|
|
All dependent views, materialized views, streams, and SQL procedures will be dropped.
Note
Any tables with foreign keys that target the column being dropped must be dropped before it can be dropped.
Add Column Index
A column (attribute) index can be added to a table column in order to improve the performance of operations whose expressions contain relational operators against the column. See Limitations for restrictions.
|
|
For example, to index the employee table's department ID column:
|
|
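A sketch of that index addition (the ADD INDEX form is assumed from the clause name; the column name is hypothetical):

```sql
ALTER TABLE example.employee
ADD INDEX (dept_id);
```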
Drop Column Index
An existing column (attribute) index can be removed from a table.
|
|
For example, to drop the index on the employee table's department ID column:
|
|
Add Chunk Skip Index
A chunk skip index can be added to a table column in order to improve the performance of operations containing equality-based filters against the column. See Limitations for restrictions.
|
|
For example, to index the employee table's department ID column:
|
|
Drop Chunk Skip Index
An existing chunk skip index can be removed from a table.
|
|
For example, to drop the chunk skip index on the employee table's department ID column:
|
|
Add Geospatial Index
A geospatial index can be added to one or more table columns to improve the performance of geospatial functions applied to them.
|
|
For example, to index the employee table's work district WKT column:
|
|
To index the employee table's office location coordinate pair columns:
|
|
Drop Geospatial Index
An existing geospatial index can be removed from a table.
|
|
For example, to drop the geospatial index on the employee table's work district WKT column:
|
|
To drop the geospatial index on the employee table's office location coordinate pair columns:
|
|
Add CAGRA Index
A CAGRA index can be added to a table column in order to improve the performance of vector searches applied to the column.
|
|
For example, to add a CAGRA index on the employee table's profile column:
|
|
Refresh CAGRA Index
An existing CAGRA index can be refreshed.
|
|
For example, to refresh the CAGRA index on the employee table's profile column:
|
|
Drop CAGRA Index
An existing CAGRA index can be removed from a table.
|
|
For example, to drop the CAGRA index on the employee table's profile column:
|
|
Add HNSW Index
An HNSW index can be added to a table column in order to improve the performance of vector searches applied to the column.
|
|
For example, to add an HNSW index on the employee table's profile column:
|
|
Drop HNSW Index
An existing HNSW index can be removed from a table.
|
|
For example, to drop the HNSW index on the employee table's profile column:
|
|
Add Foreign Key
A foreign key can be added to any column or set of columns whose types are valid for primary keys, in order to improve the performance of join operations between the table being altered and the table referenced in the foreign key.
|
|
For example, to add a foreign key on the employee table's department ID column, linking it to the department table's department ID column:
|
|
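A sketch of that foreign key addition (column names are hypothetical, and the AS alias is an assumption based on the drop-by-name behavior described below):

```sql
ALTER TABLE example.employee
ADD FOREIGN KEY (dept_id) REFERENCES example.department (id) AS fk_employee_dept;
```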
Drop Foreign Key
An existing foreign key can be removed from a table, either by the name (alias) given to it during creation or by its definition:
|
|
|
|
For example, to drop the foreign key on the employee table's department ID column:
|
|
|
|
Add Partition
A partition can be added to a range-partitioned or list-partitioned table.
Warning
Defining (adding) partitions after data has been loaded will result in a performance penalty as the database moves existing records targeted for the new partition from the default partition into it.
Range Partition
The new partition can be given a minimum bound (inclusive) and a maximum bound (exclusive). If the new partition would come before an existing partition, omitting the maximum bound would cause the new partition to take on the nearest following existing partition's minimum bound as its maximum bound. If the new partition would come after an existing partition, omitting the minimum bound would cause the new partition to take on the nearest preceding partition's maximum bound as its minimum bound. If no partitions are present in the table, the new partition will have to declare both a minimum and maximum bound.
|
|
For example, to add a partition to the customer_order_range_by_year table, containing all records with a partition key less than 2020 and greater than or equal to the maximum bound of the nearest preceding partition:
|
|
List Partition
The new partition can be given a list of values to match against the partition key values of incoming records.
|
|
For example, to add a partition to the customer_order_manual_list_partition_by_year table, containing all records from 2020:
|
|
For example, to add a partition to the customer_order_manual_list_partition_by_year_and_month table, containing all records from February 2020 & April 2020:
|
|
Remove Partition
An existing partition can be removed from a range-partitioned or list-partitioned table, sending all data contained within that partition back to the default partition.
|
|
For example, to remove a partition named order_2017 from the customer_order_range_by_year table:
|
|
Delete Partition
An existing partition can be dropped from a range-partitioned or list-partitioned table, deleting all data contained within it.
|
|
For example, to drop a partition named order_2014_2016 from the customer_order_range_by_year table, deleting all data within that partition:
|
|
Set Tier Strategy
A table's eviction priorities can be adjusted by setting its tier strategy.
|
|
For example, to set the customer_order table's tier strategy to one with a below-average eviction priority in the RAM Tier:
|
|
The tier strategy can also be reset to the system default strategy.
|
|
For example, to reset the customer_order table's tier strategy:
|
|
Manage Subscription
Any table that is subscribed to a streaming external data source can have that subscription paused, resumed, or cancelled.
|
|
Note
Once unsubscribed, there is no way to re-subscribe the table to the data source via ALTER TABLE. An external table will need to be re-created, while a table whose subscription was initiated through the LOAD INTO command can have that command re-executed.
For example, to manage a subscription on the ext_product external table through the product_ds data source:
|
|
|
|
|
|
REFRESH EXTERNAL TABLE
Refreshes the data within a materialized external table that is not subscription-based.
|
|
Important
Data source connect privilege is required to refresh an external table that uses a data source.
TRUNCATE TABLE
Deletes all the records from a table.
|
|
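For example, to delete all records from a hypothetical example.employee table:

```sql
TRUNCATE TABLE example.employee;
```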
DROP TABLE
Removes an existing table. All dependent views, materialized views, streams, and SQL procedures will be dropped.
|
|
Note
Any tables with foreign keys that target a given table must be dropped before it can be dropped.
Parameters | Description |
---|---|
IF EXISTS | Optional error-suppression clause; if specified, no error will be returned if the given table does not exist |
<schema name> | Name of the schema containing the table to remove |
<table name> | Name of the table to remove |
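For example, to remove a hypothetical example.employee table, suppressing any error if it does not exist:

```sql
DROP TABLE IF EXISTS example.employee;
```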
SHOW TABLE
Outputs the DDL statement required to reconstruct the given table.
|
|
Note
The response to SHOW TABLE is a single-record result set with the DDL statement as the value in the DDL column, shown below with the column separators returned by kisql.
Parameters | Description |
---|---|
CREATE | Optional keyword for clarity |
TABLE | Optional clause to avoid ambiguity:
|
<schema name> | Name of the schema containing the table to show |
<table name> | Name of the table whose DDL will be output |
For example, to output the DDL for the example table created in the CREATE TABLE section:
|
|
|
|
DESCRIBE TABLE
Lists the columns and column types & properties for a given table.
|
|
Parameters | Description |
---|---|
TABLE | Optional clause to avoid ambiguity:
|
<schema name> | Name of the schema containing the table to describe |
<table name> | Name of the table whose configuration will be output |
To describe the example table created in the CREATE TABLE section:
|
|
|
|
CREATE VIEW
Creates a new virtual table from the given query.
|
|
When any of the source tables of a view is altered or dropped, the view will also be dropped.
Caution!
A CREATE OR REPLACE issues an implicit drop, so replacing an input table will have the same effect on the view as dropping it.
Parameters | Description |
---|---|
OR REPLACE | Any existing table/view with the same name will be dropped before creating this view |
<schema name> | Name of the schema that will contain the created view; if no schema is specified, the view will be created in the user's default schema |
<view name> | Name of the view to create; must adhere to the supported naming criteria |
<select statement> | The query that will define both the structure and content of the created view |
For example, to create a view that is a copy of an existing table, failing if a table or view with the same name as the target view already exists:
|
|
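A sketch of that example (view and table names hypothetical; with no OR REPLACE, the statement fails if the name is already taken):

```sql
CREATE VIEW example.employee_view AS
SELECT *
FROM example.employee;
```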
CREATE MATERIALIZED VIEW
Specifying MATERIALIZED in a CREATE VIEW statement will make the view a materialized view.
|
|
The intermediary results of materialized views are cached to improve the performance of queries against them. This means that, unlike typical views, materialized views are not lightweight database entities, but rather consume memory and processing time proportional to the size of the source data and complexity of the query.
When any of the source tables of a materialized view is altered or dropped, the materialized view will also be dropped.
Caution!
A CREATE OR REPLACE issues an implicit drop, so replacing an input table will have the same effect on the materialized view as dropping it.
While primary keys & foreign keys are not transferred to the new materialized view, shard keys will be, if the column(s) composing them are part of the SELECT list. A new shard key can be specified for the materialized view by using the KI_SHARD_KEY(<column list>) pseudo-function in the SELECT list.
Parameters | Description | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
OR REPLACE | Any existing table/view with the same name will be dropped before creating this materialized view | ||||||||||
TEMP | The materialized view will be a memory-only table; which, among other things, means it will not be persisted (if the database is restarted, the materialized view will be removed), but it will have increased ingest performance | ||||||||||
<schema name> | Name of the schema that will contain the created materialized view; if no schema is specified, the materialized view will be created in the user's default schema | ||||||||||
<view name> | Name of the materialized view to create; must adhere to the supported naming criteria | ||||||||||
REFRESH | Specifies the data refresh scheme for the materialized view. The following schemes are available:
| ||||||||||
<select statement> | The query that will define both the structure and content of the created materialized view | ||||||||||
WITH OPTIONS | Optional indicator that a comma-delimited list of connection option/value assignments will follow. The follow options are available:
| ||||||||||
<table property clause> | Optional clause, assigning table properties, from a subset of those available, to the materialized view |
Table Property Clause
A subset of table properties can be applied to the materialized view at creation time.
The supported Table Property Clause syntax & features are the same as those in the CREATE TABLE Table Property Clause.
Changes-Only Views via Delta Tables
A materialized view can be configured to show only the changes in the query result since its last refresh. This applies only to inserts; updates and deletes will not be reflected in the view's data.
Using a materialized view to only show deltas in the result data can be achieved using the DELTA_TABLE scoped hint. This hint should be placed immediately after the table name (and before any table alias) of each source table in the query that should be treated as a delta table--a table which will contribute to the result set only those records added since the last time the materialized view referencing it was refreshed.
The marked delta table must be a regular table--it cannot be any of the following:
For instance, a changes-only materialized view weather_zone can be created with its weather source table marked as a delta table in order to only show new weather events since the last refresh of weather_zone:
|
|
After an initial round of data is inserted, a query on the materialized view might return this:
|
|
After another round of data is inserted, a query on the materialized view might return this:
|
|
Examples
To create a materialized view with columns a, b, c, & d and a new shard key on columns a & b, that refreshes once per half hour, replacing a view with the same name as the target view, if it exists:
|
|
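A sketch of such a statement (the view and table names are hypothetical, and the REFRESH EVERY interval form is an assumption based on the refresh schemes described above):

```sql
CREATE OR REPLACE MATERIALIZED VIEW example.mv_sketch
REFRESH EVERY 30 MINUTES AS
SELECT a, b, c, d, KI_SHARD_KEY(a, b)
FROM example.source_table;
```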
To create a materialized view with all columns of a table, refreshing once per minute from the beginning of 2025 through to the end of that year using the permissions of user mv_user to perform the refreshes:
|
|
To create a materialized view that shows only weather events from a streamed weather table and their locations from a static geo_zone table that have occurred since the last time the materialized view weather_zone joining the two data sets was refreshed:
|
|
ALTER VIEW
Alters the configuration of a view.
The following facet of a view can be altered:
Move View
A view can be moved from one schema to another.
|
|
All dependent views, materialized views, streams, and SQL procedures will be dropped.
For example, to move the sales_2017 view from the example_olap schema to the example_archive schema:
|
|
ALTER MATERIALIZED VIEW
Alters the configuration of a materialized view.
Any of the following facets of a materialized view can be altered:
Move View
A materialized view can be moved from one schema to another.
|
|
All dependent views, materialized views, streams, and SQL procedures will be dropped.
For example, to move the sales_current view from the example_olap schema to the example_archive schema:
|
|
Set Access Mode
A materialized view can have its global accessibility modified for all users in the system, independently from and further restricting any role-based access controls in place.
|
|
Note
Changing the access mode cannot widen access for users not already granted access; it can only narrow access for those who already have access. This setting will also trump administrative access to a materialized view.
Set Execution User
A materialized view can have its execution user for periodic refreshes (REFRESH is set to EVERY ...) changed to the given user.
|
|
Note
If this user doesn't exist at the time of a refresh, the refresh will be executed as the creating user, and, failing that, the system administration user.
Set Refresh Mode
The refresh mode of a materialized view can be modified.
|
|
The available refresh modes are:
Constant | Description |
---|---|
OFF | Will prevent the materialized view from being automatically refreshed, but will still allow manual refreshes of the data to be requested |
ON CHANGE | Will cause the materialized view to be updated any time a record is added, modified, or deleted from the subtending tables in its query |
ON QUERY | Will cause the materialized view to be updated any time it is queried |
EVERY | Allows specification of an interval in seconds, minutes, hours, or days, at which the materialized view should be refreshed. By default, the first refresh interval will be one interval's worth of time from the point at which the materialized view alteration was requested. This can be modified with the following options:
|
For example, to alter the current sales materialized view to refresh every 6 hours:
|
|
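A sketch of that alteration (the view name follows the sales_current example used earlier; the EVERY interval form is an assumption based on the refresh mode table above):

```sql
ALTER MATERIALIZED VIEW example.sales_current
SET REFRESH EVERY 6 HOURS;
```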
Set TTL
A materialized view's time-to-live (TTL) can be altered.
|
|
REFRESH VIEW
Refreshes the data within a materialized view.
|
|
DROP VIEW
Removes an existing view or materialized view. All dependent views, materialized views, streams, and SQL procedures will be dropped.
|
|
Parameters | Description |
---|---|
MATERIALIZED | Optional keyword for clarity; has no impact on functionality |
IF EXISTS | Optional error-suppression clause; if specified, no error will be returned if the given view does not exist |
<schema name> | Name of the schema containing the view to remove |
<view name> | Name of the view to remove |
SHOW VIEW
Outputs the DDL statement required to reconstruct the given view or materialized view.
|
|
Note
The response to SHOW VIEW is a single-record result set with the DDL statement as the value in the DDL column, shown below with the column separators returned by kisql.
Parameters | Description |
---|---|
CREATE | Optional keyword for clarity |
VIEW | Optional clause to avoid ambiguity:
|
<schema name> | Name of the schema containing the view to show |
<view name> | Name of the view whose DDL will be output |
For example, to output the DDL for the example view created in the CREATE VIEW section:
|
|
DESCRIBE VIEW
Lists the columns and column types & properties for a given view or materialized view.
|
|
Parameters | Description |
---|---|
VIEW | Optional clause to avoid ambiguity:
|
<schema name> | Name of the schema containing the view to describe |
<view name> | Name of the view whose configuration will be output |
For example, to describe the example view created in the CREATE VIEW section:
|
|
CREATE CREDENTIAL
Creates a new credential, which is a record that contains authentication information required to connect to a resource outside the database. Any user may create a credential for their own use.
|
|
The following can make use of credentials:
The following services are supported:
- Amazon S3
- Azure
- Docker Repository
- Google Cloud
- HDFS
- JDBC
- Kafka
- Apache Cluster
- Confluent Cluster
For provider-specific syntax, see Provider-Specific Syntax. For provider-specific examples, see Create Credentials.
Parameters | Description | ||||||||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
OR REPLACE | Any existing credential with the same name will be dropped before creating this one | ||||||||||||||||||||||||||||
<schema name> | Name of the schema that will contain the created credential; if no schema is specified, the credential will be created in the user's default schema | ||||||||||||||||||||||||||||
<credential name> | Name of the credential to create; must adhere to the supported naming criteria | ||||||||||||||||||||||||||||
TYPE | The type of credential to create. Supported types include:
| ||||||||||||||||||||||||||||
IDENTITY | Username to use for authenticating with the credential. | ||||||||||||||||||||||||||||
SECRET | Password to use for authenticating with the credential. | ||||||||||||||||||||||||||||
WITH OPTIONS | Optional indicator that a comma-delimited list of option/value assignments will follow. See Credential Options for the full list of options. |
For example, to create a credential, auser_azure_active_dir_creds, for connecting to Microsoft Azure Active Directory:
|
|
To create a credential, kafka_cred, for connecting to Apache Kafka via SSL:
|
|
Credential Options
The following is a list of possible credential options and their associated providers. For valid combinations of credential options per provider and authentication mechanism, see Provider-Specific Syntax.
Option | Provider | Description |
---|---|---|
azure_storage_account_name | Azure | Azure storage account name (only used if azure_tenant_id is specified) |
azure_tenant_id | Azure | Azure Active Directory tenant identifier |
gcs_service_account_keys | GCS | Text of the JSON file containing the GCS private key |
hdfs_kerberos_keytab | HDFS | Location of the Kerberos keytab file in KiFS |
hdfs_use_kerberos | HDFS | Whether to attempt Kerberos authentication to HDFS |
s3_aws_role_arn | S3 | AWS S3 IAM role |
sasl.kerberos.keytab | Kafka | Location of the Kerberos keytab file in KiFS |
sasl.kerberos.principal | Kafka | Kerberos principal ID |
sasl.kerberos.service.name | Kafka | Kerberos service name |
sasl.mechanism | Kafka | SASL scheme to use; one of:
|
sasl.password | Kafka | SASL user password |
sasl.username | Kafka | SASL user ID |
security.protocol | Kafka | Security protocol to use for authentication; one of:
|
ssl.ca.location | Kafka | Location of trust store file in KiFS |
ssl.certificate.location | Kafka | Location of client certificate in KiFS |
ssl.key.location | Kafka | Location of client key in KiFS |
ssl.key.password | Kafka | Password to client key or trust store |
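As an illustration of how the Kafka options above combine, an SSL credential might be sketched as follows. All names, KiFS paths, and the password are hypothetical placeholders; the `security.protocol` value shown assumes standard Kafka naming:

```sql
-- Hypothetical Kafka SSL credential; KiFS paths and password are placeholders
CREATE CREDENTIAL kafka_ssl_cred
TYPE = 'kafka',
IDENTITY = '',
SECRET = ''
WITH OPTIONS
(
    'security.protocol'        = 'SSL',
    'ssl.ca.location'          = 'kifs://ssl/ca-bundle.crt',
    'ssl.certificate.location' = 'kifs://ssl/client.pem',
    'ssl.key.location'         = 'kifs://ssl/client.key',
    'ssl.key.password'         = '<key password>'
)
```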
Provider-Specific Syntax
Several authentication schemes across multiple providers are supported.
Azure
Syntax is shown below.
|
|
|
|
|
|
Google Cloud Storage
Syntax is shown below.
|
|
|
|
HDFS
Syntax is shown below.
|
|
|
|
|
|
JDBC
Syntax is shown below.
|
|
Kafka (Apache)
Syntax is shown below.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Kafka (Confluent)
Syntax is shown below.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
S3 (Amazon)
Syntax is shown below.
|
|
|
|
ALTER CREDENTIAL
Alters the properties of an existing credential.
|
|
Only users with system_admin or system_user_admin, or users with credential_admin on the credential, may alter it.
Parameters | Description |
---|---|
<schema name> | Name of the schema containing the credential to alter |
<credential name> | Name of the existing credential to alter |
SET PROPERTY | Indicator that a comma-delimited list of property name/value assignments will follow. See Set Properties and Credential Options for the complete list of properties. |
To alter a credential, auser_azure_active_dir_creds, updating the secret:
|
|
To alter a credential, kafka_cred, updating the keystore password:
|
|
Set Properties
All credential properties can be altered via ALTER CREDENTIAL. The following are the property names and descriptions to use when performing an alteration.
Parameters | Description | ||||||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
TYPE | The type of credential. Supported types include:
| ||||||||||||||||||||||||||
IDENTITY | Username to use for authenticating with the credential. | ||||||||||||||||||||||||||
SECRET | Password to use for authenticating with the credential. |
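Using the property names above, a typical alteration might look like this sketch; the credential name and new secret are hypothetical placeholders:

```sql
-- Hypothetical alteration rotating the secret of an existing credential
ALTER CREDENTIAL example_schema.example_cred
SET PROPERTY
    SECRET = '<new password>'
```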
DROP CREDENTIAL
Removes an existing credential.
|
|
Note
Before a credential can be dropped, all data sources, data sinks, and ML container registries that depend on it must be dropped.
Only users with system_admin or system_user_admin, or users with credential_admin on the credential to drop, may drop a credential.
Parameters | Description |
---|---|
<schema name> | Name of the schema containing the credential to drop |
<credential name> | Name of the existing credential to remove; use * instead of schema/credential name to drop all credentials |
To drop a credential, auser_azure_active_dir_creds:
|
|
SHOW CREDENTIAL
Outputs the DDL statement required to reconstruct the given credential.
|
|
The secret value will be masked in the output and must be replaced with the actual secret value if the statement is used to reconstruct the credential. Only users with system_admin or system_user_admin, or users with credential_admin or credential_read on the credential to show, may show a credential.
Note
The response to SHOW CREDENTIAL is a single-column result set with the DDL statement as the value in the DDL column.
Parameters | Description |
---|---|
<schema name> | Name of the schema containing the credential to show |
<credential name> | Name of the existing credential for which the DDL will be output; use * instead of schema/credential name to output the DDL of all credentials |
For example, to output the DDL for a credential, auser_azure_active_dir_creds:
|
|
To output the DDL for all credentials:
|
|
DESCRIBE CREDENTIAL
Outputs the configuration of an existing credential.
|
|
Only users with system_admin or system_user_admin, or users with credential_admin or credential_read on the credential to describe, may describe a credential.
Note
The response to DESCRIBE CREDENTIAL is a four-column result set:
- CREDENTIAL_NAME - name of the credential
- CREDENTIAL_TYPE - type of the credential; see CREATE CREDENTIAL for values
- CREDENTIAL_IDENTITY - Username associated with the credential
- CREDENTIAL_OPTIONS - Options associated with the credential; see Credential Options for values
Parameters | Description |
---|---|
<schema name> | Name of the schema containing the credential to describe |
<credential name> | Name of the existing credential for which the configuration will be output; use * instead of schema/credential name to output the configuration of all credentials |
For example, to show the configuration for a credential, auser_azure_active_dir_creds:
|
|
To show the configuration for all credentials:
|
|
CREATE DATA SOURCE
Creates a new data source, which contains the location and connection information for a data store that is external to Kinetica. A data source serves as an authentication & access mechanism to a remote resource and can optionally be used as a streaming source as well.
|
|
A data source does not reference specific data files within the source; file references (if applicable) can be made by using the data source in a CREATE EXTERNAL TABLE call (for creating a persistent view of the file data) or a LOAD INTO call (for either a one-time load or a subscribed loading of data from the external source to a locally persisted table).
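For instance, a data source might be referenced from a one-time LOAD INTO statement as in the sketch below. The table, file, and data source names are hypothetical, and the exact LOAD INTO clauses should be checked against its own documentation:

```sql
-- Hypothetical one-time load of a CSV file reachable through a data source
LOAD DATA INTO example_schema.orders
FROM FILE PATHS 'orders.csv'
FORMAT TEXT
WITH OPTIONS (DATA SOURCE = 'example_ds')
```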
Note
The data source will be validated upon creation, by default, and will fail to be created if an authorized connection cannot be established.
The following data source providers are supported:
- Azure (Microsoft blob storage)
- CData (CData Software source-specific JDBC driver)
- GCS (Google Cloud Storage)
- HDFS (Apache Hadoop Distributed File System)
- JDBC (Java Database Connectivity, using a user-supplied driver)
- Kafka (streaming feed)
- Apache
- Confluent
- S3 (Amazon S3 Bucket)
Note
- Azure anonymous data sources are only supported when both the container and the contained objects allow anonymous access.
- CData data sources can use a JDBC credential for authentication.
- HDFS systems with wire encryption are not supported.
- Confluent & Kafka data sources require an associated credential for authentication.
For provider-specific syntax, see Provider-Specific Syntax. For provider-specific examples, see Create Data Sources.
Parameters | Description | ||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
OR REPLACE | Any existing data source with the same name will be dropped before creating this one | ||||||||||||||||
EXTERNAL | Optional keyword for clarity | ||||||||||||||||
<schema name> | Name of the schema that will contain the created data source; if no schema is specified, the data source will be created in the user's default schema | ||||||||||||||||
<data source name> | Name of the data source to create; must adhere to the supported naming criteria | ||||||||||||||||
<provider> | Provider of the data source Supported providers include:
| ||||||||||||||||
<host> | Host to use to connect to the data source The default host used for Azure is <storage_account_name>.blob.core.windows.net. The default host used for S3 is <region>.amazonaws.com. | ||||||||||||||||
<port> | Port, for HDFS or Kafka, to use to connect to the data source | ||||||||||||||||
USER | Optional user name, given in <username>, to use for authenticating to the data source | ||||||||||||||||
PASSWORD | Optional password, given in <password>, to use for authenticating to the data source | ||||||||||||||||
WITH OPTIONS | Optional indicator that a comma-delimited list of connection option/value assignments will follow. See Data Source Options for the full list of options. |
To create a data source, kin_ds, that connects to an Amazon S3 bucket, kinetica_ds, in the US East (N. Virginia) region:
|
|
Data Source Options
Option | Provider | Description |
---|---|---|
CREDENTIAL | Any | Credential object to use to authenticate to the remote system |
VALIDATE | Any | Whether to test the connection to the data source upon creation; if TRUE (default), the creation of a data source that cannot be connected to will fail; if FALSE, the data source will be created regardless of connectivity |
WAIT TIMEOUT | AZURE, HDFS, S3 | Timeout in seconds for reading from the storage provider |
USE_MANAGED_CREDENTIALS | AZURE, S3 | Whether to connect to the storage provider with provider-managed credentials
|
CONNECTION TIMEOUT | HDFS, S3 | Timeout in seconds for connecting to a given storage provider |
CONTAINER NAME | AZURE | Azure storage container name |
SAS TOKEN | AZURE | Azure storage account shared access signature token; this should be an account-level access token, not a container-level one |
STORAGE ACCOUNT NAME | AZURE | Azure storage account name (only used if TENANT ID is specified) |
TENANT ID | AZURE | Azure Active Directory tenant identifier |
GCS_BUCKET_NAME | GCS | Name of the GCS bucket to use as the data source |
GCS_PROJECT_ID | GCS | Name of the Google Cloud project to use for request billing |
GCS_SERVICE_ACCOUNT_KEYS | GCS | Text of the JSON key file containing the GCS private key |
DELEGATION TOKEN | HDFS | Optional Kerberos delegation token for worker nodes; if not specified, the token will be acquired from HDFS |
KERBEROS KEYTAB | HDFS | Location of the Kerberos keytab file in KiFS |
USE KERBEROS | HDFS | Whether to attempt Kerberos authentication to HDFS |
JDBC_DRIVER_CLASS_NAME | JDBC | JDBC driver class name (optional, if the name is available in the JAR file's manifest) |
JDBC_DRIVER_JAR_PATH | JDBC | KiFS path of the JDBC driver JAR file to use |
KAFKA_TOPIC_NAME | KAFKA, CONFLUENT | Kafka topic to access |
BUCKET NAME | S3 | Amazon S3 bucket name |
REGION | S3 | Amazon S3 region identifier |
S3_AWS_ROLE_ARN | S3 | Amazon Resource Name (ARN) specifying the role |
S3_ENCRYPTION_CUSTOMER_ALGORITHM | S3 | Algorithm used to encrypt/decrypt data |
S3_ENCRYPTION_CUSTOMER_KEY | S3 | Key used to encrypt/decrypt data |
SCHEMA_REGISTRY_CREDENTIAL | KAFKA, CONFLUENT | Credential object to use to authenticate to the Confluent Schema Registry |
SCHEMA_REGISTRY_LOCATION | KAFKA, CONFLUENT | Location of the Confluent Schema Registry in format: [storage_path[:storage_port]] |
SCHEMA_REGISTRY_PORT | KAFKA, CONFLUENT | Port of the Confluent Schema Registry |
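Combining several of the options above, an S3 data source might be sketched as follows; the bucket, region, and access keys are hypothetical placeholders, and the LOCATION form should be confirmed against the provider-specific syntax below:

```sql
-- Hypothetical S3 data source; keys, bucket, and region are placeholders
CREATE DATA SOURCE example_schema.example_s3_ds
LOCATION = 'S3'
USER = '<aws access key id>'
PASSWORD = '<aws secret access key>'
WITH OPTIONS
(
    BUCKET NAME = 'example-bucket',
    REGION = 'us-east-1'
)
```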
Provider-Specific Syntax
Several authentication schemes across multiple providers are supported.
Azure BLOB
Syntax is shown below.
|
|
|
|
|
|
|
|
|
|
|
|
CData
Syntax is shown below.
|
|
|
|
|
|
Google Cloud Storage
Syntax is shown below.
|
|
|
|
|
|
|
|
HDFS
Syntax is shown below.
|
|
|
|
|
|
|
|
JDBC
Syntax is shown below.
|
|
|
|
Kafka (Apache)
Syntax is shown below.
|
|
|
|
|
|
Kafka (Confluent)
Syntax is shown below.
|
|
|
|
|
|
S3 (Amazon)
Syntax is shown below.
|
|
|
|
|
|
|
|
|
|
ALTER DATA SOURCE
Alters the connection parameters of an existing data source.
|
|
Note
The data source will be validated upon alteration, by default, and will fail to be altered if an authorized connection cannot be established.
Parameters | Description |
---|---|
EXTERNAL | Optional keyword for clarity |
<schema name> | Name of the schema containing the data source to alter |
<data source name> | Name of the data source to alter |
SET PROPERTY | Indicator that a comma-delimited list of alterations to make will follow. See Set Properties for the complete list of properties. |
To alter a data source, kin_ds, updating the username & password:
|
|
Set Properties
All data source properties can be altered via ALTER DATA SOURCE. The following are the property names and descriptions to use when performing an alteration.
Provider-Agnostic Properties
Parameters | Description | ||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
LOCATION | Location of the data source; see Provider-Specific Syntax for details The data source can be from any of the following providers:
| ||||||||||||||||
USER | User name, given in <username>, to use for authenticating to the data source | ||||||||||||||||
PASSWORD | Password, given in <password>, to use for authenticating to the data source | ||||||||||||||||
CREDENTIAL | Credential object to use to authenticate to the remote system | ||||||||||||||||
VALIDATE | Whether to test the connection to the data source upon creation; if TRUE, the creation of a data source that cannot be connected to will fail; if FALSE, the data source will be created regardless of connectivity. Default is TRUE. |
Provider-Specific Properties
Option | Provider | Description |
---|---|---|
WAIT TIMEOUT | AZURE, HDFS, S3 | Timeout in seconds for reading from the storage provider |
USE_MANAGED_CREDENTIALS | AZURE, S3 | Whether to connect to the storage provider with provider-managed credentials
|
CONNECTION TIMEOUT | HDFS, S3 | Timeout in seconds for connecting to a given storage provider |
CONTAINER NAME | AZURE | Azure storage container name |
SAS TOKEN | AZURE | Azure storage account shared access signature token; this should be an account-level access token, not a container-level one |
STORAGE ACCOUNT NAME | AZURE | Azure storage account name (only used if TENANT ID is specified) |
TENANT ID | AZURE | Azure Active Directory tenant identifier |
GCS_BUCKET_NAME | GCS | Name of the GCS bucket to use as the data source |
GCS_PROJECT_ID | GCS | Name of the Google Cloud project to use for request billing |
GCS_SERVICE_ACCOUNT_KEYS | GCS | Text of the JSON key file containing the GCS private key |
DELEGATION TOKEN | HDFS | Optional Kerberos delegation token for worker nodes; if not specified, the token will be acquired from HDFS |
KERBEROS KEYTAB | HDFS | Location of the Kerberos keytab file in KiFS |
USE KERBEROS | HDFS | Whether to attempt Kerberos authentication to HDFS |
JDBC_DRIVER_CLASS_NAME | JDBC | JDBC driver class name (optional, if the name is available in the JAR file's manifest) |
JDBC_DRIVER_JAR_PATH | JDBC | KiFS path of the JDBC driver JAR file to use |
KAFKA_TOPIC_NAME | KAFKA, CONFLUENT | Kafka topic to access |
BUCKET NAME | S3 | Amazon S3 bucket name |
REGION | S3 | Amazon S3 region identifier |
S3_AWS_ROLE_ARN | S3 | Amazon Resource Name (ARN) specifying the role |
S3_ENCRYPTION_CUSTOMER_ALGORITHM | S3 | Algorithm used to encrypt/decrypt data |
S3_ENCRYPTION_CUSTOMER_KEY | S3 | Key used to encrypt/decrypt data |
SCHEMA_REGISTRY_CREDENTIAL | KAFKA, CONFLUENT | Credential object to use to authenticate to the Confluent Schema Registry |
SCHEMA_REGISTRY_LOCATION | KAFKA, CONFLUENT | Location of the Confluent Schema Registry in format: [storage_path[:storage_port]] |
SCHEMA_REGISTRY_PORT | KAFKA, CONFLUENT | Port of the Confluent Schema Registry |
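As a sketch of how the provider-agnostic and provider-specific properties above combine, an alteration might look like the following; the data source, credential, and region values are hypothetical placeholders:

```sql
-- Hypothetical alteration: swap in a credential and change the S3 region
ALTER DATA SOURCE example_schema.example_s3_ds
SET PROPERTY
    CREDENTIAL = 'example_s3_cred',
    REGION = 'us-west-2'
```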
LIST DATA SOURCE
Outputs a list of tables accessible through a JDBC data source.
|
|
Note
The response to LIST DATA SOURCE is a single-column result set of table names in <schema_name>.<table_name> format.
Parameters | Description |
---|---|
DATA SOURCE | Optional keyword for clarity |
<schema name> | Name of the schema containing the data source to list |
<data source name> | Name of the data source whose accessible tables will be listed |
WITH OPTIONS | Optional indicator that a comma-delimited list of filtering option/value assignments will follow. See List Data Source Options for the full list of options. |
For example, to list all the tables accessible through a data source, kin_ds_jdbc:
|
|
To list all tables in a specific schema:
|
|
To list tables matching a specific name pattern, regardless of catalog/schema:
|
|
List Data Source Options
The _ and % characters can be used in any of these options to match any single character or any number of characters in object names, respectively.
If multiple options are used, the tables must match all specified criteria.
Parameters | Description |
---|---|
CATALOG | Only show tables within the specified catalog(s) |
SCHEMA | Only show tables within the specified schema(s) |
TABLE | Only show tables matching the specified name(s) |
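Combining the wildcard behavior with these options, a filtered listing might be sketched as follows; the data source name and patterns are hypothetical placeholders:

```sql
-- Hypothetical filtered listing: tables whose names start with "ORDER_" in
-- any schema whose name starts with "SALES"; % matches any run of characters
LIST DATA SOURCE example_jdbc_ds
WITH OPTIONS
(
    SCHEMA = 'SALES%',
    TABLE = 'ORDER_%'
)
```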
DROP DATA SOURCE
Removes an existing data source.
|
|
Note
Before a data source can be dropped, any external tables or subscriptions initiated by a LOAD INTO command that depend on it must be dropped.
Parameters | Description |
---|---|
EXTERNAL | Optional keyword for clarity |
<schema name> | Name of the schema containing the data source to remove |
<data source name> | Name of the data source to remove |
To drop a data source, kin_ds:
|
|
SHOW DATA SOURCE
Outputs the DDL statement required to reconstruct the given data source.
|
|
The PASSWORD value will be masked in the output and must be replaced with the actual password if the statement is used to reconstruct the data source.
Note
The response to SHOW DATA SOURCE is a single-column result set with the DDL statement as the value in the DDL column.
Parameters | Description |
---|---|
EXTERNAL | Optional keyword for clarity |
<schema name> | Name of the schema containing the data source to show |
<data source name> | Name of the data source for which the DDL will be output; use * instead of schema/data source name to output the DDL of all data sources |
For example, to output the DDL for a data source, kin_ds:
|
|
To output the DDL for all data sources:
|
|
DESCRIBE DATA SOURCE
Outputs the configuration of an existing data source.
|
|
Note
The response to DESCRIBE DATA SOURCE is a three-column result set:
- DATA_SOURCE - name of the data source
- STORAGE_PROVIDER_TYPE - keyword associated with storage provider
- ADDITIONAL_INFO - data source configuration; passwords will be redacted
Parameters | Description |
---|---|
EXTERNAL | Optional keyword for clarity |
<schema name> | Name of the schema containing the data source to describe |
<data source name> | Name of the data source whose configuration will be output; use * instead of schema/data source name to output the configuration of all data sources |
To show the configuration for a data source, kin_ds:
|
|
To show the configuration for all data sources:
|
|
CREATE DATA SINK
Creates a new data sink, which contains the location and connection information for a data consumer that is external to Kinetica. A data sink serves as an authentication & access mechanism to the remote resource.
|
|
A data sink can be referenced in an EXPORT ... INTO call (for exporting local table data to a remote system) or a CREATE STREAM call (for streaming local table data to a remote system).
Note
The data sink will be validated upon creation, by default, and will fail to be created if an authorized connection cannot be established.
The following data sink consumers are supported:
- Azure (Microsoft blob storage)
- CData (CData Software source-specific JDBC driver)
- GCS (Google Cloud Storage)
- HDFS (Apache Hadoop Distributed File System)
- JDBC (Java Database Connectivity, using a user-supplied driver)
- Kafka (Apache Kafka streaming feed)
- S3 (Amazon S3 Bucket)
- Webhook (HTTP/HTTPS)
Note
- CData data sinks can use a JDBC credential for authentication.
- Kafka data sinks will be validated upon creation, by default, and will fail to be created if an authorized connection cannot be established.
For provider-specific syntax, see Consumer-Specific Syntax. For provider-specific examples, see Create Data Sinks.
Parameters | Description | ||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
OR REPLACE | Any existing data sink with the same name will be dropped before creating this one | ||||||||||||||||||
EXTERNAL | Optional keyword for clarity | ||||||||||||||||||
<schema name> | Name of the schema that will contain the created data sink; if no schema is specified, the data sink will be created in the user's default schema | ||||||||||||||||||
<data sink name> | Name of the data sink to create; must adhere to the supported naming criteria | ||||||||||||||||||
<consumer> | Consumer of the data sink Supported consumers include:
| ||||||||||||||||||
<host> | Host to use to connect to the data sink | ||||||||||||||||||
<port> | Port to use to connect to the data sink | ||||||||||||||||||
WITH OPTIONS | Optional indicator that a comma-delimited list of connection option/value assignments will follow. See Data Sink Options for the full list of options. |
To create a data sink, kin_dsink, that connects to an Apache Kafka broker:
|
|
Data Sink Options
Option | Provider | Description |
---|---|---|
CREDENTIAL | Any | Credential object to use to authenticate to the remote consumer |
VALIDATE | Any | Whether to test the connection to the data sink upon creation; if TRUE (default), the creation of a data sink that cannot be connected to will fail; if FALSE, the data sink will be created regardless of connectivity |
WAIT TIMEOUT | Any | Timeout in seconds for reading from the consumer |
JDBC_DRIVER_CLASS_NAME | JDBC | JDBC driver class name (optional, if the name is available in the JAR file's manifest) |
JDBC_DRIVER_JAR_PATH | JDBC | KiFS path of the JDBC driver JAR file to use |
CONNECTION TIMEOUT | KAFKA | Timeout in seconds for connecting to a consumer |
KAFKA_TOPIC_NAME | KAFKA | Kafka topic to write to |
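As a sketch of how these options combine, a Kafka data sink might look like the following; the broker host/port, topic, and credential names are hypothetical placeholders, and the LOCATION form should be confirmed against the consumer-specific syntax below:

```sql
-- Hypothetical Kafka data sink; broker, topic, & credential are placeholders
CREATE DATA SINK example_schema.example_dsink
LOCATION = 'kafka://example-broker:9092'
WITH OPTIONS
(
    KAFKA_TOPIC_NAME = 'example-topic',
    CREDENTIAL = 'example_kafka_cred'
)
```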
Consumer-Specific Syntax
Several authentication schemes across multiple consumers are supported.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
ALTER DATA SINK
Alters the connection parameters of an existing data sink.
|
|
Note
Kafka data sinks will be validated upon alteration, by default, and will fail to be altered if an authorized connection cannot be established.
Parameters | Description |
---|---|
EXTERNAL | Optional keyword for clarity |
<schema name> | Name of the schema containing the data sink to alter |
<data sink name> | Name of the data sink to alter |
SET PROPERTY | Indicator that a comma-delimited list of alterations to make will follow. See Set Properties for the complete list of properties. |
To alter a data sink, kin_dsink, updating the timeouts:
|
|
Set Properties
All data sink properties can be altered via ALTER DATA SINK. The following are the property names and descriptions to use when performing an alteration.
Option | Provider | Description |
---|---|---|
CREDENTIAL | Any | Credential object to use to authenticate to the remote consumer |
LOCATION | Any | Location of the data sink; see Consumer-Specific Syntax for details |
VALIDATE | Any | Whether to test the connection to the data sink upon creation; if TRUE (default), the creation of a data sink that cannot be connected to will fail; if FALSE, the data sink will be created regardless of connectivity |
WAIT TIMEOUT | Any | Timeout in seconds for reading from the consumer |
JDBC_DRIVER_CLASS_NAME | JDBC | JDBC driver class name (optional, if the name is available in the JAR file's manifest) |
JDBC_DRIVER_JAR_PATH | JDBC | KiFS path of the JDBC driver JAR file to use |
CONNECTION TIMEOUT | KAFKA | Timeout in seconds for connecting to a consumer |
KAFKA_TOPIC_NAME | KAFKA | Kafka topic to write to |
DROP DATA SINK
Removes an existing data sink.
|
|
Note
Before a data sink can be dropped, any streams that depend on it must be dropped.
Parameters | Description |
---|---|
EXTERNAL | Optional keyword for clarity |
<schema name> | Name of the schema containing the data sink to remove |
<data sink name> | Name of the data sink to remove |
To drop a data sink, kin_dsink:
|
|
SHOW DATA SINK
Outputs the DDL statement required to reconstruct the given data sink.
|
|
Note
The response to SHOW DATA SINK is a single-column result set with the DDL statement as the value in the DDL column.
Parameters | Description |
---|---|
EXTERNAL | Optional keyword for clarity |
<schema name> | Name of the schema containing the data sink to show |
<data sink name> | Name of the data sink for which the DDL will be output; use * instead of schema/data sink name to output the DDL of all data sinks |
For example, to output the DDL for a data sink, kin_dsink:
|
|
To output the DDL for all data sinks:
|
|
DESCRIBE DATA SINK
Outputs the configuration of an existing data sink.
|
|
Note
The response to DESCRIBE DATA SINK is a three-column result set:
- DATA_SINK - name of the data sink
- STORAGE_PROVIDER_TYPE - keyword identifying data sink consumer
- ADDITIONAL_INFO - data sink configuration
Parameters | Description |
---|---|
EXTERNAL | Optional keyword for clarity |
<schema name> | Name of the schema containing the data sink to describe |
<data sink name> | Name of the data sink whose configuration will be output; use * instead of schema/data sink name to output the configuration of all data sinks |
To show the configuration for a data sink, kin_dsink:
|
|
To show the configuration for all data sinks:
|
|
CREATE STREAM
Creates a new data stream (natively, a table monitor), which publishes changes in a given table to a target.
|
|
The monitored table can be one of:
- a table (external tables not supported)
- a materialized view (insert monitoring only)
The target can be one of:
- an external Apache Kafka broker
- an external webhook
By default, a stream will publish inserted records to the target. A stream can alternatively be configured to monitor for updates or deletes and publish the corresponding record counts. Streams that monitor for inserts can apply a filter expression so that only the inserts of interest are published.
An existing data sink can be referenced as the external target of the streamed data. If no data sink is used, only unauthenticated external targets may be specified.
When the source table of a stream is altered or dropped, the stream will also be dropped.
Parameters | Description | ||||||
---|---|---|---|---|---|---|---|
<stream schema name> | Name of the schema that will contain the created stream; if no schema is specified, the stream will be created in the user's default schema | ||||||
<stream name> | Name of the stream to create; must adhere to the supported naming criteria | ||||||
TABLE | Optional keyword for clarity. | ||||||
<table schema name> | Name of the schema containing the table or materialized view to monitor. | ||||||
<table name> | Name of the table or materialized view to monitor for changes. | ||||||
REFRESH | Specifies the reporting scheme for monitored changes. The following schemes are available:
| ||||||
<filter expression> | Boolean expression that can be used to monitor for only a specific set of inserts. | ||||||
WITH OPTIONS | Indicator that a comma-delimited list of configuration option/value assignments will follow. See Stream Options for the full list of options. Note One of either DATASINK_NAME or DESTINATION is required. |
To create a stream, kin_stream, that publishes inserts into the order_stream table of orders over $10,000 via the kin_dsink data sink:
|
|
To create a stream, kin_stream, that publishes inserts into the order_stream table of orders, placed within a given geofence, via the kin_dsink data sink:
|
|
Stream Options
Option | Consumer | Description | ||||||||
---|---|---|---|---|---|---|---|---|---|---|
DATASINK_NAME | Any | Data sink object to use to identify the consumer of this stream; mutually exclusive with destination | ||||||||
DESTINATION | Any | Location of the external target when not using a data sink; the URL of the consumer to connect to in this format: <consumer>://<host>[:<port>] Supported consumers include:
| ||||||||
EVENT | Any | Specifies the type of event to monitor
| ||||||||
INCREASING_COLUMN | Any | Column of the monitored table that contains ever-increasing values for new data; e.g., a timestamp or sequence ID column Tip If such a column can be identified, the performance of the stream will improve. | ||||||||
KAFKA_TOPIC_NAME | kafka | Kafka topic to write to, if the target is a Kafka broker |
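Combining the options above, a stream that targets an unauthenticated webhook directly (no data sink) might be sketched as follows. This is a hypothetical example: the schema, table, column, URL, and the assumption of an ON CHANGE refresh scheme should all be checked against the parameter table above:

```sql
-- Hypothetical stream publishing qualifying inserts to a webhook target
CREATE STREAM example_schema.example_stream
ON TABLE example_schema.orders
REFRESH ON CHANGE
WHERE total > 10000
WITH OPTIONS
(
    DESTINATION = 'https://example.com/order-events',
    INCREASING_COLUMN = 'order_ts'  -- ever-increasing timestamp column
)
```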
Consumer-Specific Syntax
Several configurations across multiple consumers are supported.
|
|
|
|
|
|
|
|
DROP STREAM
Removes an existing stream.
|
|
Parameters | Description |
---|---|
<schema name> | Name of the schema containing the stream to remove |
<stream name> | Name of the stream to remove |
To drop a stream, kin_stream:
|
|
SHOW STREAM
Outputs the DDL statement required to reconstruct one or more existing streams.
|
|
Listing options:
- [<schema name>.]<stream name> - output the DDL statement of the given stream
- <schema name>.* - output the DDL statements of all streams under the given schema
- * - output the DDL statements of all streams
Note
The response to SHOW STREAM is a single-column result set with the DDL statement as the value in the DDL column.
Parameters | Description |
---|---|
<schema name> | Name of the schema containing the stream(s) to show |
<stream name> | Name of the stream to show |
For example, to output the DDL for a stream, kin_stream:
|
|
DESCRIBE STREAM
Outputs the configuration of one or more existing streams.
|
|
Listing options:
- [<schema name>.]<stream name> - output the configuration of the given stream
- <schema name>.* - output the configuration of all streams under the given schema
- * - output the configuration of all streams
Note
The response to DESCRIBE STREAM is a six-column result set:
- MONITOR_ID - unique name or topic ID for the stream
- TABLE_NAME - name of the table being monitored
- EVENT - table data event that triggers streamed content
- INCREASING_COLUMN - column in table being monitored that contains ever-increasing data values
- FILTER_EXPRESSION - filter used to monitor only data of interest
- DATASINK_NAME - name of the data sink associated with the consumer
Parameters | Description |
---|---|
<schema name> | Name of the schema containing the stream(s) to describe |
<stream name> | Name of the stream to describe |
To show the configuration for a stream, kin_stream:
|
|
DESCRIBE
Lists the contained tables and views of a given schema, or lists the columns and column types & properties for a given table or view.
|
|
|
|
For example, to describe the tables contained in the demo schema, into which demo data is usually downloaded:
|
|
|
|
To describe the example table created in the CREATE TABLE section:
|
|
|
|