Data Definition (DDL)

Kinetica supports the basic notion of SQL tables as containers of one or more columns of data.

Table & column names must adhere to the supported naming criteria.

A column definition consists of a column type and optional column size, column properties, and nullability. Column properties are used to optimize data storage & speed.

The format of a defined column is the column name followed by the column definition. A column definition is the column type, optionally followed by a column size limit and/or column properties enclosed in parentheses, followed by an optional nullability clause:

Column Definition Syntax
<column name> <column type> [(<column size / property list>)] [[NOT] NULL]

This format applies to any DDL statement requiring the definition of columns, like CREATE TABLE and ALTER TABLE (when adding/modifying a column).

For example, the following are valid defined columns:

Column Definition Examples
id INTEGER(SHARD_KEY)            -- id is an integer and the table's shard key
name VARCHAR(64, TEXT_SEARCH)    -- name is a 64-byte limited string and text-searchable
ip IPV4                          -- ip is a string in IPv4 format
cost DECIMAL(10, 2)              -- cost can hold a decimal with 8 integer digits and 2 decimal places

Column Types

Category   | Data Type       | Description
Number     | BOOLEAN         | Effective type: boolean
           | TINYINT         | Effective type: int8
           | BYTE            | Alias for TINYINT
           | SMALLINT        | Effective type: int16
           | INTEGER         | Effective type: integer
           | INT             | Alias for INTEGER
           | BIGINT          | Effective type: long
           | DECIMAL         | Alias for BIGINT
           | LONG            | Alias for BIGINT
           | UNSIGNED BIGINT | Effective type: ulong
           | UNSIGNED LONG   | Alias for UNSIGNED BIGINT
           | REAL            | Effective type: float
           | FLOAT           | Alias for REAL
           | DOUBLE          | Effective type: double
           | DECIMAL(P,S)    | Effective type varies by P & S:
           |                 |     P 1-2,  S 0    -> int8
           |                 |     P 3-4,  S 0    -> int16
           |                 |     P 5-6,  S 0    -> integer
           |                 |     P 7-n,  S 0    -> long
           |                 |     P 1-n,  S 1-4  -> decimal
           |                 |     P 5-6,  S 5-6  -> float
           |                 |     P 7-n,  S 5-n  -> double
           | NUMERIC         | Alias for DECIMAL / DECIMAL(P,S)
String     | VARCHAR         | Effective type: string; character limit based on configured system property
           | VARCHAR(N)      | Effective type: the smallest charN type that is at least size N, or string if N is greater than 256
           | CHAR            | Alias for VARCHAR / VARCHAR(N)
           | STRING          | Alias for VARCHAR / VARCHAR(N)
           | TEXT            | Alias for VARCHAR / VARCHAR(N)
           | IPV4            | Shorthand for VARCHAR(IPV4), which applies the IPV4 column property
           | UUID            | Effective type: uuid
           | JSON            | Effective type: json
Date/Time  | DATE            | Effective type: date
           | DATETIME        | Effective type: datetime
           | TIME            | Effective type: time
           | TIMESTAMP       | Effective type: timestamp
           | TYPE_DATE       | Alias for DATE
           | TYPE_TIME       | Alias for TIME
           | TYPE_TIMESTAMP  | Alias for TIMESTAMP
Binary     | BLOB            | Effective type: bytes
           | BINARY          | Alias for BLOB
           | BYTES           | Alias for BLOB
           | VARBINARY       | Alias for BLOB
Geospatial | GEOMETRY        | Effective type: wkt (string base type)
           | ST_GEOMETRY     | Alias for GEOMETRY
           | WKT             | Alias for GEOMETRY
           | BLOB(WKT)       | Effective type: wkt (bytes base type)
           | BINARY(WKT)     | Alias for BLOB(WKT)
           | BYTES(WKT)      | Alias for BLOB(WKT)
           | VARBINARY(WKT)  | Alias for BLOB(WKT)
Composite  | BOOLEAN[N]      | Effective type: array(boolean) (string base type)
           | INTEGER[N]      | Effective type: array(int) (string base type)
           | BIGINT[N]       | Effective type: array(long) (string base type)
           | VECTOR(N)       | Effective type: vector(n) (bytes base type); generally used in vector search
           | REAL[N]         | Effective type: array(float) (string base type)
           | DOUBLE[N]       | Effective type: array(double) (string base type)
           | VARCHAR[N]      | Effective type: array(string) (string base type)

Column Properties

Property       | Description
DICT           | Applies dict data handling to a column, enabling dictionary-encoding of its values; see Dictionary Encoding for details
INIT_WITH_NOW  | For DATE, TIME, DATETIME, and TIMESTAMP column types, enables the database to use NOW() as the value when the column is not specified in an INSERT statement; also directs the database to replace empty strings (for DATE, TIME, & DATETIME columns) and invalid timestamps (for TIMESTAMP columns) with NOW()
INIT_WITH_UUID | For UUID column type, enables the database to use a universally unique identifier (UUID) as the value when the column is not specified in an INSERT statement; also directs the database to replace empty strings (for UUID columns) with UUIDs
IPV4           | Treats the associated string-based column as an IPv4 address
NORMALIZE      | For VECTOR column type, automatically normalizes each vector to have a magnitude of 1; see Vector Type
PRIMARY_KEY    | Treats the associated column as a primary key, or part of a composite primary key if other columns also have this property
SHARD_KEY      | Treats the associated column as a shard key, or part of a composite shard key if other columns also have this property
TEXT_SEARCH    | Applies text-searchability to a column
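
For example, the following column definitions (the column names and property combinations are illustrative, following the column definition syntax shown above) show how properties can be combined:

Column Property Examples
tag      VARCHAR(32, DICT, TEXT_SEARCH)   -- dictionary-encoded, text-searchable 32-byte limited string
order_id BIGINT(PRIMARY_KEY, SHARD_KEY)   -- long serving as both primary key and shard key
created  TIMESTAMP(INIT_WITH_NOW)         -- defaults to NOW() when omitted from an INSERT
embed    VECTOR(10, NORMALIZE)            -- 10-element vector, automatically normalized to magnitude 1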

CREATE SCHEMA

Schemas are logical containers for all database objects (tables, views, etc.). In order to place an object in a schema, the schema must be created first; schemas will not be automatically created when specified in CREATE TABLE or similar calls.

CREATE SCHEMA Syntax
CREATE SCHEMA [IF NOT EXISTS] <schema name>

Parameters | Description
IF NOT EXISTS | Optional error suppression clause, which causes no error to be returned if a schema with the same name already exists
<schema name> | Name of the schema to create; must adhere to the supported naming criteria

CREATE SCHEMA Example
CREATE SCHEMA example_container

ALTER SCHEMA

Alters the configuration of a schema. The name of a schema is the only property that can be altered.

Rename Schema

A schema can be renamed.

Rename Schema Syntax
ALTER SCHEMA <schema name>
RENAME TO <new schema name>
Rename Schema Example
ALTER SCHEMA example_container
RENAME TO example_container_renamed

DROP SCHEMA

Removes an existing schema.

DROP SCHEMA Syntax
DROP SCHEMA [IF EXISTS] <schema name> [CASCADE]

Parameters | Description
IF EXISTS | Optional error-suppression clause; if specified, no error will be returned if the given schema does not exist
<schema name> | Name of the schema to remove
CASCADE | Drops all objects contained within the schema; if not specified, an error will be returned if the given schema contains any tables, views, etc.

For example, to drop a schema, including all contained objects:

DROP SCHEMA Example
DROP SCHEMA example_container CASCADE

SHOW SCHEMA

Outputs the DDL statement required to reconstruct the given schema.

SHOW SCHEMA Syntax
SHOW [CREATE] SCHEMA < <schema name> | * >

Note

The response to SHOW SCHEMA is a single-column result set with the DDL statement as the value in the DDL column, shown below with the column separators returned by kisql.

Parameters | Description
CREATE | Optional keyword for clarity
SCHEMA | Optional clause to avoid ambiguity; if omitted, and a table or view exists with the given name, the command will be interpreted as a SHOW TABLE statement instead
<schema name> | Name of the schema whose contained tables' & views' DDL will be output; use * instead to output the DDL of all schemas, tables, & views

For example, to output the DDL for the example schema created in the CREATE SCHEMA section:

SHOW SCHEMA Example
SHOW CREATE SCHEMA example_container
SHOW SCHEMA Output
| CREATE SCHEMA "example_container";   |

DESCRIBE SCHEMA

Lists the contained tables and views of a given schema.

DESCRIBE SCHEMA Syntax
DESC[RIBE] [SCHEMA] <schema name>

Parameters | Description
SCHEMA | Optional clause to avoid ambiguity; if omitted, and a table or view exists with the given name, the command will be interpreted as a DESCRIBE TABLE statement instead
<schema name> | Name of the schema whose contained tables & views will be listed

For example, to describe the tables contained in the demo schema, into which demo data is usually downloaded:

DESCRIBE SCHEMA Example
DESC demo
DESCRIBE SCHEMA Output
+------------+
| Set_name   |
+------------+
| stocks     |
| nyctaxi    |
| flights    |
+------------+

SET CURRENT SCHEMA

To override the default schema with a different schema, the following syntax can be used:

Note

This command is only available through KiSQL or database clients configured with the Kinetica ODBC/JDBC driver.

SET CURRENT SCHEMA Syntax
SET CURRENT SCHEMA [<schema name>]

Omitting schema name will switch back to the original default schema.
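
For example, assuming the example_container schema created earlier, to make it the current schema for the session and then revert to the original default:

SET CURRENT SCHEMA Example
SET CURRENT SCHEMA example_container
SET CURRENT SCHEMA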


CREATE TABLE

Creates a new table.

CREATE TABLE Syntax
CREATE [OR REPLACE] [REPLICATED] [TEMP] TABLE [<schema name>.]<table name>
(
    <column name> <column definition>,
    ...
    <column name> <column definition>,
    [PRIMARY KEY (<column list>)],
    [SHARD KEY (<column list>)],
    [FOREIGN KEY
        (<column list>) REFERENCES <foreign table name>(<foreign column list>) [AS <foreign key name>],
        ...
        (<column list>) REFERENCES <foreign table name>(<foreign column list>) [AS <foreign key name>]
    ]
)
[<partition clause>]
[<tier strategy clause>]
[<index clause>]
[<table property clause>]

Parameters | Description
OR REPLACE | Any existing table or view with the same name will be dropped before creating this one
REPLICATED | The table will be distributed within the database as a replicated table
TEMP | The table will be a memory-only table; which, among other things, means it will not be persisted (if the database is restarted, the table will be removed), but it will have increased ingest performance
<schema name> | Name of the schema that will contain the created table; if no schema is specified, the table will be created in the user's default schema
<table name> | Name of the table to create; must adhere to the supported naming criteria
<column name> | Name of a column to create within the table; must adhere to the supported naming criteria
<column definition> | Definition of the column associated with <column name>; see Data Definition (DDL) for column format
PRIMARY KEY (<column list>) | Optional primary key specification clause, where <column list> is a comma-separated list of columns to use as the primary key for the table
SHARD KEY (<column list>) | Optional shard key specification clause, where <column list> is a comma-separated list of columns to use as the shard key for the table
FOREIGN KEY ... | Optional comma-separated set of foreign key specification clauses, with the following parameters:
    <column list> | Comma-separated list of columns in the table to create that will reference a matching set of primary key columns in another table
    <foreign table name> | Name of target table referred to in this foreign key
    <foreign column list> | The primary key columns in the target table referred to in this foreign key, matching the list of columns specified in <column list> in the table to create
    AS <foreign key name> | Optional alias for the foreign key
<partition clause> | Defines a partitioning scheme for the table to create
<tier strategy clause> | Defines the tier strategy for the table to create
<index clause> | Applies any number of column indexes, chunk skip indexes, geospatial indexes, CAGRA indexes, or HNSW indexes to the table to create
<table property clause> | Assigns table properties, from a subset of those available, to the table to create

For example, to create a table with various column types and properties:

CREATE TABLE Example
CREATE TABLE example.various_types
(
    i   INTEGER NOT NULL,                                 /* non-nullable integer, part of primary key (defined at end)                         */
    bi  BIGINT NOT NULL,                                  /* long, part of primary key, shard key, foreign key source (defined at end)          */
    b   BOOLEAN,                                          /* 0s and 1s only                                                                     */
    ub  UNSIGNED BIGINT,                                  /* native unsigned long                                                               */
    r   REAL,                                             /* native float                                                                       */
    d   DOUBLE,                                           /* native double                                                                      */
    s   VARCHAR(TEXT_SEARCH),                             /* string, searchable, only limited in size by system-configured value                */
    c   VARCHAR(30, DICT),                                /* char32 using dictionary-encoding of values                                         */
    p   VARCHAR(256, TEXT_SEARCH),                        /* char256, searchable                                                                */
    ip  IPV4,                                             /* IP address                                                                         */
    ui  UUID(INIT_WITH_UUID),                             /* UUID                                                                               */
    ts  TIMESTAMP,                                        /* timestamp                                                                          */
    td  DATE,                                             /* simple date                                                                        */
    tt  TIME,                                             /* simple time                                                                        */
    dt  DATETIME(INIT_WITH_NOW),                          /* date/time                                                                          */
    dc1 DECIMAL,                                          /* native long                                                                        */
    dc2 DECIMAL(18, 4),                                   /* native decimal                                                                     */
    dc3 DECIMAL(6, 5),                                    /* native float                                                                       */
    dc4 DECIMAL(7, 5),                                    /* native double                                                                      */
    n   NUMERIC(5, 3),                                    /* native decimal, the next largest native numeric type to hold the number type       */
    byt BYTES,                                            /* BLOB                                                                               */
    w   WKT,                                              /* geospatial column for WKT string data                                              */
    j   JSON,                                             /* JSON string                                                                        */
    v   VECTOR(10),                                       /* vector column holding 10 floating point values                                     */
    ai  INTEGER[3],                                       /* array column holding 3 integer values                                              */
    PRIMARY KEY (i, bi),                                  /* primary key columns must be NOT NULL                                               */
    SHARD KEY (bi),                                       /* shard key columns must be part of the primary key                                  */
    FOREIGN KEY (bi) REFERENCES example.lookup(id) AS fk  /* foreign key is often on the shard key                                              */
)
INDEX (ip)                                                /* index on IP column                                                                 */
INDEX (ts)                                                /* index on timestamp column                                                          */

Partition Clause

A table can be further segmented into partitions. The supported partition types are range, interval, list (manual & automatic), hash, and series.

See Partitioning for details.

Range Partitioning

The general format for the range partition clause is:

PARTITION BY RANGE Syntax
PARTITION BY RANGE ( <column expression> )
[
    PARTITIONS
    (
        <partition name> [ MIN ( <least value> ) ] [ MAX ( <greatest value> ) ],
        ...
        <partition name> [ MIN ( <least value> ) ] [ MAX ( <greatest value> ) ]
    )
]

The partition definition clause, PARTITIONS, is optional, though it is recommended to define partitions at table creation time, when feasible.

Warning

Defining (adding) partitions after data has been loaded will result in a performance penalty as the database moves existing records targeted for the new partition from the default partition into it.

For example, to create a range-partitioned table with the following criteria:

  • partitioned on the date/time of the order
  • partitions for years:
    • 2014 - 2016
    • 2017
    • 2018
    • 2019
  • records not in that range go to the default partition
PARTITION BY RANGE Example
CREATE TABLE example.customer_order_range_partition_by_year
(
    id           INT NOT NULL,
    customer_id  INT NOT NULL,
    total_price  DECIMAL(10,2),
    purchase_ts  TIMESTAMP NOT NULL
)
PARTITION BY RANGE (YEAR(purchase_ts))
PARTITIONS
(
    order_2014_2016 MIN(2014) MAX(2017),
    order_2017                MAX(2018),
    order_2018                MAX(2019),
    order_2019                MAX(2020)
)

Interval Partitioning

The general format for the interval partition clause is:

PARTITION BY INTERVAL Syntax
PARTITION BY INTERVAL ( <column expression> )
PARTITIONS
(
    STARTING [AT] (<least value>) INTERVAL (<interval size>)
)

For example, to create an interval-partitioned table with the following criteria:

  • partitioned on the date/time of the order
  • one partition for each year from 2014 on
  • later year partitions are added as necessary
  • records prior to 2014 go to the default partition
PARTITION BY INTERVAL (Year) Example
CREATE TABLE example.customer_order_interval_partition_by_year
(
    id           INT NOT NULL,
    customer_id  INT NOT NULL,
    total_price  DECIMAL(10,2),
    purchase_ts  TIMESTAMP
)
PARTITION BY INTERVAL (YEAR(purchase_ts))
PARTITIONS
(
    STARTING AT (2014) INTERVAL (1)
)

To create an interval-partitioned table with the following criteria:

  • partitioned on the date/time of the order
  • one partition for each day from January 1st, 2014 on
  • later day partitions are added as necessary
  • records prior to 2014 go to the default partition
PARTITION BY INTERVAL (Day) Example
CREATE TABLE example.customer_order_interval_partition_by_day_timestampdiff
(
    id           INT NOT NULL,
    customer_id  INT NOT NULL,
    total_price  DECIMAL(10,2),
    purchase_ts  TIMESTAMP
)
PARTITION BY INTERVAL (TIMESTAMPDIFF(DAY, '2014-01-01', purchase_ts))
PARTITIONS
(
    STARTING AT (0) INTERVAL (1)
)

The same interval-partitioned scheme above can be created using the timestamp column directly, with the help of the INTERVAL function (described in the Date/Time Functions section):

PARTITION BY INTERVAL (Day) Alternate Example
CREATE TABLE example.customer_order_interval_partition_by_day_interval
(
    id           INT NOT NULL,
    customer_id  INT NOT NULL,
    total_price  DECIMAL(10,2),
    purchase_ts  TIMESTAMP
)
PARTITION BY INTERVAL (purchase_ts)
PARTITIONS
(
    STARTING AT ('2014-01-01') INTERVAL (INTERVAL '1' DAY)
)

This scheme can be easily modified to create an hourly partition instead:

PARTITION BY INTERVAL (Hour) Example
CREATE TABLE example.customer_order_interval_partition_by_hour_interval
(
    id           INT NOT NULL,
    customer_id  INT NOT NULL,
    total_price  DECIMAL(10,2),
    purchase_ts  TIMESTAMP
)
PARTITION BY INTERVAL (purchase_ts)
PARTITIONS
(
    STARTING AT ('2014-01-01') INTERVAL (INTERVAL '1' HOUR)
)

List Partitioning

The list partition clause has two forms:

Manual

The general format for the manual list partition clause is:

PARTITION BY LIST Syntax
PARTITION BY LIST ( <column expression list> )
[
    PARTITIONS
    (
        <partition name> VALUES ( <value lists> ),
        ...
        <partition name> VALUES ( <value lists> )
    )
]

The partition definition clause, PARTITIONS, is optional, though it is recommended to define partitions at table creation time, when feasible.

Warning

Defining (adding) partitions after data has been loaded will result in a performance penalty as the database moves existing records targeted for the new partition from the default partition into it.

For example, to create a manual list-partitioned table with the following criteria:

  • partitioned on the date/time of the order
  • partitions for years:
    • 2014 - 2016
    • 2017
    • 2018
    • 2019
  • records not in that list go to the default partition
PARTITION BY LIST (Year) Example
CREATE TABLE example.customer_order_manual_list_partition_by_year
(
    id           INT NOT NULL,
    customer_id  INT NOT NULL,
    total_price  DECIMAL(10,2),
    purchase_ts  TIMESTAMP NOT NULL
)
PARTITION BY LIST (YEAR(purchase_ts))
PARTITIONS
(
    order_2014_2016 VALUES (2014, 2015, 2016),
    order_2017      VALUES (2017),
    order_2018      VALUES (2018),
    order_2019      VALUES (2019)
)

To create a manual list-partitioned table with a multi-column key and the following criteria:

  • partitioned on the date/time of the order
  • each partition corresponds to a unique year & month pair
  • partitions for years/months:
    • February 2016 & March 2016
    • March 2020
  • records not in that list go to the default partition
PARTITION BY LIST (Month) Example
CREATE TABLE example.customer_order_manual_list_partition_by_year_and_month
(
    id           INT NOT NULL,
    customer_id  INT NOT NULL,
    total_price  DECIMAL(10,2),
    purchase_ts  TIMESTAMP NOT NULL
)
PARTITION BY LIST (YEAR(purchase_ts), MONTH(purchase_ts))
PARTITIONS
(
    order_2016_0203 VALUES ((2016, 2), (2016, 3)),
    order_2020_03   VALUES ((2020, 3))
)

Automatic

The general format for the automatic list partition clause is:

Automatic PARTITION BY LIST Syntax
PARTITION BY LIST ( <column expression list> )
AUTOMATIC

To create an automatic list-partitioned table with the following criteria:

  • partitioned on the date/time of the order
  • one partition for each unique year & month across all orders
  • partitions are added as necessary
Automatic PARTITION BY LIST (Month) Example
CREATE TABLE example.customer_order_automatic_list_partition_by_year_and_month
(
    id           INT NOT NULL,
    customer_id  INT NOT NULL,
    total_price  DECIMAL(10,2),
    purchase_ts  TIMESTAMP NOT NULL
)
PARTITION BY LIST (YEAR(purchase_ts), MONTH(purchase_ts))
AUTOMATIC

Hash Partitioning

The general format for the hash partition clause is:

PARTITION BY HASH Syntax
PARTITION BY HASH ( <column expressions> )
PARTITIONS <total partitions>

To create a hash-partitioned table with the following criteria:

  • partitioned on the date/time of the order
  • distributed among the fixed set of partitions, based on the hash of the year & month of the order
  • 10 partitions
PARTITION BY HASH Example
CREATE TABLE example.customer_order_hash_partition_by_year_and_month
(
    id           INT NOT NULL,
    customer_id  INT NOT NULL,
    total_price  DECIMAL(10,2),
    purchase_ts  TIMESTAMP NOT NULL
)
PARTITION BY HASH (YEAR(purchase_ts), MONTH(purchase_ts))
PARTITIONS 10

Series Partitioning

The general format for the series partition clause is:

PARTITION BY SERIES Syntax
PARTITION BY SERIES ( <column list> )
[PERCENT_FULL <percentage>]

The PERCENT_FULL value should be an integer between 1 and 100; the default is 50%.

To create a series-partitioned table with the following criteria:

  • partitioned on the customer of each order
  • partitions with closed key sets will contain all orders from a set of unique customers
  • 50% fill threshold
PARTITION BY SERIES Example
CREATE TABLE example.customer_order_series_partition_error_default_percent_full
(
    id           INT NOT NULL,
    customer_id  INT NOT NULL,
    total_price  DECIMAL(10,4),
    purchase_ts  TIMESTAMP NOT NULL,
    SHARD KEY (customer_id)
)
PARTITION BY SERIES (customer_id)

To create a series-partitioned track table with the following criteria:

  • partitioned on the track ID
  • partitions with closed key sets will contain all points from a unique set of tracks
  • 25% fill threshold
PARTITION BY SERIES (Tracks) Example
CREATE TABLE example.route_series_partition_by_track
(
    TRACKID    VARCHAR NOT NULL,
    x          DOUBLE NOT NULL,
    y          DOUBLE NOT NULL,
    TIMESTAMP  TIMESTAMP NOT NULL
)
PARTITION BY SERIES (TRACKID)
PERCENT_FULL 25

Tier Strategy Clause

A table can have a tier strategy specified at creation time. If not assigned a tier strategy upon creation, a default tier strategy will be assigned.

TIER STRATEGY Clause Syntax
TIER STRATEGY
(
    <tier strategy>,
    ...
    <tier strategy>
)

For example, to create a customer_order table with an above-average eviction priority in the RAM Tier:

TIER STRATEGY Example
CREATE OR REPLACE TABLE example.customer_order
(
    id          INT NOT NULL,
    customer_id INT NOT NULL,
    total_price DECIMAL(10,2),
    purchase_ts TIMESTAMP,
    SHARD KEY (customer_id)
)
TIER STRATEGY
(
    ( ( VRAM 1, RAM 7, PERSIST 5 ) )
)

If not specified, the default tier strategy will be assigned:

Default TIER STRATEGY Example
CREATE OR REPLACE TABLE example.customer_order
(
    id          INT NOT NULL,
    customer_id INT NOT NULL,
    total_price DECIMAL(10,2),
    purchase_ts TIMESTAMP,
    SHARD KEY (customer_id)
)
SHOW TABLE Command to Display TIER STRATEGY
SHOW CREATE TABLE example.customer_order
SHOW TABLE Command Output
| CREATE TABLE "example"."customer_order"
(
    "id" INTEGER NOT NULL,
    "customer_id" INTEGER (shard_key) NOT NULL,
    "total_price" DECIMAL (18, 4),
    "purchase_ts" TIMESTAMP
)
TIER STRATEGY (
    ( ( VRAM 1, RAM 7, PERSIST 5 ) )
); |

Note

The response to SHOW TABLE is a single-record result set with the DDL statement as the value in the DDL column, shown here with the column separators returned by kisql.

Index Clause

A table can have any number of indexes applied to any of its columns at creation time.

The types of explicit indexes supported are column (attribute) indexes, chunk skip indexes, geospatial indexes, CAGRA indexes, and HNSW indexes.

Index Clause Syntax
<[ATTRIBUTE] | CHUNK [SKIP] | GEOSPATIAL | CAGRA | HNSW> INDEX (<column>)
...
<[ATTRIBUTE] | CHUNK [SKIP] | GEOSPATIAL | CAGRA | HNSW> INDEX (<column>)

For example, to create a table with the following indexes:

  • column index on dept_id
  • chunk skip index on id
  • geospatial index on work_district
  • geospatial index on the pair of office_longitude & office_latitude
  • CAGRA index on profile
Index Example
CREATE TABLE example.employee
(
    id INTEGER NOT NULL,
    dept_id INTEGER NOT NULL,
    manager_id INTEGER,
    first_name VARCHAR(30),
    last_name VARCHAR(30),
    sal DECIMAL(18,4),
    hire_date DATE,
    work_district WKT,
    office_longitude REAL,
    office_latitude REAL,
    profile VECTOR(10),
    PRIMARY KEY (id, dept_id),
    SHARD KEY (dept_id)
)
INDEX (dept_id)
CHUNK SKIP INDEX (id)
GEOSPATIAL INDEX (work_district)
GEOSPATIAL INDEX (office_longitude, office_latitude)
CAGRA INDEX (profile)

Table Property Clause

A subset of table properties can be applied to the table at creation time.

Table Property Clause Syntax
USING TABLE PROPERTIES
(
    <table property> = <value>,
    ...
    <table property> = <value>
)

Available table properties include:

Property | Description
CHUNK SIZE | Size of the blocks of memory holding the data, when loaded; specified as the maximum number of records each block of memory should hold
NO_ERROR_IF_EXISTS | Error suppression option, which causes no error to be returned if a table with the same name already exists; default is FALSE
TTL | The time-to-live (TTL) for the table; if not set, the table will not expire

For example, to create a table with up to 1,000,000 records per chunk and that will expire in 15 minutes:

Table Property Example
CREATE OR REPLACE TABLE example.customer_order
(
    id          INTEGER NOT NULL,
    customer_id INTEGER NOT NULL,
    total_price DECIMAL(10,2),
    purchase_ts TIMESTAMP,
    SHARD KEY (customer_id)
)
USING TABLE PROPERTIES (CHUNK SIZE = 1000000, TTL = 15)

CREATE EXTERNAL TABLE

Creates a new external table, which is a database object whose source data is located in one or more files, either internal or external to the database.

CREATE EXTERNAL TABLE Syntax
CREATE [OR REPLACE] [REPLICATED] [TEMP] [LOGICAL | MATERIALIZED] EXTERNAL TABLE [<schema name>.]<table name>
[<table definition clause>]
<
    REMOTE QUERY '<source data query>'
    |
    FILE PATHS <file paths>
        [FORMAT <[DELIMITED] TEXT [(<delimited text options>)] | AVRO | JSON | PARQUET | SHAPEFILE>]
>
[WITH OPTIONS (<load option name> = '<load option value>'[,...])]
[<partition clause>]
[<tier strategy clause>]
[<index clause>]
[<table property clause>]

Note

For contextualized examples, see Examples. For copy/paste examples, see Loading Data. For an overview of loading data into Kinetica, see Data Loading Concepts.


The source data can be located in either of two locations: a data source that references an external provider or remote database, or KiFS (the Kinetica File System).

A materialized external table (default) that uses a data source can perform a one-time load upon creation and optionally subscribe for updates on an interval, depending on the data source provider:

Provider | Description | One-Time Load | Subscription
Azure | Microsoft blob storage | Yes | Yes
CData | CData Software source-specific JDBC driver; see driver list for the supported drivers | Yes | Yes
GCS | Google Cloud Storage | Yes | Yes
HDFS | Apache Hadoop Distributed File System | Yes | No
JDBC | Java DataBase Connectivity; requires user-supplied driver | Yes | Yes
S3 | Amazon S3 Bucket | Yes | Yes

See Manage Subscription for pausing, resuming, & canceling subscriptions on the external table.

Although an external table cannot use a data source configured for Kafka, a standard table can have Kafka data streamed into it via a LOAD INTO command that references such a data source.

The use of external tables with ring resiliency has additional considerations.

Parameters | Description
OR REPLACE | Any existing table or view with the same name will be dropped before creating this one
REPLICATED | The external table will be distributed within the database as a replicated table
TEMP | The external table will be a memory-only table; which, among other things, means it will not be persisted (if the database is restarted, the external table will be removed), but it will have increased ingest performance
LOGICAL | External data will not be loaded into the database; the data will be retrieved from the source upon servicing each query against the external table. This mode ensures queries on the external table will always return the most current source data, though there will be a performance penalty for reparsing & reloading the data from source files upon each query.
MATERIALIZED | Loads a copy of the external data into the database, refreshed on demand; this is the default external table type
<schema name> | Name of the schema that will contain the created external table; if no schema is specified, the external table will be created in the user's default schema
<table name> | Name of the external table to create; must adhere to the supported naming criteria
<table definition clause> | Optional clause, defining the structure for the external table associated with the source data
REMOTE QUERY

Source data specification clause, where <source data query> is a SQL query selecting the data which will be loaded.

Note

This clause is mutually exclusive with the FILE PATHS clause, and is only applicable to CData & JDBC data sources.

The query should meet the following criteria:

  • Any column expression used is given a column alias.
  • The first column is not a WKT or unlimited length VARCHAR type.
  • The columns and expressions queried should match the intended order, number, & type of the columns in the target table.

Any query resulting in more than 10,000 records will be distributed and loaded in parallel (unless directed otherwise) using the following rule sequence:

  1. If REMOTE_QUERY_NO_SPLIT is TRUE, the query will not be distributed.
  2. If a valid REMOTE_QUERY_PARTITION_COLUMN is specified, the query will be distributed by partitioning on the given column's values
  3. If a valid REMOTE_QUERY_ORDER_BY is specified, the query will be distributed by ordering the data accordingly and then partitioning into sequential blocks from the first record
  4. If a non-null numeric/date/time column exists, the query will be distributed by partitioning on the first such column's values
  5. The query will be distributed by sorting the data on the first column and then partitioning into sequential blocks from the first record

Type inferencing is limited by the available JDBC types. To take advantage of Kinetica-specific types and properties, define the table columns explicitly in the <table definition clause>.

FILE PATHS

Source file specification clause, where <file paths> is a comma-separated list of single-quoted file paths from which data will be loaded; all files specified are presumed to have the same format and data types.

Note

This clause is mutually exclusive with the REMOTE QUERY clause, and is not applicable to CData & JDBC data sources.

The form of a file path is dependent on the source referenced:

  • Data Source: If a data source is specified in the load options, these file paths must resolve to accessible files at that data source location. A "path prefix" can be specified instead, which will cause all files whose path begins with the given prefix to be included.

    For example, a "path prefix" of /data/ge for <file paths> would match all of the following:

    • /data/geo.csv
    • /data/geo/flights.csv
    • /data/geo/2021/airline.csv

    If using an HDFS data source, the "path prefix" must be the name of an HDFS directory.

  • KiFS: The path must resolve to an accessible file path within KiFS. A "path prefix" can be specified instead, which will cause all files whose path begins with the given prefix to be included.

    For example, a "path prefix" of kifs://data/ge would match all of the following files under the KiFS data directory:

    • kifs://data/geo.csv
    • kifs://data/geo/flights.csv
    • kifs://data/geo/2021/airline.csv

FORMAT

Optional indicator of source file type, for file-based data sources; will be inferred from the file extension if not given.

Supported formats include:

Keyword | Description
[DELIMITED] TEXT

Any text-based, delimited field data file (CSV, PSV, TSV, etc.); a comma-delimited list of options can be given to specify the way in which the data file(s) should be parsed, including the delimiter used, whether headers are present, etc. Records spanning multiple lines are not supported.

See Delimited Text Options for the complete list of <delimited text options>.

AVRO | Apache Avro data file
JSON

Either a JSON or GeoJSON data file

See JSON/GeoJSON Limitations for the supported data types.

PARQUET

Apache Parquet data file

See Parquet Limitations for the supported data types.

SHAPEFILE | ArcGIS shapefile

WITH OPTIONS

Optional indicator that a comma-delimited list of connection & global option/value assignments will follow.

See Load Options for the complete list of options.

<partition clause> | Optional clause, defining a partitioning scheme for the external table associated with the source data
<tier strategy clause> | Optional clause, defining the tier strategy for the external table associated with the source data
<index clause> | Optional clause, applying any number of column indexes, chunk skip indexes, geospatial indexes, CAGRA indexes, or HNSW indexes to the external table associated with the source data
<table property clause> | Optional clause, assigning table properties, from a subset of those available, to the external table associated with the source data

Delimited Text Options

The following options can be specified when loading data from delimited text files. When reading from multiple files, options specific to the source file will be applied to each file being read.

Option | Description
COMMENT = '<string>'

Treat lines in the source file(s) that begin with string as comments and skip them.

The default comment marker is #.

DELIMITER = '<char>'

Use char as the source file field delimiter.

The default delimiter is a comma, unless a source file has one of these extensions:

  • .psv - will cause | to be the delimiter
  • .tsv - will cause the tab character to be the delimiter

See Delimited Text Option Characters for allowed characters.

ESCAPE = '<char>'

Use char as the source file data escape character. The escape character preceding any other character, in the source data, will be converted into that other character, except in the following special cases:

Source Data String | Representation when Loaded into the Database
<char>a | ASCII bell
<char>b | ASCII backspace
<char>f | ASCII form feed
<char>n | ASCII line feed
<char>r | ASCII carriage return
<char>t | ASCII horizontal tab
<char>v | ASCII vertical tab

For instance, if the escape character is \, a \t encountered in the data will be converted to a tab character when stored in the database.

The escape character can be used to escape the quoting character, and will be treated as an escape character whether it is within a quoted field value or not.

There is no default escape character.

HEADER DELIMITER = '<char>'

Use char as the source file header field name/property delimiter, when the source file header contains both names and properties. This is largely specific to the Kinetica export to delimited text feature, which will, within each field's header, contain the field name and any associated properties, delimited by the pipe | character.

An example Kinetica header in a CSV file:

id|int|data,category|string|data|char16,name|string|data|char32

The default is the | (pipe) character. See Delimited Text Option Characters for allowed characters.

Note

The DELIMITER character will still be used to separate field name/property sets from each other in the header row

INCLUDES HEADER = <TRUE|FALSE>

Declare that the source file(s) will or will not have a header.

The default is TRUE.

NULL = '<string>'

Treat string as the indicator of a null source field value.

The default is the empty string.

QUOTE = '<char>'

Use char as the source file data quoting character, for enclosing field values. Usually used to wrap field values that contain embedded delimiter characters, though any field may be enclosed in quote characters (for clarity, for instance). The quote character must appear as the first and last character of a field value in order to be interpreted as quoting the value. Within a quoted value, embedded quote characters may be escaped by preceding them with another quote character or the escape character specified by ESCAPE, if given.

The default is the " (double-quote) character. See Delimited Text Option Characters for allowed characters.

Delimited Text Option Characters

For DELIMITER, HEADER DELIMITER, ESCAPE, & QUOTE, any single character can be used, or any one of the following escaped characters:

Escaped Char | Corresponding Source File Character
'' | Single quote
\a | ASCII bell
\b | ASCII backspace
\f | ASCII form feed
\t | ASCII horizontal tab
\v | ASCII vertical tab

For instance, if two single quotes ('') are specified for a QUOTE character, the parser will interpret single quotes in the source file as quoting characters; specifying \t for DELIMITER will cause the parser to interpret ASCII horizontal tab characters in the source file as delimiter characters.
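
For example, a minimal sketch of an external table over a hypothetical pipe-delimited KiFS file (the kifs://data/products.txt path is illustrative), combining several of the delimited text options above:

Delimited Text Options Example
CREATE EXTERNAL TABLE example.ext_product_psv
FILE PATHS 'kifs://data/products.txt'
FORMAT DELIMITED TEXT
(
    DELIMITER = '|',
    INCLUDES HEADER = TRUE,
    NULL = 'N/A'
)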

Load Options

The following options can be specified to modify the way data is loaded (or not loaded) into the target table.

Option | Description
BAD RECORD TABLE

Name of the table containing records that failed to be loaded into the target table. This bad record table will include the following columns:

Column Name | Description
line_number | Number of the line in the input file containing the failed record
char_number | Position of the character within a failed record that is assessed as the beginning of the portion of the record that failed to process
filename | Name of the file that contained the failed record
line_rejected | Text of the record that failed to process
error_msg | Error message associated with the record processing failure

Note

This option is not applicable for an ON ERROR mode of ABORT. In that mode, processing stops at the first error and that error is returned to the user.

BATCH SIZE

Use an ingest batch size of the given number of records.

The default batch size is 50,000.

COLUMN FORMATS

Use the given type-specific formatting for the given column when parsing source data being loaded into that column. This should be a map of column names to format specifications, where each format specification is a map of column type to data format, all formatted as a JSON string.

Supported column types include:

Column Type | Source Data Format Codes
date

Apply the given date format to the given column.

Common date format codes follow. For the complete list, see Date/Time Conversion Codes.

Code | Description
YYYY | 4-digit year
MM | 2-digit month, where January is 01
DD | 2-digit day of the month, where the 1st of each month is 01
time

Apply the given time format to the given column.

Common time format codes follow. For the complete list, see Date/Time Conversion Codes.

Code | Description
HH24 | 24-based hour, where 12:00 AM is 00 and 7:00 PM is 19
MI | 2-digit minute of the hour
SS | 2-digit second of the minute
MS | milliseconds

datetime | Apply the given date/time format to the given column.

For example, to load dates of the format 2010.10.30 into date column d and times of the 24-hour format 18:36:54.789 into time column t:

{
    "d": {"date": "YYYY.MM.DD"},
    "t": {"time": "HH24:MI:SS.MS"}
}

Note

This option is not available for data sources configured for JDBC.

DATA SOURCE | Load data from the given data source. Data source connect privilege is required when loading from a data source.
DEFAULT COLUMN FORMATS

Use the given formats for source data being loaded into target table columns with the corresponding column types. This should be a map of target column type to source format for data being loaded into columns of that type, formatted as a JSON string.

Supported column types and source data formats are the same as those listed in the description of the COLUMN FORMATS option.

For example, to make the default format for loading source data dates like 2010.10.30 and 24-hour times like 18:36:54.789:

{
    "date": "YYYY.MM.DD",
    "time": "HH24:MI:SS.MS",
    "datetime": "YYYY.MM.DD HH24:MI:SS.MS"
}

Note

This option is not available for data sources configured for JDBC.

FIELDS IGNORED BY

Choose a comma-separated list of fields from the source file(s) to ignore, loading only those fields that are not in the identified list in the order they appear in the file. Fields can be identified by either POSITION or NAME. If ignoring by NAME, the specified names must match the source file field names exactly.

  • Identifying by Name:

    FIELDS IGNORED BY NAME(Category, Description)
    
  • Identifying by Position:

    FIELDS IGNORED BY POSITION(3, 4)
    

Note

  • When ignoring source data file fields, the set of fields that are not ignored must align, in type & number in their order in the source file, with the external table columns into which the data will be loaded.
  • Ignoring fields by POSITION is only supported for delimited text files.
FIELDS MAPPED BY

Choose a comma-separated list of fields from the source file(s) to load, in the specified order, identifying fields by either POSITION or NAME. If mapping by NAME, the specified names must match the source file field names exactly.

  • Identifying by Name:

    FIELDS MAPPED BY NAME(ID, Name, Stock)
    
  • Identifying by Position:

    FIELDS MAPPED BY POSITION(1, 2, 5)
    

Note

  • When mapping source data file fields, the set of fields that are identified must align, in type & number in the specified order, with the external table columns into which data will be loaded.
  • Mapping fields by POSITION is only supported for delimited text files.
FLATTEN_COLUMNS

Specify the policy for handling nested columns within JSON data.

The default is FALSE.

Value | Description
TRUE | Break up nested columns into multiple columns.
FALSE | Treat nested columns as JSON columns instead of flattening.
IGNORE_EXISTING_PK

Specify the error suppression policy for inserting duplicate primary key values into a table with a primary key. If the specified table does not have a primary key or the UPDATE_ON_EXISTING_PK option is used, then this option has no effect.

The default is FALSE.

Value | Description
TRUE | Suppress errors when inserted records and existing records' PKs match.
FALSE | Return errors when inserted records and existing records' PKs match.
INGESTION MODE

Whether to do a full ingest of the data or perform a dry run or type inference instead.

The default mode is FULL.

Value | Description
DRY RUN | No data will be inserted, but the file will be read with the applied ON ERROR mode and the number of valid records that would normally be inserted is returned.
FULL | Data is fully ingested according to the active ON ERROR mode.
TYPE INFERENCE | Infer the type of the source data and return, without ingesting any data. The inferred type is returned in the response, as the output of a SHOW TABLE command.
JDBC_FETCH_SIZE

Retrieve this many records at a time from the remote database. Lowering this number will help tables with large record sizes fit into available memory during ingest.

The default is 50,000.

Note

This option is only available for data sources configured for JDBC.

NUM_SPLITS_PER_RANK

The number of remote query partitions to assign each Kinetica worker process. The queries assigned to a worker process will be executed by the tasks allotted to the process.

To decrease memory pressure, increase the number of splits per rank.

The default is 8 splits per rank.

Note

This option is only available for data sources configured for JDBC.

NUM_TASKS_PER_RANK

The number of tasks to use on each Kinetica worker process to process remote queries. The tasks assigned to a worker process will execute any remote query partitions assigned to it.

To decrease memory pressure, decrease the number of tasks per rank.

The default is 8 tasks per rank.

Note

This option is only available for data sources configured for JDBC.

JDBC_SESSION_INIT_STATEMENT

Run the single given statement before the initial load is performed and also before each subsequent reload, if REFRESH ON START or SUBSCRIBE is TRUE.

For example, to set the time zone to UTC before running each load, use:

JDBC_SESSION_INIT_STATEMENT = 'SET TIME ZONE ''UTC'''

Note

This option is only available for data sources configured for JDBC.

ON ERROR

When an error is encountered loading a record, handle it using either of the following modes. The default mode is ABORT.

Mode | Description
SKIP | If an error is encountered parsing a source record, skip the record.
ABORT | If an error is encountered parsing a source record, stop the data load process. Primary key collisions are considered abortable errors in this mode.
POLL_INTERVAL

Interval, in seconds, at which a data source is polled for updates. The number of seconds must be passed as a single-quoted string.

The default interval is 60 seconds. This option is only applicable when SUBSCRIBE is TRUE.

REFRESH ON START

Whether to refresh the external table data upon restart of the database. Only relevant for materialized external tables.

The default is FALSE. This option is ignored if SUBSCRIBE is TRUE.

Value | Description
TRUE | Refresh the external table's data when the database is restarted.
FALSE | Do not refresh the external table's data when the database is restarted.
REMOTE_QUERY_INCREASING_COLUMN

For a JDBC query change data capture loading scheme, the remote query column that will be used to determine whether a record is new and should be loaded or not. This column should have an ever-increasing value and be of an integral or date/timestamp type. Often, this column will be a sequence-based ID or create/modify timestamp.

This option is only applicable when SUBSCRIBE is TRUE.

Note

This option is only available for data sources configured for JDBC.

REMOTE_QUERY_NO_SPLIT

Whether to retrieve the remote data with a single query, rather than distributing the retrieval and issuing queries for blocks of data at a time in parallel.

The default is FALSE.

Note

This option is only available for data sources configured for JDBC

Value | Description
TRUE | Issue the remote data retrieval as a single query.
FALSE | Distribute and parallelize the remote data retrieval in queries for blocks of data at a time.
REMOTE_QUERY_ORDER_BY

Ordering expression to use in partitioning remote data for retrieval. The remote data will be ordered according to this expression and then retrieved in sequential blocks from the first record. This is potentially less performant than using REMOTE_QUERY_PARTITION_COLUMN.

If REMOTE_QUERY_NO_SPLIT is TRUE, a valid REMOTE_QUERY_PARTITION_COLUMN is specified, or the column given is invalid, this option is ignored.

Note

This option is only available for data sources configured for JDBC

REMOTE_QUERY_PARTITION_COLUMN

Column to use to partition remote data for retrieval. The column must be numeric and should be relatively evenly distributed so that queries using values of this column to partition data will retrieve relatively consistently-sized result sets.

If REMOTE_QUERY_NO_SPLIT is TRUE or the column given is invalid, this option is ignored.

Note

This option is only available for data sources configured for JDBC

SUBSCRIBE

Whether to subscribe to the data source specified in the DATA SOURCE option. Only relevant for materialized external tables using data sources configured to allow streaming.

The default is FALSE. If TRUE, the REFRESH ON START option is ignored.

Note

This option is not available for data sources configured for HDFS.

Value | Description
TRUE | Subscribe to the specified streaming data source.
FALSE | Do not subscribe to the specified data source.
TRUNCATE_STRINGS

Specify the string truncation policy for inserting text into VARCHAR columns that are not large enough to hold the entire text value.

The default is FALSE.

Value | Description
TRUE | Truncate any inserted string value at the maximum size for its column.
FALSE | Reject any record with a string value that is too long for its column.
TYPE_INFERENCE_MODE

When making a type inference of the data values in order to define column types for the target table, use one of the following modes.

The default mode is SPEED.

Value | Description
ACCURACY | Scan all available data to arrive at column types that are the narrowest possible that can still hold all the data.
SPEED | Pick the widest possible column types from the minimum data scanned in order to quickly arrive at column types that should fit all data values.
UPDATE_ON_EXISTING_PK

Specify the record collision policy for inserting into a table with a primary key. If the specified table does not have a primary key, then this option has no effect.

The default is FALSE.

Value | Description
TRUE | Update existing records with records being inserted, when PKs match.
FALSE | Discard records being inserted when existing records' PKs match.
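
For example, a minimal sketch combining several of the load options above (the file path and bad record table name are illustrative, and option values follow the generic WITH OPTIONS quoting shown earlier), skipping unparsable records and routing them to a bad record table:

Load Options Example
CREATE EXTERNAL TABLE example.ext_order_load_opts
FILE PATHS 'kifs://data/orders.csv'
WITH OPTIONS
(
    BATCH SIZE = '100000',
    ON ERROR = 'SKIP',
    BAD RECORD TABLE = 'example.orders_bad_records'
)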

Table Definition Clause

The table definition clause allows for an explicit local table structure to be defined, irrespective of the source data type. This specification mirrors that of CREATE TABLE.

Table Definition Clause Syntax
(
    <column name> <column definition>,
    ...
    <column name> <column definition>,
    [PRIMARY KEY (<column list>)],
    [SHARD KEY (<column list>)],
    [FOREIGN KEY
        (<column list>) REFERENCES <foreign table name>(<foreign column list>) [AS <foreign key name>],
        ...
        (<column list>) REFERENCES <foreign table name>(<foreign column list>) [AS <foreign key name>]
    ]
)

See Data Definition (DDL) for column format.
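
For example, a minimal sketch (the column list is illustrative) of an external table that defines its structure explicitly rather than relying on type inference, reusing the example.product_ds data source from the examples below:

Table Definition Clause Example
CREATE EXTERNAL TABLE example.ext_product_defined
(
    id    INTEGER NOT NULL,
    name  VARCHAR(64),
    stock INTEGER,
    PRIMARY KEY (id)
)
FILE PATHS 'products.csv'
WITH OPTIONS (DATA SOURCE = 'example.product_ds')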

Partition Clause

An external table can be further segmented into partitions.

The supported Partition Clause syntax & features are the same as those in the CREATE TABLE Partition Clause.

Tier Strategy Clause

An external table can have a tier strategy specified at creation time. If not assigned a tier strategy upon creation, a default tier strategy will be assigned.

The supported Tier Strategy Clause syntax & features are the same as those in the CREATE TABLE Tier Strategy Clause.

Index Clause

An external table can have any number of indexes applied to any of its columns at creation time.

The supported Index Clause syntax & features are the same as those in the CREATE TABLE Index Clause.

Table Property Clause

A subset of table properties can be applied to the external table associated with the external data at creation time.

The supported Table Property Clause syntax & features are the same as those in the CREATE TABLE Table Property Clause.
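
For example, a minimal sketch (the file path is illustrative) applying a chunk size to an external table:

External Table Property Example
CREATE EXTERNAL TABLE example.ext_product_chunked
FILE PATHS 'kifs://data/products.csv'
USING TABLE PROPERTIES (CHUNK SIZE = 1000000)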

Examples

To create a logical external table with the following features, using a query as the source of data:

  • External table named ext_employee_dept2 in the example schema
  • Source is department 2 employees from the example.employee table, queried through the example.jdbc_ds data source
  • Data is re-queried from the source each time the external table is queried
CREATE LOGICAL EXTERNAL TABLE
CREATE LOGICAL EXTERNAL TABLE example.ext_employee_dept2
REMOTE QUERY 'SELECT * FROM example.employee WHERE dept_id = 2'
WITH OPTIONS (DATA SOURCE = 'example.jdbc_ds')

To create an external table with the following features, using KiFS as the source of data:

  • External table named ext_product in the example schema
  • External source is a KiFS file named products.csv located in the KiFS directory data
  • Data is not refreshed on database startup
CREATE EXTERNAL TABLE with Default Options
CREATE EXTERNAL TABLE example.ext_product
FILE PATHS 'kifs://data/products.csv'

To create an external table with the following features, using KiFS as the source of data:

  • External table named ext_employee in the example schema
  • External source is a Parquet file named employee.parquet located in the KiFS directory data
  • External table has a primary key on the id column
  • Data is not refreshed on database startup
CREATE EXTERNAL TABLE with Parquet File Example
CREATE EXTERNAL TABLE example.ext_employee
FILE PATHS 'kifs://data/employee.parquet'
WITH OPTIONS (PRIMARY KEY = (id))

To create an external table with the following features, using KiFS as the source of data:

  • External table named ext_employee in the example schema
  • External source is a file named employee.csv located in the KiFS directory data
  • Apply a date format to the hire_date column
CREATE EXTERNAL TABLE with Date Format Example
CREATE EXTERNAL TABLE example.ext_employee
FILE PATHS 'kifs://data/employee.csv'
WITH OPTIONS
(
    COLUMN FORMATS = '
    {
        "hire_date": {"date": "YYYY-MM-DD"}
    }'
)

To create an external table with the following features, using a data source as the source of data:

  • External table named ext_product in the example schema
  • External source is a data source named product_ds in the example schema
  • Source is a file named products.csv
  • Data is refreshed on database startup
CREATE EXTERNAL TABLE with Data Source Example
CREATE EXTERNAL TABLE example.ext_product
FILE PATHS 'products.csv'
WITH OPTIONS
(
    DATA SOURCE = 'example.product_ds',
    REFRESH ON START = TRUE
)

To create an external table with the following features, subscribing to a data source:

  • External table named ext_product in the example schema
  • External source is a data source named product_ds in the example schema
  • Source is a file named products.csv
  • Data updates are streamed continuously
CREATE EXTERNAL TABLE with Data Source Subscription Example
CREATE EXTERNAL TABLE example.ext_product
FILE PATHS 'products.csv'
WITH OPTIONS
(
    DATA SOURCE = 'example.product_ds',
    SUBSCRIBE = TRUE,
    POLL_INTERVAL = '60'
)

To create an external table with the following features, using a remote query through a JDBC data source as the source of data:

  • External table named ext_employee_dept2 in the example schema
  • External source is a data source named jdbc_ds in the example schema
  • Source data is a remote query of employees in department 2 from that database's example.ext_employee table
  • Data is refreshed on database startup
CREATE EXTERNAL TABLE with JDBC Data Source Remote Query Example
1
2
3
4
5
6
7
CREATE EXTERNAL TABLE example.ext_employee_dept2
REMOTE QUERY 'SELECT * FROM example.ext_employee WHERE dept_id = 2'
WITH OPTIONS
(
    DATA SOURCE = 'example.jdbc_ds',
    REFRESH ON START = TRUE
)

Data Sources

File-Based

To create an external table that loads a CSV file, products.csv, from the data source example.product_ds, into a table named example.ext_product:

CREATE EXTERNAL TABLE Data Source File Example
1
2
3
CREATE EXTERNAL TABLE example.ext_product
FILE PATHS 'products.csv'
WITH OPTIONS (DATA SOURCE = 'example.product_ds')
Query-Based

To create an external table that is the result of a remote query of employees in department 2 from the JDBC data source example.jdbc_ds, into a local table named example.ext_employee_dept2:

CREATE EXTERNAL TABLE Data Source Query Example
1
2
3
CREATE EXTERNAL TABLE example.ext_employee_dept2
REMOTE QUERY 'SELECT * FROM example.employee WHERE dept_id = 2'
WITH OPTIONS (DATA SOURCE = 'example.jdbc_ds')

Change Data Capture

File-Based

To create an external table loaded by a set of order data in a change data capture scheme with the following conditions:

  • data pulled through a data source, example.order_ds
  • data files contained within an orders directory
  • initially, all files in the directory will be loaded; subsequently, only those files that have been updated since the last check will be reloaded
  • files will be polled for updates every 60 seconds
  • target table named example.ext_order
CREATE EXTERNAL TABLE File Change Data Capture Example
1
2
3
CREATE EXTERNAL TABLE example.ext_order
FILE PATHS 'orders/'
WITH OPTIONS (DATA SOURCE = 'example.order_ds', SUBSCRIBE = TRUE)
Query-Based

To create an external table loaded from a remote query of orders in a change data capture scheme with the following conditions:

  • data pulled through a data source, example.jdbc_ds
  • data contained within an example.orders table, where only orders for the product with ID 42 will be loaded into the target table
  • initially, all orders will be loaded; subsequently, only those orders with an order_id column value higher than the highest one on the previous poll cycle will be loaded
  • remote table will be polled for updates every 60 seconds
  • target table named example.ext_order_product42
CREATE EXTERNAL TABLE Query Change Data Capture Example
 1
 2
 3
 4
 5
 6
 7
 8
 9
10
-- Load new orders for product 42 continuously into a table
--   order_id is an ever-increasing sequence allotted to each new order
CREATE EXTERNAL TABLE example.ext_order_product42
REMOTE QUERY 'SELECT * FROM example.orders WHERE product_id = 42'
WITH OPTIONS
(
    DATA SOURCE = 'example.jdbc_ds',
    SUBSCRIBE = TRUE,
    REMOTE_QUERY_INCREASING_COLUMN = 'order_id'
)

CREATE TABLE ... AS

Creates a new table from the given query in the specified schema.

CREATE TABLE ... AS Syntax
1
2
3
4
CREATE [OR REPLACE] [REPLICATED] [TEMP] TABLE [<schema name>.]<table name> AS
(
    <select statement>
)

Any column aliases used must adhere to the supported naming criteria.

While primary keys & foreign keys are not transferred to the new table, shard keys will be, if the column(s) composing them are part of the SELECT list.

Parameters/KeysDescription
OR REPLACEAny existing table or view with the same name will be dropped before creating this one
REPLICATEDThe table will be distributed within the database as a replicated table
TEMPThe table will be a memory-only table, which, among other things, means it will not be persisted (if the database is restarted, the table will be removed), but it will have increased ingest performance
<schema name>Name of the schema that will contain the created table; if no schema is specified, the table will be created in the user's default schema
<table name>Name of the table to create; must adhere to supported naming criteria
<select statement>The query that will define both the initial structure and content of the created table

The following can be applied to <select statement> to affect the resulting table:

KeywordTypeDescription
KI_HINT_GROUP_BY_PKhintCreates a primary key on the columns in the GROUP BY clause if the outermost SELECT statement contains a GROUP BY
KI_HINT_INDEX(column list)hintIndexes each of the columns specified in the column list
KI_SHARD_KEY(column list)pseudo-functionShards the result table with a shard key composed of all columns in the specified column list

For example, to create a replicated temporary table that is a copy of an existing table, failing if a table with the same name as the target table already exists:

CREATE TABLE ... AS (Replicated/Temporary) Example
1
2
3
4
5
CREATE REPLICATED TEMP TABLE example.new_temporary_table AS
(
    SELECT *
    FROM example.old_table
)

To create a permanent table with columns a, b, c, & d, a new shard key on columns a & b, and an index on column d, replacing a table with the same name as the target table, if it exists:

CREATE TABLE ... AS (Reshard) Example
1
2
3
4
5
CREATE OR REPLACE TABLE example.new_sharded_table AS
(
    SELECT a, b, c, d, KI_SHARD_KEY(a, b) /* KI_HINT_INDEX(d) */
    FROM example.old_table
)

To copy a table with columns a, b, c, & d, preserving the primary key on a, b, & c, and the foreign key from d; a new table must be created to match the schema of the old one and then records can be copied from the old one to the new one:

CREATE TABLE (Preserve Primary Key) Example, DDL Step
1
2
3
4
5
6
7
8
9
CREATE TABLE example.new_pk_copy_table
(
    a INTEGER NOT NULL,
    b INTEGER NOT NULL,
    c VARCHAR(32) NOT NULL,
    d TIMESTAMP,
    PRIMARY KEY (a, b, c),
    FOREIGN KEY (d) REFERENCES example.old_table_lookup(d)
)
CREATE TABLE (Preserve Primary Key) Example, DML Step
1
2
3
INSERT INTO example.new_pk_copy_table
SELECT *
FROM example.old_table

Note

This create/insert process is necessary, as neither primary keys nor foreign keys can be preserved through hints.

See Limitations for other restrictions.

ALTER TABLE

Alters the configuration of a table.

Any of the following facets of a table can be altered:


Rename Table

A table can be renamed, following the supported naming criteria.

Rename Table Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
RENAME TO <new table name>

All dependent views, materialized views, streams, and SQL procedures will be dropped.

Note

Any tables with foreign keys that target this table must be dropped before it can be renamed.
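
For illustration, using the example.employee table featured in this section's other examples (the new name below is arbitrary), a rename following the syntax above might look like:

Rename Table Example
ALTER TABLE example.employee
RENAME TO employee_archive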

Move Table

A table can be moved from one schema to another.

Move Table Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
< MOVE TO | SET SCHEMA > <other schema name>

All dependent views, materialized views, streams, and SQL procedures will be dropped.

Note

Any tables with foreign keys that target this table must be dropped before it can be moved.

For example, to move the sales_2017 table from the example_olap schema to the example_archive schema:

Move Table Example
1
2
ALTER TABLE example_olap.sales_2017
MOVE TO example_archive

Set Access Mode

A table can have its global accessibility modified for all users in the system, independently from and further restricting any role-based access controls in place. Note that changing the access mode cannot widen access for users not already granted access; it can only narrow access for those who already have access. This setting will also trump administrative access to a table.

Set Table Access Mode Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
SET ACCESS MODE < NO_ACCESS | READ_ONLY | WRITE_ONLY | READ_WRITE >
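
As a minimal sketch of the syntax above (the table and access mode chosen here are illustrative), the following would make the employee table read-only for all users:

Set Table Access Mode Example
ALTER TABLE example.employee
SET ACCESS MODE READ_ONLY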

Set TTL

A table's time-to-live (TTL) can be altered.

Set Table TTL Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
SET TTL <new ttl>

For example, to set a TTL of 7 minutes on a table:

Set Table TTL Example
1
2
ALTER TABLE example.employee
SET TTL 7

To set a table to never expire by TTL timeout:

Set Table No Expiration Example
1
2
ALTER TABLE example.employee
SET TTL -1

Add Column

A column can be added, specifying a column definition.

Add Table Column Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
ADD <column name> <column definition> [DEFAULT <string/numeric constant | column name>]

A new column can have its values initially populated through the use of the DEFAULT keyword. These values can either be a string/numeric constant or the name of an existing column in the table from which values can be copied into the new column. This default value is only in effect for the column creation; the new column will have no default value after that.

Examples

To add, to the employee table, a salary column that is a non-nullable, 10-digit number field containing 2 decimal places with a default value of 0:

Add Table Column (Numeric) Example
1
2
ALTER TABLE example.employee
ADD salary NUMERIC(10, 2) NOT NULL DEFAULT 0

To add, to the employee table, a category column that is a nullable, dictionary-encoded, 32-character text field:

Add Table Column (CharN/Dictionary-Encoded) Example
1
2
ALTER TABLE example.employee
ADD category VARCHAR(32, DICT)

To add, to the employee table, a bio column that is a nullable, text-searchable, unrestricted-width text field:

Add Table Column (String/Text-Searchable) Example
1
2
ALTER TABLE example.employee
ADD bio VARCHAR(TEXT_SEARCH)

Rename Column

An existing column can be renamed.

Rename Table Column Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
RENAME COLUMN <column current name> TO <column new name>

All dependent views, materialized views, streams, and SQL procedures will be dropped.

Note

Any tables with foreign keys that target the column being renamed must be dropped before it can be renamed.

Rename Table Column Example
1
2
ALTER TABLE example.employee
RENAME COLUMN bio TO biography

Modify Column

A column can have its column definition modified, affecting column type, column size, column properties, and nullability.

Modify Table Column Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
MODIFY [COLUMN] <column name> <column definition>
Modify Table Column Alternate Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
ALTER COLUMN <column name> <column definition>

All dependent views, materialized views, streams, and SQL procedures will be dropped.

Note

Any tables with foreign keys that target the column being modified must be dropped before it can be modified.

If a column is modified to be non-nullable, it will be populated with default values--empty string for string fields and 0 for numeric fields.

Examples

To change, in the employee table, the first_name column to one that is a non-nullable, dictionary-encoded, 50-character text field:

Modify Table Column Example
1
2
ALTER TABLE example.employee
ALTER COLUMN first_name VARCHAR(50, DICT) NOT NULL

Drop Column

An existing column can be removed from a table.

Drop Table Column Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
DROP COLUMN <column name>

All dependent views, materialized views, streams, and SQL procedures will be dropped.

Note

Any tables with foreign keys that target the column being dropped must be dropped before it can be dropped.
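
For instance, to drop the biography column added and renamed in the earlier examples (an illustrative choice), a statement following the syntax above would be:

Drop Table Column Example
ALTER TABLE example.employee
DROP COLUMN biography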

Add Column Index

A column (attribute) index can be added to a table column in order to improve the performance of operations whose expressions contain relational operators against the column. See Limitations for restrictions.

Add Table Column Index Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
ADD INDEX (<column name>)

For example, to index the employee table's department ID column:

Add Table Column Index Example
1
2
ALTER TABLE example.employee
ADD INDEX (dept_id)

Drop Column Index

An existing column (attribute) index can be removed from a table.

Drop Table Column Index Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
DROP INDEX (<column name>)

For example, to drop the index on the employee table's department ID column:

Drop Table Column Index Example
1
2
ALTER TABLE example.employee
DROP INDEX (dept_id)

Add Chunk Skip Index

A chunk skip index can be added to a table column in order to improve the performance of operations containing equality-based filters against the column. See Limitations for restrictions.

Add Table Chunk Skip Index Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
ADD CHUNK [SKIP] INDEX (<column name>)

For example, to index the employee table's department ID column:

Add Table Chunk Skip Index Example
1
2
ALTER TABLE example.employee
ADD CHUNK SKIP INDEX (dept_id)

Drop Chunk Skip Index

An existing chunk skip index can be removed from a table.

Drop Table Chunk Skip Index Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
DROP CHUNK [SKIP] INDEX (<column name>)

For example, to drop the chunk skip index on the employee table's department ID column:

Drop Table Chunk Skip Index Example
1
2
ALTER TABLE example.employee
DROP CHUNK SKIP INDEX (dept_id)

Add Geospatial Index

A geospatial index can be added to one or more table columns to improve the performance of geospatial functions applied to them.

Add Table Geospatial Index Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
ADD GEOSPATIAL INDEX (<column name>[,...])

For example, to index the employee table's work district WKT column:

Add Table WKT Geospatial Index Example
1
2
ALTER TABLE example.employee
ADD GEOSPATIAL INDEX (work_district)

To index the employee table's office location coordinate pair columns:

Add Table Coordinate Pair Geospatial Index Example
1
2
ALTER TABLE example.employee
ADD GEOSPATIAL INDEX (office_longitude, office_latitude)

Drop Geospatial Index

An existing geospatial index can be removed from a table.

Drop Table Geospatial Index Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
DROP GEOSPATIAL INDEX (<column name>[,...])

For example, to drop the geospatial index on the employee table's work district WKT column:

Drop Table WKT Geospatial Index Example
1
2
ALTER TABLE example.employee
DROP GEOSPATIAL INDEX (work_district)

To drop the geospatial index on the employee table's office location coordinate pair columns:

Drop Table Coordinate Pair Geospatial Index Example
1
2
ALTER TABLE example.employee
DROP GEOSPATIAL INDEX (office_longitude, office_latitude)

Add CAGRA Index

A CAGRA index can be added to a table column in order to improve the performance of vector searches applied to the column.

Add Table CAGRA Index Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
ADD CAGRA INDEX (<column name>)

For example, to add a CAGRA index on the employee table's profile column:

Add Table CAGRA Index Example
1
2
ALTER TABLE example.employee
ADD CAGRA INDEX (profile)

Refresh CAGRA Index

An existing CAGRA index can be refreshed.

Refresh Table CAGRA Index Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
REFRESH CAGRA INDEX ON (<column name>)

For example, to refresh the CAGRA index on the employee table's profile column:

Refresh Table CAGRA Index Example
1
2
ALTER TABLE example.employee
REFRESH CAGRA INDEX ON (profile)

Drop CAGRA Index

An existing CAGRA index can be removed from a table.

Drop Table CAGRA Index Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
DROP CAGRA INDEX (<column name>)

For example, to drop the CAGRA index on the employee table's profile column:

Drop Table CAGRA Index Example
1
2
ALTER TABLE example.employee
DROP CAGRA INDEX (profile)

Add HNSW Index

An HNSW index can be added to a table column in order to improve the performance of vector searches applied to the column.

Add Table HNSW Index Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
ADD HNSW INDEX (<column name>)

For example, to add an HNSW index on the employee table's profile column:

Add Table HNSW Index Example
1
2
ALTER TABLE example.employee
ADD HNSW INDEX (profile)

Drop HNSW Index

An existing HNSW index can be removed from a table.

Drop Table HNSW Index Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
DROP HNSW INDEX (<column name>)

For example, to drop the HNSW index on the employee table's profile column:

Drop Table HNSW Index Example
1
2
ALTER TABLE example.employee
DROP HNSW INDEX (profile)

Add Foreign Key

A foreign key can be added to any column or set of columns whose types are applicable to primary keys, in order to improve the performance of join operations between the table being altered and the table referenced in the foreign key.

Add Table Foreign Key Syntax
1
2
3
ALTER TABLE [<schema name>.]<table name>
ADD FOREIGN KEY (<column name>,...)
    REFERENCES [<foreign table schema name>.]<foreign table name>(<foreign column name>,...) [AS <foreign key name>]

For example, to add a foreign key on the employee table's department ID column, linking it to the department table's department ID column:

Add Table Foreign Key Example
1
2
ALTER TABLE example.employee
ADD FOREIGN KEY (dept_id) REFERENCES example.department(id) AS fk_emp_dept

Drop Foreign Key

An existing foreign key can be removed from a table, either by the name (alias) given to it during creation or by its definition:

Drop Table Foreign Key by Name Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
DROP FOREIGN KEY <foreign key name>
Drop Table Foreign Key by Definition Syntax
1
2
3
ALTER TABLE [<schema name>.]<table name>
DROP FOREIGN KEY (<column name>,...)
    REFERENCES [<foreign table schema name>.]<foreign table name>(<foreign column name>,...)

For example, to drop the foreign key on the employee table's department ID column:

Drop Table Foreign Key by Name Example
1
2
ALTER TABLE example.employee
DROP FOREIGN KEY fk_emp_dept
Drop Table Foreign Key by Definition Example
1
2
ALTER TABLE example.employee
DROP FOREIGN KEY (dept_id) REFERENCES example.department(id)

Add Partition

A partition can be added to a range-partitioned or list-partitioned table.

Warning

Defining (adding) partitions after data has been loaded will result in a performance penalty as the database moves existing records targeted for the new partition from the default partition into it.

Range Partition

The new partition can be given a minimum bound (inclusive) and a maximum bound (exclusive). If the new partition would come before an existing partition, omitting the maximum bound would cause the new partition to take on the nearest following existing partition's minimum bound as its maximum bound. If the new partition would come after an existing partition, omitting the minimum bound would cause the new partition to take on the nearest preceding partition's maximum bound as its minimum bound. If no partitions are present in the table, the new partition will have to declare both a minimum and maximum bound.

Add Table Range Partition Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
ADD PARTITION <partition name> [ MIN ( <least value> ) ] [ MAX ( <greatest value> ) ]

For example, to add a partition to the customer_order_range_partition_by_year table, containing all records with a partition key less than 2021 and greater than or equal to the maximum bound of the nearest preceding partition:

Add Table Range Partition Example
1
2
ALTER TABLE example.customer_order_range_partition_by_year
ADD PARTITION order_2020 MAX(2021)

List Partition

The new partition can be given a list of values to match against the partition key values of incoming records.

Add Table List Partition Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
ADD PARTITION <partition name> VALUES ( <value lists> )

For example, to add a partition to the customer_order_manual_list_partition_by_year table, containing all records from 2020:

Add Table List Partition (Year) Example
1
2
ALTER TABLE example.customer_order_manual_list_partition_by_year
ADD PARTITION order_2020 VALUES (2020)

For example, to add a partition to the customer_order_manual_list_partition_by_year_and_month table, containing all records from February 2020 & April 2020:

Add Table List Partition (Month) Example
1
2
ALTER TABLE example.customer_order_manual_list_partition_by_year_and_month
ADD PARTITION order_2020_0204 VALUES ((2020, 2), (2020, 4))

Remove Partition

An existing partition can be removed from a range-partitioned or list-partitioned table, sending all data contained within that partition back to the default partition.

Remove Table Partition Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
REMOVE PARTITION <partition name>

For example, to remove a partition named order_2017 from the customer_order_range_partition_by_year table:

Remove Table Partition Example
1
2
ALTER TABLE example.customer_order_range_partition_by_year
REMOVE PARTITION order_2017

Delete Partition

An existing partition can be dropped from a range-partitioned or list-partitioned table, deleting all data contained within it.

Delete Table Partition Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
DELETE PARTITION <partition name>

For example, to drop a partition named order_2014_2016 from the customer_order_range_partition_by_year table, deleting all data within that partition:

Delete Table Partition Example
1
2
ALTER TABLE example.customer_order_range_partition_by_year
DELETE PARTITION order_2014_2016

Set Tier Strategy

A table's eviction priorities can be adjusted by setting its tier strategy.

Set Table Tier Strategy Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
SET TIER STRATEGY (<tier strategy>)

For example, to set the customer_order table's tier strategy to one with a below-average eviction priority in the RAM Tier:

Set Table Tier Strategy Example
1
2
3
4
5
ALTER TABLE example.customer_order
SET TIER STRATEGY
(
    ( ( VRAM 1, RAM 3, PERSIST 5 ) )
)

The tier strategy can also be reset to the system default strategy.

Reset Table Tier Strategy Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
RESET TIER STRATEGY

For example, to reset the customer_order table's tier strategy:

Reset Table Tier Strategy Example
1
2
ALTER TABLE example.customer_order
RESET TIER STRATEGY

Manage Subscription

Any table that is subscribed to a streaming external data source can have that subscription paused, resumed, or cancelled.

Manage External Table Subscription Syntax
1
2
ALTER TABLE [<schema name>.]<table name>
<PAUSE | RESUME | CANCEL> SUBSCRIPTION <data source name>

Note

Once unsubscribed, there is no way to re-subscribe the table to the data source via ALTER TABLE. An external table will need to be re-created, while a table whose subscription was initiated through the LOAD INTO command can have that command re-executed.

For example, to manage a subscription on the ext_product external table through the product_ds data source:

Pause Subscription
1
2
ALTER TABLE example.ext_product
PAUSE SUBSCRIPTION example.product_ds
Resume Subscription
1
2
ALTER TABLE example.ext_product
RESUME SUBSCRIPTION example.product_ds
Cancel Subscription
1
2
ALTER TABLE example.ext_product
CANCEL SUBSCRIPTION example.product_ds

REFRESH EXTERNAL TABLE

Refreshes the data within a materialized external table that is not subscription-based.

REFRESH EXTERNAL TABLE Syntax
1
REFRESH [EXTERNAL] TABLE [<schema name>.]<table name>

Important

Data source connect privilege is required to refresh an external table that uses a data source.
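
For example, assuming the non-subscription ext_product external table created earlier in this section, a refresh could be requested with:

REFRESH EXTERNAL TABLE Example
REFRESH EXTERNAL TABLE example.ext_product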

TRUNCATE TABLE

Deletes all the records from a table.

TRUNCATE TABLE Syntax
1
TRUNCATE TABLE [<schema name>.]<table name>
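
For instance, to delete all records from the employee table used throughout this section (illustrative only; the table itself remains in place):

TRUNCATE TABLE Example
TRUNCATE TABLE example.employee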

DROP TABLE

Removes an existing table. All dependent views, materialized views, streams, and SQL procedures will be dropped.

DROP TABLE Syntax
1
DROP TABLE [IF EXISTS] [<schema name>.]<table name>

Note

Any tables with foreign keys that target a given table must be dropped before it can be dropped.

ParametersDescription
IF EXISTSOptional error-suppression clause; if specified, no error will be returned if the given table does not exist
<schema name>Name of the schema containing the table to remove
<table name>Name of the table to remove
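
As an illustration, the following would remove the employee table if it exists, returning no error if it does not:

DROP TABLE Example
DROP TABLE IF EXISTS example.employee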

SHOW TABLE

Outputs the DDL statement required to reconstruct the given table.

SHOW TABLE Syntax
1
SHOW [[CREATE] TABLE] [<schema name>.]<table name>

Note

The response to SHOW TABLE is a single-record result set with the DDL statement as the value in the DDL column, shown below with the column separators returned by kisql.

ParametersDescription
CREATEOptional keyword for clarity
TABLE

Optional clause to avoid ambiguity:

  • if given, and a view with the given name exists instead, the command will return an error
  • if omitted, and a view with the given name exists instead, the command will be interpreted as a SHOW VIEW statement
<schema name>Name of the schema containing the table to show
<table name>Name of the table whose DDL will be output

For example, to output the DDL for the example table created in the CREATE TABLE section:

SHOW TABLE Example
1
SHOW CREATE TABLE example.various_types
SHOW TABLE Output
 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
| CREATE TABLE "example"."various_types"
(
    "i" INTEGER (primary_key) NOT NULL,
    "bi" BIGINT (primary_key, shard_key) NOT NULL,
    "b" BOOLEAN,
    "ub" UNSIGNED BIGINT,
    "r" REAL,
    "d" DOUBLE,
    "s" VARCHAR (text_search),
    "c" VARCHAR (32, dict),
    "p" VARCHAR (256, text_search),
    "ip" IPV4,
    "ui" UUID DEFAULT NEW_UUID(),
    "ts" TIMESTAMP,
    "td" DATE,
    "tt" TIME,
    "dt" DATETIME DEFAULT NOW(),
    "dc1" BIGINT,
    "dc2" DECIMAL (18, 4),
    "dc3" REAL,
    "dc4" DOUBLE,
    "n" DECIMAL (18, 4),
    "byt" BLOB,
    "w" GEOMETRY,
    "j" JSON,
    "v" VECTOR (10),
    "ai" INTEGER[3],
    FOREIGN KEY ("bi") REFERENCES "example"."lookup" ("id") AS "fk"
)
TIER STRATEGY (
    ( ( VRAM 1, RAM 5, DISK0 5, PERSIST 5 ) )
)
ATTRIBUTE INDEX ("ip")
ATTRIBUTE INDEX ("ts"); |

DESCRIBE TABLE

Lists the columns and column types & properties for a given table.

DESCRIBE TABLE Syntax
1
DESC[RIBE] [TABLE] [<schema name>.]<table name>
ParametersDescription
TABLE

Optional clause to avoid ambiguity:

  • if given, and a view with the given name exists instead, the command will return an error
  • if omitted, and a view with the given name exists instead, the command will be interpreted as a DESCRIBE VIEW statement
<schema name>Name of the schema containing the table to describe
<table name>Name of the table whose configuration will be output

To describe the example table created in the CREATE TABLE section:

DESCRIBE TABLE Example
1
DESC example.various_types
DESCRIBE TABLE Output
 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
+-----------+--------+------------+-----------------------------------+
| Col_num   | Name   | Null?      | Type                              |
+-----------+--------+------------+-----------------------------------+
| 0         | i      | NOT NULL   | INTEGER (primary_key)             |
| 1         | bi     | NOT NULL   | BIGINT (primary_key, shard_key)   |
| 2         | b      |            | BOOLEAN                           |
| 3         | ub     |            | UNSIGNED BIGINT                   |
| 4         | r      |            | REAL                              |
| 5         | d      |            | DOUBLE                            |
| 6         | s      |            | VARCHAR (text_search)             |
| 7         | c      |            | VARCHAR (32, dict)                |
| 8         | p      |            | VARCHAR (256, text_search)        |
| 9         | ip     |            | IPV4                              |
| 10        | ui     |            | UUID (init_with_uuid)             |
| 11        | ts     |            | TIMESTAMP                         |
| 12        | td     |            | DATE                              |
| 13        | tt     |            | TIME                              |
| 14        | dt     |            | DATETIME (init_with_now)          |
| 15        | dc1    |            | BIGINT                            |
| 16        | dc2    |            | DECIMAL (18, 4)                   |
| 17        | dc3    |            | REAL                              |
| 18        | dc4    |            | DOUBLE                            |
| 19        | n      |            | DECIMAL (18, 4)                   |
| 20        | byt    |            | BLOB                              |
| 21        | w      |            | GEOMETRY                          |
| 22        | j      |            | JSON                              |
| 23        | v      |            | VECTOR (10)                       |
| 24        | ai     |            | INTEGER[3]                        |
+-----------+--------+------------+-----------------------------------+

CREATE VIEW

Creates a new virtual table from the given query.

CREATE VIEW Syntax
1
2
CREATE [OR REPLACE] VIEW [<schema name>.]<view name> AS
<select statement>

When any of the source tables of a view is altered or dropped, the view will also be dropped.

Caution!

A CREATE OR REPLACE issues an implicit drop, so replacing an input table will have the same effect on the view as dropping it.

ParametersDescription
OR REPLACEAny existing table/view with the same name will be dropped before creating this view
<schema name>Name of the schema that will contain the created view; if no schema is specified, the view will be created in the user's default schema
<view name>Name of the view to create; must adhere to the supported naming criteria
<select statement>The query that will define both the structure and content of the created view

For example, to create a view that is a copy of an existing table, failing if a table or view with the same name as the target view already exists:

CREATE VIEW Example
1
2
3
4
5
CREATE VIEW example.view_of_table AS
(
    SELECT *
    FROM example.table_to_view
)

CREATE MATERIALIZED VIEW

Specifying MATERIALIZED in a CREATE VIEW statement will make the view a materialized view.

CREATE MATERIALIZED VIEW Syntax
 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
14
15
16
CREATE [OR REPLACE] [TEMP] MATERIALIZED VIEW [<schema name>.]<view name>
[
    REFRESH
    <
        OFF |
        ON CHANGE |
        ON QUERY |
        EVERY <number> <SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]>
            [STARTING AT '<YYYY-MM-DD [HH:MM[:SS]]>']
            [STOP AFTER < '<YYYY-MM-DD [HH:MM[:SS]]>' | <number> <SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]> >]
    >
]
AS
<select statement>
[WITH OPTIONS (<materialized view property name> = '<materialized view property value>'[,...])]
[<table property clause>]

The intermediary results of materialized views are cached to improve the performance of queries against them. This means that, unlike typical views, materialized views are not lightweight database entities, but rather consume memory and processing time proportional to the size of the source data and complexity of the query.

When any of the source tables of a materialized view is altered or dropped, the materialized view will also be dropped.

Caution!

A CREATE OR REPLACE issues an implicit drop, so replacing an input table will have the same effect on the materialized view as dropping it.

While primary keys & foreign keys are not transferred to the new materialized view, shard keys will be, if the column(s) composing them are part of the SELECT list. A new shard key can be specified for the materialized view by using the KI_SHARD_KEY(<column list>) pseudo-function in the SELECT list.

ParametersDescription
OR REPLACEAny existing table/view with the same name will be dropped before creating this materialized view
TEMPThe materialized view will be a memory-only table, which, among other things, means it will not be persisted (if the database is restarted, the materialized view will be removed), but it will have increased ingest performance
<schema name>Name of the schema that will contain the created materialized view; if no schema is specified, the materialized view will be created in the user's default schema
<view name>Name of the materialized view to create; must adhere to the supported naming criteria
REFRESH

Specifies the data refresh scheme for the materialized view. The following schemes are available:

ConstantDescription
OFF(the default) Will prevent the materialized view from being automatically refreshed, but will still allow manual refreshes of the data to be requested
ON CHANGEWill cause the materialized view to be updated any time a record is added, modified, or deleted from the subtending tables in its query
ON QUERYWill cause the materialized view to be updated any time it is queried
EVERY

Allows specification of an interval in seconds, minutes, hours, or days, at which the materialized view should be refreshed. By default, the first refresh interval will be one interval's worth of time from the point at which the materialized view creation was requested. This can be modified with the following options:

  • STARTING AT: specify a date or timestamp at which refresh cycles should begin
  • STOP AFTER: specify a date, timestamp, or time interval after which refresh cycles should end
<select statement>The query that will define both the structure and content of the created materialized view
WITH OPTIONS

Optional indicator that a comma-delimited list of option/value assignments will follow. The following options are available:

OptionDescription
EXECUTE AS

Executes materialized view refreshes as the given user with that user's privileges, when EVERY ... is specified as the REFRESH method.

Note

If this user doesn't exist at the time of a refresh, the refresh will be executed as the creating user, and, failing that, the system administration user.

<table property clause>Optional clause, assigning table properties, from a subset of those available, to the materialized view

Table Property Clause

A subset of table properties can be applied to the materialized view at creation time.

The supported Table Property Clause syntax & features are the same as those in the CREATE TABLE Table Property Clause.

Changes-Only Views via Delta Tables

A materialized view can be configured to only show the changes in the query result since its last refresh. This applies only to inserts; updates and deletes will not be reflected in the view's data.

Using a materialized view to only show deltas in the result data can be achieved using the DELTA_TABLE scoped hint. This hint should be placed immediately after the table name (and before any table alias) of each source table in the query that should be treated as a delta table--a table which will contribute to the result set only those records added since the last time the materialized view referencing it was refreshed.

The marked delta table must be a regular table--it cannot be any of the following:

For instance, a changes-only materialized view weather_zone can be created with its weather source table marked as a delta table in order to only show new weather events since the last refresh of weather_zone:

Create Changes-Only Materialized View
1
2
3
4
CREATE MATERIALIZED VIEW example.weather_zone AS
SELECT w.name AS event_name, w.type AS event_type, gz.name AS zone
FROM example.weather /*+ DELTA_TABLE */ w
JOIN example.geo_zone gz ON STXY_INTERSECTS(lon, lat, zone)

After an initial round of data is inserted, a query on the materialized view might return this:

Create Changes-Only Materialized View Output, Round 1
1
2
3
4
5
6
7
+--------------+--------------+-------------+
| event_name   | event_type   | zone        |
+--------------+--------------+-------------+
| Anna         | Hurricane    | Northeast   |
| Bob          | Monsoon      | Northwest   |
| Civic        | High Winds   | Southwest   |
+--------------+--------------+-------------+

After another round of data is inserted, a query on the materialized view might return this:

Create Changes-Only Materialized View Output, Round 2
1
2
3
4
5
+--------------+--------------+-------------+
| event_name   | event_type   | zone        |
+--------------+--------------+-------------+
| Dened        | Hurricane    | Southeast   |
+--------------+--------------+-------------+

Examples

To create a materialized view with columns a, b, c, & d and a new shard key on columns a & b, that refreshes once per half hour, replacing a view with the same name as the target view, if it exists:

CREATE MATERIALIZED VIEW Example
1
2
3
4
5
6
CREATE OR REPLACE MATERIALIZED VIEW example.materialized_view_of_table
REFRESH EVERY .5 HOURS AS
(
    SELECT a, b, c, d, KI_SHARD_KEY(a, b)
    FROM example.table_to_view
)

To create a materialized view with all columns of a table, refreshing once per minute from the beginning of 2025 through the end of that year, using the permissions of user mv_user to perform the refreshes:

CREATE MATERIALIZED VIEW Start/Stop Example
 1
 2
 3
 4
 5
 6
 7
 8
 9
10
CREATE MATERIALIZED VIEW example.materialized_view_of_table
REFRESH EVERY 1 MINUTE
STARTING AT '2025-01-01 00:00:00'
STOP AFTER 365 DAYS
AS
(
    SELECT *
    FROM example.table_to_view
)
WITH OPTIONS (EXECUTE AS = 'mv_user')

To create a materialized view, weather_zone, that joins a streamed weather table to a static geo_zone table and shows only those weather events (and their zones) that have occurred since the last time the view was refreshed:

CREATE MATERIALIZED VIEW Changes-Only Example
1
2
3
4
CREATE MATERIALIZED VIEW example.weather_zone AS
SELECT w.name AS event_name, w.type AS event_type, gz.name AS zone
FROM example.weather /*+ DELTA_TABLE */ w
JOIN example.geo_zone gz ON STXY_INTERSECTS(lon, lat, zone)

ALTER VIEW

Alters the configuration of a view.

The following facet of a view can be altered:

Move View

A view can be moved from one schema to another.

Move View Syntax
1
2
ALTER VIEW [<schema name>.]<view name>
< MOVE TO | SET SCHEMA > <new schema name>

All dependent views, materialized views, streams, and SQL procedures will be dropped.

For example, to move the sales_2017 view from the example_olap schema to the example_archive schema:

Move View Example
1
2
ALTER VIEW example_olap.sales_2017
MOVE TO example_archive;

ALTER MATERIALIZED VIEW

Alters the configuration of a materialized view.

Any of the following facets of a materialized view can be altered:

Move View

A materialized view can be moved from one schema to another.

Move View Syntax
1
2
ALTER MATERIALIZED VIEW [<schema name>.]<view name>
< MOVE TO | SET SCHEMA > <new schema name>

All dependent views, materialized views, streams, and SQL procedures will be dropped.

For example, to move the sales_current view from the example_olap schema to the example_archive schema:

Move Materialized View Example
1
2
ALTER MATERIALIZED VIEW example_olap.sales_current
MOVE TO example_archive

Set Access Mode

A materialized view can have its global accessibility modified for all users in the system, independently from and further restricting any role-based access controls in place.

Set Materialized View Access Mode Syntax
1
2
ALTER MATERIALIZED VIEW [<schema name>.]<view name>
SET ACCESS MODE < NO_ACCESS | READ_ONLY | WRITE_ONLY | READ_WRITE >

Note

Changing the access mode cannot widen access for users not already granted access; it can only narrow access for those who already have access. This setting will also trump administrative access to a materialized view.
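
For example (reusing the sales_current materialized view from the examples in this section; the mode chosen is arbitrary), the view could be made read-only for all users with:

Set Materialized View Access Mode Example
ALTER MATERIALIZED VIEW example_olap.sales_current
SET ACCESS MODE READ_ONLY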

Set Execution User

A materialized view can have its execution user for periodic refreshes (REFRESH is set to EVERY ...) changed to the given user.

Set Materialized View Execution User Syntax
1
2
ALTER MATERIALIZED VIEW [<schema name>.]<view name>
SET EXECUTE AS '<user name>'

Note

If this user doesn't exist at the time of a refresh, the refresh will be executed as the creating user, and, failing that, the system administration user.
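
For instance, to have the sales_current materialized view's periodic refreshes run as the mv_user account referenced in the CREATE MATERIALIZED VIEW examples (an illustrative choice):

Set Materialized View Execution User Example
ALTER MATERIALIZED VIEW example_olap.sales_current
SET EXECUTE AS 'mv_user'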

Set Refresh Mode

The refresh mode of a materialized view can be modified.

Set Materialized View Refresh Mode Syntax
 1
 2
 3
 4
 5
 6
 7
 8
 9
10
ALTER MATERIALIZED VIEW [<schema name>.]<view name>
SET REFRESH
<
    OFF |
    ON CHANGE |
    ON QUERY |
    EVERY <number> <SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]>
        [STARTING AT '<YYYY-MM-DD [HH:MM[:SS]]>']
        [STOP AFTER < '<YYYY-MM-DD [HH:MM[:SS]]>' | <number> <SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]> >]
>

The available refresh modes are:

ConstantDescription
OFFWill prevent the materialized view from being automatically refreshed, but will still allow manual refreshes of the data to be requested
ON CHANGEWill cause the materialized view to be updated any time a record is added, modified, or deleted from the subtending tables in its query
ON QUERYWill cause the materialized view to be updated any time it is queried
EVERY

Allows specification of an interval in seconds, minutes, hours, or days, at which the materialized view should be refreshed. By default, the first refresh interval will be one interval's worth of time from the point at which the materialized view alteration was requested. This can be modified with the following options:

  • STARTING AT: specify a date or timestamp at which refresh cycles should begin
  • STOP AFTER: specify a date, timestamp, or time interval after which refresh cycles should end

For example, to alter the current sales materialized view to refresh every 6 hours:

Set Materialized View Refresh Mode by Hour Example
1
2
ALTER MATERIALIZED VIEW example_olap.sales_current
SET REFRESH EVERY 6 HOURS
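
A start time and stop condition can be included as well; for instance, to refresh daily for 30 days beginning at the start of 2025 (dates and interval chosen arbitrarily):

Set Materialized View Refresh Mode with Start/Stop Example
ALTER MATERIALIZED VIEW example_olap.sales_current
SET REFRESH EVERY 1 DAY
    STARTING AT '2025-01-01'
    STOP AFTER 30 DAYS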

Set TTL

A materialized view's time-to-live (TTL) can be altered.

Set Materialized View TTL Syntax
1
2
ALTER MATERIALIZED VIEW [<schema name>.]<view name>
SET TTL <new ttl>
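
For example, mirroring the table TTL examples above (the view name is illustrative), to set a 7-minute TTL on a materialized view:

Set Materialized View TTL Example
ALTER MATERIALIZED VIEW example_olap.sales_current
SET TTL 7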

REFRESH VIEW

Refreshes the data within a materialized view.

Refresh Materialized View Syntax
1
REFRESH MATERIALIZED VIEW [<schema name>.]<view name>
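
For instance, to manually refresh the materialized view created in the CREATE MATERIALIZED VIEW example:

Refresh Materialized View Example
REFRESH MATERIALIZED VIEW example.materialized_view_of_table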

DROP VIEW

Removes an existing view or materialized view. All dependent views, materialized views, streams, and SQL procedures will be dropped.

DROP VIEW Syntax
1
DROP [MATERIALIZED] VIEW [IF EXISTS] [<schema name>.]<view name>
ParametersDescription
MATERIALIZEDOptional keyword for clarity; has no impact on functionality
IF EXISTSOptional error-suppression clause; if specified, no error will be returned if the given view does not exist
<schema name>Name of the schema containing the view to remove
<view name>Name of the view to remove
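
As an illustration, to remove the view created in the CREATE VIEW example, returning no error if it has already been dropped:

DROP VIEW Example
DROP VIEW IF EXISTS example.view_of_table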

SHOW VIEW

Outputs the DDL statement required to reconstruct the given view or materialized view.

SHOW VIEW Syntax
1
SHOW [[CREATE] VIEW] [<schema name>.]<view name>

Note

The response to SHOW VIEW is a single-record result set with the DDL statement as the value in the DDL column, shown below with the column separators returned by kisql.

ParametersDescription
CREATEOptional keyword for clarity
VIEW

Optional clause to avoid ambiguity:

  • if given, and a table with the given name exists instead, the command will return an error
  • if omitted, and a table with the given name exists instead, the command will be interpreted as a SHOW TABLE statement
<schema name>Name of the schema containing the view to show
<view name>Name of the view whose DDL will be output

For example, to output the DDL for the example view created in the CREATE VIEW section:

SHOW VIEW Example
1
SHOW VIEW example.view_of_table

DESCRIBE VIEW

Lists the columns and column types & properties for a given view or materialized view.

DESCRIBE VIEW Syntax
1
DESC[RIBE] [VIEW] [<schema name>.]<view name>
ParametersDescription
VIEW

Optional clause to avoid ambiguity:

  • if given, and a table with the given name exists instead, the command will return an error
  • if omitted, and a table with the given name exists instead, the command will be interpreted as a DESCRIBE TABLE statement
<schema name>Name of the schema containing the view to describe
<view name>Name of the view whose configuration will be output

For example, to describe the example view created in the CREATE VIEW section:

DESCRIBE VIEW Example
1
DESC VIEW example.view_of_table

CREATE CREDENTIAL

Creates a new credential, which is a record that contains authentication information required to connect to a resource outside the database. Any user may create a credential for their own use.

CREATE CREDENTIAL Syntax
1
2
3
4
5
CREATE [OR REPLACE] CREDENTIAL [<schema name>.]<credential name>
TYPE = '<type>',
IDENTITY = '<username>',
SECRET = '<password>'
[WITH OPTIONS ('<option name>' = '<option value>'[,...])]

The following can make use of credentials:

The following services are supported:

  • Amazon S3
  • Azure
  • Docker Repository
  • Google Cloud
  • HDFS
  • JDBC
  • Kafka
    • Apache Cluster
    • Confluent Cluster

For provider-specific syntax, see Provider-Specific Syntax. For provider-specific examples, see Create Credentials.

ParametersDescription
OR REPLACEAny existing credential with the same name will be dropped before creating this one
<schema name>Name of the schema that will contain the created credential; if no schema is specified, the credential will be created in the user's default schema
<credential name>Name of the credential to create; must adhere to the supported naming criteria
TYPE

The type of credential to create. Supported types include:

TypeDescription
aws_access_keyAuthenticate to Amazon Web Services (AWS) via Access Key
aws_iam_roleAuthenticate to Amazon Web Services (AWS) via IAM Role
azure_adAuthenticate to Microsoft Azure via Active Directory
azure_oauthAuthenticate to Microsoft Azure via OAuth
azure_sasAuthenticate to Microsoft Azure via Shared Access Signature (SAS) using an account-level access token, not a container-level one
azure_storage_keyAuthenticate to Microsoft Azure via Storage Key
confluentAuthenticate to a Confluent Kafka cluster or schema registry
dockerAuthenticate to a Docker repository
gcs_service_account_idAuthenticate to Google Cloud via user ID & private key
gcs_service_account_keysAuthenticate to Google Cloud via JSON key
hdfsAuthenticate to HDFS
jdbcAuthenticate via Java Database Connectivity
kafkaAuthenticate to an Apache Kafka cluster or schema registry
IDENTITYUsername to use for authenticating with the credential.
SECRETPassword to use for authenticating with the credential.
WITH OPTIONSOptional indicator that a comma-delimited list of option/value assignments will follow. See Credential Options for the full list of options.

For example, to create a credential, auser_azure_active_dir_creds, for connecting to Microsoft Azure Active Directory:

CREATE CREDENTIAL (Azure AD) Example
1
2
3
4
CREATE CREDENTIAL auser_azure_active_dir_creds
TYPE = 'azure_ad',
IDENTITY = 'atypicaluser',
SECRET = 'Passw0rd!'

To create a credential, kafka_cred, for connecting to Apache Kafka via SSL:

CREATE CREDENTIAL (Kafka SSL) Example
 1
 2
 3
 4
 5
 6
 7
 8
 9
10
CREATE CREDENTIAL kafka_cred
TYPE = 'kafka'
WITH OPTIONS
(
    'security.protocol' = 'SSL',
    'ssl.ca.location' = 'kifs://ssl/ca-bundle.crt',
    'ssl.certificate.location' = 'kifs://ssl/client.pem',
    'ssl.key.location' = 'kifs://ssl/client.key',
    'ssl.key.password' = 'Passw0rd!'
)

Credential Options

The following is a list of possible credential options and their associated providers. For valid combinations of credential options per provider and authentication mechanism, see Provider-Specific Syntax.

OptionProviderDescription
azure_storage_account_nameAzureAzure storage account name (only used if azure_tenant_id is specified)
azure_tenant_idAzureAzure Active Directory tenant identifier
gcs_service_account_keysGCSText of the JSON file containing the GCS private key
hdfs_kerberos_keytabHDFSLocation of the Kerberos keytab file in KiFS
hdfs_use_kerberosHDFSWhether to attempt Kerberos authentication to HDFS
s3_aws_role_arnS3AWS S3 IAM role
sasl.kerberos.keytabKafkaLocation of the Kerberos keytab file in KiFS
sasl.kerberos.principalKafkaKerberos principal ID
sasl.kerberos.service.nameKafkaKerberos service name
sasl.mechanismKafka

SASL scheme to use; one of:

  • PLAIN
  • GSSAPI
sasl.passwordKafkaSASL user password
sasl.usernameKafkaSASL user ID
security.protocolKafka

Security protocol to use for authentication; one of:

  • SSL
  • SASL_SSL
  • SASL_PLAINTEXT
ssl.ca.locationKafkaLocation of trust store file in KiFS
ssl.certificate.locationKafkaLocation of client certificate in KiFS
ssl.key.locationKafkaLocation of client key in KiFS
ssl.key.passwordKafkaPassword to client key or trust store

Provider-Specific Syntax

Several authentication schemes across multiple providers are supported.

Azure

Syntax below, examples here.

Password
1
2
3
4
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'azure_storage_key',
IDENTITY = '<azure storage account name>',
SECRET = '<azure storage account key>'
SAS Token
1
2
3
4
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'azure_sas',
IDENTITY = '<azure storage account name>',
SECRET = '<azure sas token>'
Active Directory
1
2
3
4
5
6
7
8
9
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'azure_ad',
IDENTITY = '<ad client id>',
SECRET = '<ad client secret key>'
WITH OPTIONS
(
    STORAGE ACCOUNT NAME = '<azure storage account name>',
    TENANT ID = '<azure tenant id>'
)

Google Cloud Storage

Syntax below, examples here.

User ID & Key
1
2
3
4
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'gcs_service_account_id',
IDENTITY = '<gcs account id>',
SECRET = '<gcs account private key>'
JSON Key
1
2
3
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'gcs_service_account_keys'
WITH OPTIONS (GCS_SERVICE_ACCOUNT_KEYS = '<gcs account json key text>')

HDFS

Syntax below, examples here.

Password
1
2
3
4
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'hdfs',
IDENTITY = '<hdfs username>',
SECRET = '<hdfs password>'
Kerberos Keytab
1
2
3
4
5
6
7
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'hdfs',
IDENTITY = '<hdfs username>'
WITH OPTIONS
(
    KERBEROS KEYTAB = 'kifs://<keytab file path>'
)
Kerberos Token
1
2
3
4
5
6
7
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'hdfs',
IDENTITY = '<hdfs username>'
WITH OPTIONS
(
    USE KERBEROS = 'true'
)

JDBC

Syntax below, examples here.

Password
1
2
3
4
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'jdbc',
IDENTITY = '<username>',
SECRET = '<password>'

Kafka (Apache)

Syntax below, examples here.

Password
1
2
3
4
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'kafka',
IDENTITY = '<username>',
SECRET = '<password>'
SSL with Truststore
1
2
3
4
5
6
7
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'kafka'
WITH OPTIONS
(
    'security.protocol' = 'SSL',
    'ssl.ca.location' = 'kifs://<client truststore path>'
)
SSL with Truststore/Client Auth
1
2
3
4
5
6
7
8
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'kafka'
WITH OPTIONS
(
    'security.protocol' = 'SSL',
    'ssl.ca.location' = 'kifs://<client truststore path>',
    'ssl.certificate.location' = 'kifs://<client certificate path>'
)
SSL with Encryption
 1
 2
 3
 4
 5
 6
 7
 8
 9
10
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'kafka'
WITH OPTIONS
(
    'security.protocol' = 'SSL',
    'ssl.ca.location' = 'kifs://<client truststore path>',
    'ssl.certificate.location' = 'kifs://<client certificate path>',
    'ssl.key.location' = 'kifs://<client key path>',
    'ssl.key.password' = '<client key password>'
)
SASL
1
2
3
4
5
6
7
8
9
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'kafka'
WITH OPTIONS
(
    'security.protocol' = 'SASL_SSL',
    'sasl.mechanism' = 'PLAIN',
    'sasl.username' = '<sasl username>',
    'sasl.password' = '<sasl password>'
)
Kerberos
 1
 2
 3
 4
 5
 6
 7
 8
 9
10
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'kafka'
WITH OPTIONS
(
    'security.protocol' = 'SASL_PLAINTEXT',
    'sasl.mechanism' = 'GSSAPI',
    'sasl.kerberos.service.name' = '<kerberos service name>',
    'sasl.kerberos.keytab' = 'kifs://<kerberos keytab file>',
    'sasl.kerberos.principal' = '<kerberos principal>'
)
Kerberos SSL
 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
14
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'kafka'
WITH OPTIONS
(
    'security.protocol' = 'SASL_SSL',
    'sasl.mechanism' = 'GSSAPI',
    'sasl.kerberos.service.name' = '<kerberos service name>',
    'sasl.kerberos.keytab' = 'kifs://<kerberos keytab file>',
    'sasl.kerberos.principal' = '<kerberos principal>',
    'ssl.ca.location' = 'kifs://<client truststore path>',
    'ssl.certificate.location' = 'kifs://<client certificate path>',
    'ssl.key.location' = 'kifs://<client key path>',
    'ssl.key.password' = '<client key password>'
)

Kafka (Confluent)

Syntax below, examples here.

Password
1
2
3
4
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'confluent',
IDENTITY = '<username>',
SECRET = '<password>'
SSL with Truststore
1
2
3
4
5
6
7
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'confluent'
WITH OPTIONS
(
    'security.protocol' = 'SSL',
    'ssl.ca.location' = 'kifs://<client truststore path>'
)
SSL with Truststore/Client Auth
1
2
3
4
5
6
7
8
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'confluent'
WITH OPTIONS
(
    'security.protocol' = 'SSL',
    'ssl.ca.location' = 'kifs://<client truststore path>',
    'ssl.certificate.location' = 'kifs://<client certificate path>'
)
SSL with Encryption
 1
 2
 3
 4
 5
 6
 7
 8
 9
10
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'confluent'
WITH OPTIONS
(
    'security.protocol' = 'SSL',
    'ssl.ca.location' = 'kifs://<client truststore path>',
    'ssl.certificate.location' = 'kifs://<client certificate path>',
    'ssl.key.location' = 'kifs://<client key path>',
    'ssl.key.password' = '<client key password>'
)
SASL
1
2
3
4
5
6
7
8
9
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'confluent'
WITH OPTIONS
(
    'security.protocol' = 'SASL_SSL',
    'sasl.mechanism' = 'PLAIN',
    'sasl.username' = '<sasl username>',
    'sasl.password' = '<sasl password>'
)
Kerberos
 1
 2
 3
 4
 5
 6
 7
 8
 9
10
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'confluent'
WITH OPTIONS
(
    'security.protocol' = 'SASL_PLAINTEXT',
    'sasl.mechanism' = 'GSSAPI',
    'sasl.kerberos.service.name' = '<kerberos service name>',
    'sasl.kerberos.keytab' = 'kifs://<kerberos keytab file>',
    'sasl.kerberos.principal' = '<kerberos principal>'
)
Kerberos SSL
 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
14
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'confluent'
WITH OPTIONS
(
    'security.protocol' = 'SASL_SSL',
    'sasl.mechanism' = 'GSSAPI',
    'sasl.kerberos.service.name' = '<kerberos service name>',
    'sasl.kerberos.keytab' = 'kifs://<kerberos keytab file>',
    'sasl.kerberos.principal' = '<kerberos principal>',
    'ssl.ca.location' = 'kifs://<client truststore path>',
    'ssl.certificate.location' = 'kifs://<client certificate path>',
    'ssl.key.location' = 'kifs://<client key path>',
    'ssl.key.password' = '<client key password>'
)

S3 (Amazon)

Syntax below, examples here.

S3 Access Key
1
2
3
4
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'aws_access_key',
IDENTITY = '<aws access key id>',
SECRET = '<aws secret access key>'
IAM Role
1
2
3
4
5
6
7
8
CREATE CREDENTIAL [<schema name>.]<credential name>
TYPE = 'aws_iam_role',
IDENTITY = '<aws access key id>',
SECRET = '<aws secret access key>'
WITH OPTIONS
(
    S3_AWS_ROLE_ARN = '<amazon resource name>'
)

ALTER CREDENTIAL

Alters the properties of an existing credential.

ALTER CREDENTIAL Syntax
1
2
3
ALTER CREDENTIAL [<schema name>.]<credential name>
SET PROPERTY
    <property name> = '<property value>'[,...]

Only users with system_admin or system_user_admin, or users with credential_admin on the credential, may alter it.

ParametersDescription
<schema name>Name of the schema containing the credential to alter
<credential name>Name of the existing credential to alter.
SET PROPERTYIndicator that a comma-delimited list of property name/value assignments will follow. See Set Properties and Credential Options for the complete list of properties.

To alter a credential, auser_azure_active_dir_creds, updating the secret:

ALTER CREDENTIAL Example
1
2
3
ALTER CREDENTIAL auser_azure_active_dir_creds
SET PROPERTY
    SECRET = 'atypicaluserNewPassw0rd!'

To alter a credential, kafka_cred, updating the keystore password:

ALTER CREDENTIAL WITH OPTIONS Example
1
2
3
ALTER CREDENTIAL kafka_cred
SET PROPERTY
    'ssl.key.password' = 'Passw0rd!?'

Set Properties

All credential properties can be altered via ALTER CREDENTIAL. The following are the property names and descriptions to use when performing an alteration.

ParametersDescription
TYPE

The type of credential. Supported types include:

TypeDescription
aws_access_keyAuthenticate to Amazon Web Services (AWS) via Access Key
aws_iam_roleAuthenticate to Amazon Web Services (AWS) via IAM Role
azure_adAuthenticate to Microsoft Azure via Active Directory
azure_sasAuthenticate to Microsoft Azure via Shared Access Signature (SAS)
azure_storage_keyAuthenticate to Microsoft Azure via Storage Key
confluentAuthenticate to a Confluent Kafka cluster or schema registry
dockerAuthenticate to a Docker repository
gcs_service_account_idAuthenticate to Google Cloud via user ID & private key
gcs_service_account_keysAuthenticate to Google Cloud via JSON key
hdfsAuthenticate to HDFS
jdbcAuthenticate via Java Database Connectivity
kafkaAuthenticate to a Kafka cluster or schema registry
IDENTITYUsername to use for authenticating with the credential.
SECRETPassword to use for authenticating with the credential.
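
Since SET PROPERTY accepts a comma-delimited list, several properties can be updated in a single statement. The following is a minimal sketch that updates both the identity and the secret of the credential altered above; the new identity value is hypothetical:

ALTER CREDENTIAL (Multiple Properties) Example
ALTER CREDENTIAL auser_azure_active_dir_creds
SET PROPERTY
    IDENTITY = 'atypicaluser@example.com',    -- hypothetical new username
    SECRET = 'atypicaluserNewPassw0rd!'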

DROP CREDENTIAL

Removes an existing credential.

DROP CREDENTIAL Syntax
DROP CREDENTIAL < [<schema name>.]<credential name> | * >

Note

All data sources, data sinks, and ML container registries that depend on a given credential must be dropped before the credential itself can be dropped.

Only users with system_admin or system_user_admin, or users with credential_admin on the credential to drop, may drop a credential.

ParametersDescription
<schema name>Name of the schema containing the credential to drop
<credential name>Name of the existing credential to remove; use * instead of schema/credential name to drop all credentials

To drop a credential, auser_azure_active_dir_creds:

DROP CREDENTIAL Example
DROP CREDENTIAL auser_azure_active_dir_creds

SHOW CREDENTIAL

Outputs the DDL statement required to reconstruct the given credential.

SHOW CREDENTIAL Syntax
SHOW CREDENTIAL < [<schema name>.]<credential name> | * >

The secret value will be masked and would need to be replaced with the actual secret value if attempting to reconstruct the credential. Only users with system_admin or system_user_admin, or users with credential_admin or credential_read on the credential to show, may show a credential.

Note

The response to SHOW CREDENTIAL is a single-column result set with the DDL statement as the value in the DDL column.

ParametersDescription
<schema name>Name of the schema containing the credential to show
<credential name>Name of the existing credential for which the DDL will be output; use * instead of schema/credential name to output the DDL of all credentials

For example, to output the DDL for a credential, auser_azure_active_dir_creds:

SHOW CREDENTIAL Example
SHOW CREDENTIAL auser_azure_active_dir_creds

To output the DDL for all credentials:

SHOW CREDENTIAL (All Credentials) Example
SHOW CREDENTIAL *

DESCRIBE CREDENTIAL

Outputs the configuration of an existing credential.

DESCRIBE CREDENTIAL Syntax
DESC[RIBE] CREDENTIAL < [<schema name>.]<credential name> | * >

Only users with system_admin or system_user_admin, or users with credential_admin or credential_read on the credential to describe, may describe a credential.

Note

The response to DESCRIBE CREDENTIAL is a four-column result set:

  • CREDENTIAL_NAME - name of the credential
  • CREDENTIAL_TYPE - type of the credential; see CREATE CREDENTIAL for values
  • CREDENTIAL_IDENTITY - Username associated with the credential
  • CREDENTIAL_OPTIONS - Options associated with the credential; see Credential Options for values
ParametersDescription
<schema name>Name of the schema containing the credential to describe
<credential name>Name of the existing credential for which the configuration will be output; use * instead of schema/credential name to output the configuration of all credentials

For example, to show the configuration for a credential, auser_azure_active_dir_creds:

DESCRIBE CREDENTIAL Example
DESCRIBE CREDENTIAL auser_azure_active_dir_creds

To show the configuration for all credentials:

DESCRIBE CREDENTIAL (All Credentials) Example
DESCRIBE CREDENTIAL *

CREATE DATA SOURCE

Creates a new data source, which contains the location and connection information for a data store that is external to Kinetica. A data source serves as an authentication & access mechanism to a remote resource and can optionally be used as a streaming source as well.

CREATE DATA SOURCE Syntax
CREATE [OR REPLACE] [EXTERNAL] DATA SOURCE [<schema name>.]<data source name>
LOCATION = '<provider>[://[<host>[:<port>]]]'
[USER = '<username>']
[PASSWORD = '<password>']
[WITH OPTIONS (<option name> = '<option value>'[,...])]

A data source does not reference specific data files within the source; file references (if applicable) can be made by using the data source in a CREATE EXTERNAL TABLE call (for creating a persistent view of the file data) or a LOAD INTO call (for either a one-time load or a subscribed loading of data from the external source to a locally persisted table).
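
For instance, the following is a minimal sketch of a one-time LOAD INTO call that references a data source; the target table, file name, and format are hypothetical, and the complete LOAD INTO syntax is documented separately:

LOAD INTO Using a Data Source (Sketch)
LOAD DATA INTO example.orders            -- hypothetical target table
FROM FILE PATHS 'orders.csv'             -- hypothetical file within the data source
FORMAT TEXT
WITH OPTIONS (DATA SOURCE = 'kin_ds')    -- data source created in the example below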

Note

The data source will be validated upon creation, by default, and will fail to be created if an authorized connection cannot be established.

The following data source providers are supported:

  • Azure (Microsoft blob storage)
  • CData (CData Software source-specific JDBC driver)
  • GCS (Google Cloud Storage)
  • HDFS (Apache Hadoop Distributed File System)
  • JDBC (Java Database Connectivity, using a user-supplied driver)
  • Kafka (streaming feed)
    • Apache
    • Confluent
  • S3 (Amazon S3 Bucket)

Note

  • Azure anonymous data sources are only supported when both the container and the contained objects allow anonymous access.
  • CData data sources can use a JDBC credential for authentication.
  • HDFS systems with wire encryption are not supported.
  • Confluent & Kafka data sources require an associated credential for authentication.

For provider-specific syntax, see Provider-Specific Syntax. For provider-specific examples, see Create Data Sources.

ParametersDescription
OR REPLACEAny existing data source with the same name will be dropped before creating this one
EXTERNALOptional keyword for clarity
<schema name>Name of the schema that will contain the created data source; if no schema is specified, the data source will be created in the user's default schema
<data source name>Name of the data source to create; must adhere to the supported naming criteria
<provider>

Provider of the data source

Supported providers include:

ProviderDescription
AZUREMicrosoft Azure blob storage
CONFLUENTConfluent Kafka streaming feed
GCSGoogle Cloud Storage
HDFSApache Hadoop Distributed File System
JDBC

JDBC connection, where LOCATION is the JDBC or CData JDBC URL.

For example, a PostgreSQL JDBC data source might have:

LOCATION = 'jdbc:postgresql://example.com:5432/mydb'

A PostgreSQL via CData JDBC data source might have:

LOCATION = 'jdbc:postgresql:Server=example.com;Port=5432;Database=mydb'

See the driver list for the full list of supported CData JDBC drivers.

KAFKAApache Kafka streaming feed
S3Amazon S3 bucket
<host>

Host to use to connect to the data source

The default host used for Azure is <storage_account_name>.blob.core.windows.net.

The default host used for S3 is <region>.amazonaws.com.

<port>Port, for HDFS or Kafka, to use to connect to the data source
USEROptional user name, given in <username>, to use for authenticating to the data source
PASSWORDOptional password, given in <password>, to use for authenticating to the data source
WITH OPTIONSOptional indicator that a comma-delimited list of connection option/value assignments will follow. See Data Source Options for the full list of options.

To create a data source, kin_ds, that connects to an Amazon S3 bucket, kinetica-ds, in the US East (N. Virginia) region:

CREATE DATA SOURCE Example
CREATE DATA SOURCE kin_ds
LOCATION = 'S3'
USER = '<aws access id>'
PASSWORD = '<aws access key>'
WITH OPTIONS
(
    BUCKET NAME = 'kinetica-ds',
    REGION = 'us-east-1'
)

Data Source Options

OptionProviderDescription
CREDENTIALAnyCredential object to use to authenticate to the remote system
VALIDATEAnyWhether to test the connection to the data source upon creation; if TRUE (default), the creation of a data source that cannot be connected to will fail; if FALSE, the data source will be created regardless of connectivity
WAIT TIMEOUTAZURE, HDFS, S3Timeout in seconds for reading from the storage provider
USE_MANAGED_CREDENTIALSAZURE, S3

Whether to connect to the storage provider with provider-managed credentials

  • For an Azure data source, use the Azure Instance Metadata Service (IMDS) endpoint local to the Kinetica cluster to acquire credentials (only for clusters deployed within Azure)
  • For an S3 data source, use the AWS Default Credential Provider Chain to acquire credentials, making sure any supporting server-side configuration (files, environment variables) is done on every node in the cluster (only for clusters deployed within AWS)
CONNECTION TIMEOUTHDFS, S3Timeout in seconds for connecting to a given storage provider
CONTAINER NAMEAZUREAzure storage container name
SAS TOKENAZUREAzure storage account shared access signature token; this should be an account-level access token, not a container-level one
STORAGE ACCOUNT NAMEAZUREAzure storage account name (only used if TENANT ID is specified)
TENANT IDAZUREAzure Active Directory tenant identifier
GCS_BUCKET_NAMEGCSName of the GCS bucket to use as the data source
GCS_PROJECT_IDGCSName of the Google Cloud project to use for request billing
GCS_SERVICE_ACCOUNT_KEYSGCSText of the JSON key file containing the GCS private key
DELEGATION TOKENHDFSOptional Kerberos delegation token for worker nodes; if not specified, the token will be acquired from HDFS
KERBEROS KEYTABHDFSLocation of the Kerberos keytab file in KiFS
USE KERBEROSHDFSWhether to attempt Kerberos authentication to HDFS
JDBC_DRIVER_CLASS_NAMEJDBCJDBC driver class name (optional, if the name is available in the JAR file's manifest)
JDBC_DRIVER_JAR_PATHJDBCKiFS path of the JDBC driver JAR file to use
KAFKA_TOPIC_NAMEKAFKA, CONFLUENTKafka topic to access
BUCKET NAMES3Amazon S3 bucket name
REGIONS3Amazon S3 region identifier
S3_AWS_ROLE_ARNS3Amazon Resource Name (ARN) specifying the role
S3_ENCRYPTION_CUSTOMER_ALGORITHMS3Algorithm used to encrypt/decrypt data
S3_ENCRYPTION_CUSTOMER_KEYS3Key used to encrypt/decrypt data
SCHEMA_REGISTRY_CREDENTIALKAFKA, CONFLUENTCredential object to use to authenticate to the Confluent Schema Registry
SCHEMA_REGISTRY_LOCATIONKAFKA, CONFLUENT

Location of the Confluent Schema Registry in format:

[storage_path[:storage_port]]
SCHEMA_REGISTRY_PORTKAFKA, CONFLUENTPort of the Confluent Schema Registry
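
Connection-level options can be combined with the authentication settings; the following is a minimal sketch of an HDFS data source created with validation disabled and explicit timeouts (the data source name, host, and port are hypothetical):

CREATE DATA SOURCE (Connection Options) Sketch
CREATE DATA SOURCE kin_hdfs_ds
LOCATION = 'HDFS://hdfs.example.com:8020'
USER = '<hdfs username>'
PASSWORD = '<hdfs password>'
WITH OPTIONS
(
    VALIDATE = 'false',           -- create the data source even if it cannot be reached
    CONNECTION TIMEOUT = '30',    -- seconds allowed for connecting to HDFS
    WAIT TIMEOUT = '60'           -- seconds allowed for reads from HDFS
)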

Provider-Specific Syntax

Several authentication schemes across multiple providers are supported.

Azure BLOB

Syntax below, examples here.

Credential
CREATE DATA SOURCE [<data source schema name>.]<data source name>
LOCATION = 'AZURE[://<host>]'
WITH OPTIONS
(
    CREDENTIAL = '[<credential schema name>.]<credential name>',
    CONTAINER NAME = '<azure container name>'
)
Public (No Auth)
CREATE DATA SOURCE [<schema name>.]<data source name>
LOCATION = 'AZURE[://<host>]'
USER = '<azure storage account name>'
WITH OPTIONS
(
    CONTAINER NAME = '<azure container name>'
)
Managed Credentials
CREATE DATA SOURCE [<schema name>.]<data source name>
LOCATION = 'AZURE[://<host>]'
[USER = '<ad client id>']
WITH OPTIONS
(
    USE_MANAGED_CREDENTIALS = true,
    STORAGE ACCOUNT NAME = '<azure storage account name>',
    CONTAINER NAME = '<azure container name>',
    TENANT ID = '<ad tenant id>'
)
Password
CREATE DATA SOURCE [<schema name>.]<data source name>
LOCATION = 'AZURE[://<host>]'
USER = '<azure storage account name>'
PASSWORD = '<azure storage account key>'
WITH OPTIONS
(
    CONTAINER NAME = '<azure container name>'
)
SAS Token
CREATE DATA SOURCE [<schema name>.]<data source name>
LOCATION = 'AZURE[://<host>]'
USER = '<azure storage account name>'
WITH OPTIONS
(
    SAS TOKEN = '<sas token>',
    CONTAINER NAME = '<azure container name>'
)
Active Directory
CREATE DATA SOURCE [<schema name>.]<data source name>
LOCATION = 'AZURE[://<host>]'
USER = '<ad client id>'
PASSWORD = '<ad client secret key>'
WITH OPTIONS
(
    STORAGE ACCOUNT NAME = '<azure storage account name>',
    CONTAINER NAME = '<azure container name>',
    TENANT ID = '<ad tenant id>'
)

CData

Syntax below, examples here.

Credential
CREATE DATA SOURCE [<data source schema name>.]<data source name>
LOCATION = '<cdata jdbc url>'
WITH OPTIONS (CREDENTIAL = '[<credential schema name>.]<credential name>')
Password in URL
CREATE DATA SOURCE [<schema name>.]<data source name>
LOCATION = '<cdata jdbc url with username/password>'
Password as Parameter
CREATE DATA SOURCE [<schema name>.]<data source name>
LOCATION = '<cdata jdbc url>'
USER = '<jdbc username>'
PASSWORD = '<jdbc password>'

Google Cloud Storage

Syntax below, examples here.

Credential
CREATE DATA SOURCE [<data source schema name>.]<data source name>
LOCATION = 'GCS[://<host>]'
WITH OPTIONS
(
    CREDENTIAL = '[<credential schema name>.]<credential name>',
    [GCS_PROJECT_ID = '<gcs project id>',]
    GCS_BUCKET_NAME = '<gcs bucket name>'
)
Public (No Auth)
CREATE DATA SOURCE [<schema name>.]<data source name>
LOCATION = 'GCS[://<host>]'
WITH OPTIONS
(
    [GCS_PROJECT_ID = '<gcs project id>',]
    GCS_BUCKET_NAME = '<gcs bucket name>'
)
User ID & Key
CREATE DATA SOURCE [<schema name>.]<data source name>
LOCATION = 'GCS[://<host>]'
USER = '<gcs account id>'
PASSWORD = '<gcs account private key>'
WITH OPTIONS
(
    [GCS_PROJECT_ID = '<gcs project id>',]
    GCS_BUCKET_NAME = '<gcs bucket name>'
)
JSON Key
CREATE DATA SOURCE [<schema name>.]<data source name>
LOCATION = 'GCS[://<host>]'
WITH OPTIONS
(
    GCS_SERVICE_ACCOUNT_KEYS = '<gcs account json key text>',
    [GCS_PROJECT_ID = '<gcs project id>',]
    GCS_BUCKET_NAME = '<gcs bucket name>'
)

HDFS

Syntax below, examples here.

Credential
CREATE DATA SOURCE [<data source schema name>.]<data source name>
LOCATION = 'HDFS://<host>:<port>'
WITH OPTIONS (CREDENTIAL = '[<credential schema name>.]<credential name>')
Password
CREATE DATA SOURCE [<schema name>.]<data source name>
LOCATION = 'HDFS://<host>:<port>'
USER = '<hdfs username>'
PASSWORD = '<hdfs password>'
Kerberos Keytab
CREATE DATA SOURCE [<schema name>.]<data source name>
LOCATION = 'HDFS://<host>:<port>'
USER = '<hdfs username>'
WITH OPTIONS (KERBEROS KEYTAB = 'kifs://<keytab file path>')
Kerberos Token
CREATE DATA SOURCE [<schema name>.]<data source name>
LOCATION = 'HDFS://<host>:<port>'
USER = '<hdfs username>'
WITH OPTIONS (USE KERBEROS = true)

JDBC

Syntax below, examples here.

Credential
CREATE DATA SOURCE [<data source schema name>.]<data source name>
LOCATION = '<jdbc url>'
WITH OPTIONS
(
    CREDENTIAL = '[<credential schema name>.]<credential name>',
    JDBC_DRIVER_CLASS_NAME = '<jdbc driver class full path>',
    JDBC_DRIVER_JAR_PATH = 'kifs://<jdbc driver jar path>'
)
Password
CREATE DATA SOURCE [<schema name>.]<data source name>
LOCATION = '<jdbc url>'
USER = '<jdbc username>'
PASSWORD = '<jdbc password>'
WITH OPTIONS
(
    JDBC_DRIVER_CLASS_NAME = '<jdbc driver class full path>',
    JDBC_DRIVER_JAR_PATH = 'kifs://<jdbc driver jar path>'
)

Kafka (Apache)

Syntax below, examples here.

Credential
CREATE DATA SOURCE [<data source schema name>.]<data source name>
LOCATION = 'KAFKA://<host>:<port>'
WITH OPTIONS
(
    CREDENTIAL = '[<credential schema name>.]<credential name>',
    KAFKA_TOPIC_NAME = '<kafka topic name>'
)
Credential w/ Schema Registry
CREATE DATA SOURCE [<data source schema name>.]<data source name>
LOCATION = 'KAFKA://<host>:<port>'
WITH OPTIONS
(
    CREDENTIAL = '[<credential schema name>.]<credential name>',
    KAFKA_TOPIC_NAME = '<kafka topic name>',
    SCHEMA_REGISTRY_CREDENTIAL = '[<sr credential schema name>.]<sr credential name>',
    SCHEMA_REGISTRY_LOCATION = '<schema registry url>'
)
Public (No Auth)
CREATE DATA SOURCE [<schema name>.]<data source name>
LOCATION = 'KAFKA://<host>:<port>'
WITH OPTIONS
(
    KAFKA_TOPIC_NAME = '<kafka topic name>'
)

Kafka (Confluent)

Syntax below, examples here.

Credential
CREATE DATA SOURCE [<data source schema name>.]<data source name>
LOCATION = 'CONFLUENT://<host>:<port>'
WITH OPTIONS
(
    CREDENTIAL = '[<credential schema name>.]<credential name>',
    KAFKA_TOPIC_NAME = '<kafka topic name>'
)
Credential w/ Schema Registry
CREATE DATA SOURCE [<data source schema name>.]<data source name>
LOCATION = 'CONFLUENT://<host>:<port>'
WITH OPTIONS
(
    CREDENTIAL = '[<credential schema name>.]<credential name>',
    KAFKA_TOPIC_NAME = '<kafka topic name>',
    SCHEMA_REGISTRY_CREDENTIAL = '[<sr credential schema name>.]<sr credential name>',
    SCHEMA_REGISTRY_LOCATION = '<schema registry url>'
)
Public (No Auth)
CREATE DATA SOURCE [<schema name>.]<data source name>
LOCATION = 'CONFLUENT://<host>:<port>'
WITH OPTIONS
(
    KAFKA_TOPIC_NAME = '<kafka topic name>'
)

S3 (Amazon)

Syntax below, examples here.

Credential
CREATE DATA SOURCE [<data source schema name>.]<data source name>
LOCATION = 'S3[://<host>]'
WITH OPTIONS
(
    CREDENTIAL = '[<credential schema name>.]<credential name>',
    BUCKET NAME = '<aws bucket name>',
    REGION = '<aws region>'
)
Public (No Auth)
CREATE DATA SOURCE [<schema name>.]<data source name>
LOCATION = 'S3[://<host>]'
WITH OPTIONS
(
    BUCKET NAME = '<aws bucket name>',
    REGION = '<aws region>'
)
Managed Credentials
CREATE DATA SOURCE [<schema name>.]<data source name>
LOCATION = 'S3[://<host>]'
WITH OPTIONS
(
    USE_MANAGED_CREDENTIALS = true,
    BUCKET NAME = '<aws bucket name>',
    REGION = '<aws region>'
)
Access Key
CREATE DATA SOURCE [<schema name>.]<data source name>
LOCATION = 'S3[://<host>]'
USER = '<aws access key id>'
PASSWORD = '<aws secret access key>'
WITH OPTIONS
(
    BUCKET NAME = '<aws bucket name>',
    REGION = '<aws region>'
)
IAM Role
CREATE DATA SOURCE [<schema name>.]<data source name>
LOCATION = 'S3[://<host>]'
USER = '<aws access key id>'
PASSWORD = '<aws secret access key>'
WITH OPTIONS
(
    S3_AWS_ROLE_ARN = '<aws iam role arn>',
    BUCKET NAME = '<aws bucket name>',
    REGION = '<aws region>'
)

ALTER DATA SOURCE

Alters the connection parameters of an existing data source.

ALTER DATA SOURCE Syntax
ALTER [EXTERNAL] DATA SOURCE [<schema name>.]<data source name>
SET PROPERTY
    <property name> = '<property value>'[,...]

Note

The data source will be validated upon alteration, by default, and will fail to be altered if an authorized connection cannot be established.

ParametersDescription
EXTERNALOptional keyword for clarity
<schema name>Name of the schema containing the data source to alter
<data source name>Name of the data source to alter
SET PROPERTY

Indicator that a comma-delimited list of alterations to make will follow.

See Set Properties for the complete list of properties.

To alter a data source, kin_ds, updating the username & password:

ALTER DATA SOURCE Example
ALTER DATA SOURCE kin_ds
SET PROPERTY
    USER = '<new aws access id>',
    PASSWORD = '<new aws access key>'

Set Properties

All data source properties can be altered via ALTER DATA SOURCE. The following are the property names and descriptions to use when performing an alteration.

Provider-Agnostic Properties

ParametersDescription
LOCATION

Location of the data source; see Provider-Specific Syntax for details

The data source can be from any of the following providers:

ProviderDescription
AZUREMicrosoft Azure blob storage
CONFLUENTConfluent Kafka streaming feed
GCSGoogle Cloud Storage
HDFSApache Hadoop Distributed File System
JDBC

JDBC connection, where LOCATION is the JDBC or CData JDBC URL.

See the driver list for the full list of supported CData JDBC drivers.

KAFKAApache Kafka streaming feed
S3Amazon S3 bucket
USERUser name, given in <username>, to use for authenticating to the data source
PASSWORDPassword, given in <password>, to use for authenticating to the data source
CREDENTIALCredential object to use to authenticate to the remote system
VALIDATEWhether to test the connection to the data source upon creation; if TRUE, the creation of a data source that cannot be connected to will fail; if FALSE, the data source will be created regardless of connectivity. Default is TRUE.

Provider-Specific Properties

OptionProviderDescription
WAIT TIMEOUTAZURE, HDFS, S3Timeout in seconds for reading from the storage provider
USE_MANAGED_CREDENTIALSAZURE, S3

Whether to connect to the storage provider with provider-managed credentials

  • For an Azure data source, use the Azure Instance Metadata Service (IMDS) endpoint local to the Kinetica cluster to acquire credentials (only for clusters deployed within Azure)
  • For an S3 data source, use the AWS Default Credential Provider Chain to acquire credentials, making sure any supporting server-side configuration (files, environment variables) is done on every node in the cluster (only for clusters deployed within AWS)
CONNECTION TIMEOUTHDFS, S3Timeout in seconds for connecting to a given storage provider
CONTAINER NAMEAZUREAzure storage container name
SAS TOKENAZUREAzure storage account shared access signature token; this should be an account-level access token, not a container-level one
STORAGE ACCOUNT NAMEAZUREAzure storage account name (only used if TENANT ID is specified)
TENANT IDAZUREAzure Active Directory tenant identifier
GCS_BUCKET_NAMEGCSName of the GCS bucket to use as the data source
GCS_PROJECT_IDGCSName of the Google Cloud project to use for request billing
GCS_SERVICE_ACCOUNT_KEYSGCSText of the JSON key file containing the GCS private key
DELEGATION TOKENHDFSOptional Kerberos delegation token for worker nodes; if not specified, the token will be acquired from HDFS
KERBEROS KEYTABHDFSLocation of the Kerberos keytab file in KiFS
USE KERBEROSHDFSWhether to attempt Kerberos authentication to HDFS
JDBC_DRIVER_CLASS_NAMEJDBCJDBC driver class name (optional, if the name is available in the JAR file's manifest)
JDBC_DRIVER_JAR_PATHJDBCKiFS path of the JDBC driver JAR file to use
KAFKA_TOPIC_NAMEKAFKA, CONFLUENTKafka topic to access
BUCKET NAMES3Amazon S3 bucket name
REGIONS3Amazon S3 region identifier
S3_AWS_ROLE_ARNS3Amazon Resource Name (ARN) specifying the role
S3_ENCRYPTION_CUSTOMER_ALGORITHMS3Algorithm used to encrypt/decrypt data
S3_ENCRYPTION_CUSTOMER_KEYS3Key used to encrypt/decrypt data
SCHEMA_REGISTRY_CREDENTIALKAFKA, CONFLUENTCredential object to use to authenticate to the Confluent Schema Registry
SCHEMA_REGISTRY_LOCATIONKAFKA, CONFLUENT

Location of the Confluent Schema Registry in format:

[storage_path[:storage_port]]
SCHEMA_REGISTRY_PORTKAFKA, CONFLUENTPort of the Confluent Schema Registry
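
Provider-specific properties are altered the same way as provider-agnostic ones; the following is a minimal sketch that points a hypothetical Kafka data source, kin_kafka_ds, at a different topic:

ALTER DATA SOURCE (Provider-Specific Property) Sketch
ALTER DATA SOURCE kin_kafka_ds
SET PROPERTY
    KAFKA_TOPIC_NAME = 'orders_v2'    -- hypothetical replacement topic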

LIST DATA SOURCE

Outputs a list of tables accessible through a JDBC data source.

LIST DATA SOURCE Syntax
LIST [DATA SOURCE] [<schema name>.]<data source name>
[WITH OPTIONS (<option name> = '<option value>'[,...])]

Note

The response to LIST DATA SOURCE is a single-column result set of table names in <schema_name>.<table_name> format.

ParametersDescription
DATA SOURCEOptional keyword for clarity
<schema name>Name of the schema containing the data source to list
<data source name>Name of the data source whose accessible tables will be listed
WITH OPTIONSOptional indicator that a comma-delimited list of filtering option/value assignments will follow. See List Data Source Options for the full list of options.

For example, to list all the tables accessible through a data source, kin_ds_jdbc:

LIST DATA SOURCE (All Tables) Example
LIST DATA SOURCE kin_ds_jdbc

To list all tables in a specific schema:

LIST DATA SOURCE (Filter by Schema) Example
LIST DATA SOURCE kin_ds_jdbc
WITH OPTIONS (SCHEMA = 'ki_catalog')

To list tables matching a specific name pattern, regardless of catalog/schema:

LIST DATA SOURCE (Filter by Table) Example
LIST DATA SOURCE kin_ds_jdbc
WITH OPTIONS (TABLE = '%role%')

List Data Source Options

The _ and % characters can be used in any of these options to match any single character or any number of characters in object names, respectively.

If multiple options are used, the tables must match all specified criteria.

ParametersDescription
CATALOGOnly show tables within the specified catalog(s)
SCHEMAOnly show tables within the specified schema(s)
TABLEOnly show tables matching the specified name(s)
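
For example, the two filters from the preceding examples can be combined; only tables that satisfy both the schema and the table name pattern are listed:

LIST DATA SOURCE (Combined Filters) Example
LIST DATA SOURCE kin_ds_jdbc
WITH OPTIONS (SCHEMA = 'ki_catalog', TABLE = '%role%')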

DROP DATA SOURCE

Removes an existing data source.

DROP DATA SOURCE Syntax
DROP [EXTERNAL] DATA SOURCE [<schema name>.]<data source name>

Note

Any external tables or subscriptions initiated by a LOAD INTO command that depend on a given data source must be dropped before the data source itself can be dropped.

ParametersDescription
EXTERNALOptional keyword for clarity
<schema name>Name of the schema containing the data source to remove
<data source name>Name of the data source to remove

To drop a data source, kin_ds:

DROP DATA SOURCE Example
DROP DATA SOURCE kin_ds

SHOW DATA SOURCE

Outputs the DDL statement required to reconstruct the given data source.

SHOW DATA SOURCE Syntax
SHOW [EXTERNAL] DATA SOURCE < [<schema name>.]<data source name> | * >

The PASSWORD value will be masked and would need to be replaced with the actual password if attempting to reconstruct the data source.

Note

The response to SHOW DATA SOURCE is a single-column result set with the DDL statement as the value in the DDL column.

ParametersDescription
EXTERNALOptional keyword for clarity
<schema name>Name of the schema containing the data source to show
<data source name>Name of the data source for which the DDL will be output; use * instead of schema/data source name to output the DDL of all data sources

For example, to output the DDL for a data source, kin_ds:

SHOW DATA SOURCE Example
SHOW DATA SOURCE kin_ds

To output the DDL for all data sources:

SHOW DATA SOURCE (All Data Sources) Example
SHOW DATA SOURCE *

DESCRIBE DATA SOURCE

Outputs the configuration of an existing data source.

DESCRIBE DATA SOURCE Syntax
DESC[RIBE] [EXTERNAL] DATA SOURCE < [<schema name>.]<data source name> | * >

Note

The response to DESCRIBE DATA SOURCE is a three-column result set:

  • DATA_SOURCE - name of the data source
  • STORAGE_PROVIDER_TYPE - keyword associated with storage provider
  • ADDITIONAL_INFO - data source configuration; passwords will be redacted
ParametersDescription
EXTERNALOptional keyword for clarity
<schema name>Name of the schema containing the data source to describe
<data source name>Name of the data source whose configuration will be output; use * instead of schema/data source name to output the configuration of all data sources

To show the configuration for a data source, kin_ds:

DESCRIBE DATA SOURCE Example
DESCRIBE DATA SOURCE kin_ds

To show the configuration for all data sources:

DESCRIBE DATA SOURCE (All Data Sources) Example
DESCRIBE DATA SOURCE *

CREATE DATA SINK

Creates a new data sink, which contains the location and connection information for a data consumer that is external to Kinetica. A data sink serves as an authentication & access mechanism to the remote resource.

CREATE DATA SINK Syntax
CREATE [OR REPLACE] [EXTERNAL] DATA SINK [<schema name>.]<data sink name>
LOCATION = '<consumer>://<host>[:<port>]'
[WITH OPTIONS (<option name> = '<option value>'[,...])]

A data sink can be referenced in an EXPORT ... INTO call (for exporting local table data to a remote system) or a CREATE STREAM call (for streaming local table data to a remote system).

Note

The data sink will be validated upon creation, by default, and will fail to be created if an authorized connection cannot be established.

The following data sink consumers are supported:

  • Azure (Microsoft blob storage)
  • CData (CData Software source-specific JDBC driver)
  • GCS (Google Cloud Storage)
  • HDFS (Apache Hadoop Distributed File System)
  • JDBC (Java Database Connectivity, using a user-supplied driver)
  • Kafka (Apache Kafka streaming feed)
  • S3 (Amazon S3 Bucket)
  • Webhook (HTTP/HTTPS)

Note

  • CData data sinks can use a JDBC credential for authentication.
  • Kafka data sinks will be validated upon creation, by default, and will fail to be created if an authorized connection cannot be established.

For provider-specific syntax, see Consumer-Specific Syntax. For provider-specific examples, see Create Data Sinks.

ParametersDescription
OR REPLACEAny existing data sink with the same name will be dropped before creating this one
EXTERNALOptional keyword for clarity
<schema name>Name of the schema that will contain the created data sink; if no schema is specified, the data sink will be created in the user's default schema
<data sink name>Name of the data sink to create; must adhere to the supported naming criteria
<consumer>

Consumer of the data sink

Supported consumers include:

ConsumerDescription
AZUREMicrosoft Azure blob storage
GCSGoogle Cloud Storage
HDFSApache Hadoop Distributed File System
HTTPUnsecured webhook
HTTPSSecured webhook
JDBC

JDBC connection, where LOCATION is the JDBC or CData JDBC URL.

For example, a PostgreSQL JDBC data sink might have:

LOCATION = 'jdbc:postgresql://example.com:5432/mydb'

A PostgreSQL via CData JDBC data sink might have:

LOCATION = 'jdbc:postgresql:Server=example.com;Port=5432;Database=mydb'

See the driver list for the full list of supported CData JDBC drivers.

KAFKAApache Kafka broker
S3Amazon S3 bucket
<host>Host to use to connect to the data sink
<port>Port to use to connect to the data sink
WITH OPTIONSOptional indicator that a comma-delimited list of connection option/value assignments will follow. See Data Sink Options for the full list of options.

To create a data sink, kin_dsink, that connects to an Apache Kafka broker:

CREATE DATA SINK Example
CREATE DATA SINK kin_dsink
LOCATION = 'kafka://kafka.abc.com:9092'
WITH OPTIONS
(
    CREDENTIAL = 'kafka_credential',
    KAFKA_TOPIC_NAME = 'kafka_topic'
)

Data Sink Options

OptionProviderDescription
CREDENTIALAnyCredential object to use to authenticate to the remote consumer
VALIDATEAnyWhether to test the connection to the data sink upon creation; if TRUE (default), the creation of a data sink that cannot be connected to will fail; if FALSE, the data sink will be created regardless of connectivity
WAIT TIMEOUTAnyTimeout in seconds for reading from the consumer
JDBC_DRIVER_CLASS_NAMEJDBCJDBC driver class name (optional, if the name is available in the JAR file's manifest)
JDBC_DRIVER_JAR_PATHJDBCKiFS path of the JDBC driver JAR file to use
CONNECTION TIMEOUTKAFKATimeout in seconds for connecting to a consumer
KAFKA_TOPIC_NAMEKAFKAKafka topic to write to
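
As with data sources, connection options can be combined; the following is a minimal sketch of a Kafka data sink created with validation disabled and explicit timeouts, reusing the broker, credential, and topic from the example above (the sink name is hypothetical):

CREATE DATA SINK (Connection Options) Sketch
CREATE DATA SINK kin_dsink_tuned
LOCATION = 'kafka://kafka.abc.com:9092'
WITH OPTIONS
(
    CREDENTIAL = 'kafka_credential',
    KAFKA_TOPIC_NAME = 'kafka_topic',
    CONNECTION TIMEOUT = '10',    -- seconds allowed for connecting to the broker
    WAIT TIMEOUT = '10',          -- seconds allowed for reads from the consumer
    VALIDATE = 'false'            -- create the data sink even if it cannot be reached
)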

Consumer-Specific Syntax

Several authentication schemes across multiple consumers are supported.

Azure BLOB
CREATE DATA SINK [<data sink schema name>.]<data sink name>
LOCATION = 'AZURE[://<host>]'
WITH OPTIONS
(
    CREDENTIAL = '[<credential schema name>.]<credential name>',
    CONTAINER NAME = '<azure container name>'
)
CData (Credential)
CREATE DATA SINK [<data sink schema name>.]<data sink name>
LOCATION = '<cdata jdbc url>'
WITH OPTIONS (CREDENTIAL = '[<credential schema name>.]<credential name>')
CData (Password in URL)
CREATE DATA SINK [<schema name>.]<data sink name>
LOCATION = '<cdata jdbc url with username/password>'
GCS
CREATE DATA SINK [<data sink schema name>.]<data sink name>
LOCATION = 'GCS[://<host>]'
WITH OPTIONS
(
    GCS_SERVICE_ACCOUNT_KEYS = '<gcs account json key text>',
    [GCS_PROJECT_ID = '<gcs project id>',]
    GCS_BUCKET_NAME = '<gcs bucket name>'
)
HDFS
CREATE DATA SINK [<data sink schema name>.]<data sink name>
LOCATION = 'HDFS://<host>:<port>'
WITH OPTIONS (CREDENTIAL = '[<credential schema name>.]<credential name>')
JDBC
CREATE DATA SINK [<data sink schema name>.]<data sink name>
LOCATION = '<jdbc url>'
WITH OPTIONS
(
    CREDENTIAL = '[<credential schema name>.]<credential name>',
    JDBC_DRIVER_CLASS_NAME = '<jdbc driver class full path>',
    JDBC_DRIVER_JAR_PATH = 'kifs://<jdbc driver jar path>'
)
Kafka (Credential)
CREATE DATA SINK [<data sink schema name>.]<data sink name>
LOCATION = 'kafka://<host>[:<port>]'
WITH OPTIONS
(
    CREDENTIAL = '[<credential schema name>.]<credential name>',
    KAFKA_TOPIC_NAME = '<kafka topic name>'
)
Kafka (Public/No Auth)
CREATE DATA SINK [<schema name>.]<data sink name>
LOCATION = 'kafka://<host>[:<port>]'
WITH OPTIONS
(
    KAFKA_TOPIC_NAME = '<kafka topic name>'
)
Amazon S3
CREATE DATA SINK [<data sink schema name>.]<data sink name>
LOCATION = 'S3[://<host>]'
WITH OPTIONS
(
    CREDENTIAL = '[<credential schema name>.]<credential name>',
    BUCKET NAME = '<aws bucket name>',
    REGION = '<aws region>'
)
Webhook (Credential w/ HTTPS)
CREATE DATA SINK [<data sink schema name>.]<data sink name>
LOCATION = 'https://<host>[:<port>]'
WITH OPTIONS
(
    CREDENTIAL = '[<credential schema name>.]<credential name>'
)
Webhook (HTTP)
CREATE DATA SINK [<schema name>.]<data sink name>
LOCATION = 'http://<host>[:<port>]'

ALTER DATA SINK

Alters the connection parameters of an existing data sink.

ALTER DATA SINK Syntax
ALTER [EXTERNAL] DATA SINK [<schema name>.]<data sink name>
SET PROPERTY
    <property name> = '<property value>'[,...]

Note

Kafka data sinks will be validated upon alteration, by default, and will fail to be altered if an authorized connection cannot be established.

ParametersDescription
EXTERNALOptional keyword for clarity
<schema name>Name of the schema containing the data sink to alter
<data sink name>Name of the data sink to alter
SET PROPERTY

Indicator that a comma-delimited list of alterations to make will follow.

See Set Properties for the complete list of properties.

To alter a data sink, kin_dsink, updating the timeouts:

ALTER DATA SINK Example
ALTER DATA SINK kin_dsink
SET PROPERTY
    CONNECTION TIMEOUT = '10',
    WAIT TIMEOUT = '10'

Set Properties

All data sink properties can be altered via ALTER DATA SINK. The following are the property names and descriptions to use when performing an alteration.

OptionProviderDescription
CREDENTIALAnyCredential object to use to authenticate to the remote consumer
LOCATIONAnyLocation of the data sink; see Consumer-Specific Syntax for details
VALIDATEAnyWhether to test the connection to the data sink upon creation; if TRUE (default), the creation of a data sink that cannot be connected to will fail; if FALSE, the data sink will be created regardless of connectivity
WAIT TIMEOUTAnyTimeout in seconds for reading from the consumer
JDBC_DRIVER_CLASS_NAMEJDBCJDBC driver class name (optional, if the name is available in the JAR file's manifest)
JDBC_DRIVER_JAR_PATHJDBCKiFS path of the JDBC driver JAR file to use
CONNECTION TIMEOUTKAFKATimeout in seconds for connecting to a consumer
KAFKA_TOPIC_NAMEKAFKAKafka topic to write to
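
For instance, the following is a minimal sketch that points the kin_dsink data sink at a different topic; the replacement topic name is hypothetical:

ALTER DATA SINK (Topic) Sketch
ALTER DATA SINK kin_dsink
SET PROPERTY
    KAFKA_TOPIC_NAME = 'kafka_topic_v2'    -- hypothetical replacement topic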

DROP DATA SINK

Removes an existing data sink.

DROP DATA SINK Syntax
DROP [EXTERNAL] DATA SINK [<schema name>.]<data sink name>

Note

Any streams that depend on a given data sink must be dropped before the data sink itself can be dropped.

ParametersDescription
EXTERNALOptional keyword for clarity
<schema name>Name of the schema containing the data sink to remove
<data sink name>Name of the data sink to remove

To drop a data sink, kin_dsink:

DROP DATA SINK Example
DROP DATA SINK kin_dsink

SHOW DATA SINK

Outputs the DDL statement required to reconstruct the given data sink.

SHOW DATA SINK Syntax
SHOW [EXTERNAL] DATA SINK < [<schema name>.]<data sink name> | * >

Note

The response to SHOW DATA SINK is a single-column result set with the DDL statement as the value in the DDL column.

ParametersDescription
EXTERNALOptional keyword for clarity
<schema name>Name of the schema containing the data sink to show
<data sink name>Name of the data sink for which the DDL will be output; use * instead of schema/data sink name to output the DDL of all data sinks

For example, to output the DDL for a data sink, kin_dsink:

SHOW DATA SINK Example
SHOW DATA SINK kin_dsink

To output the DDL for all data sinks:

SHOW DATA SINK (All Data Sinks) Example
SHOW DATA SINK *

DESCRIBE DATA SINK

Outputs the configuration of an existing data sink.

DESCRIBE DATA SINK Syntax
DESC[RIBE] [EXTERNAL] DATA SINK < [<schema name>.]<data sink name> | * >

Note

The response to DESCRIBE DATA SINK is a three-column result set:

  • DATA_SINK - name of the data sink
  • STORAGE_PROVIDER_TYPE - keyword identifying data sink consumer
  • ADDITIONAL_INFO - data sink configuration
ParametersDescription
EXTERNALOptional keyword for clarity
<schema name>Name of the schema containing the data sink to describe
<data sink name>Name of the data sink whose configuration will be output; use * instead of schema/data sink name to output the configuration of all data sinks

To show the configuration for a data sink, kin_dsink:

DESCRIBE DATA SINK Example
DESC DATA SINK kin_dsink

To show the configuration for all data sinks:

DESCRIBE DATA SINK (All Data Sinks) Example
DESC DATA SINK *

CREATE STREAM

Creates a new data stream (natively, a table monitor), which publishes changes in a given table to a target.

CREATE STREAM Syntax
CREATE STREAM [<stream schema name>.]<stream name>
ON [TABLE] [<table schema name>.]<table name>
[
    REFRESH
    <
        ON CHANGE |
        EVERY <number> <SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]> [STARTING AT '<YYYY-MM-DD [HH:MM[:SS]]>']
    >
]
[WHERE <filter expression>]
WITH OPTIONS
(
    < DATASINK_NAME = '<data sink name>' | DESTINATION = '<destination>' >,
    <option name> = '<option value>'[,...]
)

The monitored table can be one of:

  • a table
  • a materialized view

The target can be one of:

  • an external Apache Kafka broker
  • an external webhook

By default, a stream will publish inserted records to the target. A stream can alternatively be configured to monitor for updates or deletes and publish the corresponding record counts. Streams that monitor for inserts can have a filter expression applied so that only the inserts of interest are published.

An existing data sink can be referenced in creating a stream, as the external target of the streamed data. Only unauthenticated external targets may be used if not using a data sink.

When the source table of a stream is altered or dropped, the stream will also be dropped.

ParametersDescription
<stream schema name>Name of the schema that will contain the created stream; if no schema is specified, the stream will be created in the user's default schema
<stream name>Name of the stream to create; must adhere to the supported naming criteria
TABLEOptional keyword for clarity.
<table schema name>Name of the schema containing the table or materialized view to monitor.
<table name>Name of the table or materialized view to monitor for changes.
REFRESH

Specifies the reporting scheme for monitored changes. The following schemes are available:

ConstantDescription
ON CHANGEWill cause notifications to be streamed any time a record is added, modified, or deleted from the monitored table
EVERYAllows specification of an interval in seconds, minutes, hours, or days, with the optional specification of a starting time at which the first monitor interval will run; if no start time is specified, the default will be an interval's worth of time from the point at which the stream was created
<filter expression>Boolean expression that can be used to monitor for only a specific set of inserts.
WITH OPTIONS

Indicator that a comma-delimited list of configuration option/value assignments will follow. See Stream Options for the full list of options.

Note

One of either DATASINK_NAME or DESTINATION is required.

To create a stream, kin_stream, that publishes orders over $10,000, as they are inserted into the order_stream table, via the kin_dsink data sink:

CREATE STREAM Example
CREATE STREAM kin_stream ON example.order_stream
WHERE cost > 10000.00
WITH OPTIONS (DATASINK_NAME = 'kin_dsink')

To create a stream, kin_stream, that publishes orders placed within a given geofence, as they are inserted into the order_stream table, via the kin_dsink data sink:

CREATE STREAM Geofence Example
CREATE STREAM kin_stream ON example.order_stream
REFRESH ON CHANGE
WHERE STXY_INTERSECTS(lon, lat, 'POLYGON((
    -108.97203757374318 40.945055425026034,
    -101.72106101124318 40.945055425026034,
    -101.72106101124318 36.98324941198095,
    -108.97203757374318 36.98324941198095,
    -108.97203757374318 40.945055425026034
    ))')
WITH OPTIONS (DATASINK_NAME = 'kin_dsink')

Stream Options

OptionConsumerDescription
DATASINK_NAMEAnyData sink object to use to identify the consumer of this stream; mutually exclusive with destination
DESTINATIONAny

Location of the external target when not using a data sink; the URL of the consumer to connect to in this format:

<consumer>://<host>[:<port>]

Supported consumers include:

ConsumerDescription
httpUnsecured webhook
httpsSecured webhook
kafkaApache Kafka broker
EVENTAny

Specifies the type of event to monitor

Event TypeDescription
insert(default) Monitor inserts and publish them to the target
updateMonitor updates and publish their counts to the target
deleteMonitor deletes and publish their counts to the target
INCREASING_COLUMNAny

Column of the monitored table that contains ever-increasing values for new data; e.g., a timestamp or sequence ID column

Tip

If such a column can be identified, the performance of the stream will improve.

KAFKA_TOPIC_NAMEkafkaKafka topic to write to, if the target is a Kafka broker
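
For example, the following is a minimal sketch of a periodic stream over the order_stream table that also names an ever-increasing column; the stream name and the order_ts column are hypothetical:

CREATE STREAM (Options) Sketch
CREATE STREAM kin_batch_stream ON example.order_stream
REFRESH EVERY 5 MINUTES
WITH OPTIONS
(
    DATASINK_NAME = 'kin_dsink',
    INCREASING_COLUMN = 'order_ts'    -- hypothetical ever-increasing timestamp column
)

The EVENT option can be added in the same way to publish update or delete counts instead of inserted records.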

Consumer-Specific Syntax

Several configurations across multiple consumers are supported.

HTTP
CREATE STREAM [<stream schema name>.]<stream name>
ON [TABLE] [<table schema name>.]<table name>
[WHERE <filter expression>]
WITH OPTIONS (DESTINATION = 'http://<host>[:<port>]')
HTTPS
CREATE STREAM [<stream schema name>.]<stream name>
ON [TABLE] [<table schema name>.]<table name>
[WHERE <filter expression>]
WITH OPTIONS (DESTINATION = 'https://<host>[:<port>]')
Kafka (No auth)
CREATE STREAM [<stream schema name>.]<stream name>
ON [TABLE] [<table schema name>.]<table name>
[WHERE <filter expression>]
WITH OPTIONS
(
    DESTINATION = 'kafka://<host>:<port>',
    KAFKA_TOPIC_NAME = '<kafka topic name>'
)
Data Sink
CREATE STREAM [<stream schema name>.]<stream name>
ON [TABLE] [<table schema name>.]<table name>
[WHERE <filter expression>]
WITH OPTIONS (DATASINK_NAME = '[<data sink schema name>.]<data sink name>')

DROP STREAM

Removes an existing stream.

DROP STREAM Syntax
DROP STREAM [<schema name>.]<stream name>
ParametersDescription
<schema name>Name of the schema containing the stream to remove
<stream name>Name of the stream to remove

To drop a stream, kin_stream:

DROP STREAM Example
DROP STREAM kin_stream

SHOW STREAM

Outputs the DDL statement required to reconstruct one or more existing streams.

SHOW STREAM Syntax
SHOW STREAM < [<schema name>.]<stream name> | <schema name>.* | * >

Listing options:

  • [<schema name>.]<stream name> - output the DDL statement of the given stream
  • <schema name>.* - output the DDL statements of all streams under the given schema
  • * - output the DDL statements of all streams

Note

The response to SHOW STREAM is a single-column result set with the DDL statement as the value in the DDL column.

ParametersDescription
<schema name>Name of the schema containing the stream(s) to show
<stream name>Name of the stream to show

For example, to output the DDL for a stream, kin_stream:

SHOW STREAM Example
SHOW STREAM kin_stream

DESCRIBE STREAM

Outputs the configuration of one or more existing streams.

DESCRIBE STREAM Syntax
DESC[RIBE] STREAM < [<schema name>.]<stream name> | <schema name>.* | * >

Listing options:

  • [<schema name>.]<stream name> - output the configuration of the given stream
  • <schema name>.* - output the configuration of all streams under the given schema
  • * - output the configuration of all streams

Note

The response to DESCRIBE STREAM is a six-column result set:

  • MONITOR_ID - unique name or topic ID for the stream
  • TABLE_NAME - name of the table being monitored
  • EVENT - table data event that triggers streamed content
  • INCREASING_COLUMN - column in table being monitored that contains ever-increasing data values
  • FILTER_EXPRESSION - filter used to monitor only data of interest
  • DATASINK_NAME - name of the data sink associated with the consumer
ParametersDescription
<schema name>Name of the schema containing the stream(s) to describe
<stream name>Name of the stream to describe

To show the configuration for a stream, kin_stream:

DESCRIBE STREAM Example
DESC STREAM kin_stream

DESCRIBE

Lists the contained tables and views of a given schema, or lists the columns and column types & properties for a given table or view.

DESCRIBE Schema Syntax
DESC[RIBE] <schema name>
DESCRIBE Table Syntax
DESC[RIBE] [<schema name>.]<table/view name>

For example, to describe the tables contained in the demo schema, into which demo data is usually downloaded:

DESCRIBE Schema Example
DESC demo
DESCRIBE Schema Output
+------------+
| Set_name   |
+------------+
| stocks     |
| nyctaxi    |
| flights    |
+------------+

To describe the example table created in the CREATE TABLE section:

DESCRIBE Table Example
1
DESC example.various_types
DESCRIBE Table Output
+-----------+--------+------------+-----------------------------------+
| Col_num   | Name   | Null?      | Type                              |
+-----------+--------+------------+-----------------------------------+
| 0         | i      | NOT NULL   | INTEGER (primary_key)             |
| 1         | bi     | NOT NULL   | BIGINT (primary_key, shard_key)   |
| 2         | b      |            | BOOLEAN                           |
| 3         | ub     |            | UNSIGNED BIGINT                   |
| 4         | r      |            | REAL                              |
| 5         | d      |            | DOUBLE                            |
| 6         | s      |            | VARCHAR (text_search)             |
| 7         | c      |            | VARCHAR (32, dict)                |
| 8         | p      |            | VARCHAR (256, text_search)        |
| 9         | ip     |            | IPV4                              |
| 10        | ui     |            | UUID (init_with_uuid)             |
| 11        | ts     |            | TIMESTAMP                         |
| 12        | td     |            | DATE                              |
| 13        | tt     |            | TIME                              |
| 14        | dt     |            | DATETIME (init_with_now)          |
| 15        | dc1    |            | BIGINT                            |
| 16        | dc2    |            | DECIMAL (18, 4)                   |
| 17        | dc3    |            | REAL                              |
| 18        | dc4    |            | DOUBLE                            |
| 19        | n      |            | DECIMAL (18, 4)                   |
| 20        | byt    |            | BLOB                              |
| 21        | w      |            | GEOMETRY                          |
| 22        | j      |            | JSON                              |
| 23        | v      |            | VECTOR (10)                       |
| 24        | ai     |            | INTEGER[3]                        |
+-----------+--------+------------+-----------------------------------+