Kinetica C# API  Version 7.1.10.0
kinetica.InsertRecordsFromQueryRequest.Options Struct Reference

Optional parameters. More...

Public Attributes

const string BAD_RECORD_TABLE_NAME = "bad_record_table_name"
 Optional name of a table to which records that were rejected are written. More...
 
const string BAD_RECORD_TABLE_LIMIT = "bad_record_table_limit"
 A positive integer indicating the maximum number of records that can be written to the bad-record-table. More...
 
const string BATCH_SIZE = "batch_size"
 Number of records per batch when inserting data. More...
 
const string DATASOURCE_NAME = "datasource_name"
 Name of an existing external data source from which the table will be loaded. More...
 
const string ERROR_HANDLING = "error_handling"
 Specifies how errors should be handled upon insertion. More...
 
const string PERMISSIVE = "permissive"
 Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped. More...
 
const string IGNORE_BAD_RECORDS = "ignore_bad_records"
 Malformed records are skipped. More...
 
const string ABORT = "abort"
 Stops current insertion and aborts entire operation when an error is encountered. More...
 
const string IGNORE_EXISTING_PK = "ignore_existing_pk"
 Specifies the record collision error-suppression policy for inserting into a table with a primary key, only used when not in upsert mode (upsert mode is disabled when update_on_existing_pk is false). More...
 
const string TRUE = "true"
 Upsert new records when primary keys match existing records More...
 
const string FALSE = "false"
 Reject new records when primary keys match existing records More...
 
const string INGESTION_MODE = "ingestion_mode"
 Whether to do a full load, dry run, or perform a type inference on the source data. More...
 
const string FULL = "full"
 Run a type inference on the source data (if needed) and ingest More...
 
const string DRY_RUN = "dry_run"
 Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of error_handling. More...
 
const string TYPE_INFERENCE_ONLY = "type_inference_only"
 Infer the type of the source data and return, without ingesting any data. More...
 
const string JDBC_FETCH_SIZE = "jdbc_fetch_size"
 The JDBC fetch size, which determines how many rows to fetch per round trip. More...
 
const string JDBC_SESSION_INIT_STATEMENT = "jdbc_session_init_statement"
 Executes the statement once per JDBC session before performing the actual load. More...
 
const string NUM_SPLITS_PER_RANK = "num_splits_per_rank"
 Optional: number of splits for reading data per rank. More...
 
const string NUM_TASKS_PER_RANK = "num_tasks_per_rank"
 Optional: number of tasks for reading data per rank. More...
 
const string PRIMARY_KEYS = "primary_keys"
 Optional: comma-separated list of column names to set as primary keys, when not specified in the type. More...
 
const string SHARD_KEYS = "shard_keys"
 Optional: comma-separated list of column names to set as shard keys, when not specified in the type. More...
 
const string SUBSCRIBE = "subscribe"
 Continuously poll the data source to check for new data and load it into the table. More...
 
const string TRUNCATE_TABLE = "truncate_table"
 If set to true, truncates the table specified by table_name prior to loading the data. More...
 
const string REMOTE_QUERY = "remote_query"
 Remote SQL query from which data will be sourced More...
 
const string REMOTE_QUERY_ORDER_BY = "remote_query_order_by"
 Name of the column to be used for splitting the query into multiple sub-queries, using the ordering of that column. More...
 
const string REMOTE_QUERY_FILTER_COLUMN = "remote_query_filter_column"
 Name of the column to be used for splitting the query into multiple sub-queries, using the data distribution of that column. More...
 
const string REMOTE_QUERY_INCREASING_COLUMN = "remote_query_increasing_column"
 Column in the subscribed remote query result whose value increases for new records (e.g., TIMESTAMP). More...
 
const string REMOTE_QUERY_PARTITION_COLUMN = "remote_query_partition_column"
 Alias name for remote_query_filter_column. More...
 
const string TRUNCATE_STRINGS = "truncate_strings"
 If set to true, truncate string values that are longer than the column's type size. More...
 
const string UPDATE_ON_EXISTING_PK = "update_on_existing_pk"
 Specifies the record collision policy for inserting into a table with a primary key. More...
 

Detailed Description

Optional parameters.

  • BAD_RECORD_TABLE_NAME: Optional name of a table to which records that were rejected are written. The bad-record table has the following columns: line_number (long), line_rejected (string), error_message (string). When error_handling is abort, the bad-record table is not populated.
  • BAD_RECORD_TABLE_LIMIT: A positive integer indicating the maximum number of records that can be written to the bad-record table. The default value is 10000.
  • BATCH_SIZE: Number of records per batch when inserting data.
  • DATASOURCE_NAME: Name of an existing external data source from which the table will be loaded.
  • ERROR_HANDLING: Specifies how errors should be handled upon insertion. Supported values:
    • PERMISSIVE: Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped.
    • IGNORE_BAD_RECORDS: Malformed records are skipped.
    • ABORT: Stops current insertion and aborts entire operation when an error is encountered. Primary key collisions are considered abortable errors in this mode.
    The default value is ABORT.
  • IGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for inserting into a table with a primary key, only used when not in upsert mode (upsert mode is disabled when update_on_existing_pk is false). If set to true, any record being inserted that is rejected for having primary key values that match those of an existing table record will be ignored with no error generated. If false, the rejection of any record for having primary key values matching an existing record will result in an error being reported, as determined by error_handling. If the specified table does not have a primary key or if upsert mode is in effect (update_on_existing_pk is true), then this option has no effect. Supported values:
    • TRUE: Ignore new records whose primary key values collide with those of existing records
    • FALSE: Treat as errors any new records whose primary key values collide with those of existing records
    The default value is FALSE.
  • INGESTION_MODE: Whether to do a full load, dry run, or perform a type inference on the source data. Supported values:
    • FULL: Run a type inference on the source data (if needed) and ingest
    • DRY_RUN: Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of error_handling.
    • TYPE_INFERENCE_ONLY: Infer the type of the source data and return, without ingesting any data. The inferred type is returned in the response.
    The default value is FULL.
  • JDBC_FETCH_SIZE: The JDBC fetch size, which determines how many rows to fetch per round trip.
  • JDBC_SESSION_INIT_STATEMENT: Executes the statement once per JDBC session before performing the actual load. The default value is ''.
  • NUM_SPLITS_PER_RANK: Optional: number of splits for reading data per rank. If not specified, defaults to the system configuration parameter external_file_reader_num_tasks. The default value is ''.
  • NUM_TASKS_PER_RANK: Optional: number of tasks for reading data per rank. If not specified, defaults to the system configuration parameter external_file_reader_num_tasks.
  • PRIMARY_KEYS: Optional: comma-separated list of column names to set as primary keys, when not specified in the type. The default value is ''.
  • SHARD_KEYS: Optional: comma-separated list of column names to set as shard keys, when not specified in the type. The default value is ''.
  • SUBSCRIBE: Continuously poll the data source to check for new data and load it into the table. Supported values: TRUE, FALSE. The default value is FALSE.
  • TRUNCATE_TABLE: If set to true, truncates the table specified by table_name prior to loading the data. Supported values: TRUE, FALSE. The default value is FALSE.
  • REMOTE_QUERY: Remote SQL query from which data will be sourced.
  • REMOTE_QUERY_ORDER_BY: Name of the column to be used for splitting the query into multiple sub-queries, using the ordering of that column. The default value is ''.
  • REMOTE_QUERY_FILTER_COLUMN: Name of the column to be used for splitting the query into multiple sub-queries, using the data distribution of that column. The default value is ''.
  • REMOTE_QUERY_INCREASING_COLUMN: Column in the subscribed remote query result whose value increases for new records (e.g., TIMESTAMP). The default value is ''.
  • REMOTE_QUERY_PARTITION_COLUMN: Alias name for remote_query_filter_column. The default value is ''.
  • TRUNCATE_STRINGS: If set to true, truncate string values that are longer than the column's type size. Supported values: TRUE, FALSE. The default value is FALSE.
  • UPDATE_ON_EXISTING_PK: Specifies the record collision policy for inserting into a table with a primary key. If set to true, any existing table record with primary key values that match those of a record being inserted will be replaced by that new record (the new data will be "upserted"). If set to false, any existing table record with primary key values that match those of a record being inserted will remain unchanged, while the new record will be rejected and the error handled as determined by ignore_existing_pk & error_handling. If the specified table does not have a primary key, then this option has no effect. Supported values:
    • TRUE: Upsert new records when primary keys match existing records
    • FALSE: Reject new records when primary keys match existing records
    The default value is FALSE.

A set of string constants for the parameter options. The default value for the options parameter is an empty Dictionary.
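The option keys and values above are plain strings, so the options map is an ordinary Dictionary<string, string>. Below is a minimal sketch of assembling one for a full load that routes rejected records to a bad-record table instead of aborting; the table name and numeric limits are hypothetical, and in real code the Options constants (e.g. Options.ERROR_HANDLING) resolve to these same literals:

```csharp
using System;
using System.Collections.Generic;

class LoadOptionsSketch
{
    // Build an options map for a full load that sends rejected records
    // to a bad-record table instead of aborting the whole operation.
    public static Dictionary<string, string> BuildLoadOptions()
    {
        return new Dictionary<string, string>
        {
            { "error_handling", "permissive" },      // Options.ERROR_HANDLING / Options.PERMISSIVE
            { "bad_record_table_name", "bad_recs" }, // hypothetical table name
            { "bad_record_table_limit", "5000" },    // cap on rejected records written
            { "batch_size", "50000" },               // records per insert batch
            { "ingestion_mode", "full" }             // Options.INGESTION_MODE / Options.FULL
        };
    }

    static void Main()
    {
        var options = BuildLoadOptions();
        Console.WriteLine(options["error_handling"]); // prints "permissive"
    }
}
```

The resulting dictionary would be passed as the options argument when constructing the InsertRecordsFromQueryRequest.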

Definition at line 765 of file InsertRecordsFromQuery.cs.

Member Data Documentation

const string kinetica.InsertRecordsFromQueryRequest.Options.ABORT = "abort"

Stops current insertion and aborts entire operation when an error is encountered.

Primary key collisions are considered abortable errors in this mode.

Definition at line 827 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.BAD_RECORD_TABLE_LIMIT = "bad_record_table_limit"

A positive integer indicating the maximum number of records that can be written to the bad-record-table.

The default value is 10000.

Definition at line 778 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.BAD_RECORD_TABLE_NAME = "bad_record_table_name"

Optional name of a table to which records that were rejected are written.

The bad-record table has the following columns: line_number (long), line_rejected (string), error_message (string). When error_handling is abort, the bad-record table is not populated.

Definition at line 773 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.BATCH_SIZE = "batch_size"

Number of records per batch when inserting data.

Definition at line 782 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.DATASOURCE_NAME = "datasource_name"

Name of an existing external data source from which the table will be loaded.

Definition at line 786 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.DRY_RUN = "dry_run"

Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of error_handling.

Definition at line 913 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.ERROR_HANDLING = "error_handling"

Specifies how errors should be handled upon insertion.

Supported values:

  • PERMISSIVE: Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped.
  • IGNORE_BAD_RECORDS: Malformed records are skipped.
  • ABORT: Stops current insertion and aborts entire operation when an error is encountered. Primary key collisions are considered abortable errors in this mode.

The default value is ABORT.

Definition at line 814 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.FALSE = "false"

Reject new records when primary keys match existing records

Definition at line 874 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.FULL = "full"

Run a type inference on the source data (if needed) and ingest

Definition at line 908 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.IGNORE_BAD_RECORDS = "ignore_bad_records"

Malformed records are skipped.

Definition at line 822 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.IGNORE_EXISTING_PK = "ignore_existing_pk"

Specifies the record collision error-suppression policy for inserting into a table with a primary key, only used when not in upsert mode (upsert mode is disabled when update_on_existing_pk is false).

If set to true, any record being inserted that is rejected for having primary key values that match those of an existing table record will be ignored with no error generated. If false, the rejection of any record for having primary key values matching an existing record will result in an error being reported, as determined by error_handling. If the specified table does not have a primary key or if upsert mode is in effect (update_on_existing_pk is true), then this option has no effect. Supported values:

  • TRUE: Ignore new records whose primary key values collide with those of existing records
  • FALSE: Treat as errors any new records whose primary key values collide with those of existing records

The default value is FALSE.

Definition at line 866 of file InsertRecordsFromQuery.cs.
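The interaction with update_on_existing_pk can be sketched as follows, using the documented literal key and value strings; the point is the pairing of the two options in non-upsert mode:

```csharp
using System;
using System.Collections.Generic;

class IgnoreExistingPkSketch
{
    // With upsert mode off (update_on_existing_pk = false), setting
    // ignore_existing_pk = true makes a primary-key collision silently
    // drop the incoming record instead of reporting an error.
    public static Dictionary<string, string> Build()
    {
        return new Dictionary<string, string>
        {
            { "update_on_existing_pk", "false" }, // upsert disabled, so ignore_existing_pk applies
            { "ignore_existing_pk", "true" }      // collisions ignored, no error generated
        };
    }

    static void Main()
    {
        Console.WriteLine(Build()["ignore_existing_pk"]); // prints "true"
    }
}
```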

const string kinetica.InsertRecordsFromQueryRequest.Options.INGESTION_MODE = "ingestion_mode"

Whether to do a full load, dry run, or perform a type inference on the source data.

Supported values:

  • FULL: Run a type inference on the source data (if needed) and ingest
  • DRY_RUN: Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of error_handling.
  • TYPE_INFERENCE_ONLY: Infer the type of the source data and return, without ingesting any data. The inferred type is returned in the response.

The default value is FULL.

Definition at line 904 of file InsertRecordsFromQuery.cs.
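A dry run can be sketched by pairing ingestion_mode with an error-handling mode; no data is loaded, and the count of valid records reflects the chosen error_handling:

```csharp
using System;
using System.Collections.Generic;

class DryRunSketch
{
    // Validate the source data without loading it: dry_run counts the
    // records that would be accepted under the chosen error_handling mode.
    public static Dictionary<string, string> Build()
    {
        return new Dictionary<string, string>
        {
            { "ingestion_mode", "dry_run" },           // Options.DRY_RUN
            { "error_handling", "ignore_bad_records" } // malformed rows just reduce the count
        };
    }

    static void Main()
    {
        Console.WriteLine(Build()["ingestion_mode"]); // prints "dry_run"
    }
}
```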

const string kinetica.InsertRecordsFromQueryRequest.Options.JDBC_FETCH_SIZE = "jdbc_fetch_size"

The JDBC fetch size, which determines how many rows to fetch per round trip.

Definition at line 922 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.JDBC_SESSION_INIT_STATEMENT = "jdbc_session_init_statement"

Executes the statement once per JDBC session before performing the actual load.

The default value is ''.

Definition at line 926 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.NUM_SPLITS_PER_RANK = "num_splits_per_rank"

Optional: number of splits for reading data per rank.

If not specified, defaults to the system configuration parameter external_file_reader_num_tasks. The default value is ''.

Definition at line 931 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.NUM_TASKS_PER_RANK = "num_tasks_per_rank"

Optional: number of tasks for reading data per rank.

If not specified, defaults to the system configuration parameter external_file_reader_num_tasks.

Definition at line 935 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.PERMISSIVE = "permissive"

Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped.

Definition at line 819 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.PRIMARY_KEYS = "primary_keys"

Optional: comma-separated list of column names to set as primary keys, when not specified in the type.

The default value is ''.

Definition at line 940 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.REMOTE_QUERY = "remote_query"

Remote SQL query from which data will be sourced

Definition at line 983 of file InsertRecordsFromQuery.cs.
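Sourcing from a remote query is typically combined with one of the splitting options so the read can be parallelized. A sketch, where the SQL text and column name are hypothetical:

```csharp
using System;
using System.Collections.Generic;

class RemoteQuerySketch
{
    // Source data from a remote SQL query, split into sub-queries by the
    // data distribution of a column for parallel reads.
    public static Dictionary<string, string> Build()
    {
        return new Dictionary<string, string>
        {
            { "remote_query", "SELECT id, ts, reading FROM telemetry" }, // hypothetical query
            { "remote_query_filter_column", "id" }                       // hypothetical column
        };
    }

    static void Main()
    {
        Console.WriteLine(Build()["remote_query_filter_column"]); // prints "id"
    }
}
```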

const string kinetica.InsertRecordsFromQueryRequest.Options.REMOTE_QUERY_FILTER_COLUMN = "remote_query_filter_column"

Name of the column to be used for splitting the query into multiple sub-queries, using the data distribution of that column.

The default value is ''.

Definition at line 993 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.REMOTE_QUERY_INCREASING_COLUMN = "remote_query_increasing_column"

Column in the subscribed remote query result whose value increases for new records (e.g., TIMESTAMP).

The default value is ''.

Definition at line 998 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.REMOTE_QUERY_ORDER_BY = "remote_query_order_by"

Name of the column to be used for splitting the query into multiple sub-queries, using the ordering of that column.

The default value is ''.

Definition at line 988 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.REMOTE_QUERY_PARTITION_COLUMN = "remote_query_partition_column"

Alias name for remote_query_filter_column.

The default value is ''.

Definition at line 1002 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.SHARD_KEYS = "shard_keys"

Optional: comma-separated list of column names to set as shard keys, when not specified in the type.

The default value is ''.

Definition at line 945 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.SUBSCRIBE = "subscribe"

Continuously poll the data source to check for new data and load it into the table.

Supported values: TRUE, FALSE.

The default value is FALSE.

Definition at line 962 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.TRUE = "true"

Upsert new records when primary keys match existing records

Definition at line 870 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.TRUNCATE_STRINGS = "truncate_strings"

If set to true, truncate string values that are longer than the column's type size.

Supported values: TRUE, FALSE.

The default value is FALSE.

Definition at line 1019 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.TRUNCATE_TABLE = "truncate_table"

If set to true, truncates the table specified by table_name prior to loading the data.

Supported values: TRUE, FALSE.

The default value is FALSE.

Definition at line 979 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.TYPE_INFERENCE_ONLY = "type_inference_only"

Infer the type of the source data and return, without ingesting any data.

The inferred type is returned in the response.

Definition at line 918 of file InsertRecordsFromQuery.cs.

const string kinetica.InsertRecordsFromQueryRequest.Options.UPDATE_ON_EXISTING_PK = "update_on_existing_pk"

Specifies the record collision policy for inserting into a table with a primary key.

If set to true, any existing table record with primary key values that match those of a record being inserted will be replaced by that new record (the new data will be "upserted"). If set to false, any existing table record with primary key values that match those of a record being inserted will remain unchanged, while the new record will be rejected and the error handled as determined by ignore_existing_pk & error_handling. If the specified table does not have a primary key, then this option has no effect. Supported values:

  • TRUE: Upsert new records when primary keys match existing records
  • FALSE: Reject new records when primary keys match existing records

The default value is FALSE.

Definition at line 1053 of file InsertRecordsFromQuery.cs.
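Enabling upsert mode can be sketched with a single entry; note that once upsert mode is on, ignore_existing_pk has no effect:

```csharp
using System;
using System.Collections.Generic;

class UpsertSketch
{
    // With update_on_existing_pk = true, an incoming record whose primary
    // key matches an existing row replaces that row ("upsert").
    public static Dictionary<string, string> Build()
    {
        return new Dictionary<string, string>
        {
            { "update_on_existing_pk", "true" } // Options.UPDATE_ON_EXISTING_PK / Options.TRUE
        };
    }

    static void Main()
    {
        Console.WriteLine(Build()["update_on_existing_pk"]); // prints "true"
    }
}
```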


The documentation for this struct was generated from the following file:

  • InsertRecordsFromQuery.cs