Kinetica C# API
Version 7.2.3.0
A set of string constants for the parameter options.

Public Attributes
const string BAD_RECORD_TABLE_NAME = "bad_record_table_name"
    Optional name of a table to which records that were rejected are written.
const string BAD_RECORD_TABLE_LIMIT = "bad_record_table_limit"
    A positive integer indicating the maximum number of records that can be written to the bad-record-table.
const string BAD_RECORD_TABLE_LIMIT_PER_INPUT = "bad_record_table_limit_per_input"
    For subscriptions: a positive integer indicating the maximum number of records that can be written to the bad-record-table per file/payload.
const string BATCH_SIZE = "batch_size"
    Internal tuning parameter: the number of records per batch when inserting data.
const string COLUMN_FORMATS = "column_formats"
    For each target column specified, applies the column-property-bound format to the source data loaded into that column.
const string COLUMNS_TO_LOAD = "columns_to_load"
    Specifies a comma-delimited list of columns from the source data to load.
const string COLUMNS_TO_SKIP = "columns_to_skip"
    Specifies a comma-delimited list of columns from the source data to skip.
const string COMPRESSION_TYPE = "compression_type"
    Optional: payload compression type.
const string NONE = "none"
    Uncompressed.
const string AUTO = "auto"
    Default.
const string GZIP = "gzip"
    gzip file compression.
const string BZIP2 = "bzip2"
    bzip2 file compression.
const string DEFAULT_COLUMN_FORMATS = "default_column_formats"
    Specifies the default format to be applied to source data loaded into columns with the corresponding column property.
const string ERROR_HANDLING = "error_handling"
    Specifies how errors should be handled upon insertion.
const string PERMISSIVE = "permissive"
    Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped.
const string IGNORE_BAD_RECORDS = "ignore_bad_records"
    Malformed records are skipped.
const string ABORT = "abort"
    Stops the current insertion and aborts the entire operation when an error is encountered.
const string FILE_TYPE = "file_type"
    Specifies the type of the file(s) whose records will be inserted.
const string AVRO = "avro"
    Avro file format.
const string DELIMITED_TEXT = "delimited_text"
    Delimited text file format, e.g., CSV, TSV, PSV.
const string GDB = "gdb"
    Esri/GDB file format.
const string JSON = "json"
    JSON file format.
const string PARQUET = "parquet"
    Apache Parquet file format.
const string SHAPEFILE = "shapefile"
    Shapefile file format.
const string FLATTEN_COLUMNS = "flatten_columns"
    Specifies how to handle nested columns.
const string TRUE = "true"
    Upsert new records when primary keys match existing records.
const string FALSE = "false"
    Reject new records when primary keys match existing records.
const string GDAL_CONFIGURATION_OPTIONS = "gdal_configuration_options"
    Comma-separated list of GDAL configuration options for the specific request, in key=value form.
const string IGNORE_EXISTING_PK = "ignore_existing_pk"
    Specifies the record collision error-suppression policy for inserting into a table with a primary key; only used when not in upsert mode (upsert mode is disabled when UPDATE_ON_EXISTING_PK is FALSE).
const string INGESTION_MODE = "ingestion_mode"
    Whether to do a full load, dry run, or perform a type inference on the source data.
const string FULL = "full"
    Run a type inference on the source data (if needed) and ingest.
const string DRY_RUN = "dry_run"
    Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of ERROR_HANDLING.
const string TYPE_INFERENCE_ONLY = "type_inference_only"
    Infer the type of the source data and return, without ingesting any data.
const string LAYER = "layer"
    Optional: comma-separated list of layer names for geo files.
const string LOADING_MODE = "loading_mode"
    Scheme for distributing the extraction and loading of data from the source data file(s).
const string HEAD = "head"
    The head node loads all data.
const string DISTRIBUTED_SHARED = "distributed_shared"
    The head node coordinates loading data by worker processes across all nodes from shared files available to all workers.
const string DISTRIBUTED_LOCAL = "distributed_local"
    A single worker process on each node loads all files that are available to it.
const string LOCAL_TIME_OFFSET = "local_time_offset"
    For Avro local timestamp columns.
const string MAX_RECORDS_TO_LOAD = "max_records_to_load"
    Limits the number of records to load in this request; if this number is larger than BATCH_SIZE, the number of records loaded will be rounded up to the next whole multiple of BATCH_SIZE (per working thread).
const string NUM_TASKS_PER_RANK = "num_tasks_per_rank"
    Optional: number of tasks per rank for reading files.
const string POLL_INTERVAL = "poll_interval"
    When SUBSCRIBE is TRUE, the number of seconds between attempts to load external files into the table.
const string PRIMARY_KEYS = "primary_keys"
    Optional: comma-separated list of column names to set as primary keys, when not specified in the type.
const string SCHEMA_REGISTRY_CONNECTION_RETRIES = "schema_registry_connection_retries"
    Confluent Schema Registry connection retries.
const string SCHEMA_REGISTRY_CONNECTION_TIMEOUT = "schema_registry_connection_timeout"
    Confluent Schema Registry connection timeout (in seconds).
const string SCHEMA_REGISTRY_MAX_CONSECUTIVE_CONNECTION_FAILURES = "schema_registry_max_consecutive_connection_failures"
    Maximum number of records to skip due to Schema Registry connection failures before failing.
const string MAX_CONSECUTIVE_INVALID_SCHEMA_FAILURE = "max_consecutive_invalid_schema_failure"
    Maximum number of records to skip due to schema-related errors before failing.
const string SCHEMA_REGISTRY_SCHEMA_NAME = "schema_registry_schema_name"
    Name of the Avro schema in the schema registry to use when reading Avro records.
const string SHARD_KEYS = "shard_keys"
    Optional: comma-separated list of column names to set as shard keys, when not specified in the type.
const string SKIP_LINES = "skip_lines"
    Skip a number of lines from the beginning of the file.
const string SUBSCRIBE = "subscribe"
    Continuously poll the data source to check for new data and load it into the table.
const string TABLE_INSERT_MODE = "table_insert_mode"
    Optional: whether to insert all records into a single table or into a separate table per input file.
const string SINGLE = "single"
    Insert all records into a single table.
const string TABLE_PER_FILE = "table_per_file"
    Insert the records from each file into a separate table.
const string TEXT_COMMENT_STRING = "text_comment_string"
    Specifies the character string that should be interpreted as a comment line prefix in the source data.
const string TEXT_DELIMITER = "text_delimiter"
    Specifies the character delimiting field values in the source data and field names in the header (if present).
const string TEXT_ESCAPE_CHARACTER = "text_escape_character"
    Specifies the character that is used to escape other characters in the source data.
const string TEXT_HAS_HEADER = "text_has_header"
    Indicates whether the source data contains a header row.
const string TEXT_HEADER_PROPERTY_DELIMITER = "text_header_property_delimiter"
    Specifies the delimiter for column properties in the header row (if present).
const string TEXT_NULL_STRING = "text_null_string"
    Specifies the character string that should be interpreted as a null value in the source data.
const string TEXT_QUOTE_CHARACTER = "text_quote_character"
    Specifies the character that should be interpreted as a field value quoting character in the source data.
const string TEXT_SEARCH_COLUMNS = "text_search_columns"
    Adds the 'text_search' property to internally inferred string columns.
const string TEXT_SEARCH_MIN_COLUMN_LENGTH = "text_search_min_column_length"
    Sets the minimum column size for 'text_search' columns.
const string TRUNCATE_STRINGS = "truncate_strings"
    If set to TRUE, truncates string values that are longer than the column's type size.
const string TRUNCATE_TABLE = "truncate_table"
    If set to TRUE, truncates the table specified by table_name prior to loading the file(s).
const string TYPE_INFERENCE_MAX_RECORDS_READ = "type_inference_max_records_read"
    Maximum number of records to read when inferring the type of the source data.
const string TYPE_INFERENCE_MODE = "type_inference_mode"
    Optimize type inference for either accuracy or speed.
const string ACCURACY = "accuracy"
    Scans data to get exactly-typed and sized columns for all data scanned.
const string SPEED = "speed"
    Scans data and picks the widest possible column types so that all values will fit with minimal data scanned.
const string UPDATE_ON_EXISTING_PK = "update_on_existing_pk"
    Specifies the record collision policy for inserting into a table with a primary key.
A set of string constants for the parameter options.
Optional parameters.
Definition at line 248 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.ABORT = "abort"
Stops the current insertion and aborts the entire operation when an error is encountered.
Primary key collisions are considered abortable errors in this mode.
Definition at line 436 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.ACCURACY = "accuracy"
Scans data to get exactly-typed & sized columns for all data scanned.
Definition at line 935 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.AUTO = "auto"
Default.
const string kinetica.InsertRecordsFromPayloadRequest.Options.AVRO = "avro"
Avro file format
Definition at line 477 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.BAD_RECORD_TABLE_LIMIT = "bad_record_table_limit"
A positive integer indicating the maximum number of records that can be written to the bad-record-table.
The default value is 10000.
Definition at line 260 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.BAD_RECORD_TABLE_LIMIT_PER_INPUT = "bad_record_table_limit_per_input"
For subscriptions: a positive integer indicating the maximum number of records that can be written to the bad-record-table per file/payload.
The default value is 'bad_record_table_limit', and the total size of the table per rank is limited to 'bad_record_table_limit'.
Definition at line 268 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.BAD_RECORD_TABLE_NAME = "bad_record_table_name"
Optional name of a table to which records that were rejected are written.
The bad-record-table has the following columns: line_number (long), line_rejected (string), error_message (string).
Definition at line 255 of file InsertRecordsFromPayload.cs.
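For illustration, a minimal sketch of combining the bad-record options with an error-handling mode when building the request's options map. Only the Options constants come from this page; the table name and limit are hypothetical values.

    using System.Collections.Generic;
    using Opts = kinetica.InsertRecordsFromPayloadRequest.Options;

    // Route rejected records to a side table instead of failing the load,
    // and cap that table at 1000 records (values are illustrative).
    var options = new Dictionary<string, string>
    {
        { Opts.BAD_RECORD_TABLE_NAME,  "ki_home.load_errors" },  // hypothetical table
        { Opts.BAD_RECORD_TABLE_LIMIT, "1000" },
        { Opts.ERROR_HANDLING,         Opts.IGNORE_BAD_RECORDS }
    };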
const string kinetica.InsertRecordsFromPayloadRequest.Options.BATCH_SIZE = "batch_size"
Internal tuning parameter: the number of records per batch when inserting data.
Definition at line 272 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.BZIP2 = "bzip2"
bzip2 file compression.
Definition at line 361 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.COLUMN_FORMATS = "column_formats"
For each target column specified, applies the column-property-bound format to the source data loaded into that column.
Each column format will contain a mapping of one or more of its column properties to an appropriate format for each property. Currently supported column properties include date, time, & datetime. The parameter value must be formatted as a JSON string of maps of column names to maps of column properties to their corresponding column formats, e.g., '{ "order_date" : { "date" : "%Y.%m.%d" }, "order_time" : { "time" : "%H:%M:%S" } }'.
See DEFAULT_COLUMN_FORMATS for valid format syntax.
Definition at line 288 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.COLUMNS_TO_LOAD = "columns_to_load"
Specifies a comma-delimited list of columns from the source data to load.
If more than one file is being loaded, this list applies to all files.
Column numbers can be specified discretely or as a range. For example, a value of '5,7,1..3' will insert values from the fifth column in the source data into the first column in the target table, from the seventh column in the source data into the second column in the target table, and from the first through third columns in the source data into the third through fifth columns in the target table.
If the source data contains a header, column names matching the file header names may be provided instead of column numbers. If the target table doesn't exist, the table will be created with the columns in this order. If the target table does exist with columns in a different order than the source data, this list can be used to match the order of the target table. For example, a value of 'C, B, A' will create a three column table with column C, followed by column B, followed by column A; or will insert those fields in that order into a table created with columns in that order. If the target table exists, the column names must match the source data field names for a name-mapping to be successful.
Mutually exclusive with COLUMNS_TO_SKIP.
Definition at line 316 of file InsertRecordsFromPayload.cs.
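A short sketch of the two selection styles described above; the column numbers and names are illustrative:

    using System.Collections.Generic;
    using Opts = kinetica.InsertRecordsFromPayloadRequest.Options;

    // By position: source columns 5, 7, and 1 through 3 fill the first
    // five target columns, in that order.
    var options = new Dictionary<string, string>
    {
        { Opts.COLUMNS_TO_LOAD, "5,7,1..3" }
    };

    // Or by header name (mutually exclusive with COLUMNS_TO_SKIP):
    // options[Opts.COLUMNS_TO_LOAD] = "C,B,A";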
const string kinetica.InsertRecordsFromPayloadRequest.Options.COLUMNS_TO_SKIP = "columns_to_skip"
Specifies a comma-delimited list of columns from the source data to skip.
Mutually exclusive with COLUMNS_TO_LOAD.
Definition at line 323 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.COMPRESSION_TYPE = "compression_type"
Optional: payload compression type. Supported values: NONE, AUTO, GZIP, BZIP2. The default value is AUTO.
const string kinetica.InsertRecordsFromPayloadRequest.Options.DEFAULT_COLUMN_FORMATS = "default_column_formats"
Specifies the default format to be applied to source data loaded into columns with the corresponding column property.
Currently supported column properties include date, time, & datetime. This default column-property-bound format can be overridden by specifying a column property & format for a given target column in COLUMN_FORMATS. For each specified annotation, the format will apply to all columns with that annotation unless a custom COLUMN_FORMATS for that annotation is specified.
The parameter value must be formatted as a JSON string that is a map of column properties to their respective column formats, e.g., '{ "date" : "%Y.%m.%d", "time" : "%H:%M:%S" }'. Column formats are specified as a string of control characters and plain text. The supported control characters are 'Y', 'm', 'd', 'H', 'M', and 'S', which follow the Linux 'strptime()' specification, as well as 's', which specifies seconds and fractional seconds (though the fractional component will be truncated past milliseconds).
Formats for the 'date' annotation must include the 'Y', 'm', and 'd' control characters. Formats for the 'time' annotation must include the 'H', 'M', and either 'S' or 's' (but not both) control characters. Formats for the 'datetime' annotation meet both the 'date' and 'time' control character requirements. For example, '{"datetime" : "%m/%d/%Y %H:%M:%S" }' would be used to interpret text as "05/04/2000 12:12:11"
Definition at line 392 of file InsertRecordsFromPayload.cs.
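As a sketch, the two format options used together, reusing the JSON examples quoted above; the column name order_date is hypothetical:

    using System.Collections.Generic;
    using Opts = kinetica.InsertRecordsFromPayloadRequest.Options;

    var options = new Dictionary<string, string>
    {
        // Default formats for every column carrying the 'date'/'time' property...
        { Opts.DEFAULT_COLUMN_FORMATS,
          "{ \"date\" : \"%Y.%m.%d\", \"time\" : \"%H:%M:%S\" }" },
        // ...overridden for one specific target column.
        { Opts.COLUMN_FORMATS,
          "{ \"order_date\" : { \"date\" : \"%m/%d/%Y\" } }" }
    };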
const string kinetica.InsertRecordsFromPayloadRequest.Options.DELIMITED_TEXT = "delimited_text"
Delimited text file format; e.g., CSV, TSV, PSV, etc.
Definition at line 481 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.DISTRIBUTED_LOCAL = "distributed_local"
A single worker process on each node loads all files that are available to it.
This option works best when each worker loads files from its own file system, to maximize performance. In order to avoid data duplication, either each worker performing the load needs to have visibility to a set of files unique to it (no file is visible to more than one node) or the target table needs to have a primary key (which will allow the worker to automatically deduplicate data).
NOTE:
If the target table doesn't exist, the table structure will be determined by the head node. If the head node has no files local to it, it will be unable to determine the structure and the request will fail.
If the head node is configured to have no worker processes, no data strictly accessible to the head node will be loaded.
Definition at line 686 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.DISTRIBUTED_SHARED = "distributed_shared"
The head node coordinates loading data by worker processes across all nodes from shared files available to all workers.
NOTE:
Instead of existing on a shared source, the files can be duplicated on a source local to each host to improve performance, though the files must appear as the same data set from the perspective of all hosts performing the load.
Definition at line 667 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.DRY_RUN = "dry_run"
Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of ERROR_HANDLING.
Definition at line 590 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.ERROR_HANDLING = "error_handling"
Specifies how errors should be handled upon insertion.
Supported values: PERMISSIVE, IGNORE_BAD_RECORDS, ABORT.
The default value is ABORT.
Definition at line 422 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.FALSE = "false"
Reject new records when primary keys match existing records
Definition at line 519 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.FILE_TYPE = "file_type"
Specifies the type of the file(s) whose records will be inserted.
Supported values: AVRO, DELIMITED_TEXT, GDB, JSON, PARQUET, SHAPEFILE.
The default value is DELIMITED_TEXT.
Definition at line 474 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.FLATTEN_COLUMNS = "flatten_columns"
Specifies how to handle nested columns.
Supported values: TRUE, FALSE.
The default value is FALSE.
Definition at line 511 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.FULL = "full"
Run a type inference on the source data (if needed) and ingest
Definition at line 584 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.GDAL_CONFIGURATION_OPTIONS = "gdal_configuration_options"
Comma-separated list of GDAL configuration options for the specific request, in key=value form.
The default value is ''.
Definition at line 524 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.GDB = "gdb"
Esri/GDB file format
Definition at line 484 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.GZIP = "gzip"
gzip file compression.
Definition at line 358 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.HEAD = "head"
The head node loads all data.
All files must be available to the head node.
Definition at line 656 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.IGNORE_BAD_RECORDS = "ignore_bad_records"
Malformed records are skipped.
Definition at line 430 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.IGNORE_EXISTING_PK = "ignore_existing_pk"
Specifies the record collision error-suppression policy for inserting into a table with a primary key; only used when not in upsert mode (upsert mode is disabled when UPDATE_ON_EXISTING_PK is FALSE).
Supported values: TRUE, FALSE.
The default value is FALSE.
Definition at line 550 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.INGESTION_MODE = "ingestion_mode"
Whether to do a full load, dry run, or perform a type inference on the source data.
Supported values: FULL, DRY_RUN, TYPE_INFERENCE_ONLY.
The default value is FULL.
Definition at line 580 of file InsertRecordsFromPayload.cs.
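For example, a dry run can validate a payload before the real load; a minimal sketch using only constants from this page:

    using System.Collections.Generic;
    using Opts = kinetica.InsertRecordsFromPayloadRequest.Options;

    var options = new Dictionary<string, string>
    {
        { Opts.INGESTION_MODE, Opts.DRY_RUN },    // count valid records, load nothing
        { Opts.ERROR_HANDLING, Opts.PERMISSIVE }  // the mode the dry run takes into account
    };
    // Rerun with Opts.FULL once the counts look right.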
const string kinetica.InsertRecordsFromPayloadRequest.Options.JSON = "json"
JSON file format
Definition at line 487 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.LAYER = "layer"
Optional: comma-separated list of layer names for geo files.
The default value is ''.
Definition at line 601 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.LOADING_MODE = "loading_mode"
Scheme for distributing the extraction and loading of data from the source data file(s).
Supported values: HEAD, DISTRIBUTED_SHARED, DISTRIBUTED_LOCAL.
The default value is HEAD.
Definition at line 651 of file InsertRecordsFromPayload.cs.
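Selecting a distribution scheme is a single option entry; a sketch (the deduplication caveats under DISTRIBUTED_LOCAL above still apply):

    using System.Collections.Generic;
    using Opts = kinetica.InsertRecordsFromPayloadRequest.Options;

    var options = new Dictionary<string, string>
    {
        // Each worker loads only the files visible on its own file system.
        { Opts.LOADING_MODE, Opts.DISTRIBUTED_LOCAL }
    };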
const string kinetica.InsertRecordsFromPayloadRequest.Options.LOCAL_TIME_OFFSET = "local_time_offset"
For Avro local timestamp columns
Definition at line 689 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.MAX_CONSECUTIVE_INVALID_SCHEMA_FAILURE = "max_consecutive_invalid_schema_failure"
Maximum number of records to skip due to schema-related errors before failing.
Definition at line 731 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.MAX_RECORDS_TO_LOAD = "max_records_to_load"
Limits the number of records to load in this request. If this number is larger than BATCH_SIZE, the number of records loaded will be rounded up to the next whole multiple of BATCH_SIZE (per working thread).
The default value is ''.
Definition at line 696 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.NONE = "none"
Uncompressed
Definition at line 351 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.NUM_TASKS_PER_RANK = "num_tasks_per_rank"
Optional: number of tasks per rank for reading files.
The default is the external_file_reader_num_tasks system setting.
Definition at line 702 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.PARQUET = "parquet"
Apache Parquet file format
Definition at line 490 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.PERMISSIVE = "permissive"
Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped.
Definition at line 427 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.POLL_INTERVAL = "poll_interval"
When SUBSCRIBE is TRUE, the number of seconds between attempts to load external files into the table.
If zero, polling will be continuous as long as data is found. If no data is found, the interval will steadily increase to a maximum of 60 seconds.
Definition at line 710 of file InsertRecordsFromPayload.cs.
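A sketch of a polling subscription; the 30-second interval is an illustrative value:

    using System.Collections.Generic;
    using Opts = kinetica.InsertRecordsFromPayloadRequest.Options;

    var options = new Dictionary<string, string>
    {
        { Opts.SUBSCRIBE,     Opts.TRUE },  // keep checking the source for new data
        { Opts.POLL_INTERVAL, "30" }        // seconds between load attempts
    };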
const string kinetica.InsertRecordsFromPayloadRequest.Options.PRIMARY_KEYS = "primary_keys"
Optional: comma-separated list of column names to set as primary keys, when not specified in the type.
The default value is ''.
Definition at line 715 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.SCHEMA_REGISTRY_CONNECTION_RETRIES = "schema_registry_connection_retries"
Confluent Schema Registry connection retries.
Definition at line 719 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.SCHEMA_REGISTRY_CONNECTION_TIMEOUT = "schema_registry_connection_timeout"
Confluent Schema Registry connection timeout (in seconds).
Definition at line 723 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.SCHEMA_REGISTRY_MAX_CONSECUTIVE_CONNECTION_FAILURES = "schema_registry_max_consecutive_connection_failures"
Maximum number of records to skip due to Schema Registry connection failures before failing.
Definition at line 727 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.SCHEMA_REGISTRY_SCHEMA_NAME = "schema_registry_schema_name"
Name of the Avro schema in the schema registry to use when reading Avro records.
Definition at line 735 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.SHAPEFILE = "shapefile"
Shapefile file format
Definition at line 493 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.SHARD_KEYS = "shard_keys"
Optional: comma-separated list of column names to set as shard keys, when not specified in the type.
The default value is ''.
Definition at line 740 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.SINGLE = "single"
Insert all records into a single table.
Definition at line 777 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.SKIP_LINES = "skip_lines"
Skip a number of lines from the beginning of the file.
Definition at line 744 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.SPEED = "speed"
Scans data and picks the widest possible column types so that all values will fit with minimal data scanned.
Definition at line 940 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.SUBSCRIBE = "subscribe"
Continuously poll the data source to check for new data and load it into the table.
Supported values: TRUE, FALSE.
The default value is FALSE.
Definition at line 759 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.TABLE_INSERT_MODE = "table_insert_mode"
Optional: whether to insert all records into a single table or into a separate table per input file.
Supported values: SINGLE, TABLE_PER_FILE.
The default value is SINGLE.
Definition at line 775 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.TABLE_PER_FILE = "table_per_file"
Insert the records from each file into a separate table.
Definition at line 778 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.TEXT_COMMENT_STRING = "text_comment_string"
Specifies the character string that should be interpreted as a comment line prefix in the source data.
All lines in the data starting with the provided string are ignored.
For DELIMITED_TEXT FILE_TYPE only. The default value is '#'.
Definition at line 789 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.TEXT_DELIMITER = "text_delimiter"
Specifies the character delimiting field values in the source data and field names in the header (if present).
For DELIMITED_TEXT FILE_TYPE only. The default value is ','.
Definition at line 798 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.TEXT_ESCAPE_CHARACTER = "text_escape_character"
Specifies the character that is used to escape other characters in the source data.
An 'a', 'b', 'f', 'n', 'r', 't', or 'v' preceded by an escape character will be interpreted as the ASCII bell, backspace, form feed, line feed, carriage return, horizontal tab, & vertical tab, respectively. For example, the escape character followed by an 'n' will be interpreted as a newline within a field value.
The escape character can also be used to escape the quoting character, and will be treated as an escape character whether it is within a quoted field value or not.
For DELIMITED_TEXT FILE_TYPE only.
Definition at line 814 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.TEXT_HAS_HEADER = "text_has_header"
Indicates whether the source data contains a header row.
Supported values: TRUE, FALSE.
The default value is TRUE.
Definition at line 829 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.TEXT_HEADER_PROPERTY_DELIMITER = "text_header_property_delimiter"
Specifies the delimiter for column properties in the header row (if present).
Cannot be set to the same value as TEXT_DELIMITER.
For DELIMITED_TEXT FILE_TYPE only. The default value is '|'.
Definition at line 841 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.TEXT_NULL_STRING = "text_null_string"
Specifies the character string that should be interpreted as a null value in the source data.
For DELIMITED_TEXT FILE_TYPE only. The default value is '\N'.
Definition at line 849 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.TEXT_QUOTE_CHARACTER = "text_quote_character"
Specifies the character that should be interpreted as a field value quoting character in the source data.
The character must appear at beginning and end of field value to take effect. Delimiters within quoted fields are treated as literals and not delimiters. Within a quoted field, two consecutive quote characters will be interpreted as a single literal quote character, effectively escaping it. To not have a quote character, specify an empty string.
For DELIMITED_TEXT FILE_TYPE only. The default value is '"'.
Definition at line 863 of file InsertRecordsFromPayload.cs.
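Taken together, the TEXT_* options describe the delimited-text dialect; a sketch for a pipe-separated file with a header (all values illustrative):

    using System.Collections.Generic;
    using Opts = kinetica.InsertRecordsFromPayloadRequest.Options;

    var options = new Dictionary<string, string>
    {
        { Opts.FILE_TYPE,            Opts.DELIMITED_TEXT },
        { Opts.TEXT_DELIMITER,       "|" },      // PSV rather than the default ','
        { Opts.TEXT_HAS_HEADER,      Opts.TRUE },
        { Opts.TEXT_NULL_STRING,     "NULL" },   // instead of the default '\N'
        { Opts.TEXT_QUOTE_CHARACTER, "\"" }
    };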
const string kinetica.InsertRecordsFromPayloadRequest.Options.TEXT_SEARCH_COLUMNS = "text_search_columns"
Adds the 'text_search' property to internally inferred string columns.
Comma-separated list of column names, or '*' for all columns. To add the text_search property only to string columns of a minimum size, also set the option 'text_search_min_column_length'.
Definition at line 871 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.TEXT_SEARCH_MIN_COLUMN_LENGTH = "text_search_min_column_length"
Sets the minimum column size.
Used only when 'text_search_columns' has a value.
Definition at line 876 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.TRUE = "true"
Upsert new records when primary keys match existing records
Definition at line 515 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.TRUNCATE_STRINGS = "truncate_strings"
If set to TRUE, truncates string values that are longer than the column's type size.
const string kinetica.InsertRecordsFromPayloadRequest.Options.TRUNCATE_TABLE = "truncate_table"
If set to TRUE, truncates the table specified by table_name prior to loading the file(s).
Supported values: TRUE, FALSE.
The default value is FALSE.
Definition at line 908 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.TYPE_INFERENCE_MAX_RECORDS_READ = "type_inference_max_records_read"
Maximum number of records to read when inferring the type of the source data. The default value is ''.
Definition at line 911 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.TYPE_INFERENCE_MODE = "type_inference_mode"
Optimize type inference for either accuracy or speed.
Supported values: ACCURACY, SPEED.
The default value is ACCURACY.
Definition at line 931 of file InsertRecordsFromPayload.cs.
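A sketch combining the inference mode with TYPE_INFERENCE_ONLY to preview the inferred type without loading anything:

    using System.Collections.Generic;
    using Opts = kinetica.InsertRecordsFromPayloadRequest.Options;

    var options = new Dictionary<string, string>
    {
        { Opts.INGESTION_MODE,      Opts.TYPE_INFERENCE_ONLY }, // infer and return only
        { Opts.TYPE_INFERENCE_MODE, Opts.SPEED }                // widest types, least scanning
    };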
const string kinetica.InsertRecordsFromPayloadRequest.Options.TYPE_INFERENCE_ONLY = "type_inference_only"
Infer the type of the source data and return, without ingesting any data.
The inferred type is returned in the response.
Definition at line 596 of file InsertRecordsFromPayload.cs.
const string kinetica.InsertRecordsFromPayloadRequest.Options.UPDATE_ON_EXISTING_PK = "update_on_existing_pk"
Specifies the record collision policy for inserting into a table with a primary key.
Supported values: TRUE, FALSE.
The default value is FALSE.
Definition at line 961 of file InsertRecordsFromPayload.cs.
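Finally, an end-to-end sketch of an upsert load. The constructor argument order shown here (table name, text payload, byte payload, modify-columns map, create-table options, options) mirrors the /insert/records/frompayload endpoint and is an assumption, as are the table name, payload, and the connected Kinetica instance db; only the Options constants are taken from this page.

    using System.Collections.Generic;
    using kinetica;
    using Opts = kinetica.InsertRecordsFromPayloadRequest.Options;

    var options = new Dictionary<string, string>
    {
        { Opts.FILE_TYPE,             Opts.DELIMITED_TEXT },
        { Opts.UPDATE_ON_EXISTING_PK, Opts.TRUE }  // upsert on primary-key collision
    };

    // Argument order is an assumption based on the endpoint's parameters.
    var request = new InsertRecordsFromPayloadRequest(
        "ki_home.orders",                                      // hypothetical target table
        "id,name\n1,alice\n",                                  // data_text payload
        new byte[0],                                           // data_bytes (unused here)
        new Dictionary<string, IDictionary<string, string>>(), // modify_columns
        new Dictionary<string, string>(),                      // create_table_options
        options);
    // var response = db.insertRecordsFromPayload(request);    // 'db': a connected Kinetica instance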