Kinetica C# API Version 7.2.3.0
kinetica.InsertRecordsFromFilesRequest.Options Struct Reference

A set of string constants for the parameter options. More...

Public Attributes

const string BAD_RECORD_TABLE_NAME = "bad_record_table_name"
 Name of a table to which records that were rejected are written. More...
 
const string BAD_RECORD_TABLE_LIMIT = "bad_record_table_limit"
 A positive integer indicating the maximum number of records that can be written to the bad-record-table. More...
 
const string BAD_RECORD_TABLE_LIMIT_PER_INPUT = "bad_record_table_limit_per_input"
 For subscriptions, a positive integer indicating the maximum number of records that can be written to the bad-record-table per file/payload. More...
 
const string BATCH_SIZE = "batch_size"
 Number of records to insert per batch when inserting data. More...
 
const string COLUMN_FORMATS = "column_formats"
 For each target column specified, applies the column-property-bound format to the source data loaded into that column. More...
 
const string COLUMNS_TO_LOAD = "columns_to_load"
 Specifies a comma-delimited list of columns from the source data to load. More...
 
const string COLUMNS_TO_SKIP = "columns_to_skip"
 Specifies a comma-delimited list of columns from the source data to skip. More...
 
const string COMPRESSION_TYPE = "compression_type"
 Source data compression type. More...
 
const string NONE = "none"
 No compression. More...
 
const string AUTO = "auto"
 Auto detect compression type More...
 
const string GZIP = "gzip"
 gzip file compression. More...
 
const string BZIP2 = "bzip2"
 bzip2 file compression. More...
 
const string DATASOURCE_NAME = "datasource_name"
 Name of an existing external data source from which data file(s) specified in filepaths will be loaded More...
 
const string DEFAULT_COLUMN_FORMATS = "default_column_formats"
 Specifies the default format to be applied to source data loaded into columns with the corresponding column property. More...
 
const string ERROR_HANDLING = "error_handling"
 Specifies how errors should be handled upon insertion. More...
 
const string PERMISSIVE = "permissive"
 Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped. More...
 
const string IGNORE_BAD_RECORDS = "ignore_bad_records"
 Malformed records are skipped. More...
 
const string ABORT = "abort"
 Stops current insertion and aborts entire operation when an error is encountered. More...
 
const string FILE_TYPE = "file_type"
 Specifies the type of the file(s) whose records will be inserted. More...
 
const string AVRO = "avro"
 Avro file format More...
 
const string DELIMITED_TEXT = "delimited_text"
 Delimited text file format; e.g., CSV, TSV, PSV, etc. More...
 
const string GDB = "gdb"
 Esri/GDB file format More...
 
const string JSON = "json"
 Json file format More...
 
const string PARQUET = "parquet"
 Apache Parquet file format More...
 
const string SHAPEFILE = "shapefile"
 ShapeFile file format More...
 
const string FLATTEN_COLUMNS = "flatten_columns"
 Specifies how to handle nested columns. More...
 
const string TRUE = "true"
 Upsert new records when primary keys match existing records More...
 
const string FALSE = "false"
 Reject new records when primary keys match existing records More...
 
const string GDAL_CONFIGURATION_OPTIONS = "gdal_configuration_options"
 Comma-separated list of GDAL configuration options for the specific request, as key=value pairs. More...
 
const string IGNORE_EXISTING_PK = "ignore_existing_pk"
 Specifies the record collision error-suppression policy for inserting into a table with a primary key, only used when not in upsert mode (upsert mode is disabled when UPDATE_ON_EXISTING_PK is FALSE). More...
 
const string INGESTION_MODE = "ingestion_mode"
 Whether to do a full load, dry run, or perform a type inference on the source data. More...
 
const string FULL = "full"
 Run a type inference on the source data (if needed) and ingest More...
 
const string DRY_RUN = "dry_run"
 Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of ERROR_HANDLING. More...
 
const string TYPE_INFERENCE_ONLY = "type_inference_only"
 Infer the type of the source data and return, without ingesting any data. More...
 
const string KAFKA_CONSUMERS_PER_RANK = "kafka_consumers_per_rank"
 Number of Kafka consumer threads per rank (valid range 1-6). More...
 
const string KAFKA_GROUP_ID = "kafka_group_id"
 The group id to be used when consuming data from a Kafka topic (valid only for Kafka datasource subscriptions). More...
 
const string KAFKA_OFFSET_RESET_POLICY = "kafka_offset_reset_policy"
 Policy to determine whether the Kafka data consumption starts either at earliest offset or latest offset. More...
 
const string EARLIEST = "earliest"
 
const string LATEST = "latest"
 
const string KAFKA_OPTIMISTIC_INGEST = "kafka_optimistic_ingest"
 Enable optimistic ingestion where Kafka topic offsets and table data are committed independently to achieve parallelism. More...
 
const string KAFKA_SUBSCRIPTION_CANCEL_AFTER = "kafka_subscription_cancel_after"
 Sets the Kafka subscription lifespan (in minutes). More...
 
const string KAFKA_TYPE_INFERENCE_FETCH_TIMEOUT = "kafka_type_inference_fetch_timeout"
 Maximum time to collect Kafka messages before running type inference on them. More...
 
const string LAYER = "layer"
 Comma-separated list of geo file layer name(s). More...
 
const string LOADING_MODE = "loading_mode"
 Scheme for distributing the extraction and loading of data from the source data file(s). More...
 
const string HEAD = "head"
 The head node loads all data. More...
 
const string DISTRIBUTED_SHARED = "distributed_shared"
 The head node coordinates loading data by worker processes across all nodes from shared files available to all workers. More...
 
const string DISTRIBUTED_LOCAL = "distributed_local"
 A single worker process on each node loads all files that are available to it. More...
 
const string LOCAL_TIME_OFFSET = "local_time_offset"
 Apply an offset to Avro local timestamp columns. More...
 
const string MAX_RECORDS_TO_LOAD = "max_records_to_load"
 Limit the number of records to load in this request: if this number is larger than BATCH_SIZE, then the number of records loaded will be limited to the next whole number of BATCH_SIZE (per working thread). More...
 
const string NUM_TASKS_PER_RANK = "num_tasks_per_rank"
 Number of tasks for reading file per rank. More...
 
const string POLL_INTERVAL = "poll_interval"
 If SUBSCRIBE is TRUE, the number of seconds between attempts to load external files into the table. More...
 
const string PRIMARY_KEYS = "primary_keys"
 Comma separated list of column names to set as primary keys, when not specified in the type. More...
 
const string SCHEMA_REGISTRY_SCHEMA_NAME = "schema_registry_schema_name"
 Name of the Avro schema in the schema registry to use when reading Avro records. More...
 
const string SHARD_KEYS = "shard_keys"
 Comma separated list of column names to set as shard keys, when not specified in the type. More...
 
const string SKIP_LINES = "skip_lines"
 Number of lines to skip from the beginning of the file. More...
 
const string START_OFFSETS = "start_offsets"
 Starting offsets by partition to fetch from Kafka. More...
 
const string SUBSCRIBE = "subscribe"
 Continuously poll the data source to check for new data and load it into the table. More...
 
const string TABLE_INSERT_MODE = "table_insert_mode"
 Insertion scheme to use when inserting records from multiple shapefiles. More...
 
const string SINGLE = "single"
 Insert all records into a single table. More...
 
const string TABLE_PER_FILE = "table_per_file"
 Insert records from each file into a new table corresponding to that file. More...
 
const string TEXT_COMMENT_STRING = "text_comment_string"
 Specifies the character string that should be interpreted as a comment line prefix in the source data. More...
 
const string TEXT_DELIMITER = "text_delimiter"
 Specifies the character delimiting field values in the source data and field names in the header (if present). More...
 
const string TEXT_ESCAPE_CHARACTER = "text_escape_character"
 Specifies the character that is used to escape other characters in the source data. More...
 
const string TEXT_HAS_HEADER = "text_has_header"
 Indicates whether the source data contains a header row. More...
 
const string TEXT_HEADER_PROPERTY_DELIMITER = "text_header_property_delimiter"
 Specifies the delimiter for column properties in the header row (if present). More...
 
const string TEXT_NULL_STRING = "text_null_string"
 Specifies the character string that should be interpreted as a null value in the source data. More...
 
const string TEXT_QUOTE_CHARACTER = "text_quote_character"
 Specifies the character that should be interpreted as a field value quoting character in the source data. More...
 
const string TEXT_SEARCH_COLUMNS = "text_search_columns"
 Add 'text_search' property to internally inferred string columns. More...
 
const string TEXT_SEARCH_MIN_COLUMN_LENGTH = "text_search_min_column_length"
 Set the minimum column size for strings to apply the 'text_search' property to. More...
 
const string TRUNCATE_STRINGS = "truncate_strings"
 If set to TRUE, truncate string values that are longer than the column's type size. More...
 
const string TRUNCATE_TABLE = "truncate_table"
 If set to TRUE, truncates the table specified by table_name prior to loading the file(s). More...
 
const string TYPE_INFERENCE_MODE = "type_inference_mode"
 Optimize type inferencing for either speed or accuracy. More...
 
const string ACCURACY = "accuracy"
 Scans data to get exactly-typed & sized columns for all data scanned. More...
 
const string SPEED = "speed"
 Scans data and picks the widest possible column types so that all values will fit with minimum data scanned. More...
 
const string UPDATE_ON_EXISTING_PK = "update_on_existing_pk"
 Specifies the record collision policy for inserting into a table with a primary key. More...
 

Detailed Description

A set of string constants for the parameter options.

Optional parameters.

Definition at line 263 of file InsertRecordsFromFiles.cs.
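
In practice, these constants are the keys (and frequently the values) of the options map passed when loading files. The following is a minimal sketch of such a load, assuming a local Kinetica instance, an existing target table, and a five-argument insertRecordsFromFiles call (table name, file paths, modify-columns map, create-table options, options); the URL, table name, and file path are illustrative only.

    using System.Collections.Generic;
    using kinetica;

    // Build the options map from the documented constants.
    var options = new Dictionary<string, string>
    {
        { InsertRecordsFromFilesRequest.Options.FILE_TYPE,
          InsertRecordsFromFilesRequest.Options.DELIMITED_TEXT },
        { InsertRecordsFromFilesRequest.Options.ERROR_HANDLING,
          InsertRecordsFromFilesRequest.Options.IGNORE_BAD_RECORDS },
        { InsertRecordsFromFilesRequest.Options.BATCH_SIZE, "50000" }
    };

    // Hypothetical connection and call; the exact overload may vary by API version.
    var kdb = new Kinetica("http://localhost:9191");
    kdb.insertRecordsFromFiles(
        "example_table",                                         // target table
        new List<string> { "data/orders.csv" },                  // filepaths
        new Dictionary<string, IDictionary<string, string>>(),   // modify_columns
        new Dictionary<string, string>(),                        // create_table_options
        options);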

Member Data Documentation

◆ ABORT

const string kinetica.InsertRecordsFromFilesRequest.Options.ABORT = "abort"

Stops current insertion and aborts entire operation when an error is encountered.

Primary key collisions are considered abortable errors in this mode.

Definition at line 460 of file InsertRecordsFromFiles.cs.

◆ ACCURACY

const string kinetica.InsertRecordsFromFilesRequest.Options.ACCURACY = "accuracy"

Scans data to get exactly-typed & sized columns for all data scanned.

Definition at line 1011 of file InsertRecordsFromFiles.cs.

◆ AUTO

const string kinetica.InsertRecordsFromFilesRequest.Options.AUTO = "auto"

Auto detect compression type

Definition at line 374 of file InsertRecordsFromFiles.cs.

◆ AVRO

const string kinetica.InsertRecordsFromFilesRequest.Options.AVRO = "avro"

Avro file format

Definition at line 501 of file InsertRecordsFromFiles.cs.

◆ BAD_RECORD_TABLE_LIMIT

const string kinetica.InsertRecordsFromFilesRequest.Options.BAD_RECORD_TABLE_LIMIT = "bad_record_table_limit"

A positive integer indicating the maximum number of records that can be written to the bad-record-table.

The default value is '10000'.

Definition at line 278 of file InsertRecordsFromFiles.cs.

◆ BAD_RECORD_TABLE_LIMIT_PER_INPUT

const string kinetica.InsertRecordsFromFilesRequest.Options.BAD_RECORD_TABLE_LIMIT_PER_INPUT = "bad_record_table_limit_per_input"

For subscriptions, a positive integer indicating the maximum number of records that can be written to the bad-record-table per file/payload.

The default value is BAD_RECORD_TABLE_LIMIT, and the total size of the table per rank is limited to BAD_RECORD_TABLE_LIMIT.

Definition at line 288 of file InsertRecordsFromFiles.cs.

◆ BAD_RECORD_TABLE_NAME

const string kinetica.InsertRecordsFromFilesRequest.Options.BAD_RECORD_TABLE_NAME = "bad_record_table_name"

Name of a table to which records that were rejected are written.

The bad-record-table has the following columns: line_number (long), line_rejected (string), error_message (string). When ERROR_HANDLING is ABORT, the bad-record-table is not populated.

Definition at line 273 of file InsertRecordsFromFiles.cs.

◆ BATCH_SIZE

const string kinetica.InsertRecordsFromFilesRequest.Options.BATCH_SIZE = "batch_size"

Number of records to insert per batch when inserting data.

The default value is '50000'.

Definition at line 293 of file InsertRecordsFromFiles.cs.

◆ BZIP2

const string kinetica.InsertRecordsFromFilesRequest.Options.BZIP2 = "bzip2"

bzip2 file compression.

Definition at line 380 of file InsertRecordsFromFiles.cs.

◆ COLUMN_FORMATS

const string kinetica.InsertRecordsFromFilesRequest.Options.COLUMN_FORMATS = "column_formats"

For each target column specified, applies the column-property-bound format to the source data loaded into that column.

Each column format will contain a mapping of one or more of its column properties to an appropriate format for each property. Currently supported column properties include date, time, & datetime. The parameter value must be formatted as a JSON string of maps of column names to maps of column properties to their corresponding column formats, e.g., '{ "order_date" : { "date" : "%Y.%m.%d" }, "order_time" : { "time" : "%H:%M:%S" } }'.

See DEFAULT_COLUMN_FORMATS for valid format syntax.

Definition at line 309 of file InsertRecordsFromFiles.cs.
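
As a sketch, the example mapping above can be supplied through the options map as a JSON string (order_date and order_time are the example column names, not part of the API):

    using System.Collections.Generic;
    using kinetica;

    // Per-column formats: column name -> column property -> format string
    var options = new Dictionary<string, string>
    {
        { InsertRecordsFromFilesRequest.Options.COLUMN_FORMATS,
          "{ \"order_date\" : { \"date\" : \"%Y.%m.%d\" }, \"order_time\" : { \"time\" : \"%H:%M:%S\" } }" }
    };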

◆ COLUMNS_TO_LOAD

const string kinetica.InsertRecordsFromFilesRequest.Options.COLUMNS_TO_LOAD = "columns_to_load"

Specifies a comma-delimited list of columns from the source data to load.

If more than one file is being loaded, this list applies to all files.

Column numbers can be specified discretely or as a range. For example, a value of '5,7,1..3' will insert values from the fifth column in the source data into the first column in the target table, from the seventh column in the source data into the second column in the target table, and from the first through third columns in the source data into the third through fifth columns in the target table.

If the source data contains a header, column names matching the file header names may be provided instead of column numbers. If the target table doesn't exist, the table will be created with the columns in this order. If the target table does exist with columns in a different order than the source data, this list can be used to match the order of the target table. For example, a value of 'C, B, A' will create a three column table with column C, followed by column B, followed by column A; or will insert those fields in that order into a table created with columns in that order. If the target table exists, the column names must match the source data field names for a name-mapping to be successful.

Mutually exclusive with COLUMNS_TO_SKIP.

Definition at line 337 of file InsertRecordsFromFiles.cs.
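
A sketch of supplying the '5,7,1..3' range example above as an option value:

    using System.Collections.Generic;
    using kinetica;

    // Load source columns 5, 7, and 1 through 3 into the first five target columns
    var options = new Dictionary<string, string>
    {
        { InsertRecordsFromFilesRequest.Options.COLUMNS_TO_LOAD, "5,7,1..3" }
    };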

◆ COLUMNS_TO_SKIP

const string kinetica.InsertRecordsFromFilesRequest.Options.COLUMNS_TO_SKIP = "columns_to_skip"

Specifies a comma-delimited list of columns from the source data to skip.

Mutually exclusive with COLUMNS_TO_LOAD.

Definition at line 344 of file InsertRecordsFromFiles.cs.

◆ COMPRESSION_TYPE

const string kinetica.InsertRecordsFromFilesRequest.Options.COMPRESSION_TYPE = "compression_type"

Source data compression type.

Supported values:

  • NONE: No compression.
  • AUTO: Auto detect compression type
  • GZIP: gzip file compression.
  • BZIP2: bzip2 file compression.

The default value is AUTO.

Definition at line 368 of file InsertRecordsFromFiles.cs.

◆ DATASOURCE_NAME

const string kinetica.InsertRecordsFromFilesRequest.Options.DATASOURCE_NAME = "datasource_name"

Name of an existing external data source from which data file(s) specified in filepaths will be loaded

Definition at line 385 of file InsertRecordsFromFiles.cs.

◆ DEFAULT_COLUMN_FORMATS

const string kinetica.InsertRecordsFromFilesRequest.Options.DEFAULT_COLUMN_FORMATS = "default_column_formats"

Specifies the default format to be applied to source data loaded into columns with the corresponding column property.

Currently supported column properties include date, time, & datetime. This default column-property-bound format can be overridden by specifying a column property & format for a given target column in COLUMN_FORMATS. For each specified annotation, the format will apply to all columns with that annotation unless a custom COLUMN_FORMATS for that annotation is specified.

The parameter value must be formatted as a JSON string that is a map of column properties to their respective column formats, e.g., '{ "date" : "%Y.%m.%d", "time" : "%H:%M:%S" }'. Column formats are specified as a string of control characters and plain text. The supported control characters are 'Y', 'm', 'd', 'H', 'M', and 'S', which follow the Linux 'strptime()' specification, as well as 's', which specifies seconds and fractional seconds (though the fractional component will be truncated past milliseconds).

Formats for the 'date' annotation must include the 'Y', 'm', and 'd' control characters. Formats for the 'time' annotation must include the 'H', 'M', and either 'S' or 's' (but not both) control characters. Formats for the 'datetime' annotation must meet both the 'date' and 'time' control character requirements. For example, '{"datetime" : "%m/%d/%Y %H:%M:%S" }' would be used to interpret text as "05/04/2000 12:12:11".

Definition at line 416 of file InsertRecordsFromFiles.cs.
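
A sketch of supplying the example default formats above; any column with the 'date' or 'time' property would be parsed with these formats unless overridden via COLUMN_FORMATS:

    using System.Collections.Generic;
    using kinetica;

    // Default formats by column property
    var options = new Dictionary<string, string>
    {
        { InsertRecordsFromFilesRequest.Options.DEFAULT_COLUMN_FORMATS,
          "{ \"date\" : \"%Y.%m.%d\", \"time\" : \"%H:%M:%S\" }" }
    };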

◆ DELIMITED_TEXT

const string kinetica.InsertRecordsFromFilesRequest.Options.DELIMITED_TEXT = "delimited_text"

Delimited text file format; e.g., CSV, TSV, PSV, etc.

Definition at line 505 of file InsertRecordsFromFiles.cs.

◆ DISTRIBUTED_LOCAL

const string kinetica.InsertRecordsFromFilesRequest.Options.DISTRIBUTED_LOCAL = "distributed_local"

A single worker process on each node loads all files that are available to it.

This option works best when each worker loads files from its own file system, to maximize performance. In order to avoid data duplication, either each worker performing the load needs to have visibility to a set of files unique to it (no file is visible to more than one node) or the target table needs to have a primary key (which will allow the worker to automatically deduplicate data).

NOTE:

If the target table doesn't exist, the table structure will be determined by the head node. If the head node has no files local to it, it will be unable to determine the structure and the request will fail.

If the head node is configured to have no worker processes, no data strictly accessible to the head node will be loaded.

Definition at line 762 of file InsertRecordsFromFiles.cs.

◆ DISTRIBUTED_SHARED

const string kinetica.InsertRecordsFromFilesRequest.Options.DISTRIBUTED_SHARED = "distributed_shared"

The head node coordinates loading data by worker processes across all nodes from shared files available to all workers.

NOTE:

Instead of existing on a shared source, the files can be duplicated on a source local to each host to improve performance, though the files must appear as the same data set from the perspective of all hosts performing the load.

Definition at line 743 of file InsertRecordsFromFiles.cs.

◆ DRY_RUN

const string kinetica.InsertRecordsFromFilesRequest.Options.DRY_RUN = "dry_run"

Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of ERROR_HANDLING.

Definition at line 613 of file InsertRecordsFromFiles.cs.

◆ EARLIEST

const string kinetica.InsertRecordsFromFilesRequest.Options.EARLIEST = "earliest"

Definition at line 647 of file InsertRecordsFromFiles.cs.

◆ ERROR_HANDLING

const string kinetica.InsertRecordsFromFilesRequest.Options.ERROR_HANDLING = "error_handling"

Specifies how errors should be handled upon insertion.

Supported values:

  • PERMISSIVE: Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped.
  • IGNORE_BAD_RECORDS: Malformed records are skipped.
  • ABORT: Stops current insertion and aborts entire operation when an error is encountered. Primary key collisions are considered abortable errors in this mode.

The default value is ABORT.

Definition at line 446 of file InsertRecordsFromFiles.cs.
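
A sketch combining an error-handling mode with a bad-record table; the table name and limit shown are hypothetical:

    using System.Collections.Generic;
    using kinetica;

    // Skip malformed records and write them to a capped bad-record table
    var options = new Dictionary<string, string>
    {
        { InsertRecordsFromFilesRequest.Options.ERROR_HANDLING,
          InsertRecordsFromFilesRequest.Options.IGNORE_BAD_RECORDS },
        { InsertRecordsFromFilesRequest.Options.BAD_RECORD_TABLE_NAME, "orders_bad_records" },
        { InsertRecordsFromFilesRequest.Options.BAD_RECORD_TABLE_LIMIT, "1000" }
    };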

◆ FALSE

const string kinetica.InsertRecordsFromFilesRequest.Options.FALSE = "false"

Reject new records when primary keys match existing records

Definition at line 543 of file InsertRecordsFromFiles.cs.

◆ FILE_TYPE

const string kinetica.InsertRecordsFromFilesRequest.Options.FILE_TYPE = "file_type"

Specifies the type of the file(s) whose records will be inserted.

Supported values:

  • AVRO: Avro file format
  • DELIMITED_TEXT: Delimited text file format; e.g., CSV, TSV, PSV, etc.
  • GDB: Esri/GDB file format
  • JSON: Json file format
  • PARQUET: Apache Parquet file format
  • SHAPEFILE: ShapeFile file format

The default value is DELIMITED_TEXT.

Definition at line 498 of file InsertRecordsFromFiles.cs.

◆ FLATTEN_COLUMNS

const string kinetica.InsertRecordsFromFilesRequest.Options.FLATTEN_COLUMNS = "flatten_columns"

Specifies how to handle nested columns.

Supported values:

  • TRUE: Break up nested columns to multiple columns
  • FALSE: Treat nested columns as json columns instead of flattening

The default value is FALSE.

Definition at line 535 of file InsertRecordsFromFiles.cs.

◆ FULL

const string kinetica.InsertRecordsFromFilesRequest.Options.FULL = "full"

Run a type inference on the source data (if needed) and ingest

Definition at line 607 of file InsertRecordsFromFiles.cs.

◆ GDAL_CONFIGURATION_OPTIONS

const string kinetica.InsertRecordsFromFilesRequest.Options.GDAL_CONFIGURATION_OPTIONS = "gdal_configuration_options"

Comma-separated list of GDAL configuration options for the specific request, as key=value pairs.

Definition at line 547 of file InsertRecordsFromFiles.cs.

◆ GDB

const string kinetica.InsertRecordsFromFilesRequest.Options.GDB = "gdb"

Esri/GDB file format

Definition at line 508 of file InsertRecordsFromFiles.cs.

◆ GZIP

const string kinetica.InsertRecordsFromFilesRequest.Options.GZIP = "gzip"

gzip file compression.

Definition at line 377 of file InsertRecordsFromFiles.cs.

◆ HEAD

const string kinetica.InsertRecordsFromFilesRequest.Options.HEAD = "head"

The head node loads all data.

All files must be available to the head node.

Definition at line 732 of file InsertRecordsFromFiles.cs.

◆ IGNORE_BAD_RECORDS

const string kinetica.InsertRecordsFromFilesRequest.Options.IGNORE_BAD_RECORDS = "ignore_bad_records"

Malformed records are skipped.

Definition at line 454 of file InsertRecordsFromFiles.cs.

◆ IGNORE_EXISTING_PK

const string kinetica.InsertRecordsFromFilesRequest.Options.IGNORE_EXISTING_PK = "ignore_existing_pk"

Specifies the record collision error-suppression policy for inserting into a table with a primary key, only used when not in upsert mode (upsert mode is disabled when UPDATE_ON_EXISTING_PK is FALSE).

Supported values:

  • TRUE: Ignore new records whose primary key values collide with those of existing records
  • FALSE: Treat as errors any new records whose primary key values collide with those of existing records

The default value is FALSE.

Definition at line 573 of file InsertRecordsFromFiles.cs.

◆ INGESTION_MODE

const string kinetica.InsertRecordsFromFilesRequest.Options.INGESTION_MODE = "ingestion_mode"

Whether to do a full load, dry run, or perform a type inference on the source data.

Supported values:

  • FULL: Run a type inference on the source data (if needed) and ingest
  • DRY_RUN: Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of ERROR_HANDLING.
  • TYPE_INFERENCE_ONLY: Infer the type of the source data and return, without ingesting any data. The inferred type is returned in the response.

The default value is FULL.

Definition at line 603 of file InsertRecordsFromFiles.cs.
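
A sketch of a validation-only pass that counts valid records without loading them:

    using System.Collections.Generic;
    using kinetica;

    // Walk the source data and report valid records, honoring ERROR_HANDLING
    var options = new Dictionary<string, string>
    {
        { InsertRecordsFromFilesRequest.Options.INGESTION_MODE,
          InsertRecordsFromFilesRequest.Options.DRY_RUN }
    };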

◆ JSON

const string kinetica.InsertRecordsFromFilesRequest.Options.JSON = "json"

Json file format

Definition at line 511 of file InsertRecordsFromFiles.cs.

◆ KAFKA_CONSUMERS_PER_RANK

const string kinetica.InsertRecordsFromFilesRequest.Options.KAFKA_CONSUMERS_PER_RANK = "kafka_consumers_per_rank"

Number of Kafka consumer threads per rank (valid range 1-6).

The default value is '1'.

Definition at line 624 of file InsertRecordsFromFiles.cs.

◆ KAFKA_GROUP_ID

const string kinetica.InsertRecordsFromFilesRequest.Options.KAFKA_GROUP_ID = "kafka_group_id"

The group id to be used when consuming data from a Kafka topic (valid only for Kafka datasource subscriptions).

Definition at line 629 of file InsertRecordsFromFiles.cs.

◆ KAFKA_OFFSET_RESET_POLICY

const string kinetica.InsertRecordsFromFilesRequest.Options.KAFKA_OFFSET_RESET_POLICY = "kafka_offset_reset_policy"

Policy to determine whether the Kafka data consumption starts either at earliest offset or latest offset.

Supported values:

  • EARLIEST
  • LATEST

The default value is EARLIEST.

Definition at line 645 of file InsertRecordsFromFiles.cs.

◆ KAFKA_OPTIMISTIC_INGEST

const string kinetica.InsertRecordsFromFilesRequest.Options.KAFKA_OPTIMISTIC_INGEST = "kafka_optimistic_ingest"

Enable optimistic ingestion where Kafka topic offsets and table data are committed independently to achieve parallelism.

Supported values:

  • TRUE
  • FALSE

The default value is FALSE.

Definition at line 664 of file InsertRecordsFromFiles.cs.

◆ KAFKA_SUBSCRIPTION_CANCEL_AFTER

const string kinetica.InsertRecordsFromFilesRequest.Options.KAFKA_SUBSCRIPTION_CANCEL_AFTER = "kafka_subscription_cancel_after"

Sets the Kafka subscription lifespan (in minutes).

An expired subscription will be cancelled automatically.

Definition at line 670 of file InsertRecordsFromFiles.cs.

◆ KAFKA_TYPE_INFERENCE_FETCH_TIMEOUT

const string kinetica.InsertRecordsFromFilesRequest.Options.KAFKA_TYPE_INFERENCE_FETCH_TIMEOUT = "kafka_type_inference_fetch_timeout"

Maximum time to collect Kafka messages before running type inference on them.

Definition at line 674 of file InsertRecordsFromFiles.cs.

◆ LATEST

const string kinetica.InsertRecordsFromFilesRequest.Options.LATEST = "latest"

Definition at line 648 of file InsertRecordsFromFiles.cs.

◆ LAYER

const string kinetica.InsertRecordsFromFilesRequest.Options.LAYER = "layer"

Comma-separated list of geo file layer name(s).

Definition at line 677 of file InsertRecordsFromFiles.cs.

◆ LOADING_MODE

const string kinetica.InsertRecordsFromFilesRequest.Options.LOADING_MODE = "loading_mode"

Scheme for distributing the extraction and loading of data from the source data file(s).

Supported values:

  • HEAD: The head node loads all data. All files must be available to the head node.
  • DISTRIBUTED_SHARED: The head node coordinates loading data by worker processes across all nodes from shared files available to all workers. NOTE: Instead of existing on a shared source, the files can be duplicated on a source local to each host to improve performance, though the files must appear as the same data set from the perspective of all hosts performing the load.
  • DISTRIBUTED_LOCAL: A single worker process on each node loads all files that are available to it. This option works best when each worker loads files from its own file system, to maximize performance. In order to avoid data duplication, either each worker performing the load needs to have visibility to a set of files unique to it (no file is visible to more than one node) or the target table needs to have a primary key (which will allow the worker to automatically deduplicate data). NOTE: If the target table doesn't exist, the table structure will be determined by the head node. If the head node has no files local to it, it will be unable to determine the structure and the request will fail. If the head node is configured to have no worker processes, no data strictly accessible to the head node will be loaded.

The default value is HEAD.

Definition at line 727 of file InsertRecordsFromFiles.cs.
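
A sketch of selecting the distributed-local scheme; as noted above, each worker should see a unique set of files, or the target table should have a primary key, to avoid duplicates:

    using System.Collections.Generic;
    using kinetica;

    // Each node's worker loads only the files visible to that node
    var options = new Dictionary<string, string>
    {
        { InsertRecordsFromFilesRequest.Options.LOADING_MODE,
          InsertRecordsFromFilesRequest.Options.DISTRIBUTED_LOCAL }
    };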

◆ LOCAL_TIME_OFFSET

const string kinetica.InsertRecordsFromFilesRequest.Options.LOCAL_TIME_OFFSET = "local_time_offset"

Apply an offset to Avro local timestamp columns.

Definition at line 766 of file InsertRecordsFromFiles.cs.

◆ MAX_RECORDS_TO_LOAD

const string kinetica.InsertRecordsFromFilesRequest.Options.MAX_RECORDS_TO_LOAD = "max_records_to_load"

Limit the number of records to load in this request: if this number is larger than BATCH_SIZE, then the number of records loaded will be limited to the next whole number of BATCH_SIZE (per working thread).

Definition at line 774 of file InsertRecordsFromFiles.cs.

◆ NONE

const string kinetica.InsertRecordsFromFilesRequest.Options.NONE = "none"

No compression.

Definition at line 371 of file InsertRecordsFromFiles.cs.

◆ NUM_TASKS_PER_RANK

const string kinetica.InsertRecordsFromFilesRequest.Options.NUM_TASKS_PER_RANK = "num_tasks_per_rank"

Number of tasks for reading file per rank.

The default is the value of the system configuration parameter external_file_reader_num_tasks.

Definition at line 779 of file InsertRecordsFromFiles.cs.

◆ PARQUET

const string kinetica.InsertRecordsFromFilesRequest.Options.PARQUET = "parquet"

Apache Parquet file format

Definition at line 514 of file InsertRecordsFromFiles.cs.

◆ PERMISSIVE

const string kinetica.InsertRecordsFromFilesRequest.Options.PERMISSIVE = "permissive"

Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped.

Definition at line 451 of file InsertRecordsFromFiles.cs.

◆ POLL_INTERVAL

const string kinetica.InsertRecordsFromFilesRequest.Options.POLL_INTERVAL = "poll_interval"

If SUBSCRIBE is TRUE, the number of seconds between attempts to load external files into the table.

If zero, polling will be continuous as long as data is found. If no data is found, the interval will steadily increase to a maximum of 60 seconds. The default value is '0'.

Definition at line 788 of file InsertRecordsFromFiles.cs.

◆ PRIMARY_KEYS

const string kinetica.InsertRecordsFromFilesRequest.Options.PRIMARY_KEYS = "primary_keys"

Comma separated list of column names to set as primary keys, when not specified in the type.

Definition at line 792 of file InsertRecordsFromFiles.cs.

◆ SCHEMA_REGISTRY_SCHEMA_NAME

const string kinetica.InsertRecordsFromFilesRequest.Options.SCHEMA_REGISTRY_SCHEMA_NAME = "schema_registry_schema_name"

Name of the Avro schema in the schema registry to use when reading Avro records.

Definition at line 796 of file InsertRecordsFromFiles.cs.

◆ SHAPEFILE

const string kinetica.InsertRecordsFromFilesRequest.Options.SHAPEFILE = "shapefile"

ShapeFile file format

Definition at line 517 of file InsertRecordsFromFiles.cs.

◆ SHARD_KEYS

const string kinetica.InsertRecordsFromFilesRequest.Options.SHARD_KEYS = "shard_keys"

Comma separated list of column names to set as shard keys, when not specified in the type.

Definition at line 800 of file InsertRecordsFromFiles.cs.

◆ SINGLE

const string kinetica.InsertRecordsFromFilesRequest.Options.SINGLE = "single"

Insert all records into a single table.

Definition at line 848 of file InsertRecordsFromFiles.cs.

◆ SKIP_LINES

const string kinetica.InsertRecordsFromFilesRequest.Options.SKIP_LINES = "skip_lines"

Number of lines to skip from the beginning of the file.

Definition at line 803 of file InsertRecordsFromFiles.cs.

◆ SPEED

const string kinetica.InsertRecordsFromFilesRequest.Options.SPEED = "speed"

Scans data and picks the widest possible column types so that all values will fit with minimum data scanned.

Definition at line 1016 of file InsertRecordsFromFiles.cs.

◆ START_OFFSETS

const string kinetica.InsertRecordsFromFilesRequest.Options.START_OFFSETS = "start_offsets"

Starting offsets by partition to fetch from Kafka.

A comma separated list of partition:offset pairs.

Definition at line 809 of file InsertRecordsFromFiles.cs.

◆ SUBSCRIBE

const string kinetica.InsertRecordsFromFilesRequest.Options.SUBSCRIBE = "subscribe"

Continuously poll the data source to check for new data and load it into the table.

Supported values:

  • TRUE
  • FALSE

The default value is FALSE.

Definition at line 824 of file InsertRecordsFromFiles.cs.
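
A sketch of a Kafka subscription load; the data source name and group id are hypothetical:

    using System.Collections.Generic;
    using kinetica;

    // Continuously consume from a Kafka data source, starting at the earliest offset
    var options = new Dictionary<string, string>
    {
        { InsertRecordsFromFilesRequest.Options.DATASOURCE_NAME, "kafka_orders_ds" },
        { InsertRecordsFromFilesRequest.Options.SUBSCRIBE,
          InsertRecordsFromFilesRequest.Options.TRUE },
        { InsertRecordsFromFilesRequest.Options.KAFKA_GROUP_ID, "example_loader_group" },
        { InsertRecordsFromFilesRequest.Options.KAFKA_OFFSET_RESET_POLICY,
          InsertRecordsFromFilesRequest.Options.EARLIEST }
    };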

◆ TABLE_INSERT_MODE

const string kinetica.InsertRecordsFromFilesRequest.Options.TABLE_INSERT_MODE = "table_insert_mode"

Insertion scheme to use when inserting records from multiple shapefiles.

Supported values:

  • SINGLE: Insert all records into a single table.
  • TABLE_PER_FILE: Insert records from each file into a new table corresponding to that file.

The default value is SINGLE.

Definition at line 845 of file InsertRecordsFromFiles.cs.

◆ TABLE_PER_FILE

const string kinetica.InsertRecordsFromFilesRequest.Options.TABLE_PER_FILE = "table_per_file"

Insert records from each file into a new table corresponding to that file.

Definition at line 852 of file InsertRecordsFromFiles.cs.

◆ TEXT_COMMENT_STRING

const string kinetica.InsertRecordsFromFilesRequest.Options.TEXT_COMMENT_STRING = "text_comment_string"

Specifies the character string that should be interpreted as a comment line prefix in the source data.

All lines in the data starting with the provided string are ignored.

For DELIMITED_TEXT FILE_TYPE only. The default value is '#'.

Definition at line 863 of file InsertRecordsFromFiles.cs.

◆ TEXT_DELIMITER

const string kinetica.InsertRecordsFromFilesRequest.Options.TEXT_DELIMITER = "text_delimiter"

Specifies the character delimiting field values in the source data and field names in the header (if present).

For DELIMITED_TEXT FILE_TYPE only. The default value is ','.

Definition at line 872 of file InsertRecordsFromFiles.cs.

◆ TEXT_ESCAPE_CHARACTER

const string kinetica.InsertRecordsFromFilesRequest.Options.TEXT_ESCAPE_CHARACTER = "text_escape_character"

Specifies the character that is used to escape other characters in the source data.

An 'a', 'b', 'f', 'n', 'r', 't', or 'v' preceded by an escape character will be interpreted as the ASCII bell, backspace, form feed, line feed, carriage return, horizontal tab, & vertical tab, respectively. For example, the escape character followed by an 'n' will be interpreted as a newline within a field value.

The escape character can also be used to escape the quoting character, and will be treated as an escape character whether it is within a quoted field value or not.

For DELIMITED_TEXT FILE_TYPE only.

Definition at line 888 of file InsertRecordsFromFiles.cs.

◆ TEXT_HAS_HEADER

const string kinetica.InsertRecordsFromFilesRequest.Options.TEXT_HAS_HEADER = "text_has_header"

Indicates whether the source data contains a header row.

Supported values:

  • TRUE
  • FALSE

The default value is TRUE.

Definition at line 903 of file InsertRecordsFromFiles.cs.

◆ TEXT_HEADER_PROPERTY_DELIMITER

const string kinetica.InsertRecordsFromFilesRequest.Options.TEXT_HEADER_PROPERTY_DELIMITER = "text_header_property_delimiter"

Specifies the delimiter for column properties in the header row (if present).

Cannot be set to same value as TEXT_DELIMITER.

For DELIMITED_TEXT FILE_TYPE only. The default value is '|'.

Definition at line 915 of file InsertRecordsFromFiles.cs.

◆ TEXT_NULL_STRING

const string kinetica.InsertRecordsFromFilesRequest.Options.TEXT_NULL_STRING = "text_null_string"

Specifies the character string that should be interpreted as a null value in the source data.

For DELIMITED_TEXT FILE_TYPE only. The default value is '\N'.

Definition at line 923 of file InsertRecordsFromFiles.cs.

◆ TEXT_QUOTE_CHARACTER

const string kinetica.InsertRecordsFromFilesRequest.Options.TEXT_QUOTE_CHARACTER = "text_quote_character"

Specifies the character that should be interpreted as a field value quoting character in the source data.

The character must appear at beginning and end of field value to take effect. Delimiters within quoted fields are treated as literals and not delimiters. Within a quoted field, two consecutive quote characters will be interpreted as a single literal quote character, effectively escaping it. To not have a quote character, specify an empty string.

For DELIMITED_TEXT FILE_TYPE only. The default value is '"'.

Definition at line 937 of file InsertRecordsFromFiles.cs.
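
A sketch of parsing options for a tab-delimited source with a header row and 'NULL' as the null marker; the values shown are illustrative:

    using System.Collections.Generic;
    using kinetica;

    // Delimited-text parsing settings
    var options = new Dictionary<string, string>
    {
        { InsertRecordsFromFilesRequest.Options.FILE_TYPE,
          InsertRecordsFromFilesRequest.Options.DELIMITED_TEXT },
        { InsertRecordsFromFilesRequest.Options.TEXT_DELIMITER, "\t" },
        { InsertRecordsFromFilesRequest.Options.TEXT_NULL_STRING, "NULL" },
        { InsertRecordsFromFilesRequest.Options.TEXT_QUOTE_CHARACTER, "\"" },
        { InsertRecordsFromFilesRequest.Options.TEXT_HAS_HEADER,
          InsertRecordsFromFilesRequest.Options.TRUE }
    };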

◆ TEXT_SEARCH_COLUMNS

const string kinetica.InsertRecordsFromFilesRequest.Options.TEXT_SEARCH_COLUMNS = "text_search_columns"

Add 'text_search' property to internally inferred string columns.

Comma-separated list of column names or '*' for all columns. To add the 'text_search' property only to string columns greater than or equal to a minimum size, also set TEXT_SEARCH_MIN_COLUMN_LENGTH.

Definition at line 947 of file InsertRecordsFromFiles.cs.

◆ TEXT_SEARCH_MIN_COLUMN_LENGTH

const string kinetica.InsertRecordsFromFilesRequest.Options.TEXT_SEARCH_MIN_COLUMN_LENGTH = "text_search_min_column_length"

Set the minimum column size for strings to apply the 'text_search' property to.

Used only when TEXT_SEARCH_COLUMNS has a value.

Definition at line 954 of file InsertRecordsFromFiles.cs.

◆ TRUE

const string kinetica.InsertRecordsFromFilesRequest.Options.TRUE = "true"

Upsert new records when primary keys match existing records

Definition at line 539 of file InsertRecordsFromFiles.cs.

◆ TRUNCATE_STRINGS

const string kinetica.InsertRecordsFromFilesRequest.Options.TRUNCATE_STRINGS = "truncate_strings"

If set to TRUE, truncate string values that are longer than the column's type size.

Supported values:

  • TRUE
  • FALSE

The default value is FALSE.

Definition at line 970 of file InsertRecordsFromFiles.cs.

◆ TRUNCATE_TABLE

const string kinetica.InsertRecordsFromFilesRequest.Options.TRUNCATE_TABLE = "truncate_table"

If set to TRUE, truncates the table specified by table_name prior to loading the file(s).

Supported values:

  • TRUE
  • FALSE

The default value is FALSE.

Definition at line 986 of file InsertRecordsFromFiles.cs.

◆ TYPE_INFERENCE_MODE

const string kinetica.InsertRecordsFromFilesRequest.Options.TYPE_INFERENCE_MODE = "type_inference_mode"

Optimize type inferencing for either speed or accuracy.

Supported values:

  • ACCURACY: Scans data to get exactly-typed & sized columns for all data scanned.
  • SPEED: Scans data and picks the widest possible column types so that all values will fit with minimum data scanned

The default value is ACCURACY.

Definition at line 1007 of file InsertRecordsFromFiles.cs.

◆ TYPE_INFERENCE_ONLY

const string kinetica.InsertRecordsFromFilesRequest.Options.TYPE_INFERENCE_ONLY = "type_inference_only"

Infer the type of the source data and return, without ingesting any data.

The inferred type is returned in the response.

Definition at line 619 of file InsertRecordsFromFiles.cs.

◆ UPDATE_ON_EXISTING_PK

const string kinetica.InsertRecordsFromFilesRequest.Options.UPDATE_ON_EXISTING_PK = "update_on_existing_pk"

Specifies the record collision policy for inserting into a table with a primary key.

Supported values:

  • TRUE: Upsert new records when primary keys match existing records
  • FALSE: Reject new records when primary keys match existing records

The default value is FALSE.

Definition at line 1037 of file InsertRecordsFromFiles.cs.
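
A sketch of the two collision policies; typically only one of UPDATE_ON_EXISTING_PK or IGNORE_EXISTING_PK would be set:

    using System.Collections.Generic;
    using kinetica;

    // Upsert: replace existing records whose primary keys match incoming records
    var options = new Dictionary<string, string>
    {
        { InsertRecordsFromFilesRequest.Options.UPDATE_ON_EXISTING_PK,
          InsertRecordsFromFilesRequest.Options.TRUE }
    };

    // Alternatively, suppress collision errors without upserting:
    // options[InsertRecordsFromFilesRequest.Options.IGNORE_EXISTING_PK] =
    //     InsertRecordsFromFilesRequest.Options.TRUE;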


The documentation for this struct was generated from the following file:

  • InsertRecordsFromFiles.cs