Kinetica C# API Version 7.0.19.0
kinetica.InsertRecordsFromFilesRequest.Options Struct Reference

Optional parameters. More...

Public Attributes

const string BATCH_SIZE = "batch_size"
 Specifies the number of records to process before inserting. More...
 
const string COLUMN_FORMATS = "column_formats"
 For each target column specified, applies the column-property-bound format to the source data loaded into that column. More...
 
const string COLUMNS_TO_LOAD = "columns_to_load"
 For delimited_text file_type only. More...
 
const string DEFAULT_COLUMN_FORMATS = "default_column_formats"
 Specifies the default format to be applied to source data loaded into columns with the corresponding column property. More...
 
const string DRY_RUN = "dry_run"
 If set to true, no data will be inserted, but the file will be read with the applied error_handling mode and the number of valid records that would normally be inserted is returned. More...
 
const string FALSE = "false"
 
const string TRUE = "true"
 
const string ERROR_HANDLING = "error_handling"
 Specifies how errors should be handled upon insertion. More...
 
const string PERMISSIVE = "permissive"
 Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped. More...
 
const string IGNORE_BAD_RECORDS = "ignore_bad_records"
 Malformed records are skipped. More...
 
const string ABORT = "abort"
 Stops current insertion and aborts entire operation when an error is encountered. More...
 
const string FILE_TYPE = "file_type"
 File type for the file(s). More...
 
const string DELIMITED_TEXT = "delimited_text"
 Indicates the file(s) are in delimited text format, e.g., CSV, TSV, PSV, etc. More...
 
const string LOADING_MODE = "loading_mode"
 Specifies how to divide data loading among nodes. More...
 
const string HEAD = "head"
 The head node loads all data. More...
 
const string DISTRIBUTED_SHARED = "distributed_shared"
 The worker nodes coordinate loading a set of files that are available to all of them. More...
 
const string DISTRIBUTED_LOCAL = "distributed_local"
 Each worker node loads all files that are available to it. More...
 
const string TEXT_COMMENT_STRING = "text_comment_string"
 For delimited_text file_type only. More...
 
const string TEXT_DELIMITER = "text_delimiter"
 For delimited_text file_type only. More...
 
const string TEXT_ESCAPE_CHARACTER = "text_escape_character"
 For delimited_text file_type only. More...
 
const string TEXT_HAS_HEADER = "text_has_header"
 For delimited_text file_type only. More...
 
const string TEXT_HEADER_PROPERTY_DELIMITER = "text_header_property_delimiter"
 For delimited_text file_type only. More...
 
const string TEXT_NULL_STRING = "text_null_string"
 For delimited_text file_type only. More...
 
const string TEXT_QUOTE_CHARACTER = "text_quote_character"
 For delimited_text file_type only. More...
 
const string TRUNCATE_TABLE = "truncate_table"
 If set to true, truncates the table specified by table_name prior to loading the file(s). More...
 
const string NUM_TASKS_PER_RANK = "num_tasks_per_rank"
 Number of tasks per rank for reading the file(s). More...
 

Detailed Description

Optional parameters.

  • BATCH_SIZE: Specifies the number of records to process before inserting.
  • COLUMN_FORMATS: For each target column specified, applies the column-property-bound format to the source data loaded into that column. Each column format will contain a mapping of one or more of its column properties to an appropriate format for each property. Currently supported column properties include date, time, & datetime. The parameter value must be formatted as a JSON string of maps of column names to maps of column properties to their corresponding column formats, e.g., { "order_date" : { "date" : "%Y.%m.%d" }, "order_time" : { "time" : "%H:%M:%S" } }. See default_column_formats for valid format syntax.
  • COLUMNS_TO_LOAD: For delimited_text file_type only. Specifies a comma-delimited list of column positions or names to load instead of loading all columns in the file(s); if more than one file is being loaded, the list of columns will apply to all files. Column numbers can be specified discretely or as a range, e.g., a value of '5,7,1..3' will create a table with the first column in the table being the fifth column in the file, followed by the seventh column in the file, then the first column through the fourth column in the file.
  • DEFAULT_COLUMN_FORMATS: Specifies the default format to be applied to source data loaded into columns with the corresponding column property. This default column-property-bound format can be overridden by specifying a column property & format for a given target column in column_formats. For each specified annotation, the format will apply to all columns with that annotation unless a custom column_formats for that annotation is specified. The parameter value must be formatted as a JSON string that is a map of column properties to their respective column formats, e.g., { "date" : "%Y.%m.%d", "time" : "%H:%M:%S" }. Column formats are specified as a string of control characters and plain text. The supported control characters are 'Y', 'm', 'd', 'H', 'M', and 'S', which follow the Linux 'strptime()' specification, as well as 's', which specifies seconds and fractional seconds (though the fractional component will be truncated past milliseconds). Formats for the 'date' annotation must include the 'Y', 'm', and 'd' control characters. Formats for the 'time' annotation must include the 'H', 'M', and either 'S' or 's' (but not both) control characters. Formats for the 'datetime' annotation must meet both the 'date' and 'time' control character requirements. For example, '{ "datetime" : "%m/%d/%Y %H:%M:%S" }' would be used to interpret the text "05/04/2000 12:12:11".
  • DRY_RUN: If set to true, no data will be inserted, but the file will be read with the applied error_handling mode and the number of valid records that would normally be inserted is returned. Supported values: TRUE, FALSE. The default value is FALSE.
  • ERROR_HANDLING: Specifies how errors should be handled upon insertion. Supported values:
    • PERMISSIVE: Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped.
    • IGNORE_BAD_RECORDS: Malformed records are skipped.
    • ABORT: Stops current insertion and aborts entire operation when an error is encountered.
    The default value is PERMISSIVE.
  • FILE_TYPE: File type for the file(s). Supported values:
    • DELIMITED_TEXT: Indicates the file(s) are in delimited text format, e.g., CSV, TSV, PSV, etc.
    The default value is DELIMITED_TEXT.
  • LOADING_MODE: Specifies how to divide data loading among nodes. Supported values:
    • HEAD: The head node loads all data. All files must be available on the head node.
    • DISTRIBUTED_SHARED: The worker nodes coordinate loading a set of files that are available to all of them. All files must be available on all nodes. This option is best when there is a shared file system.
    • DISTRIBUTED_LOCAL: Each worker node loads all files that are available to it. This option is best when each worker node has its own file system.
    The default value is HEAD.
  • TEXT_COMMENT_STRING: For delimited_text file_type only. All lines in the file(s) starting with the provided string are ignored. The comment string has no effect unless it appears at the beginning of a line. The default value is '#'.
  • TEXT_DELIMITER: For delimited_text file_type only. Specifies the delimiter for values and columns in the header row (if present). Must be a single character. The default value is ','.
  • TEXT_ESCAPE_CHARACTER: For delimited_text file_type only. The character used in the file(s) to escape certain character sequences in text. For example, the escape character followed by a literal 'n' escapes to a newline character within the field. It can be used within a quoted string to escape a quote character. An empty value for this option specifies no escape character.
  • TEXT_HAS_HEADER: For delimited_text file_type only. Indicates whether the delimited text files have a header row. Supported values: TRUE, FALSE. The default value is TRUE.
  • TEXT_HEADER_PROPERTY_DELIMITER: For delimited_text file_type only. Specifies the delimiter for column properties in the header row (if present). Cannot be set to the same value as text_delimiter. The default value is '|'.
  • TEXT_NULL_STRING: For delimited_text file_type only. The value in the file(s) to treat as a null value in the database. The default value is ''.
  • TEXT_QUOTE_CHARACTER: For delimited_text file_type only. The quote character used in the file(s), typically encompassing a field value. The character must appear at the beginning and end of a field to take effect. Delimiters within quoted fields are not treated as delimiters. Within a quoted field, double quotes (") can be used to escape a single literal quote character. To not have a quote character, specify an empty string (""). The default value is '"'.
  • TRUNCATE_TABLE: If set to true, truncates the table specified by table_name prior to loading the file(s). Supported values: TRUE, FALSE. The default value is FALSE.
  • NUM_TASKS_PER_RANK: Number of tasks per rank for reading the file(s). The default is external_file_reader_num_tasks.

The default value is an empty Dictionary. This struct provides the set of string constants that may be used as keys in that options dictionary; a usage sketch follows.
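For illustration only, a minimal sketch of how these constants might be used. It assumes a Kinetica connection object (here called db) exposing an insertRecordsFromFiles(request) call and a 7.0-style request constructor taking the table name, file paths, create-table options, and these options, in that order; the table name and file path are placeholders.

    using System.Collections.Generic;
    using kinetica;

    // Build the options dictionary with the Options constants instead of raw
    // strings, so misspelled keys are caught at compile time.
    var options = new Dictionary<string, string>
    {
        { InsertRecordsFromFilesRequest.Options.FILE_TYPE,
          InsertRecordsFromFilesRequest.Options.DELIMITED_TEXT },
        { InsertRecordsFromFilesRequest.Options.TEXT_HAS_HEADER,
          InsertRecordsFromFilesRequest.Options.TRUE },
        { InsertRecordsFromFilesRequest.Options.ERROR_HANDLING,
          InsertRecordsFromFilesRequest.Options.IGNORE_BAD_RECORDS },
        { InsertRecordsFromFilesRequest.Options.BATCH_SIZE, "10000" },
        // First pass: validate the file without inserting anything.
        { InsertRecordsFromFilesRequest.Options.DRY_RUN,
          InsertRecordsFromFilesRequest.Options.TRUE }
    };

    // Placeholder table name and file path; create_table_options left empty.
    var request = new InsertRecordsFromFilesRequest(
        "example_table",
        new List<string> { "/data/example.csv" },
        new Dictionary<string, string>(),   // create_table_options
        options );

    var response = db.insertRecordsFromFiles( request );

Once the dry run reports the expected record count, the same request can be re-issued with DRY_RUN removed (or set to FALSE) to perform the actual load.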

Definition at line 731 of file InsertRecordsFromFiles.cs.

Member Data Documentation

const string kinetica.InsertRecordsFromFilesRequest.Options.ABORT = "abort"

Stops current insertion and aborts entire operation when an error is encountered.

Definition at line 845 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.BATCH_SIZE = "batch_size"

Specifies the number of records to process before inserting.

Definition at line 736 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.COLUMN_FORMATS = "column_formats"

For each target column specified, applies the column-property-bound format to the source data loaded into that column.

Each column format will contain a mapping of one or more of its column properties to an appropriate format for each property. Currently supported column properties include date, time, & datetime. The parameter value must be formatted as a JSON string of maps of column names to maps of column properties to their corresponding column formats, e.g., { "order_date" : { "date" : "%Y.%m.%d" }, "order_time" : { "time" : "%H:%M:%S" } }. See default_column_formats for valid format syntax.
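As a sketch, reusing the options dictionary from the earlier example and the order_date/order_time columns from this description, the value is passed as a single JSON string:

    // Apply a date format to "order_date" and a time format to "order_time".
    options[ InsertRecordsFromFilesRequest.Options.COLUMN_FORMATS ] =
        "{ \"order_date\" : { \"date\" : \"%Y.%m.%d\" }, " +
          "\"order_time\" : { \"time\" : \"%H:%M:%S\" } }";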

Definition at line 749 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.COLUMNS_TO_LOAD = "columns_to_load"

For delimited_text file_type only.

Specifies a comma-delimited list of column positions or names to load instead of loading all columns in the file(s); if more than one file is being loaded, the list of columns will apply to all files. Column numbers can be specified discretely or as a range, e.g., a value of '5,7,1..3' will create a table with the first column in the table being the fifth column in the file, followed by the seventh column in the file, then the first column through the fourth column in the file.
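For example, a sketch reusing the options dictionary from the earlier example:

    // Load only file columns 5, 7, and the range 1..3, in that order.
    options[ InsertRecordsFromFilesRequest.Options.COLUMNS_TO_LOAD ] = "5,7,1..3";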

Definition at line 760 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.DEFAULT_COLUMN_FORMATS = "default_column_formats"

Specifies the default format to be applied to source data loaded into columns with the corresponding column property.

This default column-property-bound format can be overridden by specifying a column property & format for a given target column in column_formats. For each specified annotation, the format will apply to all columns with that annotation unless a custom column_formats for that annotation is specified. The parameter value must be formatted as a JSON string that is a map of column properties to their respective column formats, e.g., { "date" : "%Y.%m.%d", "time" : "%H:%M:%S" }. Column formats are specified as a string of control characters and plain text. The supported control characters are 'Y', 'm', 'd', 'H', 'M', and 'S', which follow the Linux 'strptime()' specification, as well as 's', which specifies seconds and fractional seconds (though the fractional component will be truncated past milliseconds). Formats for the 'date' annotation must include the 'Y', 'm', and 'd' control characters. Formats for the 'time' annotation must include the 'H', 'M', and either 'S' or 's' (but not both) control characters. Formats for the 'datetime' annotation must meet both the 'date' and 'time' control character requirements. For example, '{ "datetime" : "%m/%d/%Y %H:%M:%S" }' would be used to interpret the text "05/04/2000 12:12:11".
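A sketch of setting a default format for all date and time columns, again reusing the options dictionary from the earlier example:

    // Any column with the 'date' or 'time' property uses these formats unless
    // a per-column format is given via column_formats.
    options[ InsertRecordsFromFilesRequest.Options.DEFAULT_COLUMN_FORMATS ] =
        "{ \"date\" : \"%Y.%m.%d\", \"time\" : \"%H:%M:%S\" }";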

Definition at line 785 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.DELIMITED_TEXT = "delimited_text"

Indicates the file(s) are in delimited text format, e.g., CSV, TSV, PSV, etc.

Definition at line 863 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.DISTRIBUTED_LOCAL = "distributed_local"

Each worker node loads all files that are available to it.

This option is best when each worker node has its own file system.

Definition at line 907 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.DISTRIBUTED_SHARED = "distributed_shared"

The worker nodes coordinate loading a set of files that are available to all of them.

All files must be available on all nodes. This option is best when there is a shared file system.

Definition at line 902 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.DRY_RUN = "dry_run"

If set to true, no data will be inserted, but the file will be read with the applied error_handling mode and the number of valid records that would normally be inserted is returned.

Supported values: TRUE, FALSE.

The default value is FALSE.

Definition at line 804 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.ERROR_HANDLING = "error_handling"

Specifies how errors should be handled upon insertion.

Supported values:

  • PERMISSIVE: Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped.
  • IGNORE_BAD_RECORDS: Malformed records are skipped.
  • ABORT: Stops current insertion and aborts entire operation when an error is encountered.

The default value is PERMISSIVE.
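For example, to skip malformed records rather than abort, a sketch reusing the options dictionary from the earlier example:

    // Drop bad records and keep loading the rest of the file(s).
    options[ InsertRecordsFromFilesRequest.Options.ERROR_HANDLING ] =
        InsertRecordsFromFilesRequest.Options.IGNORE_BAD_RECORDS;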

Definition at line 833 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.FALSE = "false"

Definition at line 805 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.FILE_TYPE = "file_type"

File type for the file(s).

Supported values:

  • DELIMITED_TEXT: Indicates the file(s) are in delimited text format, e.g., CSV, TSV, PSV, etc.

The default value is DELIMITED_TEXT.

Definition at line 859 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.HEAD = "head"

The head node loads all data.

All files must be available on the head node.

Definition at line 896 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.IGNORE_BAD_RECORDS = "ignore_bad_records"

Malformed records are skipped.

Definition at line 841 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.LOADING_MODE = "loading_mode"

Specifies how to divide data loading among nodes.

Supported values:

  • HEAD: The head node loads all data. All files must be available on the head node.
  • DISTRIBUTED_SHARED: The worker nodes coordinate loading a set of files that are available to all of them. All files must be available on all nodes. This option is best when there is a shared file system.
  • DISTRIBUTED_LOCAL: Each worker node loads all files that are available to it. This option is best when each worker node has its own file system.

The default value is HEAD.
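For example, when the files sit on a file system shared by all worker nodes, a sketch reusing the options dictionary from the earlier example:

    // Let the worker nodes coordinate the load among themselves.
    options[ InsertRecordsFromFilesRequest.Options.LOADING_MODE ] =
        InsertRecordsFromFilesRequest.Options.DISTRIBUTED_SHARED;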

Definition at line 892 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.NUM_TASKS_PER_RANK = "num_tasks_per_rank"

Number of tasks per rank for reading the file(s).

The default is external_file_reader_num_tasks.

Definition at line 987 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.PERMISSIVE = "permissive"

Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped.

Definition at line 838 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.TEXT_COMMENT_STRING = "text_comment_string"

For delimited_text file_type only.

All lines in the file(s) starting with the provided string are ignored. The comment string has no effect unless it appears at the beginning of a line. The default value is '#'.

Definition at line 913 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.TEXT_DELIMITER = "text_delimiter"

For delimited_text file_type only.

Specifies the delimiter for values and columns in the header row (if present). Must be a single character. The default value is ','.

Definition at line 919 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.TEXT_ESCAPE_CHARACTER = "text_escape_character"

For delimited_text file_type only.

The character used in the file(s) to escape certain character sequences in text. For example, the escape character followed by a literal 'n' escapes to a newline character within the field. It can be used within a quoted string to escape a quote character. An empty value for this option specifies no escape character.

Definition at line 928 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.TEXT_HAS_HEADER = "text_has_header"

For delimited_text file_type only.

Indicates whether the delimited text files have a header row. Supported values: TRUE, FALSE.

The default value is TRUE.

Definition at line 945 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.TEXT_HEADER_PROPERTY_DELIMITER = "text_header_property_delimiter"

For delimited_text file_type only.

Specifies the delimiter for column properties in the header row (if present). Cannot be set to the same value as text_delimiter. The default value is '|'.

Definition at line 951 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.TEXT_NULL_STRING = "text_null_string"

For delimited_text file_type only.

The value in the file(s) to treat as a null value in the database. The default value is ''.

Definition at line 956 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.TEXT_QUOTE_CHARACTER = "text_quote_character"

For delimited_text file_type only.

The quote character used in the file(s), typically encompassing a field value. The character must appear at the beginning and end of a field to take effect. Delimiters within quoted fields are not treated as delimiters. Within a quoted field, double quotes (") can be used to escape a single literal quote character. To not have a quote character, specify an empty string (""). The default value is '"'.
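A sketch combining the delimited-text options (the specific values are illustrative, and the options dictionary is the one from the earlier example):

    // Semicolon-delimited files with a header row, double-quoted fields,
    // and "\N" standing in for null values.
    options[ InsertRecordsFromFilesRequest.Options.TEXT_DELIMITER ] = ";";
    options[ InsertRecordsFromFilesRequest.Options.TEXT_HAS_HEADER ] =
        InsertRecordsFromFilesRequest.Options.TRUE;
    options[ InsertRecordsFromFilesRequest.Options.TEXT_QUOTE_CHARACTER ] = "\"";
    options[ InsertRecordsFromFilesRequest.Options.TEXT_NULL_STRING ] = "\\N";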

Definition at line 966 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.TRUE = "true"

Definition at line 806 of file InsertRecordsFromFiles.cs.

const string kinetica.InsertRecordsFromFilesRequest.Options.TRUNCATE_TABLE = "truncate_table"

If set to true, truncates the table specified by table_name prior to loading the file(s).

Supported values: TRUE, FALSE.

The default value is FALSE.

Definition at line 983 of file InsertRecordsFromFiles.cs.


The documentation for this struct was generated from the following file:
  • InsertRecordsFromFiles.cs