Class InsertRecordsFromPayloadRequest.Options
- java.lang.Object
-
- com.gpudb.protocol.InsertRecordsFromPayloadRequest.Options
-
- Enclosing class:
- InsertRecordsFromPayloadRequest
public static final class InsertRecordsFromPayloadRequest.Options extends Object
A set of string constants for the InsertRecordsFromPayloadRequest parameter options. Optional parameters.
-
-
Field Summary
Fields

All fields are of type static String.

- ABORT: Stops current insertion and aborts entire operation when an error is encountered.
- ACCURACY: Scans data to get exactly-typed and sized columns for all data scanned.
- AUTO: Default. Auto detect compression type.
- AVRO: Avro file format.
- BAD_RECORD_TABLE_LIMIT: A positive integer indicating the maximum number of records that can be written to the bad-record-table.
- BAD_RECORD_TABLE_LIMIT_PER_INPUT: For subscriptions: a positive integer indicating the maximum number of records that can be written to the bad-record-table per file/payload.
- BAD_RECORD_TABLE_NAME: Optional name of a table to which records that were rejected are written.
- BATCH_SIZE: Internal tuning parameter; number of records per batch when inserting data.
- BZIP2: bzip2 file compression.
- COLUMN_FORMATS: For each target column specified, applies the column-property-bound format to the source data loaded into that column.
- COLUMNS_TO_LOAD: Specifies a comma-delimited list of columns from the source data to load.
- COLUMNS_TO_SKIP: Specifies a comma-delimited list of columns from the source data to skip.
- COMPRESSION_TYPE: Optional: payload compression type.
- DEFAULT_COLUMN_FORMATS: Specifies the default format to be applied to source data loaded into columns with the corresponding column property.
- DELIMITED_TEXT: Delimited text file format; e.g., CSV, TSV, PSV, etc.
- DISTRIBUTED_LOCAL: A single worker process on each node loads all files that are available to it.
- DISTRIBUTED_SHARED: The head node coordinates loading data by worker processes across all nodes from shared files available to all workers.
- DRY_RUN: Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of ERROR_HANDLING.
- ERROR_HANDLING: Specifies how errors should be handled upon insertion.
- FALSE: Reject new records when primary keys match existing records.
- FILE_TYPE: Specifies the type of the file(s) whose records will be inserted.
- FLATTEN_COLUMNS: Specifies how to handle nested columns.
- FULL: Run a type inference on the source data (if needed) and ingest.
- GDAL_CONFIGURATION_OPTIONS: Comma-separated list of GDAL configuration options, as key=value pairs.
- GDB: Esri/GDB file format.
- GZIP: gzip file compression.
- HEAD: The head node loads all data.
- IGNORE_BAD_RECORDS: Malformed records are skipped.
- IGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for inserting into a table with a primary key; only used when not in upsert mode (upsert mode is disabled when UPDATE_ON_EXISTING_PK is FALSE).
- INGESTION_MODE: Whether to do a full load, dry run, or perform a type inference on the source data.
- JSON: JSON file format.
- LAYER: Optional: comma-separated list of geo file layer names.
- LOADING_MODE: Scheme for distributing the extraction and loading of data from the source data file(s).
- LOCAL_TIME_OFFSET: For Avro local timestamp columns.
- MAX_CONSECUTIVE_INVALID_SCHEMA_FAILURE: Maximum number of records to skip due to schema-related errors before failing.
- MAX_RECORDS_TO_LOAD: Limits the number of records to load in this request; if this number is larger than BATCH_SIZE, the number of records loaded will be limited to the next whole multiple of BATCH_SIZE (per working thread).
- NONE: Uncompressed.
- NUM_TASKS_PER_RANK: Optional: number of tasks for reading files per rank.
- PARQUET: Apache Parquet file format.
- PERMISSIVE: Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped.
- POLL_INTERVAL: If SUBSCRIBE is TRUE, the number of seconds between attempts to load external files into the table.
- PRIMARY_KEYS: Optional: comma-separated list of column names to set as primary keys, when not specified in the type.
- SCHEMA_REGISTRY_CONNECTION_RETRIES: Confluent Schema Registry connection retries.
- SCHEMA_REGISTRY_CONNECTION_TIMEOUT: Confluent Schema Registry connection timeout (in seconds).
- SCHEMA_REGISTRY_MAX_CONSECUTIVE_CONNECTION_FAILURES: Maximum number of records to skip due to Schema Registry connection failures before failing.
- SCHEMA_REGISTRY_SCHEMA_NAME: Name of the Avro schema in the schema registry to use when reading Avro records.
- SHAPEFILE: ShapeFile file format.
- SHARD_KEYS: Optional: comma-separated list of column names to set as shard keys, when not specified in the type.
- SINGLE: Insert all records into a single table.
- SKIP_LINES: Skip a number of lines from the beginning of the file.
- SPEED: Scans data and picks the widest possible column types so that 'all' values will fit with minimum data scanned.
- SUBSCRIBE: Continuously poll the data source to check for new data and load it into the table.
- TABLE_INSERT_MODE: Optional: table insert mode.
- TABLE_PER_FILE: Insert records from each file into a new table.
- TEXT_COMMENT_STRING: Specifies the character string that should be interpreted as a comment line prefix in the source data.
- TEXT_DELIMITER: Specifies the character delimiting field values in the source data and field names in the header (if present).
- TEXT_ESCAPE_CHARACTER: Specifies the character that is used to escape other characters in the source data.
- TEXT_HAS_HEADER: Indicates whether the source data contains a header row.
- TEXT_HEADER_PROPERTY_DELIMITER: Specifies the delimiter for column properties in the header row (if present).
- TEXT_NULL_STRING: Specifies the character string that should be interpreted as a null value in the source data.
- TEXT_QUOTE_CHARACTER: Specifies the character that should be interpreted as a field value quoting character in the source data.
- TEXT_SEARCH_COLUMNS: Adds the 'text_search' property to internally inferenced string columns.
- TEXT_SEARCH_MIN_COLUMN_LENGTH: Sets the minimum column size.
- TRIM_SPACE: If set to TRUE, removes leading or trailing space from fields.
- TRUE: Upsert new records when primary keys match existing records.
- TRUNCATE_STRINGS: If set to TRUE, truncates string values that are longer than the column's type size.
- TRUNCATE_TABLE: If set to TRUE, truncates the table specified by tableName prior to loading the file(s).
- TYPE_INFERENCE_MAX_RECORDS_READ: Maximum number of records to read for type inference. The default value is ''.
- TYPE_INFERENCE_MODE: Optimizes type inference for either accuracy or speed.
- TYPE_INFERENCE_ONLY: Infer the type of the source data and return, without ingesting any data.
- UPDATE_ON_EXISTING_PK: Specifies the record collision policy for inserting into a table with a primary key.
-
-
-
Field Detail
-
BAD_RECORD_TABLE_NAME
public static final String BAD_RECORD_TABLE_NAME
Optional name of a table to which records that were rejected are written. The bad-record-table has the following columns: line_number (long), line_rejected (string), error_message (string).
- See Also:
- Constant Field Values
-
BAD_RECORD_TABLE_LIMIT
public static final String BAD_RECORD_TABLE_LIMIT
A positive integer indicating the maximum number of records that can be written to the bad-record-table. The default value is 10000.
- See Also:
- Constant Field Values
-
BAD_RECORD_TABLE_LIMIT_PER_INPUT
public static final String BAD_RECORD_TABLE_LIMIT_PER_INPUT
For subscriptions: a positive integer indicating the maximum number of records that can be written to the bad-record-table per file/payload. The default value is 'bad_record_table_limit', and the total size of the table per rank is limited to 'bad_record_table_limit'.
- See Also:
- Constant Field Values
-
BATCH_SIZE
public static final String BATCH_SIZE
Internal tuning parameter; number of records per batch when inserting data.
- See Also:
- Constant Field Values
-
COLUMN_FORMATS
public static final String COLUMN_FORMATS
For each target column specified, applies the column-property-bound format to the source data loaded into that column. Each column format will contain a mapping of one or more of its column properties to an appropriate format for each property. Currently supported column properties include date, time, and datetime. The parameter value must be formatted as a JSON string of maps of column names to maps of column properties to their corresponding column formats, e.g., '{ "order_date" : { "date" : "%Y.%m.%d" }, "order_time" : { "time" : "%H:%M:%S" } }'. See DEFAULT_COLUMN_FORMATS for valid format syntax.
- See Also:
- Constant Field Values
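A minimal sketch of wiring this option into a request, assuming the no-argument constructor and setOptions setter that the protocol request classes conventionally expose; the column names are hypothetical:

    import com.gpudb.protocol.InsertRecordsFromPayloadRequest;
    import com.gpudb.protocol.InsertRecordsFromPayloadRequest.Options;
    import java.util.HashMap;
    import java.util.Map;

    public class ColumnFormatsSketch {
        public static void main(String[] args) {
            Map<String, String> options = new HashMap<>();
            // Hypothetical columns: parse order_date as Y.m.d and order_time as H:M:S.
            options.put(Options.COLUMN_FORMATS,
                "{ \"order_date\" : { \"date\" : \"%Y.%m.%d\" },"
                + " \"order_time\" : { \"time\" : \"%H:%M:%S\" } }");

            InsertRecordsFromPayloadRequest request = new InsertRecordsFromPayloadRequest();
            request.setOptions(options);
        }
    }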
-
COLUMNS_TO_LOAD
public static final String COLUMNS_TO_LOAD
Specifies a comma-delimited list of columns from the source data to load. If more than one file is being loaded, this list applies to all files. Column numbers can be specified discretely or as a range. For example, a value of '5,7,1..3' will insert values from the fifth column in the source data into the first column in the target table, from the seventh column in the source data into the second column in the target table, and from the first through third columns in the source data into the third through fifth columns in the target table.
If the source data contains a header, column names matching the file header names may be provided instead of column numbers. If the target table doesn't exist, the table will be created with the columns in this order. If the target table does exist with columns in a different order than the source data, this list can be used to match the order of the target table. For example, a value of 'C, B, A' will create a three-column table with column C, followed by column B, followed by column A; or will insert those fields in that order into a table created with columns in that order. If the target table exists, the column names must match the source data field names for a name-mapping to be successful.
Mutually exclusive with COLUMNS_TO_SKIP.
- See Also:
- Constant Field Values
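A hedged sketch of the range syntax described above; only the options map is shown, and the column list is illustrative:

    import com.gpudb.protocol.InsertRecordsFromPayloadRequest.Options;
    import java.util.HashMap;
    import java.util.Map;

    public class ColumnsToLoadSketch {
        public static void main(String[] args) {
            Map<String, String> options = new HashMap<>();
            // Source columns 5, 7, and 1 through 3 map to target columns 1 through 5.
            options.put(Options.COLUMNS_TO_LOAD, "5,7,1..3");
        }
    }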
-
COLUMNS_TO_SKIP
public static final String COLUMNS_TO_SKIP
Specifies a comma-delimited list of columns from the source data to skip. Mutually exclusive with COLUMNS_TO_LOAD.
- See Also:
- Constant Field Values
-
COMPRESSION_TYPE
public static final String COMPRESSION_TYPE
Optional: payload compression type. Supported values:
- NONE: Uncompressed
- AUTO: Default. Auto detect compression type
- GZIP: gzip file compression.
- BZIP2: bzip2 file compression.
The default value is AUTO.
- See Also:
- Constant Field Values
-
NONE
public static final String NONE
Uncompressed
- See Also:
- Constant Field Values
-
AUTO
public static final String AUTO
Default. Auto detect compression type
- See Also:
- Constant Field Values
-
GZIP
public static final String GZIP
gzip file compression.
- See Also:
- Constant Field Values
-
BZIP2
public static final String BZIP2
bzip2 file compression.
- See Also:
- Constant Field Values
-
DEFAULT_COLUMN_FORMATS
public static final String DEFAULT_COLUMN_FORMATS
Specifies the default format to be applied to source data loaded into columns with the corresponding column property. Currently supported column properties include date, time, and datetime. This default column-property-bound format can be overridden by specifying a column property and format for a given target column in COLUMN_FORMATS. For each specified annotation, the format will apply to all columns with that annotation unless a custom COLUMN_FORMATS for that annotation is specified.
The parameter value must be formatted as a JSON string that is a map of column properties to their respective column formats, e.g., '{ "date" : "%Y.%m.%d", "time" : "%H:%M:%S" }'. Column formats are specified as a string of control characters and plain text. The supported control characters are 'Y', 'm', 'd', 'H', 'M', and 'S', which follow the Linux 'strptime()' specification, as well as 's', which specifies seconds and fractional seconds (though the fractional component will be truncated past milliseconds).
Formats for the 'date' annotation must include the 'Y', 'm', and 'd' control characters. Formats for the 'time' annotation must include the 'H', 'M', and either 'S' or 's' (but not both) control characters. Formats for the 'datetime' annotation meet both the 'date' and 'time' control character requirements. For example, '{ "datetime" : "%m/%d/%Y %H:%M:%S" }' would be used to interpret text like "05/04/2000 12:12:11".
- See Also:
- Constant Field Values
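A minimal sketch of setting a default datetime format, under the same assumptions as the earlier option-map examples:

    import com.gpudb.protocol.InsertRecordsFromPayloadRequest.Options;
    import java.util.HashMap;
    import java.util.Map;

    public class DefaultFormatsSketch {
        public static void main(String[] args) {
            Map<String, String> options = new HashMap<>();
            // Applies to every column with the 'datetime' property unless
            // COLUMN_FORMATS overrides it for a specific column.
            options.put(Options.DEFAULT_COLUMN_FORMATS,
                "{ \"datetime\" : \"%m/%d/%Y %H:%M:%S\" }");
        }
    }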
-
ERROR_HANDLING
public static final String ERROR_HANDLING
Specifies how errors should be handled upon insertion. Supported values:
- PERMISSIVE: Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped.
- IGNORE_BAD_RECORDS: Malformed records are skipped.
- ABORT: Stops current insertion and aborts entire operation when an error is encountered. Primary key collisions are considered abortable errors in this mode.
The default value is ABORT.
- See Also:
- Constant Field Values
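A hedged sketch combining this option with the bad-record-table options above; the side-table name is hypothetical:

    import com.gpudb.protocol.InsertRecordsFromPayloadRequest.Options;
    import java.util.HashMap;
    import java.util.Map;

    public class ErrorHandlingSketch {
        public static void main(String[] args) {
            Map<String, String> options = new HashMap<>();
            // Skip malformed records instead of aborting the whole load,
            // and write up to 5000 rejects to a hypothetical side table.
            options.put(Options.ERROR_HANDLING, Options.IGNORE_BAD_RECORDS);
            options.put(Options.BAD_RECORD_TABLE_NAME, "load_errors");
            options.put(Options.BAD_RECORD_TABLE_LIMIT, "5000");
        }
    }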
-
PERMISSIVE
public static final String PERMISSIVE
Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped.
- See Also:
- Constant Field Values
-
IGNORE_BAD_RECORDS
public static final String IGNORE_BAD_RECORDS
Malformed records are skipped.
- See Also:
- Constant Field Values
-
ABORT
public static final String ABORT
Stops current insertion and aborts entire operation when an error is encountered. Primary key collisions are considered abortable errors in this mode.
- See Also:
- Constant Field Values
-
FILE_TYPE
public static final String FILE_TYPE
Specifies the type of the file(s) whose records will be inserted. Supported values:
- AVRO: Avro file format
- DELIMITED_TEXT: Delimited text file format; e.g., CSV, TSV, PSV, etc.
- GDB: Esri/GDB file format
- JSON: JSON file format
- PARQUET: Apache Parquet file format
- SHAPEFILE: ShapeFile file format
The default value is DELIMITED_TEXT.
- See Also:
- Constant Field Values
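A brief sketch of selecting a file format, with compression layered on for delimited text; same assumptions as the earlier sketches:

    import com.gpudb.protocol.InsertRecordsFromPayloadRequest.Options;
    import java.util.HashMap;
    import java.util.Map;

    public class FileTypeSketch {
        public static void main(String[] args) {
            Map<String, String> options = new HashMap<>();
            // Load gzipped CSV; other formats would use Options.PARQUET,
            // Options.AVRO, etc. in place of DELIMITED_TEXT.
            options.put(Options.FILE_TYPE, Options.DELIMITED_TEXT);
            options.put(Options.COMPRESSION_TYPE, Options.GZIP);
        }
    }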
-
AVRO
public static final String AVRO
Avro file format
- See Also:
- Constant Field Values
-
DELIMITED_TEXT
public static final String DELIMITED_TEXT
Delimited text file format; e.g., CSV, TSV, PSV, etc.
- See Also:
- Constant Field Values
-
GDB
public static final String GDB
Esri/GDB file format
- See Also:
- Constant Field Values
-
JSON
public static final String JSON
JSON file format
- See Also:
- Constant Field Values
-
PARQUET
public static final String PARQUET
Apache Parquet file format
- See Also:
- Constant Field Values
-
SHAPEFILE
public static final String SHAPEFILE
ShapeFile file format
- See Also:
- Constant Field Values
-
FLATTEN_COLUMNS
public static final String FLATTEN_COLUMNS
Specifies how to handle nested columns. Supported values:
- TRUE: Break up nested columns to multiple columns
- FALSE: Treat nested columns as JSON columns instead of flattening
The default value is FALSE.
- See Also:
- Constant Field Values
-
TRUE
public static final String TRUE
Upsert new records when primary keys match existing records
- See Also:
- Constant Field Values
-
FALSE
public static final String FALSE
Reject new records when primary keys match existing records
- See Also:
- Constant Field Values
-
GDAL_CONFIGURATION_OPTIONS
public static final String GDAL_CONFIGURATION_OPTIONS
Comma-separated list of GDAL configuration options for the specific request, as key=value pairs. The default value is ''.
- See Also:
- Constant Field Values
-
IGNORE_EXISTING_PK
public static final String IGNORE_EXISTING_PK
Specifies the record collision error-suppression policy for inserting into a table with a primary key, only used when not in upsert mode (upsert mode is disabled when UPDATE_ON_EXISTING_PK is FALSE). If set to TRUE, any record being inserted that is rejected for having primary key values that match those of an existing table record will be ignored with no error generated. If FALSE, the rejection of any record for having primary key values matching an existing record will result in an error being reported, as determined by ERROR_HANDLING. If the specified table does not have a primary key or if upsert mode is in effect (UPDATE_ON_EXISTING_PK is TRUE), then this option has no effect. Supported values:
- TRUE: Ignore new records whose primary key values collide with those of existing records
- FALSE: Treat as errors any new records whose primary key values collide with those of existing records
The default value is FALSE.
- See Also:
- Constant Field Values
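A short sketch of the non-upsert collision policy this option and UPDATE_ON_EXISTING_PK describe, under the same assumptions as the earlier sketches:

    import com.gpudb.protocol.InsertRecordsFromPayloadRequest.Options;
    import java.util.HashMap;
    import java.util.Map;

    public class PkCollisionSketch {
        public static void main(String[] args) {
            Map<String, String> options = new HashMap<>();
            // Keep existing rows on key collisions and silently drop the
            // incoming duplicates rather than reporting errors.
            options.put(Options.UPDATE_ON_EXISTING_PK, Options.FALSE);
            options.put(Options.IGNORE_EXISTING_PK, Options.TRUE);
        }
    }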
-
INGESTION_MODE
public static final String INGESTION_MODE
Whether to do a full load, dry run, or perform a type inference on the source data. Supported values:
- FULL: Run a type inference on the source data (if needed) and ingest
- DRY_RUN: Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of ERROR_HANDLING.
- TYPE_INFERENCE_ONLY: Infer the type of the source data and return, without ingesting any data. The inferred type is returned in the response.
The default value is FULL.
- See Also:
- Constant Field Values
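A minimal sketch of a validation-only pass; assumptions as above:

    import com.gpudb.protocol.InsertRecordsFromPayloadRequest.Options;
    import java.util.HashMap;
    import java.util.Map;

    public class DryRunSketch {
        public static void main(String[] args) {
            Map<String, String> options = new HashMap<>();
            // Count valid records under the current ERROR_HANDLING mode
            // without loading any data.
            options.put(Options.INGESTION_MODE, Options.DRY_RUN);
        }
    }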
-
FULL
public static final String FULL
Run a type inference on the source data (if needed) and ingest
- See Also:
- Constant Field Values
-
DRY_RUN
public static final String DRY_RUN
Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of ERROR_HANDLING.
- See Also:
- Constant Field Values
-
TYPE_INFERENCE_ONLY
public static final String TYPE_INFERENCE_ONLY
Infer the type of the source data and return, without ingesting any data. The inferred type is returned in the response.
- See Also:
- Constant Field Values
-
LAYER
public static final String LAYER
Optional: comma-separated list of geo file layer names. The default value is ''.
- See Also:
- Constant Field Values
-
LOADING_MODE
public static final String LOADING_MODE
Scheme for distributing the extraction and loading of data from the source data file(s). This option applies only when loading files that are local to the database. Supported values:
- HEAD: The head node loads all data. All files must be available to the head node.
- DISTRIBUTED_SHARED: The head node coordinates loading data by worker processes across all nodes from shared files available to all workers. NOTE: Instead of existing on a shared source, the files can be duplicated on a source local to each host to improve performance, though the files must appear as the same data set from the perspective of all hosts performing the load.
- DISTRIBUTED_LOCAL: A single worker process on each node loads all files that are available to it. This option works best when each worker loads files from its own file system, to maximize performance. In order to avoid data duplication, either each worker performing the load needs to have visibility to a set of files unique to it (no file is visible to more than one node) or the target table needs to have a primary key (which will allow the worker to automatically deduplicate data). NOTE: If the target table doesn't exist, the table structure will be determined by the head node. If the head node has no files local to it, it will be unable to determine the structure and the request will fail. If the head node is configured to have no worker processes, no data strictly accessible to the head node will be loaded.
The default value is HEAD.
- See Also:
- Constant Field Values
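A brief sketch of distributed loading; assumptions as in the earlier sketches:

    import com.gpudb.protocol.InsertRecordsFromPayloadRequest.Options;
    import java.util.HashMap;
    import java.util.Map;

    public class LoadingModeSketch {
        public static void main(String[] args) {
            Map<String, String> options = new HashMap<>();
            // Each worker loads only the files visible on its own file system.
            options.put(Options.LOADING_MODE, Options.DISTRIBUTED_LOCAL);
        }
    }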
-
HEAD
public static final String HEAD
The head node loads all data. All files must be available to the head node.
- See Also:
- Constant Field Values
-
DISTRIBUTED_SHARED
public static final String DISTRIBUTED_SHARED
The head node coordinates loading data by worker processes across all nodes from shared files available to all workers.
NOTE: Instead of existing on a shared source, the files can be duplicated on a source local to each host to improve performance, though the files must appear as the same data set from the perspective of all hosts performing the load.
- See Also:
- Constant Field Values
-
DISTRIBUTED_LOCAL
public static final String DISTRIBUTED_LOCAL
A single worker process on each node loads all files that are available to it. This option works best when each worker loads files from its own file system, to maximize performance. In order to avoid data duplication, either each worker performing the load needs to have visibility to a set of files unique to it (no file is visible to more than one node) or the target table needs to have a primary key (which will allow the worker to automatically deduplicate data).
NOTE: If the target table doesn't exist, the table structure will be determined by the head node. If the head node has no files local to it, it will be unable to determine the structure and the request will fail. If the head node is configured to have no worker processes, no data strictly accessible to the head node will be loaded.
- See Also:
- Constant Field Values
-
LOCAL_TIME_OFFSET
public static final String LOCAL_TIME_OFFSET
For Avro local timestamp columns
- See Also:
- Constant Field Values
-
MAX_RECORDS_TO_LOAD
public static final String MAX_RECORDS_TO_LOAD
Limits the number of records to load in this request. If this number is larger than BATCH_SIZE, the number of records loaded will be limited to the next whole multiple of BATCH_SIZE (per working thread). The default value is ''.
- See Also:
- Constant Field Values
-
NUM_TASKS_PER_RANK
public static final String NUM_TASKS_PER_RANK
Optional: number of tasks for reading files per rank. The default is external_file_reader_num_tasks.
- See Also:
- Constant Field Values
-
POLL_INTERVAL
public static final String POLL_INTERVAL
If SUBSCRIBE is TRUE, the number of seconds between attempts to load external files into the table. If zero, polling will be continuous as long as data is found. If no data is found, the interval will steadily increase to a maximum of 60 seconds.
- See Also:
- Constant Field Values
-
PRIMARY_KEYS
public static final String PRIMARY_KEYS
Optional: comma-separated list of column names to set as primary keys, when not specified in the type. The default value is ''.
- See Also:
- Constant Field Values
-
SCHEMA_REGISTRY_CONNECTION_RETRIES
public static final String SCHEMA_REGISTRY_CONNECTION_RETRIES
Confluent Schema Registry connection retries.
- See Also:
- Constant Field Values
-
SCHEMA_REGISTRY_CONNECTION_TIMEOUT
public static final String SCHEMA_REGISTRY_CONNECTION_TIMEOUT
Confluent Schema Registry connection timeout (in seconds).
- See Also:
- Constant Field Values
-
SCHEMA_REGISTRY_MAX_CONSECUTIVE_CONNECTION_FAILURES
public static final String SCHEMA_REGISTRY_MAX_CONSECUTIVE_CONNECTION_FAILURES
Maximum number of records to skip due to Schema Registry connection failures before failing.
- See Also:
- Constant Field Values
-
MAX_CONSECUTIVE_INVALID_SCHEMA_FAILURE
public static final String MAX_CONSECUTIVE_INVALID_SCHEMA_FAILURE
Maximum number of records to skip due to schema-related errors before failing.
- See Also:
- Constant Field Values
-
SCHEMA_REGISTRY_SCHEMA_NAME
public static final String SCHEMA_REGISTRY_SCHEMA_NAME
Name of the Avro schema in the schema registry to use when reading Avro records.
- See Also:
- Constant Field Values
-
SHARD_KEYS
public static final String SHARD_KEYS
Optional: comma-separated list of column names to set as shard keys, when not specified in the type. The default value is ''.
- See Also:
- Constant Field Values
-
SKIP_LINES
public static final String SKIP_LINES
Skip a number of lines from the beginning of the file.
- See Also:
- Constant Field Values
-
SUBSCRIBE
public static final String SUBSCRIBE
Continuously poll the data source to check for new data and load it into the table. Supported values: TRUE, FALSE. The default value is FALSE.
- See Also:
- Constant Field Values
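A hedged sketch of a polling subscription; the 30-second interval is illustrative:

    import com.gpudb.protocol.InsertRecordsFromPayloadRequest.Options;
    import java.util.HashMap;
    import java.util.Map;

    public class SubscribeSketch {
        public static void main(String[] args) {
            Map<String, String> options = new HashMap<>();
            // Poll the data source for new data every 30 seconds.
            options.put(Options.SUBSCRIBE, Options.TRUE);
            options.put(Options.POLL_INTERVAL, "30");
        }
    }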
-
TABLE_INSERT_MODE
public static final String TABLE_INSERT_MODE
Optional: table insert mode. When inserting records from multiple files, if TABLE_PER_FILE, then insert from each file into a new table. Currently supported only for shapefiles. Supported values: SINGLE, TABLE_PER_FILE. The default value is SINGLE.
- See Also:
- Constant Field Values
-
SINGLE
public static final String SINGLE
Insert all records into a single table.
- See Also:
- Constant Field Values
-
TABLE_PER_FILE
public static final String TABLE_PER_FILE
Insert records from each file into a new table.
- See Also:
- Constant Field Values
-
TEXT_COMMENT_STRING
public static final String TEXT_COMMENT_STRING
Specifies the character string that should be interpreted as a comment line prefix in the source data. All lines in the data starting with the provided string are ignored. For DELIMITED_TEXT FILE_TYPE only. The default value is '#'.
- See Also:
- Constant Field Values
-
TEXT_DELIMITER
public static final String TEXT_DELIMITER
Specifies the character delimiting field values in the source data and field names in the header (if present). For DELIMITED_TEXT FILE_TYPE only. The default value is ','.
- See Also:
- Constant Field Values
-
TEXT_ESCAPE_CHARACTER
public static final String TEXT_ESCAPE_CHARACTER
Specifies the character that is used to escape other characters in the source data. An 'a', 'b', 'f', 'n', 'r', 't', or 'v' preceded by an escape character will be interpreted as the ASCII bell, backspace, form feed, line feed, carriage return, horizontal tab, and vertical tab, respectively. For example, the escape character followed by an 'n' will be interpreted as a newline within a field value.
The escape character can also be used to escape the quoting character, and will be treated as an escape character whether it is within a quoted field value or not.
For DELIMITED_TEXT FILE_TYPE only.
- See Also:
- Constant Field Values
-
TEXT_HAS_HEADER
public static final String TEXT_HAS_HEADER
Indicates whether the source data contains a header row. For DELIMITED_TEXT FILE_TYPE only. Supported values: TRUE, FALSE. The default value is TRUE.
- See Also:
- Constant Field Values
-
TEXT_HEADER_PROPERTY_DELIMITER
public static final String TEXT_HEADER_PROPERTY_DELIMITER
Specifies the delimiter for column properties in the header row (if present). Cannot be set to the same value as TEXT_DELIMITER. For DELIMITED_TEXT FILE_TYPE only. The default value is '|'.
- See Also:
- Constant Field Values
-
TEXT_NULL_STRING
public static final String TEXT_NULL_STRING
Specifies the character string that should be interpreted as a null value in the source data. For DELIMITED_TEXT FILE_TYPE only. The default value is '\N'.
- See Also:
- Constant Field Values
-
TEXT_QUOTE_CHARACTER
public static final String TEXT_QUOTE_CHARACTER
Specifies the character that should be interpreted as a field value quoting character in the source data. The character must appear at the beginning and end of a field value to take effect. Delimiters within quoted fields are treated as literals and not delimiters. Within a quoted field, two consecutive quote characters will be interpreted as a single literal quote character, effectively escaping it. To not have a quote character, specify an empty string. For DELIMITED_TEXT FILE_TYPE only. The default value is '"'.
- See Also:
- Constant Field Values
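A combined sketch of the delimited-text options above, for a hypothetical pipe-delimited file with a header and 'NA' nulls:

    import com.gpudb.protocol.InsertRecordsFromPayloadRequest.Options;
    import java.util.HashMap;
    import java.util.Map;

    public class DelimitedTextSketch {
        public static void main(String[] args) {
            Map<String, String> options = new HashMap<>();
            options.put(Options.FILE_TYPE, Options.DELIMITED_TEXT);
            options.put(Options.TEXT_DELIMITER, "|");           // pipe-separated fields
            options.put(Options.TEXT_HAS_HEADER, Options.TRUE); // first row names columns
            options.put(Options.TEXT_NULL_STRING, "NA");        // treat 'NA' as null
        }
    }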
-
TEXT_SEARCH_COLUMNS
public static final String TEXT_SEARCH_COLUMNS
Adds the 'text_search' property to internally inferenced string columns. Comma-separated list of column names, or '*' for all columns. To add the text_search property only to string columns of a minimum size, also set the TEXT_SEARCH_MIN_COLUMN_LENGTH option.
- See Also:
- Constant Field Values
-
TEXT_SEARCH_MIN_COLUMN_LENGTH
public static final String TEXT_SEARCH_MIN_COLUMN_LENGTH
Sets the minimum column size. Used only when TEXT_SEARCH_COLUMNS has a value.
- See Also:
- Constant Field Values
-
TRIM_SPACE
public static final String TRIM_SPACE
If set to TRUE, removes leading or trailing space from fields. Supported values: TRUE, FALSE. The default value is FALSE.
- See Also:
- Constant Field Values
-
TRUNCATE_STRINGS
public static final String TRUNCATE_STRINGS
If set to TRUE, truncates string values that are longer than the column's type size. Supported values: TRUE, FALSE. The default value is FALSE.
- See Also:
- Constant Field Values
-
TRUNCATE_TABLE
public static final String TRUNCATE_TABLE
If set to TRUE, truncates the table specified by tableName prior to loading the file(s). Supported values: TRUE, FALSE. The default value is FALSE.
- See Also:
- Constant Field Values
-
TYPE_INFERENCE_MAX_RECORDS_READ
public static final String TYPE_INFERENCE_MAX_RECORDS_READ
Maximum number of records to read when inferring the data type. The default value is ''.
- See Also:
- Constant Field Values
-
TYPE_INFERENCE_MODE
public static final String TYPE_INFERENCE_MODE
Optimizes type inference for either accuracy or speed. Supported values:
- ACCURACY: Scans data to get exactly-typed and sized columns for all data scanned.
- SPEED: Scans data and picks the widest possible column types so that 'all' values will fit with minimum data scanned
The default value is ACCURACY.
- See Also:
- Constant Field Values
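A short sketch pairing the inference mode with a type-inference-only run; assumptions as above:

    import com.gpudb.protocol.InsertRecordsFromPayloadRequest.Options;
    import java.util.HashMap;
    import java.util.Map;

    public class TypeInferenceSketch {
        public static void main(String[] args) {
            Map<String, String> options = new HashMap<>();
            // Favor wide, fast inference and return the inferred type
            // without ingesting any data.
            options.put(Options.TYPE_INFERENCE_MODE, Options.SPEED);
            options.put(Options.INGESTION_MODE, Options.TYPE_INFERENCE_ONLY);
        }
    }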
-
ACCURACY
public static final String ACCURACY
Scans data to get exactly-typed and sized columns for all data scanned.
- See Also:
- Constant Field Values
-
SPEED
public static final String SPEED
Scans data and picks the widest possible column types so that 'all' values will fit with minimum data scanned.
- See Also:
- Constant Field Values
-
UPDATE_ON_EXISTING_PK
public static final String UPDATE_ON_EXISTING_PK
Specifies the record collision policy for inserting into a table with a primary key. If set to TRUE, any existing table record with primary key values that match those of a record being inserted will be replaced by that new record (the new data will be "upserted"). If set to FALSE, any existing table record with primary key values that match those of a record being inserted will remain unchanged, while the new record will be rejected and the error handled as determined by IGNORE_EXISTING_PK and ERROR_HANDLING. If the specified table does not have a primary key, then this option has no effect. Supported values:
- TRUE: Upsert new records when primary keys match existing records
- FALSE: Reject new records when primary keys match existing records
The default value is FALSE.
- See Also:
- Constant Field Values
-
-