public static final class CreateTableExternalRequest.Options extends Object

A set of string constants for the CreateTableExternalRequest parameter options.

Optional parameters.
| Modifier and Type | Field | Description |
|---|---|---|
| `static String` | `ABORT` | Stops the current insertion and aborts the entire operation when an error is encountered. |
| `static String` | `ACCURACY` | Scans data to get exactly-typed and sized columns for all data scanned. |
| `static String` | `AUTO` | Auto-detect the compression type. |
| `static String` | `AVRO` | Avro file format. |
| `static String` | `BAD_RECORD_TABLE_LIMIT` | A positive integer indicating the maximum number of records that can be written to the bad-record table. |
| `static String` | `BAD_RECORD_TABLE_LIMIT_PER_INPUT` | For subscriptions, a positive integer indicating the maximum number of records that can be written to the bad-record table per file/payload. |
| `static String` | `BAD_RECORD_TABLE_NAME` | Name of a table to which rejected records are written. |
| `static String` | `BATCH_SIZE` | Number of records to insert per batch when inserting data. |
| `static String` | `BZIP2` | bzip2 file compression. |
| `static String` | `COLUMN_FORMATS` | For each target column specified, applies the column-property-bound format to the source data loaded into that column. |
| `static String` | `COLUMNS_TO_LOAD` | Specifies a comma-delimited list of columns from the source data to load. |
| `static String` | `COLUMNS_TO_SKIP` | Specifies a comma-delimited list of columns from the source data to skip. |
| `static String` | `COMPRESSION_TYPE` | Source data compression type. |
| `static String` | `DATASOURCE_NAME` | Name of an existing external data source from which the data file(s) specified in `filepaths` will be loaded. |
| `static String` | `DEFAULT_COLUMN_FORMATS` | Specifies the default format to be applied to source data loaded into columns with the corresponding column property. |
| `static String` | `DELIMITED_TEXT` | Delimited text file format; e.g., CSV, TSV, PSV. |
| `static String` | `DISTRIBUTED_LOCAL` | A single worker process on each node loads all files that are available to it. |
| `static String` | `DISTRIBUTED_SHARED` | The head node coordinates loading of data by worker processes across all nodes from shared files available to all workers. |
| `static String` | `DRY_RUN` | Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of `ERROR_HANDLING`. |
| `static String` | `EARLIEST` | |
| `static String` | `ERROR_HANDLING` | Specifies how errors should be handled upon insertion. |
| `static String` | `EXTERNAL_TABLE_TYPE` | Specifies whether the external table holds a local copy of the external data. |
| `static String` | `FALSE` | Reject new records when primary keys match existing records. |
| `static String` | `FILE_TYPE` | Specifies the type of the file(s) whose records will be inserted. |
| `static String` | `FLATTEN_COLUMNS` | Specifies how to handle nested columns. |
| `static String` | `FULL` | Run a type inference on the source data (if needed) and ingest. |
| `static String` | `GDAL_CONFIGURATION_OPTIONS` | Comma-separated list of GDAL configuration options for the specific request, as key=value pairs. |
| `static String` | `GDB` | Esri/GDB file format. |
| `static String` | `GZIP` | gzip file compression. |
| `static String` | `HEAD` | The head node loads all data. |
| `static String` | `IGNORE_BAD_RECORDS` | Malformed records are skipped. |
| `static String` | `IGNORE_EXISTING_PK` | Specifies the record collision error-suppression policy for inserting into a table with a primary key. |
| `static String` | `INGESTION_MODE` | Whether to do a full load, dry run, or perform a type inference on the source data. |
| `static String` | `JDBC_FETCH_SIZE` | The JDBC fetch size, which determines how many rows to fetch per round trip. |
| `static String` | `JSON` | JSON file format. |
| `static String` | `KAFKA_CONSUMERS_PER_RANK` | Number of Kafka consumer threads per rank (valid range 1-6). |
| `static String` | `KAFKA_GROUP_ID` | The group ID to be used when consuming data from a Kafka topic (valid only for Kafka data source subscriptions). |
| `static String` | `KAFKA_OFFSET_RESET_POLICY` | Policy to determine whether Kafka data consumption starts at the earliest offset or the latest offset. |
| `static String` | `KAFKA_OPTIMISTIC_INGEST` | Enable optimistic ingestion, where Kafka topic offsets and table data are committed independently to achieve parallelism. |
| `static String` | `KAFKA_SUBSCRIPTION_CANCEL_AFTER` | Sets the Kafka subscription lifespan (in minutes). |
| `static String` | `KAFKA_TYPE_INFERENCE_FETCH_TIMEOUT` | Maximum time to collect Kafka messages before running type inference on the set of them. |
| `static String` | `LATEST` | |
| `static String` | `LAYER` | Comma-separated list of geo-file layer name(s). |
| `static String` | `LOADING_MODE` | Scheme for distributing the extraction and loading of data from the source data file(s). |
| `static String` | `LOCAL_TIME_OFFSET` | Apply an offset to Avro local timestamp columns. |
| `static String` | `LOGICAL` | External data will not be loaded into the database; the data will be retrieved from the source upon servicing each query against the external table. |
| `static String` | `MANUAL` | Refresh only occurs when manually requested by invoking the refresh action of `GPUdb.alterTable` on this table. |
| `static String` | `MATERIALIZED` | Loads a copy of the external data into the database, refreshed on demand. |
| `static String` | `MAX_RECORDS_TO_LOAD` | Limit the number of records to load in this request: if this number is larger than `BATCH_SIZE`, then the number of records loaded will be limited to the next whole multiple of `BATCH_SIZE` (per working thread). |
| `static String` | `NONE` | No compression. |
| `static String` | `NUM_TASKS_PER_RANK` | Number of tasks per rank for reading files. |
| `static String` | `ON_START` | Refresh table on database startup and when manually requested by invoking the refresh action of `GPUdb.alterTable` on this table. |
| `static String` | `PARQUET` | Apache Parquet file format. |
| `static String` | `PERMISSIVE` | Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped. |
| `static String` | `POLL_INTERVAL` | If `TRUE`, the number of seconds between attempts to load external files into the table. |
| `static String` | `PRIMARY_KEYS` | Comma-separated list of column names to set as primary keys, when not specified in the type. |
| `static String` | `REFRESH_METHOD` | Method by which the table can be refreshed from its source data. |
| `static String` | `REMOTE_QUERY` | Remote SQL query from which data will be sourced. |
| `static String` | `REMOTE_QUERY_FILTER_COLUMN` | Name of the column to be used for splitting `REMOTE_QUERY` into multiple sub-queries using the data distribution of the given column. |
| `static String` | `REMOTE_QUERY_INCREASING_COLUMN` | Column on the subscribed remote query result that will increase for new records (e.g., TIMESTAMP). |
| `static String` | `REMOTE_QUERY_PARTITION_COLUMN` | Alias name for `REMOTE_QUERY_FILTER_COLUMN`. |
| `static String` | `SCHEMA_REGISTRY_SCHEMA_NAME` | Name of the Avro schema in the schema registry to use when reading Avro records. |
| `static String` | `SHAPEFILE` | ShapeFile file format. |
| `static String` | `SHARD_KEYS` | Comma-separated list of column names to set as shard keys, when not specified in the type. |
| `static String` | `SINGLE` | Insert all records into a single table. |
| `static String` | `SKIP_LINES` | Skip this number of lines from the beginning of the file. |
| `static String` | `SPEED` | Scans data and picks the widest possible column types so that 'all' values will fit, with minimum data scanned. |
| `static String` | `START_OFFSETS` | Starting offsets by partition to fetch from Kafka. |
| `static String` | `SUBSCRIBE` | Continuously poll the data source to check for new data and load it into the table. |
| `static String` | `TABLE_INSERT_MODE` | Insertion scheme to use when inserting records from multiple shapefiles. |
| `static String` | `TABLE_PER_FILE` | Insert records from each file into a new table corresponding to that file. |
| `static String` | `TEXT_COMMENT_STRING` | Specifies the character string that should be interpreted as a comment line prefix in the source data. |
| `static String` | `TEXT_DELIMITER` | Specifies the character delimiting field values in the source data and field names in the header (if present). |
| `static String` | `TEXT_ESCAPE_CHARACTER` | Specifies the character that is used to escape other characters in the source data. |
| `static String` | `TEXT_HAS_HEADER` | Indicates whether the source data contains a header row. |
| `static String` | `TEXT_HEADER_PROPERTY_DELIMITER` | Specifies the delimiter for column properties in the header row (if present). |
| `static String` | `TEXT_NULL_STRING` | Specifies the character string that should be interpreted as a null value in the source data. |
| `static String` | `TEXT_QUOTE_CHARACTER` | Specifies the character that should be interpreted as a field value quoting character in the source data. |
| `static String` | `TEXT_SEARCH_COLUMNS` | Add the 'text_search' property to internally inferred string columns. |
| `static String` | `TEXT_SEARCH_MIN_COLUMN_LENGTH` | Set the minimum column size for strings to apply the 'text_search' property to. |
| `static String` | `TRUE` | Upsert new records when primary keys match existing records. |
| `static String` | `TRUNCATE_STRINGS` | If set to `TRUE`, truncate string values that are longer than the column's type size. |
| `static String` | `TRUNCATE_TABLE` | |
| `static String` | `TYPE_INFERENCE_MODE` | Optimize type inferencing for either speed or accuracy. |
| `static String` | `TYPE_INFERENCE_ONLY` | Infer the type of the source data and return, without ingesting any data. |
| `static String` | `UPDATE_ON_EXISTING_PK` | Specifies the record collision policy for inserting into a table with a primary key. |
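For illustration, here is a minimal sketch of how these constants are typically used: they serve as keys and values in the `options` map passed with a `CreateTableExternalRequest`. The connection URL, table name, and file path below are hypothetical, and the builder-style setters (`setTableName`, `setFilepaths`, `setOptions`) are assumed to follow the standard `com.gpudb` Java API.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import com.gpudb.GPUdb;
import com.gpudb.protocol.CreateTableExternalRequest;
import com.gpudb.protocol.CreateTableExternalResponse;

public class CreateTableExternalExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection URL.
        GPUdb gpudb = new GPUdb("http://localhost:9191");

        // Build the options map from the constants documented above:
        // load a headered CSV into a materialized external table,
        // skipping malformed records where possible.
        Map<String, String> options = new HashMap<>();
        options.put(CreateTableExternalRequest.Options.FILE_TYPE,
                    CreateTableExternalRequest.Options.DELIMITED_TEXT);
        options.put(CreateTableExternalRequest.Options.TEXT_HAS_HEADER,
                    CreateTableExternalRequest.Options.TRUE);
        options.put(CreateTableExternalRequest.Options.EXTERNAL_TABLE_TYPE,
                    CreateTableExternalRequest.Options.MATERIALIZED);
        options.put(CreateTableExternalRequest.Options.ERROR_HANDLING,
                    CreateTableExternalRequest.Options.PERMISSIVE);
        options.put(CreateTableExternalRequest.Options.BATCH_SIZE, "50000");

        // Hypothetical table name and file path.
        CreateTableExternalRequest request = new CreateTableExternalRequest()
                .setTableName("example.ext_orders")
                .setFilepaths(Arrays.asList("data/orders.csv"))
                .setOptions(options);

        CreateTableExternalResponse response = gpudb.createTableExternal(request);
        System.out.println("Created external table: " + response.getTableName());
    }
}
```

Because both the keys and most of the values in `options` travel as plain strings, using these constants instead of string literals catches typos at compile time rather than at request time.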