Package com.gpudb.protocol
Class InsertRecordsFromQueryRequest
- java.lang.Object
  - com.gpudb.protocol.InsertRecordsFromQueryRequest
-
- All Implemented Interfaces:
org.apache.avro.generic.GenericContainer, org.apache.avro.generic.IndexedRecord
public class InsertRecordsFromQueryRequest extends Object implements org.apache.avro.generic.IndexedRecord
A set of parameters for GPUdb.insertRecordsFromQuery. Computes a remote query result and inserts the result data into a new or existing table.
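For orientation, a minimal usage sketch follows. It assumes a reachable GPUdb endpoint and the GPUdb.insertRecordsFromQuery overload that accepts this request object; the connection URL, target table, and query text are placeholders rather than values prescribed by this class.

    import java.util.HashMap;

    import com.gpudb.GPUdb;
    import com.gpudb.GPUdbException;
    import com.gpudb.protocol.InsertRecordsFromQueryRequest;

    public class InsertFromQueryExample {
        public static void main(String[] args) throws GPUdbException {
            // Placeholder endpoint; point this at your own cluster
            GPUdb gpudb = new GPUdb("http://localhost:9191");

            // All three maps are left empty, so server defaults apply
            InsertRecordsFromQueryRequest request = new InsertRecordsFromQueryRequest(
                    "ki_home.remote_orders",        // tableName (placeholder)
                    "SELECT * FROM orders",         // remoteQuery (placeholder)
                    new HashMap<>(),                // modifyColumns - not implemented yet
                    new HashMap<>(),                // createTableOptions
                    new HashMap<>());               // options

            // Execute the remote query and insert its result into the target table
            gpudb.insertRecordsFromQuery(request);
        }
    }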
-
-
Nested Class Summary
Nested Classes
- static class InsertRecordsFromQueryRequest.CreateTableOptions
  A set of string constants for the InsertRecordsFromQueryRequest parameter createTableOptions.
- static class InsertRecordsFromQueryRequest.Options
  A set of string constants for the InsertRecordsFromQueryRequest parameter options.
-
Constructor Summary
Constructors
- InsertRecordsFromQueryRequest()
  Constructs an InsertRecordsFromQueryRequest object with default parameters.
- InsertRecordsFromQueryRequest(String tableName, String remoteQuery, Map<String,Map<String,String>> modifyColumns, Map<String,String> createTableOptions, Map<String,String> options)
  Constructs an InsertRecordsFromQueryRequest object with the specified parameters.
-
Method Summary
Methods
- boolean equals(Object obj)
- Object get(int index)
  This method supports the Avro framework and is not intended to be called directly by the user.
- static org.apache.avro.Schema getClassSchema()
  This method supports the Avro framework and is not intended to be called directly by the user.
- Map<String,String> getCreateTableOptions()
  Options used when creating the target table.
- Map<String,Map<String,String>> getModifyColumns()
  Not implemented yet.
- Map<String,String> getOptions()
  Optional parameters.
- String getRemoteQuery()
  Query for which result data needs to be imported.
- org.apache.avro.Schema getSchema()
  This method supports the Avro framework and is not intended to be called directly by the user.
- String getTableName()
  Name of the table into which the data will be inserted, in [schema_name.]table_name format, using standard name resolution rules.
- int hashCode()
- void put(int index, Object value)
  This method supports the Avro framework and is not intended to be called directly by the user.
- InsertRecordsFromQueryRequest setCreateTableOptions(Map<String,String> createTableOptions)
  Options used when creating the target table.
- InsertRecordsFromQueryRequest setModifyColumns(Map<String,Map<String,String>> modifyColumns)
  Not implemented yet.
- InsertRecordsFromQueryRequest setOptions(Map<String,String> options)
  Optional parameters.
- InsertRecordsFromQueryRequest setRemoteQuery(String remoteQuery)
  Query for which result data needs to be imported.
- InsertRecordsFromQueryRequest setTableName(String tableName)
  Name of the table into which the data will be inserted, in [schema_name.]table_name format, using standard name resolution rules.
- String toString()
-
-
-
Constructor Detail
-
InsertRecordsFromQueryRequest
public InsertRecordsFromQueryRequest()
Constructs an InsertRecordsFromQueryRequest object with default parameters.
-
InsertRecordsFromQueryRequest
public InsertRecordsFromQueryRequest(String tableName, String remoteQuery, Map<String,Map<String,String>> modifyColumns, Map<String,String> createTableOptions, Map<String,String> options)
Constructs an InsertRecordsFromQueryRequest object with the specified parameters.
- Parameters:
  tableName - Name of the table into which the data will be inserted, in [schema_name.]table_name format, using standard name resolution rules. If the table does not exist, the table will be created using either an existing TYPE_ID or the type inferred from the remote query, and the new table name will have to meet standard table naming criteria.
  remoteQuery - Query for which result data needs to be imported.
  modifyColumns - Not implemented yet. The default value is an empty Map.
  createTableOptions - Options used when creating the target table.
    - TYPE_ID: ID of a currently registered type. The default value is ''.
    - NO_ERROR_IF_EXISTS: If TRUE, prevents an error from occurring if the table already exists and is of the given type. If a table with the same ID but a different type exists, it is still an error. Supported values: TRUE, FALSE. The default value is FALSE.
    - IS_REPLICATED: Affects the distribution scheme for the table's data. If TRUE and the given type has no explicit shard key defined, the table will be replicated. If FALSE, the table will be sharded according to the shard key specified in the given TYPE_ID, or randomly sharded, if no shard key is specified. Note that a type containing a shard key cannot be used to create a replicated table. Supported values: TRUE, FALSE. The default value is FALSE.
    - FOREIGN_KEYS: Semicolon-separated list of foreign keys, of the format '(source_column_name [, ...]) references target_table_name(primary_key_column_name [, ...]) [as foreign_key_name]'.
    - FOREIGN_SHARD_KEY: Foreign shard key of the format 'source_column references shard_by_column from target_table(primary_key_column)'.
    - PARTITION_TYPE: Partitioning scheme to use. Supported values:
      - RANGE: Use range partitioning.
      - INTERVAL: Use interval partitioning.
      - LIST: Use list partitioning.
      - HASH: Use hash partitioning.
      - SERIES: Use series partitioning.
    - PARTITION_KEYS: Comma-separated list of partition keys, which are the columns or column expressions by which records will be assigned to partitions defined by PARTITION_DEFINITIONS.
    - PARTITION_DEFINITIONS: Comma-separated list of partition definitions, whose format depends on the choice of PARTITION_TYPE. See range partitioning, interval partitioning, list partitioning, hash partitioning, or series partitioning for example formats.
    - IS_AUTOMATIC_PARTITION: If TRUE, a new partition will be created for values which don't fall into an existing partition. Currently only supported for list partitions. Supported values: TRUE, FALSE. The default value is FALSE.
    - TTL: Sets the TTL of the table specified in tableName.
    - CHUNK_SIZE: Indicates the number of records per chunk to be used for this table.
    - IS_RESULT_TABLE: Indicates whether the table is a memory-only table. A result table cannot contain columns with text_search data-handling, and it will not be retained if the server is restarted. Supported values: TRUE, FALSE. The default value is FALSE.
    - STRATEGY_DEFINITION: The tier strategy for the table and its columns.
    - COMPRESSION_CODEC: The default compression codec for this table's columns.
    The default value is an empty Map.
  options - Optional parameters.
    - BAD_RECORD_TABLE_NAME: Optional name of a table to which records that were rejected are written. The bad-record table has the following columns: line_number (long), line_rejected (string), error_message (string). When error handling is ABORT, the bad-record table is not populated.
    - BAD_RECORD_TABLE_LIMIT: A positive integer indicating the maximum number of records that can be written to the bad-record table. The default value is 10000.
    - BATCH_SIZE: Number of records per batch when inserting data.
    - DATASOURCE_NAME: Name of an existing external data source from which the table will be loaded.
    - ERROR_HANDLING: Specifies how errors should be handled upon insertion. Supported values:
      - PERMISSIVE: Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped.
      - IGNORE_BAD_RECORDS: Malformed records are skipped.
      - ABORT: Stops current insertion and aborts entire operation when an error is encountered. Primary key collisions are considered abortable errors in this mode.
      The default value is ABORT.
    - IGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for inserting into a table with a primary key, only used when not in upsert mode (upsert mode is disabled when UPDATE_ON_EXISTING_PK is FALSE). If set to TRUE, any record being inserted that is rejected for having primary key values that match those of an existing table record will be ignored with no error generated. If FALSE, the rejection of any record for having primary key values matching an existing record will result in an error being reported, as determined by ERROR_HANDLING. If the specified table does not have a primary key or if upsert mode is in effect (UPDATE_ON_EXISTING_PK is TRUE), then this option has no effect. Supported values:
      - TRUE: Ignore new records whose primary key values collide with those of existing records
      - FALSE: Treat as errors any new records whose primary key values collide with those of existing records
      The default value is FALSE.
    - INGESTION_MODE: Whether to do a full load, dry run, or perform a type inference on the source data. Supported values:
      - FULL: Run a type inference on the source data (if needed) and ingest.
      - DRY_RUN: Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of ERROR_HANDLING.
      - TYPE_INFERENCE_ONLY: Infer the type of the source data and return, without ingesting any data. The inferred type is returned in the response.
      The default value is FULL.
    - JDBC_FETCH_SIZE: The JDBC fetch size, which determines how many rows to fetch per round trip.
    - JDBC_SESSION_INIT_STATEMENT: Executes the statement for each JDBC session before doing the actual load. The default value is ''.
    - NUM_SPLITS_PER_RANK: Optional: number of splits for reading data per rank. Default will be external_file_reader_num_tasks. The default value is ''.
    - NUM_TASKS_PER_RANK: Optional: number of tasks for reading data per rank. Default will be external_file_reader_num_tasks.
    - PRIMARY_KEYS: Optional: comma-separated list of column names to set as primary keys, when not specified in the type. The default value is ''.
    - SHARD_KEYS: Optional: comma-separated list of column names to set as shard keys, when not specified in the type. The default value is ''.
    - SUBSCRIBE: Continuously poll the data source to check for new data and load it into the table. Supported values: TRUE, FALSE. The default value is FALSE.
    - TRUNCATE_TABLE: If set to TRUE, truncates the table specified by tableName prior to loading the data. Supported values: TRUE, FALSE. The default value is FALSE.
    - REMOTE_QUERY: Remote SQL query from which data will be sourced.
    - REMOTE_QUERY_ORDER_BY: Name of the column to be used for splitting the query into multiple sub-queries, using the ordering of the given column. The default value is ''.
    - REMOTE_QUERY_FILTER_COLUMN: Name of the column to be used for splitting the query into multiple sub-queries, using the data distribution of the given column. The default value is ''.
    - REMOTE_QUERY_INCREASING_COLUMN: Column on the subscribed remote query result that will increase for new records (e.g., TIMESTAMP). The default value is ''.
    - REMOTE_QUERY_PARTITION_COLUMN: Alias name for remote_query_filter_column. The default value is ''.
    - TRUNCATE_STRINGS: If set to TRUE, truncate string values that are longer than the column's type size. Supported values: TRUE, FALSE. The default value is FALSE.
    - UPDATE_ON_EXISTING_PK: Specifies the record collision policy for inserting into a table with a primary key. If set to TRUE, any existing table record with primary key values that match those of a record being inserted will be replaced by that new record (the new data will be "upserted"). If set to FALSE, any existing table record with primary key values that match those of a record being inserted will remain unchanged, while the new record will be rejected and the error handled as determined by IGNORE_EXISTING_PK and ERROR_HANDLING. If the specified table does not have a primary key, then this option has no effect. Supported values:
      - TRUE: Upsert new records when primary keys match existing records
      - FALSE: Reject new records when primary keys match existing records
      The default value is FALSE.
    The default value is an empty Map.
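To make the parameter lists above concrete, here is a hedged sketch of building a fully parameterized request. It assumes the nested CreateTableOptions and Options classes expose string constants matching the keys documented above (e.g. NO_ERROR_IF_EXISTS, ERROR_HANDLING, IGNORE_BAD_RECORDS); the table name, remote query, bad-record table, and chosen values are illustrative only.

    import java.util.HashMap;
    import java.util.Map;

    import com.gpudb.protocol.InsertRecordsFromQueryRequest;
    import com.gpudb.protocol.InsertRecordsFromQueryRequest.CreateTableOptions;
    import com.gpudb.protocol.InsertRecordsFromQueryRequest.Options;

    public class BuildRequestExample {
        static InsertRecordsFromQueryRequest buildRequest() {
            // Applied only if the target table has to be created
            Map<String, String> createTableOptions = new HashMap<>();
            createTableOptions.put(CreateTableOptions.NO_ERROR_IF_EXISTS, "true"); // "true"/"false" literals are assumed values

            // Ingest-time behavior
            Map<String, String> options = new HashMap<>();
            options.put(Options.ERROR_HANDLING, Options.IGNORE_BAD_RECORDS);   // skip malformed records
            options.put(Options.BATCH_SIZE, "50000");                          // records per insert batch
            options.put(Options.BAD_RECORD_TABLE_NAME, "ki_home.orders_bad");  // hypothetical bad-record table

            return new InsertRecordsFromQueryRequest(
                    "ki_home.orders",                            // tableName (hypothetical)
                    "SELECT * FROM remote_db.orders",            // remoteQuery (hypothetical)
                    new HashMap<String, Map<String, String>>(),  // modifyColumns - not implemented yet
                    createTableOptions,
                    options);
        }
    }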
-
-
Method Detail
-
getClassSchema
public static org.apache.avro.Schema getClassSchema()
This method supports the Avro framework and is not intended to be called directly by the user.
- Returns:
  The schema for the class.
-
getTableName
public String getTableName()
Name of the table into which the data will be inserted, in [schema_name.]table_name format, using standard name resolution rules. If the table does not exist, the table will be created using either an existing TYPE_ID or the type inferred from the remote query, and the new table name will have to meet standard table naming criteria.
- Returns:
  The current value of tableName.
-
setTableName
public InsertRecordsFromQueryRequest setTableName(String tableName)
Name of the table into which the data will be inserted, in [schema_name.]table_name format, using standard name resolution rules. If the table does not exist, the table will be created using either an existing TYPE_ID or the type inferred from the remote query, and the new table name will have to meet standard table naming criteria.
- Parameters:
  tableName - The new value for tableName.
- Returns:
  this to mimic the builder pattern.
-
getRemoteQuery
public String getRemoteQuery()
Query for which result data needs to be imported.
- Returns:
  The current value of remoteQuery.
-
setRemoteQuery
public InsertRecordsFromQueryRequest setRemoteQuery(String remoteQuery)
Query for which result data needs to be imported.
- Parameters:
  remoteQuery - The new value for remoteQuery.
- Returns:
  this to mimic the builder pattern.
-
getModifyColumns
public Map<String,Map<String,String>> getModifyColumns()
Not implemented yet. The default value is an empty Map.
- Returns:
  The current value of modifyColumns.
-
setModifyColumns
public InsertRecordsFromQueryRequest setModifyColumns(Map<String,Map<String,String>> modifyColumns)
Not implemented yet. The default value is an empty Map.
- Parameters:
  modifyColumns - The new value for modifyColumns.
- Returns:
  this to mimic the builder pattern.
-
getCreateTableOptions
public Map<String,String> getCreateTableOptions()
Options used when creating the target table.
- TYPE_ID: ID of a currently registered type. The default value is ''.
- NO_ERROR_IF_EXISTS: If TRUE, prevents an error from occurring if the table already exists and is of the given type. If a table with the same ID but a different type exists, it is still an error. Supported values: TRUE, FALSE. The default value is FALSE.
- IS_REPLICATED: Affects the distribution scheme for the table's data. If TRUE and the given type has no explicit shard key defined, the table will be replicated. If FALSE, the table will be sharded according to the shard key specified in the given TYPE_ID, or randomly sharded, if no shard key is specified. Note that a type containing a shard key cannot be used to create a replicated table. Supported values: TRUE, FALSE. The default value is FALSE.
- FOREIGN_KEYS: Semicolon-separated list of foreign keys, of the format '(source_column_name [, ...]) references target_table_name(primary_key_column_name [, ...]) [as foreign_key_name]'.
- FOREIGN_SHARD_KEY: Foreign shard key of the format 'source_column references shard_by_column from target_table(primary_key_column)'.
- PARTITION_TYPE: Partitioning scheme to use. Supported values:
  - RANGE: Use range partitioning.
  - INTERVAL: Use interval partitioning.
  - LIST: Use list partitioning.
  - HASH: Use hash partitioning.
  - SERIES: Use series partitioning.
- PARTITION_KEYS: Comma-separated list of partition keys, which are the columns or column expressions by which records will be assigned to partitions defined by PARTITION_DEFINITIONS.
- PARTITION_DEFINITIONS: Comma-separated list of partition definitions, whose format depends on the choice of PARTITION_TYPE. See range partitioning, interval partitioning, list partitioning, hash partitioning, or series partitioning for example formats.
- IS_AUTOMATIC_PARTITION: If TRUE, a new partition will be created for values which don't fall into an existing partition. Currently only supported for list partitions. Supported values: TRUE, FALSE. The default value is FALSE.
- TTL: Sets the TTL of the table specified in tableName.
- CHUNK_SIZE: Indicates the number of records per chunk to be used for this table.
- IS_RESULT_TABLE: Indicates whether the table is a memory-only table. A result table cannot contain columns with text_search data-handling, and it will not be retained if the server is restarted. Supported values: TRUE, FALSE. The default value is FALSE.
- STRATEGY_DEFINITION: The tier strategy for the table and its columns.
- COMPRESSION_CODEC: The default compression codec for this table's columns.
The default value is an empty Map.
- Returns:
  The current value of createTableOptions.
-
setCreateTableOptions
public InsertRecordsFromQueryRequest setCreateTableOptions(Map<String,String> createTableOptions)
Options used when creating the target table.
- TYPE_ID: ID of a currently registered type. The default value is ''.
- NO_ERROR_IF_EXISTS: If TRUE, prevents an error from occurring if the table already exists and is of the given type. If a table with the same ID but a different type exists, it is still an error. Supported values: TRUE, FALSE. The default value is FALSE.
- IS_REPLICATED: Affects the distribution scheme for the table's data. If TRUE and the given type has no explicit shard key defined, the table will be replicated. If FALSE, the table will be sharded according to the shard key specified in the given TYPE_ID, or randomly sharded, if no shard key is specified. Note that a type containing a shard key cannot be used to create a replicated table. Supported values: TRUE, FALSE. The default value is FALSE.
- FOREIGN_KEYS: Semicolon-separated list of foreign keys, of the format '(source_column_name [, ...]) references target_table_name(primary_key_column_name [, ...]) [as foreign_key_name]'.
- FOREIGN_SHARD_KEY: Foreign shard key of the format 'source_column references shard_by_column from target_table(primary_key_column)'.
- PARTITION_TYPE: Partitioning scheme to use. Supported values:
  - RANGE: Use range partitioning.
  - INTERVAL: Use interval partitioning.
  - LIST: Use list partitioning.
  - HASH: Use hash partitioning.
  - SERIES: Use series partitioning.
- PARTITION_KEYS: Comma-separated list of partition keys, which are the columns or column expressions by which records will be assigned to partitions defined by PARTITION_DEFINITIONS.
- PARTITION_DEFINITIONS: Comma-separated list of partition definitions, whose format depends on the choice of PARTITION_TYPE. See range partitioning, interval partitioning, list partitioning, hash partitioning, or series partitioning for example formats.
- IS_AUTOMATIC_PARTITION: If TRUE, a new partition will be created for values which don't fall into an existing partition. Currently only supported for list partitions. Supported values: TRUE, FALSE. The default value is FALSE.
- TTL: Sets the TTL of the table specified in tableName.
- CHUNK_SIZE: Indicates the number of records per chunk to be used for this table.
- IS_RESULT_TABLE: Indicates whether the table is a memory-only table. A result table cannot contain columns with text_search data-handling, and it will not be retained if the server is restarted. Supported values: TRUE, FALSE. The default value is FALSE.
- STRATEGY_DEFINITION: The tier strategy for the table and its columns.
- COMPRESSION_CODEC: The default compression codec for this table's columns.
The default value is an empty Map.
- Parameters:
  createTableOptions - The new value for createTableOptions.
- Returns:
  this to mimic the builder pattern.
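For instance, a caller that wants the target table created as a replicated table with a TTL might assemble the map as sketched below. The keys are the CreateTableOptions constants documented above; the "true" literals and the TTL value are assumptions shown purely for illustration.

    import java.util.HashMap;
    import java.util.Map;

    import com.gpudb.protocol.InsertRecordsFromQueryRequest;
    import com.gpudb.protocol.InsertRecordsFromQueryRequest.CreateTableOptions;

    public class CreateTableOptionsExample {
        /** Applies create-table options for a replicated target table with a TTL. */
        static void applyCreateTableOptions(InsertRecordsFromQueryRequest request) {
            Map<String, String> createTableOptions = new HashMap<>();
            // Don't raise an error if the target table already exists with the expected type
            createTableOptions.put(CreateTableOptions.NO_ERROR_IF_EXISTS, "true");
            // Replicate the table instead of sharding it (only valid for types without a shard key)
            createTableOptions.put(CreateTableOptions.IS_REPLICATED, "true");
            // Time-to-live for the created table, in the server's TTL units (assumed value)
            createTableOptions.put(CreateTableOptions.TTL, "120");

            request.setCreateTableOptions(createTableOptions);
        }
    }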
-
getOptions
public Map<String,String> getOptions()
Optional parameters.
- BAD_RECORD_TABLE_NAME: Optional name of a table to which records that were rejected are written. The bad-record table has the following columns: line_number (long), line_rejected (string), error_message (string). When error handling is ABORT, the bad-record table is not populated.
- BAD_RECORD_TABLE_LIMIT: A positive integer indicating the maximum number of records that can be written to the bad-record table. The default value is 10000.
- BATCH_SIZE: Number of records per batch when inserting data.
- DATASOURCE_NAME: Name of an existing external data source from which the table will be loaded.
- ERROR_HANDLING: Specifies how errors should be handled upon insertion. Supported values:
  - PERMISSIVE: Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped.
  - IGNORE_BAD_RECORDS: Malformed records are skipped.
  - ABORT: Stops current insertion and aborts entire operation when an error is encountered. Primary key collisions are considered abortable errors in this mode.
  The default value is ABORT.
- IGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for inserting into a table with a primary key, only used when not in upsert mode (upsert mode is disabled when UPDATE_ON_EXISTING_PK is FALSE). If set to TRUE, any record being inserted that is rejected for having primary key values that match those of an existing table record will be ignored with no error generated. If FALSE, the rejection of any record for having primary key values matching an existing record will result in an error being reported, as determined by ERROR_HANDLING. If the specified table does not have a primary key or if upsert mode is in effect (UPDATE_ON_EXISTING_PK is TRUE), then this option has no effect. Supported values:
  - TRUE: Ignore new records whose primary key values collide with those of existing records
  - FALSE: Treat as errors any new records whose primary key values collide with those of existing records
  The default value is FALSE.
- INGESTION_MODE: Whether to do a full load, dry run, or perform a type inference on the source data. Supported values:
  - FULL: Run a type inference on the source data (if needed) and ingest.
  - DRY_RUN: Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of ERROR_HANDLING.
  - TYPE_INFERENCE_ONLY: Infer the type of the source data and return, without ingesting any data. The inferred type is returned in the response.
  The default value is FULL.
- JDBC_FETCH_SIZE: The JDBC fetch size, which determines how many rows to fetch per round trip.
- JDBC_SESSION_INIT_STATEMENT: Executes the statement for each JDBC session before doing the actual load. The default value is ''.
- NUM_SPLITS_PER_RANK: Optional: number of splits for reading data per rank. Default will be external_file_reader_num_tasks. The default value is ''.
- NUM_TASKS_PER_RANK: Optional: number of tasks for reading data per rank. Default will be external_file_reader_num_tasks.
- PRIMARY_KEYS: Optional: comma-separated list of column names to set as primary keys, when not specified in the type. The default value is ''.
- SHARD_KEYS: Optional: comma-separated list of column names to set as shard keys, when not specified in the type. The default value is ''.
- SUBSCRIBE: Continuously poll the data source to check for new data and load it into the table. Supported values: TRUE, FALSE. The default value is FALSE.
- TRUNCATE_TABLE: If set to TRUE, truncates the table specified by tableName prior to loading the data. Supported values: TRUE, FALSE. The default value is FALSE.
- REMOTE_QUERY: Remote SQL query from which data will be sourced.
- REMOTE_QUERY_ORDER_BY: Name of the column to be used for splitting the query into multiple sub-queries, using the ordering of the given column. The default value is ''.
- REMOTE_QUERY_FILTER_COLUMN: Name of the column to be used for splitting the query into multiple sub-queries, using the data distribution of the given column. The default value is ''.
- REMOTE_QUERY_INCREASING_COLUMN: Column on the subscribed remote query result that will increase for new records (e.g., TIMESTAMP). The default value is ''.
- REMOTE_QUERY_PARTITION_COLUMN: Alias name for remote_query_filter_column. The default value is ''.
- TRUNCATE_STRINGS: If set to TRUE, truncate string values that are longer than the column's type size. Supported values: TRUE, FALSE. The default value is FALSE.
- UPDATE_ON_EXISTING_PK: Specifies the record collision policy for inserting into a table with a primary key. If set to TRUE, any existing table record with primary key values that match those of a record being inserted will be replaced by that new record (the new data will be "upserted"). If set to FALSE, any existing table record with primary key values that match those of a record being inserted will remain unchanged, while the new record will be rejected and the error handled as determined by IGNORE_EXISTING_PK and ERROR_HANDLING. If the specified table does not have a primary key, then this option has no effect. Supported values:
  - TRUE: Upsert new records when primary keys match existing records
  - FALSE: Reject new records when primary keys match existing records
  The default value is FALSE.
The default value is an empty Map.
- Returns:
  The current value of options.
-
setOptions
public InsertRecordsFromQueryRequest setOptions(Map<String,String> options)
Optional parameters.
- BAD_RECORD_TABLE_NAME: Optional name of a table to which records that were rejected are written. The bad-record table has the following columns: line_number (long), line_rejected (string), error_message (string). When error handling is ABORT, the bad-record table is not populated.
- BAD_RECORD_TABLE_LIMIT: A positive integer indicating the maximum number of records that can be written to the bad-record table. The default value is 10000.
- BATCH_SIZE: Number of records per batch when inserting data.
- DATASOURCE_NAME: Name of an existing external data source from which the table will be loaded.
- ERROR_HANDLING: Specifies how errors should be handled upon insertion. Supported values:
  - PERMISSIVE: Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped.
  - IGNORE_BAD_RECORDS: Malformed records are skipped.
  - ABORT: Stops current insertion and aborts entire operation when an error is encountered. Primary key collisions are considered abortable errors in this mode.
  The default value is ABORT.
- IGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for inserting into a table with a primary key, only used when not in upsert mode (upsert mode is disabled when UPDATE_ON_EXISTING_PK is FALSE). If set to TRUE, any record being inserted that is rejected for having primary key values that match those of an existing table record will be ignored with no error generated. If FALSE, the rejection of any record for having primary key values matching an existing record will result in an error being reported, as determined by ERROR_HANDLING. If the specified table does not have a primary key or if upsert mode is in effect (UPDATE_ON_EXISTING_PK is TRUE), then this option has no effect. Supported values:
  - TRUE: Ignore new records whose primary key values collide with those of existing records
  - FALSE: Treat as errors any new records whose primary key values collide with those of existing records
  The default value is FALSE.
- INGESTION_MODE: Whether to do a full load, dry run, or perform a type inference on the source data. Supported values:
  - FULL: Run a type inference on the source data (if needed) and ingest.
  - DRY_RUN: Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of ERROR_HANDLING.
  - TYPE_INFERENCE_ONLY: Infer the type of the source data and return, without ingesting any data. The inferred type is returned in the response.
  The default value is FULL.
- JDBC_FETCH_SIZE: The JDBC fetch size, which determines how many rows to fetch per round trip.
- JDBC_SESSION_INIT_STATEMENT: Executes the statement for each JDBC session before doing the actual load. The default value is ''.
- NUM_SPLITS_PER_RANK: Optional: number of splits for reading data per rank. Default will be external_file_reader_num_tasks. The default value is ''.
- NUM_TASKS_PER_RANK: Optional: number of tasks for reading data per rank. Default will be external_file_reader_num_tasks.
- PRIMARY_KEYS: Optional: comma-separated list of column names to set as primary keys, when not specified in the type. The default value is ''.
- SHARD_KEYS: Optional: comma-separated list of column names to set as shard keys, when not specified in the type. The default value is ''.
- SUBSCRIBE: Continuously poll the data source to check for new data and load it into the table. Supported values: TRUE, FALSE. The default value is FALSE.
- TRUNCATE_TABLE: If set to TRUE, truncates the table specified by tableName prior to loading the data. Supported values: TRUE, FALSE. The default value is FALSE.
- REMOTE_QUERY: Remote SQL query from which data will be sourced.
- REMOTE_QUERY_ORDER_BY: Name of the column to be used for splitting the query into multiple sub-queries, using the ordering of the given column. The default value is ''.
- REMOTE_QUERY_FILTER_COLUMN: Name of the column to be used for splitting the query into multiple sub-queries, using the data distribution of the given column. The default value is ''.
- REMOTE_QUERY_INCREASING_COLUMN: Column on the subscribed remote query result that will increase for new records (e.g., TIMESTAMP). The default value is ''.
- REMOTE_QUERY_PARTITION_COLUMN: Alias name for remote_query_filter_column. The default value is ''.
- TRUNCATE_STRINGS: If set to TRUE, truncate string values that are longer than the column's type size. Supported values: TRUE, FALSE. The default value is FALSE.
- UPDATE_ON_EXISTING_PK: Specifies the record collision policy for inserting into a table with a primary key. If set to TRUE, any existing table record with primary key values that match those of a record being inserted will be replaced by that new record (the new data will be "upserted"). If set to FALSE, any existing table record with primary key values that match those of a record being inserted will remain unchanged, while the new record will be rejected and the error handled as determined by IGNORE_EXISTING_PK and ERROR_HANDLING. If the specified table does not have a primary key, then this option has no effect. Supported values:
  - TRUE: Upsert new records when primary keys match existing records
  - FALSE: Reject new records when primary keys match existing records
  The default value is FALSE.
The default value is an empty Map.
- Parameters:
  options - The new value for options.
- Returns:
  this to mimic the builder pattern.
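Because every setter returns this, a request can be configured fluently. The sketch below subscribes to a remote query and relaxes error handling; the constants are assumed to exist on the nested Options class as documented above, and the table, query, and column names are hypothetical.

    import java.util.HashMap;
    import java.util.Map;

    import com.gpudb.protocol.InsertRecordsFromQueryRequest;
    import com.gpudb.protocol.InsertRecordsFromQueryRequest.Options;

    public class SubscribedQueryExample {
        /** Builds a request that continuously polls a remote query, using the builder-style setters. */
        static InsertRecordsFromQueryRequest buildSubscribedRequest() {
            Map<String, String> options = new HashMap<>();
            // Keep polling the source and ingest rows as the (hypothetical) event_ts column advances
            options.put(Options.SUBSCRIBE, "true");
            options.put(Options.REMOTE_QUERY_INCREASING_COLUMN, "event_ts");
            // Skip malformed rows instead of aborting the whole load
            options.put(Options.ERROR_HANDLING, Options.IGNORE_BAD_RECORDS);

            // Each setter returns this, so the calls chain naturally
            return new InsertRecordsFromQueryRequest()
                    .setTableName("ki_home.events")              // hypothetical target table
                    .setRemoteQuery("SELECT * FROM src.events")  // hypothetical remote query
                    .setOptions(options);
        }
    }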
-
getSchema
public org.apache.avro.Schema getSchema()
This method supports the Avro framework and is not intended to be called directly by the user.
- Specified by:
  getSchema in interface org.apache.avro.generic.GenericContainer
- Returns:
  The schema object describing this class.
-
get
public Object get(int index)
This method supports the Avro framework and is not intended to be called directly by the user.
- Specified by:
  get in interface org.apache.avro.generic.IndexedRecord
- Parameters:
  index - the position of the field to get
- Returns:
  value of the field with the given index.
- Throws:
  IndexOutOfBoundsException
-
put
public void put(int index, Object value)
This method supports the Avro framework and is not intended to be called directly by the user.
- Specified by:
  put in interface org.apache.avro.generic.IndexedRecord
- Parameters:
  index - the position of the field to set
  value - the value to set
- Throws:
  IndexOutOfBoundsException
-
-