Package com.gpudb.protocol
Class AlterDatasinkRequest
- java.lang.Object
  - com.gpudb.protocol.AlterDatasinkRequest
- All Implemented Interfaces:
org.apache.avro.generic.GenericContainer, org.apache.avro.generic.IndexedRecord
public class AlterDatasinkRequest extends Object implements org.apache.avro.generic.IndexedRecord
A set of parameters for GPUdb.alterDatasink. Alters the properties of an existing data sink.
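For context, a minimal usage sketch follows. The endpoint URL and the sink name 'my_sink' are hypothetical placeholders, and error handling is omitted:

    import java.util.HashMap;
    import java.util.Map;

    import com.gpudb.GPUdb;
    import com.gpudb.protocol.AlterDatasinkRequest;

    public class AlterDatasinkExample {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint; substitute your own GPUdb URL.
            GPUdb gpudb = new GPUdb("http://localhost:9191");

            // Raise the connection timeout of an existing data sink,
            // using the string constants from the nested DatasinkUpdatesMap class.
            Map<String, String> updates = new HashMap<>();
            updates.put(AlterDatasinkRequest.DatasinkUpdatesMap.CONNECTION_TIMEOUT, "60");

            AlterDatasinkRequest request = new AlterDatasinkRequest(
                    "my_sink",                        // name of an existing data sink (hypothetical)
                    updates,                          // properties to update
                    new HashMap<String, String>());   // no optional parameters

            gpudb.alterDatasink(request);
        }
    }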
-
-
Nested Class Summary
static class AlterDatasinkRequest.DatasinkUpdatesMap
    A set of string constants for the AlterDatasinkRequest parameter datasinkUpdatesMap.
-
Constructor Summary
AlterDatasinkRequest()
    Constructs an AlterDatasinkRequest object with default parameters.
AlterDatasinkRequest(String name, Map<String,String> datasinkUpdatesMap, Map<String,String> options)
    Constructs an AlterDatasinkRequest object with the specified parameters.
-
Method Summary
boolean equals(Object obj)
Object get(int index)
    This method supports the Avro framework and is not intended to be called directly by the user.
static org.apache.avro.Schema getClassSchema()
    This method supports the Avro framework and is not intended to be called directly by the user.
Map<String,String> getDatasinkUpdatesMap()
    Map containing the properties of the data sink to be updated.
String getName()
    Name of the data sink to be altered.
Map<String,String> getOptions()
    Optional parameters.
org.apache.avro.Schema getSchema()
    This method supports the Avro framework and is not intended to be called directly by the user.
int hashCode()
void put(int index, Object value)
    This method supports the Avro framework and is not intended to be called directly by the user.
AlterDatasinkRequest setDatasinkUpdatesMap(Map<String,String> datasinkUpdatesMap)
    Map containing the properties of the data sink to be updated.
AlterDatasinkRequest setName(String name)
    Name of the data sink to be altered.
AlterDatasinkRequest setOptions(Map<String,String> options)
    Optional parameters.
String toString()
-
-
-
Constructor Detail
-
AlterDatasinkRequest
public AlterDatasinkRequest()
Constructs an AlterDatasinkRequest object with default parameters.
-
AlterDatasinkRequest
public AlterDatasinkRequest(String name, Map<String,String> datasinkUpdatesMap, Map<String,String> options)
Constructs an AlterDatasinkRequest object with the specified parameters.
Parameters:
name - Name of the data sink to be altered. Must be an existing data sink.
datasinkUpdatesMap - Map containing the properties of the data sink to be updated. Error if empty. Supported keys (see the example sketch below):
    - DESTINATION: Destination for the output data in format 'destination_type://path[:port]'. Supported destination types are 'azure', 'gcs', 'hdfs', 'http', 'https', 'jdbc', 'kafka', and 's3'.
    - CONNECTION_TIMEOUT: Timeout in seconds for connecting to this sink
    - WAIT_TIMEOUT: Timeout in seconds for waiting for a response from this sink
    - CREDENTIAL: Name of the credential object to be used in this data sink
    - S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data sink
    - S3_REGION: Name of the Amazon S3 region where the given bucket is located
    - S3_VERIFY_SSL: Whether to verify SSL connections. Supported values:
        - TRUE: Connect with SSL verification
        - FALSE: Connect without verifying the SSL connection; for testing purposes, bypassing TLS errors, self-signed certificates, etc.
      The default value is TRUE.
    - S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 sink. Supported values:
        - TRUE: The request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL.
        - FALSE: Use path-style URI for requests.
      The default value is TRUE.
    - S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has required S3 permissions that can be assumed for the given S3 IAM user
    - S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used for encrypting data
    - S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data
    - S3_ENCRYPTION_TYPE: Server-side encryption type
    - S3_KMS_KEY_ID: KMS key
    - HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
    - HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user
    - HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
    - AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data sink; this is valid only if tenant_id is specified
    - AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data sink
    - AZURE_TENANT_ID: Active Directory tenant ID (or directory ID)
    - AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data sink
    - AZURE_OAUTH_TOKEN: OAuth token to access the given storage container
    - GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data sink
    - GCS_PROJECT_ID: Name of the Google Cloud project to use as the data sink
    - GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data sink
    - JDBC_DRIVER_JAR_PATH: JDBC driver jar file location. This may be a KIFS file.
    - JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class
    - KAFKA_URL: The publicly-accessible full path URL to the Kafka broker, e.g., 'http://172.123.45.67:9300'.
    - KAFKA_TOPIC_NAME: Name of the Kafka topic to use for this data sink, if it references a Kafka broker
    - ANONYMOUS: Create an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify use_managed_credentials for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
    - USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, cloud provider user settings will be used. Supported values: TRUE, FALSE. The default value is FALSE.
    - USE_HTTPS: Use https to connect to the data sink if true, otherwise use http. Supported values: TRUE, FALSE. The default value is TRUE.
    - MAX_BATCH_SIZE: Maximum number of records per notification message. The default value is '1'.
    - MAX_MESSAGE_SIZE: Maximum size in bytes of each notification message. The default value is '1000000'.
    - JSON_FORMAT: The desired format of JSON-encoded notification messages. The default value is FLAT.
    - SKIP_VALIDATION: Bypass validation of the connection to this data sink. Supported values: TRUE, FALSE. The default value is FALSE.
    - SCHEMA_NAME: Updates the schema name. If SCHEMA_NAME doesn't exist, an error will be thrown. If SCHEMA_NAME is empty, then the user's default schema will be used.
options - Optional parameters.
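As referenced above, a sketch of populating datasinkUpdatesMap with the string constants from AlterDatasinkRequest.DatasinkUpdatesMap. The sink name, broker address, and topic name are hypothetical:

    // Redirect an existing sink to a (hypothetical) Kafka broker and topic.
    Map<String, String> updates = new HashMap<>();
    updates.put(AlterDatasinkRequest.DatasinkUpdatesMap.DESTINATION, "kafka://172.123.45.67:9300");
    updates.put(AlterDatasinkRequest.DatasinkUpdatesMap.KAFKA_TOPIC_NAME, "alerts");
    updates.put(AlterDatasinkRequest.DatasinkUpdatesMap.WAIT_TIMEOUT, "30");

    AlterDatasinkRequest request = new AlterDatasinkRequest(
            "my_sink", updates, new HashMap<String, String>());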
-
-
Method Detail
-
getClassSchema
public static org.apache.avro.Schema getClassSchema()
This method supports the Avro framework and is not intended to be called directly by the user.
Returns:
    The schema for the class.
-
getName
public String getName()
Name of the data sink to be altered. Must be an existing data sink.
Returns:
    The current value of name.
-
setName
public AlterDatasinkRequest setName(String name)
Name of the data sink to be altered. Must be an existing data sink.
Parameters:
    name - The new value for name.
Returns:
    this to mimic the builder pattern.
-
getDatasinkUpdatesMap
public Map<String,String> getDatasinkUpdatesMap()
Map containing the properties of the data sink to be updated. Error if empty. See AlterDatasinkRequest.DatasinkUpdatesMap and the constructor documentation above for the full list of supported keys, values, and defaults.
Returns:
    The current value of datasinkUpdatesMap.
-
setDatasinkUpdatesMap
public AlterDatasinkRequest setDatasinkUpdatesMap(Map<String,String> datasinkUpdatesMap)
Map containing the properties of the data sink to be updated. Error if empty. See AlterDatasinkRequest.DatasinkUpdatesMap and the constructor documentation above for the full list of supported keys, values, and defaults.
Parameters:
    datasinkUpdatesMap - The new value for datasinkUpdatesMap.
Returns:
    this to mimic the builder pattern.
-
getOptions
public Map<String,String> getOptions()
Optional parameters.
Returns:
    The current value of options.
-
setOptions
public AlterDatasinkRequest setOptions(Map<String,String> options)
Optional parameters.
Parameters:
    options - The new value for options.
Returns:
    this to mimic the builder pattern.
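Since every setter returns this, a request can also be assembled fluently after the default constructor. A minimal sketch with a hypothetical sink name:

    Map<String, String> updates = new HashMap<>();
    // Assumes boolean-valued options take "true"/"false" string values.
    updates.put(AlterDatasinkRequest.DatasinkUpdatesMap.SKIP_VALIDATION, "true");

    AlterDatasinkRequest request = new AlterDatasinkRequest()
            .setName("my_sink")                          // hypothetical existing sink
            .setDatasinkUpdatesMap(updates)
            .setOptions(new HashMap<String, String>());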
-
getSchema
public org.apache.avro.Schema getSchema()
This method supports the Avro framework and is not intended to be called directly by the user.
Specified by:
    getSchema in interface org.apache.avro.generic.GenericContainer
Returns:
    The schema object describing this class.
-
get
public Object get(int index)
This method supports the Avro framework and is not intended to be called directly by the user.
Specified by:
    get in interface org.apache.avro.generic.IndexedRecord
Parameters:
    index - the position of the field to get
Returns:
    The value of the field with the given index.
Throws:
    IndexOutOfBoundsException
-
put
public void put(int index, Object value)
This method supports the Avro framework and is not intended to be called directly by the user.
Specified by:
    put in interface org.apache.avro.generic.IndexedRecord
Parameters:
    index - the position of the field to set
    value - the value to set
Throws:
    IndexOutOfBoundsException
-
-