Package com.gpudb.protocol
Class CreateDatasinkRequest
- java.lang.Object
  - com.gpudb.protocol.CreateDatasinkRequest

All Implemented Interfaces:
org.apache.avro.generic.GenericContainer, org.apache.avro.generic.IndexedRecord
public class CreateDatasinkRequest extends Object implements org.apache.avro.generic.IndexedRecord
A set of parameters for GPUdb.createDatasink. Creates a data sink, which contains the destination information for a data sink that is external to the database.
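As a minimal sketch of typical usage: construct the request, then submit it through GPUdb.createDatasink (the method this class documents). The endpoint URL, sink name, destination, and the CreateDatasinkResponse type below are assumptions to adjust for your environment:

    import java.util.HashMap;
    import java.util.Map;

    import com.gpudb.GPUdb;
    import com.gpudb.GPUdbException;
    import com.gpudb.protocol.CreateDatasinkRequest;
    import com.gpudb.protocol.CreateDatasinkResponse;

    public class CreateDatasinkExample {
        public static void main(String[] args) throws GPUdbException {
            // Hypothetical database endpoint; replace with your cluster's URL.
            GPUdb gpudb = new GPUdb("http://localhost:9191");

            // No options set; the default is an empty map.
            Map<String, String> options = new HashMap<>();

            // Hypothetical sink name and Kafka destination.
            CreateDatasinkRequest request = new CreateDatasinkRequest(
                    "my_datasink",
                    "kafka://broker.example.com:9092",
                    options);

            // Assumed response type, following the request/response
            // naming convention of this package.
            CreateDatasinkResponse response = gpudb.createDatasink(request);
        }
    }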
-
-
Nested Class Summary

- static class CreateDatasinkRequest.Options: A set of string constants for the CreateDatasinkRequest parameter options.
-
Constructor Summary

- CreateDatasinkRequest(): Constructs a CreateDatasinkRequest object with default parameters.
- CreateDatasinkRequest(String name, String destination, Map<String,String> options): Constructs a CreateDatasinkRequest object with the specified parameters.
-
Method Summary

- boolean equals(Object obj)
- Object get(int index): This method supports the Avro framework and is not intended to be called directly by the user.
- static org.apache.avro.Schema getClassSchema(): This method supports the Avro framework and is not intended to be called directly by the user.
- String getDestination(): Destination for the output data in format 'storage_provider_type://path[:port]'.
- String getName(): Name of the data sink to be created.
- Map<String,String> getOptions(): Optional parameters.
- org.apache.avro.Schema getSchema(): This method supports the Avro framework and is not intended to be called directly by the user.
- int hashCode()
- void put(int index, Object value): This method supports the Avro framework and is not intended to be called directly by the user.
- CreateDatasinkRequest setDestination(String destination): Destination for the output data in format 'storage_provider_type://path[:port]'.
- CreateDatasinkRequest setName(String name): Name of the data sink to be created.
- CreateDatasinkRequest setOptions(Map<String,String> options): Optional parameters.
- String toString()
-
-
-
Constructor Detail
-
CreateDatasinkRequest
public CreateDatasinkRequest()
Constructs a CreateDatasinkRequest object with default parameters.
-
CreateDatasinkRequest
public CreateDatasinkRequest(String name, String destination, Map<String,String> options)
Constructs a CreateDatasinkRequest object with the specified parameters.

Parameters:
name - Name of the data sink to be created.
destination - Destination for the output data in format 'storage_provider_type://path[:port]'. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'http', 'https', 'jdbc', 'kafka', and 's3'.
options - Optional parameters.
- CONNECTION_TIMEOUT: Timeout in seconds for connecting to this data sink
- WAIT_TIMEOUT: Timeout in seconds for waiting for a response from this data sink
- CREDENTIAL: Name of the credential object to be used in this data sink
- S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data sink
- S3_REGION: Name of the Amazon S3 region where the given bucket is located
- S3_VERIFY_SSL: Whether to verify SSL connections. Supported values:
  - TRUE: Connect with SSL verification
  - FALSE: Connect without verifying the SSL connection; for testing purposes, bypassing TLS errors, self-signed certificates, etc.
  The default value is TRUE.
- S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 sink. Supported values:
  - TRUE: The request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL.
  - FALSE: Use path-style URI for requests.
  The default value is TRUE.
- S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has the required S3 permissions and can be assumed for the given S3 IAM user
- S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used for encrypting data
- S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data
- S3_ENCRYPTION_TYPE: Server-side encryption type
- S3_KMS_KEY_ID: KMS key
- HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
- HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user
- HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
- AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data sink; this is valid only if tenant_id is specified
- AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data sink
- AZURE_TENANT_ID: Active Directory tenant ID (or directory ID)
- AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data sink
- AZURE_OAUTH_TOKEN: OAuth token to access the given storage container
- GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data sink
- GCS_PROJECT_ID: Name of the Google Cloud project to use as the data sink
- GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data sink
- JDBC_DRIVER_JAR_PATH: JDBC driver jar file location
- JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class
- KAFKA_TOPIC_NAME: Name of the Kafka topic to publish to if destination is a Kafka broker
- MAX_BATCH_SIZE: Maximum number of records per notification message. The default value is '1'.
- MAX_MESSAGE_SIZE: Maximum size in bytes of each notification message. The default value is '1000000'.
- JSON_FORMAT: The desired format of JSON-encoded notification messages. The default value is FLAT.
- USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default; if this is set, cloud provider user settings will be used instead. Supported values: TRUE, FALSE. The default value is FALSE.
- USE_HTTPS: Use HTTPS to connect to the data sink if true, otherwise use HTTP. Supported values: TRUE, FALSE. The default value is TRUE.
- SKIP_VALIDATION: Bypass validation of the connection to this data sink. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map.
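For instance, an S3 sink could be configured through the options map using the string constants from the nested CreateDatasinkRequest.Options class; a sketch with placeholder bucket, region, and credential names (imports as in the earlier sketch):

    Map<String, String> options = new HashMap<>();
    options.put(CreateDatasinkRequest.Options.CONNECTION_TIMEOUT, "30");
    options.put(CreateDatasinkRequest.Options.CREDENTIAL, "my_s3_credential"); // placeholder credential object
    options.put(CreateDatasinkRequest.Options.S3_BUCKET_NAME, "my-bucket");    // placeholder bucket
    options.put(CreateDatasinkRequest.Options.S3_REGION, "us-east-1");         // placeholder region

    CreateDatasinkRequest request = new CreateDatasinkRequest(
            "s3_sink",                       // placeholder sink name
            "s3://my-bucket/notifications",  // placeholder destination
            options);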
-
-
Method Detail
-
getClassSchema
public static org.apache.avro.Schema getClassSchema()
This method supports the Avro framework and is not intended to be called directly by the user.

Returns:
The schema for the class.
-
getName
public String getName()
Name of the data sink to be created.

Returns:
The current value of name.
-
setName
public CreateDatasinkRequest setName(String name)
Name of the data sink to be created.

Parameters:
name - The new value for name.

Returns:
this to mimic the builder pattern.
-
getDestination
public String getDestination()
Destination for the output data in format 'storage_provider_type://path[:port]'. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'http', 'https', 'jdbc', 'kafka', and 's3'.

Returns:
The current value of destination.
-
setDestination
public CreateDatasinkRequest setDestination(String destination)
Destination for the output data in format 'storage_provider_type://path[:port]'. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'http', 'https', 'jdbc', 'kafka', and 's3'.

Parameters:
destination - The new value for destination.

Returns:
this to mimic the builder pattern.
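To illustrate the 'storage_provider_type://path[:port]' format, a few destination strings of the documented shapes; hosts, ports, and paths are placeholders, and each call simply overwrites the previous destination:

    request.setDestination("s3://my-bucket/sink-path");                    // Amazon S3
    request.setDestination("hdfs://namenode.example.com:8020/sink-path");  // HDFS, with port
    request.setDestination("kafka://broker.example.com:9092");             // Kafka broker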
-
getOptions
public Map<String,String> getOptions()
Optional parameters.
- CONNECTION_TIMEOUT: Timeout in seconds for connecting to this data sink
- WAIT_TIMEOUT: Timeout in seconds for waiting for a response from this data sink
- CREDENTIAL: Name of the credential object to be used in this data sink
- S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data sink
- S3_REGION: Name of the Amazon S3 region where the given bucket is located
- S3_VERIFY_SSL: Whether to verify SSL connections. Supported values:
  - TRUE: Connect with SSL verification
  - FALSE: Connect without verifying the SSL connection; for testing purposes, bypassing TLS errors, self-signed certificates, etc.
  The default value is TRUE.
- S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 sink. Supported values:
  - TRUE: The request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL.
  - FALSE: Use path-style URI for requests.
  The default value is TRUE.
- S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has the required S3 permissions and can be assumed for the given S3 IAM user
- S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used for encrypting data
- S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data
- S3_ENCRYPTION_TYPE: Server-side encryption type
- S3_KMS_KEY_ID: KMS key
- HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
- HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user
- HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
- AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data sink; this is valid only if tenant_id is specified
- AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data sink
- AZURE_TENANT_ID: Active Directory tenant ID (or directory ID)
- AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data sink
- AZURE_OAUTH_TOKEN: OAuth token to access the given storage container
- GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data sink
- GCS_PROJECT_ID: Name of the Google Cloud project to use as the data sink
- GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data sink
- JDBC_DRIVER_JAR_PATH: JDBC driver jar file location
- JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class
- KAFKA_TOPIC_NAME: Name of the Kafka topic to publish to if destination is a Kafka broker
- MAX_BATCH_SIZE: Maximum number of records per notification message. The default value is '1'.
- MAX_MESSAGE_SIZE: Maximum size in bytes of each notification message. The default value is '1000000'.
- JSON_FORMAT: The desired format of JSON-encoded notification messages. The default value is FLAT.
- USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default; if this is set, cloud provider user settings will be used instead. Supported values: TRUE, FALSE. The default value is FALSE.
- USE_HTTPS: Use HTTPS to connect to the data sink if true, otherwise use HTTP. Supported values: TRUE, FALSE. The default value is TRUE.
- SKIP_VALIDATION: Bypass validation of the connection to this data sink. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map.

Returns:
The current value of options.
-
setOptions
public CreateDatasinkRequest setOptions(Map<String,String> options)
Optional parameters.
- CONNECTION_TIMEOUT: Timeout in seconds for connecting to this data sink
- WAIT_TIMEOUT: Timeout in seconds for waiting for a response from this data sink
- CREDENTIAL: Name of the credential object to be used in this data sink
- S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data sink
- S3_REGION: Name of the Amazon S3 region where the given bucket is located
- S3_VERIFY_SSL: Whether to verify SSL connections. Supported values:
  - TRUE: Connect with SSL verification
  - FALSE: Connect without verifying the SSL connection; for testing purposes, bypassing TLS errors, self-signed certificates, etc.
  The default value is TRUE.
- S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 sink. Supported values:
  - TRUE: The request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL.
  - FALSE: Use path-style URI for requests.
  The default value is TRUE.
- S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has the required S3 permissions and can be assumed for the given S3 IAM user
- S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used for encrypting data
- S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data
- S3_ENCRYPTION_TYPE: Server-side encryption type
- S3_KMS_KEY_ID: KMS key
- HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
- HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user
- HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
- AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data sink; this is valid only if tenant_id is specified
- AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data sink
- AZURE_TENANT_ID: Active Directory tenant ID (or directory ID)
- AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data sink
- AZURE_OAUTH_TOKEN: OAuth token to access the given storage container
- GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data sink
- GCS_PROJECT_ID: Name of the Google Cloud project to use as the data sink
- GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data sink
- JDBC_DRIVER_JAR_PATH: JDBC driver jar file location
- JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class
- KAFKA_TOPIC_NAME: Name of the Kafka topic to publish to if destination is a Kafka broker
- MAX_BATCH_SIZE: Maximum number of records per notification message. The default value is '1'.
- MAX_MESSAGE_SIZE: Maximum size in bytes of each notification message. The default value is '1000000'.
- JSON_FORMAT: The desired format of JSON-encoded notification messages. The default value is FLAT.
- USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default; if this is set, cloud provider user settings will be used instead. Supported values: TRUE, FALSE. The default value is FALSE.
- USE_HTTPS: Use HTTPS to connect to the data sink if true, otherwise use HTTP. Supported values: TRUE, FALSE. The default value is TRUE.
- SKIP_VALIDATION: Bypass validation of the connection to this data sink. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map.

Parameters:
options - The new value for options.

Returns:
this to mimic the builder pattern.
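Because each setter returns this, a request can also be assembled fluently; a sketch with a placeholder Kafka destination and topic, assuming the Options constants described above:

    Map<String, String> options = new HashMap<>();
    options.put(CreateDatasinkRequest.Options.KAFKA_TOPIC_NAME, "my_topic"); // placeholder topic
    options.put(CreateDatasinkRequest.Options.MAX_BATCH_SIZE, "100");        // records per notification message

    CreateDatasinkRequest request = new CreateDatasinkRequest()
            .setName("kafka_sink")                              // placeholder sink name
            .setDestination("kafka://broker.example.com:9092")  // placeholder broker
            .setOptions(options);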
-
getSchema
public org.apache.avro.Schema getSchema()
This method supports the Avro framework and is not intended to be called directly by the user.

Specified by:
getSchema in interface org.apache.avro.generic.GenericContainer

Returns:
The schema object describing this class.
-
get
public Object get(int index)
This method supports the Avro framework and is not intended to be called directly by the user.

Specified by:
get in interface org.apache.avro.generic.IndexedRecord

Parameters:
index - the position of the field to get

Returns:
The value of the field with the given index.

Throws:
IndexOutOfBoundsException
-
put
public void put(int index, Object value)

This method supports the Avro framework and is not intended to be called directly by the user.

Specified by:
put in interface org.apache.avro.generic.IndexedRecord

Parameters:
index - the position of the field to set
value - the value to set

Throws:
IndexOutOfBoundsException
-
-