Package com.gpudb.protocol
Class CreateDatasourceRequest

java.lang.Object
    com.gpudb.protocol.CreateDatasourceRequest

All Implemented Interfaces:
    org.apache.avro.generic.GenericContainer, org.apache.avro.generic.IndexedRecord

public class CreateDatasourceRequest extends Object implements org.apache.avro.generic.IndexedRecord

A set of parameters for GPUdb.createDatasource. Creates a data source, which contains the location and connection information for a data store that is external to the database.
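As a quick orientation, here is a minimal usage sketch. The connection URL, data source name, and S3 location are placeholder values, and the sketch assumes the usual GPUdb Java API pattern of passing the request object to GPUdb.createDatasource:

import java.util.HashMap;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.CreateDatasourceRequest;

public class CreateDatasourceExample {
    public static void main(String[] args) throws GPUdbException {
        // Connect to the database; the URL is a placeholder.
        GPUdb gpudb = new GPUdb("http://localhost:9191");

        // Build the request: an S3-backed data source with default options.
        CreateDatasourceRequest request = new CreateDatasourceRequest(
                "example_s3_source",            // name (placeholder)
                "s3://example-bucket",          // location (placeholder)
                "",                             // userName; may be an empty string
                "",                             // password; may be an empty string
                new HashMap<String, String>()); // options; empty map keeps defaults

        // Submit the request through the GPUdb.createDatasource endpoint.
        gpudb.createDatasource(request);
    }
}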
-
Nested Class Summary
static class CreateDatasourceRequest.Options
    A set of string constants for the CreateDatasourceRequest parameter options.
-
Constructor Summary
CreateDatasourceRequest()
    Constructs a CreateDatasourceRequest object with default parameters.

CreateDatasourceRequest(String name, String location, String userName, String password, Map<String,String> options)
    Constructs a CreateDatasourceRequest object with the specified parameters.
-
Method Summary
boolean equals(Object obj)

Object get(int index)
    This method supports the Avro framework and is not intended to be called directly by the user.

static org.apache.avro.Schema getClassSchema()
    This method supports the Avro framework and is not intended to be called directly by the user.

String getLocation()
    Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format.

String getName()
    Name of the data source to be created.

Map<String,String> getOptions()
    Optional parameters.

String getPassword()
    Password for the remote system user; may be an empty string.

org.apache.avro.Schema getSchema()
    This method supports the Avro framework and is not intended to be called directly by the user.

String getUserName()
    Name of the remote system user; may be an empty string.

int hashCode()

void put(int index, Object value)
    This method supports the Avro framework and is not intended to be called directly by the user.

CreateDatasourceRequest setLocation(String location)
    Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format.

CreateDatasourceRequest setName(String name)
    Name of the data source to be created.

CreateDatasourceRequest setOptions(Map<String,String> options)
    Optional parameters.

CreateDatasourceRequest setPassword(String password)
    Password for the remote system user; may be an empty string.

CreateDatasourceRequest setUserName(String userName)
    Name of the remote system user; may be an empty string.

String toString()
-
Constructor Detail
-
CreateDatasourceRequest
public CreateDatasourceRequest()
Constructs a CreateDatasourceRequest object with default parameters.
-
CreateDatasourceRequest
public CreateDatasourceRequest(String name, String location, String userName, String password, Map<String,String> options)
Constructs a CreateDatasourceRequest object with the specified parameters.

Parameters:
    name - Name of the data source to be created.
    location - Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'jdbc', 'kafka', 'confluent', and 's3'.
    userName - Name of the remote system user; may be an empty string.
    password - Password for the remote system user; may be an empty string.
    options - Optional parameters.
        SKIP_VALIDATION: Bypass validation of connection to remote source. Supported values: TRUE, FALSE. The default value is FALSE.
        CONNECTION_TIMEOUT: Timeout in seconds for connecting to this storage provider.
        WAIT_TIMEOUT: Timeout in seconds for reading from this storage provider.
        CREDENTIAL: Name of the credential object to be used in the data source.
        S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data source.
        S3_REGION: Name of the Amazon S3 region where the given bucket is located.
        S3_VERIFY_SSL: Whether to verify SSL connections. Supported values:
            TRUE: Connect with SSL verification.
            FALSE: Connect without verifying the SSL connection; for testing purposes, bypassing TLS errors, self-signed certificates, etc.
            The default value is TRUE.
        S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 source. Supported values:
            TRUE: The request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL.
            FALSE: Use path-style URIs for requests.
            The default value is TRUE.
        S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has the required S3 permissions and can be assumed for the given S3 IAM user.
        S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used for encrypting data.
        S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data.
        HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
        HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user.
        HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
        AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data source; this is valid only if tenant_id is specified.
        AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data source.
        AZURE_TENANT_ID: Active Directory tenant ID (or directory ID).
        AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data source.
        AZURE_OAUTH_TOKEN: OAuth token to access the given storage container.
        GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data source.
        GCS_PROJECT_ID: Name of the Google Cloud project to use as the data source.
        GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data source.
        IS_STREAM: Whether to load from Azure/GCS/S3 continuously as a stream. Supported values: TRUE, FALSE. The default value is FALSE.
        KAFKA_TOPIC_NAME: Name of the Kafka topic to use as the data source.
        JDBC_DRIVER_JAR_PATH: JDBC driver jar file location. This may be a KIFS file.
        JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class.
        ANONYMOUS: Use an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify use_managed_credentials for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
        USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, cloud provider user settings will be used instead. Supported values: TRUE, FALSE. The default value is FALSE.
        USE_HTTPS: Use HTTPS to connect to the data source if true, otherwise use HTTP. Supported values: TRUE, FALSE. The default value is TRUE.
        SCHEMA_REGISTRY_LOCATION: Location of the Confluent Schema Registry in '[storage_path[:storage_port]]' format.
        SCHEMA_REGISTRY_CREDENTIAL: Confluent Schema Registry credential object name.
        SCHEMA_REGISTRY_PORT: Confluent Schema Registry port (optional).
        SCHEMA_REGISTRY_CONNECTION_RETRIES: Number of retries for connecting to the Confluent Schema Registry.
        SCHEMA_REGISTRY_CONNECTION_TIMEOUT: Confluent Schema Registry connection timeout (in seconds).
        The default value is an empty Map.
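For illustration, here is a hedged sketch of building the options map using the string constants from CreateDatasourceRequest.Options; the bucket, region, and credential names are placeholders, not values from this documentation:

import java.util.HashMap;
import java.util.Map;

import com.gpudb.protocol.CreateDatasourceRequest;

public class S3DatasourceRequestExample {
    public static void main(String[] args) {
        // Option keys come from the Options constants class; the values
        // (bucket, region, credential name) are illustrative placeholders.
        Map<String, String> options = new HashMap<>();
        options.put(CreateDatasourceRequest.Options.S3_BUCKET_NAME, "example-bucket");
        options.put(CreateDatasourceRequest.Options.S3_REGION, "us-east-1");
        options.put(CreateDatasourceRequest.Options.CREDENTIAL, "example_s3_credential");

        CreateDatasourceRequest request = new CreateDatasourceRequest(
                "example_s3_source",   // name (placeholder)
                "s3://example-bucket", // location: 'storage_provider_type://[storage_path[:storage_port]]'
                "",                    // userName; empty when a credential object is used
                "",                    // password; empty when a credential object is used
                options);

        System.out.println(request); // toString() renders the request fields
    }
}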
-
Method Detail
-
getClassSchema
public static org.apache.avro.Schema getClassSchema()
This method supports the Avro framework and is not intended to be called directly by the user.

Returns:
    The schema for the class.
-
getName
public String getName()
Name of the data source to be created.

Returns:
    The current value of name.
-
setName
public CreateDatasourceRequest setName(String name)
Name of the data source to be created.

Parameters:
    name - The new value for name.

Returns:
    this to mimic the builder pattern.
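Because every setter returns this, a request can also be configured fluently from the default constructor. A short sketch, with placeholder names and a Kafka location chosen purely for illustration:

import java.util.HashMap;

import com.gpudb.protocol.CreateDatasourceRequest;

public class BuilderStyleExample {
    public static void main(String[] args) {
        // Each setter returns this, so the calls can be chained builder-style.
        CreateDatasourceRequest request = new CreateDatasourceRequest()
                .setName("example_kafka_source")                // placeholder
                .setLocation("kafka://broker.example.com:9092") // placeholder
                .setUserName("")                                // may be an empty string
                .setPassword("")                                // may be an empty string
                .setOptions(new HashMap<String, String>());

        System.out.println(request);
    }
}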
-
getLocation
public String getLocation()
Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'jdbc', 'kafka', 'confluent', and 's3'.

Returns:
    The current value of location.
-
setLocation
public CreateDatasourceRequest setLocation(String location)
Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'jdbc', 'kafka', 'confluent', and 's3'.

Parameters:
    location - The new value for location.

Returns:
    this to mimic the builder pattern.
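To make the location format concrete, a few illustrative strings; the hosts, buckets, and ports are placeholders, not documented values:

// Illustrative location strings following the
// 'storage_provider_type://[storage_path[:storage_port]]' format.
String s3Location    = "s3://example-bucket";              // Amazon S3
String hdfsLocation  = "hdfs://namenode.example.com:8020"; // HDFS with a port
String kafkaLocation = "kafka://broker.example.com:9092";  // Kafka broker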
-
getUserName
public String getUserName()
Name of the remote system user; may be an empty string.

Returns:
    The current value of userName.
-
setUserName
public CreateDatasourceRequest setUserName(String userName)
Name of the remote system user; may be an empty string.

Parameters:
    userName - The new value for userName.

Returns:
    this to mimic the builder pattern.
-
getPassword
public String getPassword()
Password for the remote system user; may be an empty string.

Returns:
    The current value of password.
-
setPassword
public CreateDatasourceRequest setPassword(String password)
Password for the remote system user; may be an empty string.

Parameters:
    password - The new value for password.

Returns:
    this to mimic the builder pattern.
-
getOptions
public Map<String,String> getOptions()
Optional parameters. The supported option names, their values, and their defaults are listed under the parameterized constructor above and are defined as string constants in CreateDatasourceRequest.Options. The default value is an empty Map.

Returns:
    The current value of options.
-
setOptions
public CreateDatasourceRequest setOptions(Map<String,String> options)
Optional parameters. The supported option names, their values, and their defaults are listed under the parameterized constructor above and are defined as string constants in CreateDatasourceRequest.Options. The default value is an empty Map.

Parameters:
    options - The new value for options.

Returns:
    this to mimic the builder pattern.
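A hedged sketch of setting options on a request; the timeout values and HDFS names are placeholders, and plain "true"/"false" string literals are used for the boolean-valued options described above:

import java.util.HashMap;
import java.util.Map;

import com.gpudb.protocol.CreateDatasourceRequest;

public class SetOptionsExample {
    public static void main(String[] args) {
        Map<String, String> options = new HashMap<>();
        // Skip the connection check, e.g. when the remote source is not reachable yet.
        options.put(CreateDatasourceRequest.Options.SKIP_VALIDATION, "true");
        // Both timeouts are given in seconds, per the option descriptions above.
        options.put(CreateDatasourceRequest.Options.CONNECTION_TIMEOUT, "30");
        options.put(CreateDatasourceRequest.Options.WAIT_TIMEOUT, "120");

        CreateDatasourceRequest request = new CreateDatasourceRequest()
                .setName("example_hdfs_source")                  // placeholder
                .setLocation("hdfs://namenode.example.com:8020") // placeholder
                .setUserName("hdfs_user")                        // placeholder
                .setPassword("")                                 // may be an empty string
                .setOptions(options);

        System.out.println(request.getOptions());
    }
}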
-
getSchema
public org.apache.avro.Schema getSchema()
This method supports the Avro framework and is not intended to be called directly by the user.

Specified by:
    getSchema in interface org.apache.avro.generic.GenericContainer

Returns:
    The schema object describing this class.
-
get
public Object get(int index)
This method supports the Avro framework and is not intended to be called directly by the user.

Specified by:
    get in interface org.apache.avro.generic.IndexedRecord

Parameters:
    index - the position of the field to get

Returns:
    The value of the field with the given index.

Throws:
    IndexOutOfBoundsException
-
put
public void put(int index, Object value)

This method supports the Avro framework and is not intended to be called directly by the user.

Specified by:
    put in interface org.apache.avro.generic.IndexedRecord

Parameters:
    index - the position of the field to set
    value - the value to set

Throws:
    IndexOutOfBoundsException