Package com.gpudb.protocol
Class AlterDatasourceRequest
java.lang.Object
  com.gpudb.protocol.AlterDatasourceRequest

All Implemented Interfaces:
org.apache.avro.generic.GenericContainer, org.apache.avro.generic.IndexedRecord

public class AlterDatasourceRequest extends Object implements org.apache.avro.generic.IndexedRecord

A set of parameters for GPUdb.alterDatasource. Alters the properties of an existing data source.
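A minimal usage sketch (assuming an established GPUdb connection named db and an existing data source named "my_datasource"; both names are illustrative, not part of this API):

    import java.util.HashMap;
    import java.util.Map;

    // Point the data source at a new remote location.
    Map<String, String> updates = new HashMap<>();
    updates.put(AlterDatasourceRequest.DatasourceUpdatesMap.LOCATION,
                "s3://example-bucket");  // hypothetical location

    AlterDatasourceRequest request = new AlterDatasourceRequest(
            "my_datasource", updates, new HashMap<String, String>());

    // Submit through the assumed connection; this request class parameterizes
    // the GPUdb.alterDatasource call.
    db.alterDatasource(request);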
-
-
Nested Class Summary
- static class AlterDatasourceRequest.DatasourceUpdatesMap: A set of string constants for the AlterDatasourceRequest parameter datasourceUpdatesMap.
-
Constructor Summary
- AlterDatasourceRequest(): Constructs an AlterDatasourceRequest object with default parameters.
- AlterDatasourceRequest(String name, Map<String,String> datasourceUpdatesMap, Map<String,String> options): Constructs an AlterDatasourceRequest object with the specified parameters.
-
Method Summary
- boolean equals(Object obj)
- Object get(int index): This method supports the Avro framework and is not intended to be called directly by the user.
- static org.apache.avro.Schema getClassSchema(): This method supports the Avro framework and is not intended to be called directly by the user.
- Map<String,String> getDatasourceUpdatesMap(): Map containing the properties of the data source to be updated.
- String getName(): Name of the data source to be altered.
- Map<String,String> getOptions(): Optional parameters.
- org.apache.avro.Schema getSchema(): This method supports the Avro framework and is not intended to be called directly by the user.
- int hashCode()
- void put(int index, Object value): This method supports the Avro framework and is not intended to be called directly by the user.
- AlterDatasourceRequest setDatasourceUpdatesMap(Map<String,String> datasourceUpdatesMap): Map containing the properties of the data source to be updated.
- AlterDatasourceRequest setName(String name): Name of the data source to be altered.
- AlterDatasourceRequest setOptions(Map<String,String> options): Optional parameters.
- String toString()
-
Constructor Detail
-
AlterDatasourceRequest
public AlterDatasourceRequest()
Constructs an AlterDatasourceRequest object with default parameters.
-
AlterDatasourceRequest
public AlterDatasourceRequest(String name, Map<String,String> datasourceUpdatesMap, Map<String,String> options)
Constructs an AlterDatasourceRequest object with the specified parameters.

Parameters:

name - Name of the data source to be altered. Must be an existing data source.

datasourceUpdatesMap - Map containing the properties of the data source to be updated. Error if empty. Valid keys (a usage sketch follows this parameter list):
- LOCATION: Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'jdbc', 'kafka', 'confluent', and 's3'.
- USER_NAME: Name of the remote system user; may be an empty string.
- PASSWORD: Password for the remote system user; may be an empty string.
- SKIP_VALIDATION: Bypass validation of the connection to the remote source. Supported values: TRUE, FALSE. The default value is FALSE.
- CONNECTION_TIMEOUT: Timeout in seconds for connecting to this storage provider.
- WAIT_TIMEOUT: Timeout in seconds for reading from this storage provider.
- CREDENTIAL: Name of the credential object to be used in the data source.
- S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data source.
- S3_REGION: Name of the Amazon S3 region where the given bucket is located.
- S3_VERIFY_SSL: Whether to verify SSL connections. Supported values: TRUE (connect with SSL verification), FALSE (connect without verifying the SSL connection; for testing purposes, bypassing TLS errors, self-signed certificates, etc.). The default value is TRUE.
- S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 source. Supported values: TRUE (the request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL), FALSE (use path-style URIs for requests). The default value is TRUE.
- S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has the required S3 permissions and can be assumed for the given S3 IAM user.
- S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used to encrypt data.
- S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data.
- HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
- HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user.
- HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
- AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data source; valid only if tenant_id is specified.
- AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data source.
- AZURE_TENANT_ID: Active Directory tenant ID (or directory ID).
- AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data source.
- AZURE_OAUTH_TOKEN: OAuth token to access the given storage container.
- GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data source.
- GCS_PROJECT_ID: Name of the Google Cloud project to use as the data source.
- GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data source.
- JDBC_DRIVER_JAR_PATH: JDBC driver jar file location. This may be a KIFS file.
- JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class.
- KAFKA_URL: The publicly-accessible full path URL to the Kafka broker, e.g., 'http://172.123.45.67:9300'.
- KAFKA_TOPIC_NAME: Name of the Kafka topic to use as the data source.
- ANONYMOUS: Create an anonymous connection to the storage provider -- DEPRECATED: this is now the default; specify use_managed_credentials for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
- USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, cloud provider user settings will be used. Supported values: TRUE, FALSE. The default value is FALSE.
- USE_HTTPS: Use HTTPS to connect to the data source if true; otherwise, use HTTP. Supported values: TRUE, FALSE. The default value is TRUE.
- SCHEMA_NAME: Updates the schema name. If SCHEMA_NAME doesn't exist, an error will be thrown. If SCHEMA_NAME is empty, then the user's default schema will be used.
- SCHEMA_REGISTRY_CONNECTION_RETRIES: Confluent Schema Registry connection retries.
- SCHEMA_REGISTRY_CONNECTION_TIMEOUT: Confluent Schema Registry connection timeout (in seconds).
- SCHEMA_REGISTRY_CREDENTIAL: Confluent Schema Registry credential object name.
- SCHEMA_REGISTRY_LOCATION: Location of the Confluent Schema Registry in '[storage_path[:storage_port]]' format.
- SCHEMA_REGISTRY_PORT: Confluent Schema Registry port (optional).

options - Optional parameters.
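For illustration, a minimal sketch of building the request with the DatasourceUpdatesMap constants; the data source name, credential name, bucket, and region below are assumptions, not values from this documentation:

    import java.util.HashMap;
    import java.util.Map;

    // Properties to change; an empty map would be an error.
    Map<String, String> updates = new HashMap<>();
    updates.put(AlterDatasourceRequest.DatasourceUpdatesMap.CREDENTIAL, "s3_cred");        // hypothetical credential object
    updates.put(AlterDatasourceRequest.DatasourceUpdatesMap.S3_BUCKET_NAME, "my-bucket");  // hypothetical bucket
    updates.put(AlterDatasourceRequest.DatasourceUpdatesMap.S3_REGION, "us-east-1");       // hypothetical region

    AlterDatasourceRequest request = new AlterDatasourceRequest(
            "my_s3_source",                   // assumed existing data source
            updates,
            new HashMap<String, String>());   // no optional parameters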
-
-
Method Detail
-
getClassSchema
public static org.apache.avro.Schema getClassSchema()
This method supports the Avro framework and is not intended to be called directly by the user.

Returns:
The schema for the class.
-
getName
public String getName()
Name of the data source to be altered. Must be an existing data source.

Returns:
The current value of name.
-
setName
public AlterDatasourceRequest setName(String name)
Name of the data source to be altered. Must be an existing data source.

Parameters:
name - The new value for name.

Returns:
this to mimic the builder pattern.
-
getDatasourceUpdatesMap
public Map<String,String> getDatasourceUpdatesMap()
Map containing the properties of the data source to be updated. Error if empty. Valid keys and values are listed in the parameterized constructor documentation above.
Returns:
The current value of datasourceUpdatesMap.
-
setDatasourceUpdatesMap
public AlterDatasourceRequest setDatasourceUpdatesMap(Map<String,String> datasourceUpdatesMap)
Map containing the properties of the data source to be updated. Error if empty. Valid keys and values are listed in the parameterized constructor documentation above.
Parameters:
datasourceUpdatesMap - The new value for datasourceUpdatesMap.

Returns:
this to mimic the builder pattern.
-
getOptions
public Map<String,String> getOptions()
Optional parameters.

Returns:
The current value of options.
-
setOptions
public AlterDatasourceRequest setOptions(Map<String,String> options)
Optional parameters.

Parameters:
options - The new value for options.

Returns:
this to mimic the builder pattern.
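Because each setter returns this, a request can also be assembled fluently after the default constructor. A brief sketch with illustrative names (java.util.Map and java.util.HashMap imported as in the earlier sketches):

    // Hypothetical update: switch the Kafka topic backing the data source.
    Map<String, String> updates = new HashMap<>();
    updates.put(AlterDatasourceRequest.DatasourceUpdatesMap.KAFKA_TOPIC_NAME, "new_topic");

    AlterDatasourceRequest request = new AlterDatasourceRequest()
            .setName("my_kafka_source")                 // assumed existing data source
            .setDatasourceUpdatesMap(updates)
            .setOptions(new HashMap<String, String>());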
-
getSchema
public org.apache.avro.Schema getSchema()
This method supports the Avro framework and is not intended to be called directly by the user.

Specified by:
getSchema in interface org.apache.avro.generic.GenericContainer

Returns:
The schema object describing this class.
-
get
public Object get(int index)
This method supports the Avro framework and is not intended to be called directly by the user.

Specified by:
get in interface org.apache.avro.generic.IndexedRecord

Parameters:
index - the position of the field to get

Returns:
value of the field with the given index.

Throws:
IndexOutOfBoundsException
-
put
public void put(int index, Object value)

This method supports the Avro framework and is not intended to be called directly by the user.

Specified by:
put in interface org.apache.avro.generic.IndexedRecord

Parameters:
index - the position of the field to set
value - the value to set

Throws:
IndexOutOfBoundsException
-
-