Class AlterDatasourceRequest

  • All Implemented Interfaces:
    org.apache.avro.generic.GenericContainer, org.apache.avro.generic.IndexedRecord

    public class AlterDatasourceRequest
    extends Object
    implements org.apache.avro.generic.IndexedRecord
    A set of parameters for GPUdb.alterDatasource.

Alters the properties of an existing data source.

    • Constructor Detail

      • AlterDatasourceRequest

        public AlterDatasourceRequest()
        Constructs an AlterDatasourceRequest object with default parameters.
      • AlterDatasourceRequest

public AlterDatasourceRequest(String name,
                              Map<String,String> datasourceUpdatesMap,
                              Map<String,String> options)
        Constructs an AlterDatasourceRequest object with the specified parameters.
        Parameters:
        name - Name of the data source to be altered. Must be an existing data source.
datasourceUpdatesMap - Map containing the properties of the data source to be updated. An error is returned if the map is empty.
• LOCATION: Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'jdbc', 'kafka', 'confluent', and 's3'.
• USER_NAME: Name of the remote system user; may be an empty string
• PASSWORD: Password for the remote system user; may be an empty string
• SKIP_VALIDATION: Bypass validation of the connection to the remote source. Supported values: TRUE, FALSE. The default value is FALSE.
• CONNECTION_TIMEOUT: Timeout in seconds for connecting to this storage provider
• WAIT_TIMEOUT: Timeout in seconds for reading from this storage provider
• CREDENTIAL: Name of the credential object to be used in the data source
• S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data source
• S3_REGION: Name of the Amazon S3 region where the given bucket is located
• S3_VERIFY_SSL: Whether to verify SSL connections. Supported values:
  • TRUE: Connect with SSL verification
  • FALSE: Connect without verifying the SSL connection; useful for testing purposes, bypassing TLS errors, self-signed certificates, etc.
  The default value is TRUE.
• S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 source. Supported values:
  • TRUE: The request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL.
  • FALSE: Use path-style URIs for requests.
  The default value is TRUE.
• S3_AWS_ROLE_ARN: Amazon IAM Role ARN that has the required S3 permissions and can be assumed by the given S3 IAM user
• S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used for encrypting data
• S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data
• HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
• HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user
• HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
• AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data source; this is valid only if tenant_id is specified
• AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data source
• AZURE_TENANT_ID: Active Directory tenant ID (or directory ID)
• AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data source
• AZURE_OAUTH_TOKEN: OAuth token to access the given storage container
• GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data source
• GCS_PROJECT_ID: Name of the Google Cloud project to use as the data source
• GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data source
• JDBC_DRIVER_JAR_PATH: JDBC driver JAR file location. This may be a KIFS file.
• JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class
• KAFKA_URL: The publicly-accessible full path URL to the Kafka broker, e.g., 'http://172.123.45.67:9300'.
• KAFKA_TOPIC_NAME: Name of the Kafka topic to use as the data source
• ANONYMOUS: Create an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify USE_MANAGED_CREDENTIALS for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
• USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, the cloud provider's user settings will be used instead. Supported values: TRUE, FALSE. The default value is FALSE.
• USE_HTTPS: Use HTTPS to connect to the data source if TRUE; otherwise, use HTTP. Supported values: TRUE, FALSE. The default value is TRUE.
• SCHEMA_NAME: Updates the schema name. If the specified schema doesn't exist, an error will be thrown. If empty, the user's default schema will be used.
• SCHEMA_REGISTRY_CONNECTION_RETRIES: Number of retries for the Confluent Schema Registry connection
• SCHEMA_REGISTRY_CONNECTION_TIMEOUT: Confluent Schema Registry connection timeout (in seconds)
• SCHEMA_REGISTRY_CREDENTIAL: Confluent Schema Registry credential object name.
• SCHEMA_REGISTRY_LOCATION: Location of the Confluent Schema Registry in '[storage_path[:storage_port]]' format.
• SCHEMA_REGISTRY_PORT: Confluent Schema Registry port (optional).
        options - Optional parameters.
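
For illustration, a minimal sketch of building this request with the parameterized constructor follows. The lowercase map key strings, the data source name, and the credential name are assumptions for the example (the uppercase names above are class constants; prefer those for the exact key strings), as is the com.gpudb.protocol package path.

    import java.util.HashMap;
    import java.util.Map;

    import com.gpudb.protocol.AlterDatasourceRequest;  // assumed package path

    public class AlterDatasourceExample {
        public static void main(String[] args) {
            // Properties to update; the map must not be empty.
            Map<String,String> updates = new HashMap<>();
            updates.put("connection_timeout", "30");         // seconds to wait when connecting
            updates.put("wait_timeout", "60");               // seconds to wait when reading
            updates.put("credential", "example_credential"); // hypothetical credential object name

            AlterDatasourceRequest request = new AlterDatasourceRequest(
                    "example_datasource",           // must be an existing data source
                    updates,
                    new HashMap<String,String>());  // no optional parameters
        }
    }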
    • Method Detail

      • getClassSchema

        public static org.apache.avro.Schema getClassSchema()
        This method supports the Avro framework and is not intended to be called directly by the user.
        Returns:
        The schema for the class.
      • getName

        public String getName()
        Name of the data source to be altered. Must be an existing data source.
        Returns:
        The current value of name.
      • setName

public AlterDatasourceRequest setName(String name)
        Name of the data source to be altered. Must be an existing data source.
        Parameters:
        name - The new value for name.
        Returns:
        this to mimic the builder pattern.
      • getDatasourceUpdatesMap

public Map<String,String> getDatasourceUpdatesMap()
Map containing the properties of the data source to be updated. An error is returned if the map is empty.
• LOCATION: Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'jdbc', 'kafka', 'confluent', and 's3'.
• USER_NAME: Name of the remote system user; may be an empty string
• PASSWORD: Password for the remote system user; may be an empty string
• SKIP_VALIDATION: Bypass validation of the connection to the remote source. Supported values: TRUE, FALSE. The default value is FALSE.
• CONNECTION_TIMEOUT: Timeout in seconds for connecting to this storage provider
• WAIT_TIMEOUT: Timeout in seconds for reading from this storage provider
• CREDENTIAL: Name of the credential object to be used in the data source
• S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data source
• S3_REGION: Name of the Amazon S3 region where the given bucket is located
• S3_VERIFY_SSL: Whether to verify SSL connections. Supported values:
  • TRUE: Connect with SSL verification
  • FALSE: Connect without verifying the SSL connection; useful for testing purposes, bypassing TLS errors, self-signed certificates, etc.
  The default value is TRUE.
• S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 source. Supported values:
  • TRUE: The request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL.
  • FALSE: Use path-style URIs for requests.
  The default value is TRUE.
• S3_AWS_ROLE_ARN: Amazon IAM Role ARN that has the required S3 permissions and can be assumed by the given S3 IAM user
• S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used for encrypting data
• S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data
• HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
• HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user
• HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
• AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data source; this is valid only if tenant_id is specified
• AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data source
• AZURE_TENANT_ID: Active Directory tenant ID (or directory ID)
• AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data source
• AZURE_OAUTH_TOKEN: OAuth token to access the given storage container
• GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data source
• GCS_PROJECT_ID: Name of the Google Cloud project to use as the data source
• GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data source
• JDBC_DRIVER_JAR_PATH: JDBC driver JAR file location. This may be a KIFS file.
• JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class
• KAFKA_URL: The publicly-accessible full path URL to the Kafka broker, e.g., 'http://172.123.45.67:9300'.
• KAFKA_TOPIC_NAME: Name of the Kafka topic to use as the data source
• ANONYMOUS: Create an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify USE_MANAGED_CREDENTIALS for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
• USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, the cloud provider's user settings will be used instead. Supported values: TRUE, FALSE. The default value is FALSE.
• USE_HTTPS: Use HTTPS to connect to the data source if TRUE; otherwise, use HTTP. Supported values: TRUE, FALSE. The default value is TRUE.
• SCHEMA_NAME: Updates the schema name. If the specified schema doesn't exist, an error will be thrown. If empty, the user's default schema will be used.
• SCHEMA_REGISTRY_CONNECTION_RETRIES: Number of retries for the Confluent Schema Registry connection
• SCHEMA_REGISTRY_CONNECTION_TIMEOUT: Confluent Schema Registry connection timeout (in seconds)
• SCHEMA_REGISTRY_CREDENTIAL: Confluent Schema Registry credential object name.
• SCHEMA_REGISTRY_LOCATION: Location of the Confluent Schema Registry in '[storage_path[:storage_port]]' format.
• SCHEMA_REGISTRY_PORT: Confluent Schema Registry port (optional).
        Returns:
        The current value of datasourceUpdatesMap.
      • setDatasourceUpdatesMap

public AlterDatasourceRequest setDatasourceUpdatesMap(Map<String,String> datasourceUpdatesMap)
Map containing the properties of the data source to be updated. An error is returned if the map is empty.
• LOCATION: Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'jdbc', 'kafka', 'confluent', and 's3'.
• USER_NAME: Name of the remote system user; may be an empty string
• PASSWORD: Password for the remote system user; may be an empty string
• SKIP_VALIDATION: Bypass validation of the connection to the remote source. Supported values: TRUE, FALSE. The default value is FALSE.
• CONNECTION_TIMEOUT: Timeout in seconds for connecting to this storage provider
• WAIT_TIMEOUT: Timeout in seconds for reading from this storage provider
• CREDENTIAL: Name of the credential object to be used in the data source
• S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data source
• S3_REGION: Name of the Amazon S3 region where the given bucket is located
• S3_VERIFY_SSL: Whether to verify SSL connections. Supported values:
  • TRUE: Connect with SSL verification
  • FALSE: Connect without verifying the SSL connection; useful for testing purposes, bypassing TLS errors, self-signed certificates, etc.
  The default value is TRUE.
• S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 source. Supported values:
  • TRUE: The request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL.
  • FALSE: Use path-style URIs for requests.
  The default value is TRUE.
• S3_AWS_ROLE_ARN: Amazon IAM Role ARN that has the required S3 permissions and can be assumed by the given S3 IAM user
• S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used for encrypting data
• S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data
• HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
• HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user
• HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
• AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data source; this is valid only if tenant_id is specified
• AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data source
• AZURE_TENANT_ID: Active Directory tenant ID (or directory ID)
• AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data source
• AZURE_OAUTH_TOKEN: OAuth token to access the given storage container
• GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data source
• GCS_PROJECT_ID: Name of the Google Cloud project to use as the data source
• GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data source
• JDBC_DRIVER_JAR_PATH: JDBC driver JAR file location. This may be a KIFS file.
• JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class
• KAFKA_URL: The publicly-accessible full path URL to the Kafka broker, e.g., 'http://172.123.45.67:9300'.
• KAFKA_TOPIC_NAME: Name of the Kafka topic to use as the data source
• ANONYMOUS: Create an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify USE_MANAGED_CREDENTIALS for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
• USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, the cloud provider's user settings will be used instead. Supported values: TRUE, FALSE. The default value is FALSE.
• USE_HTTPS: Use HTTPS to connect to the data source if TRUE; otherwise, use HTTP. Supported values: TRUE, FALSE. The default value is TRUE.
• SCHEMA_NAME: Updates the schema name. If the specified schema doesn't exist, an error will be thrown. If empty, the user's default schema will be used.
• SCHEMA_REGISTRY_CONNECTION_RETRIES: Number of retries for the Confluent Schema Registry connection
• SCHEMA_REGISTRY_CONNECTION_TIMEOUT: Confluent Schema Registry connection timeout (in seconds)
• SCHEMA_REGISTRY_CREDENTIAL: Confluent Schema Registry credential object name.
• SCHEMA_REGISTRY_LOCATION: Location of the Confluent Schema Registry in '[storage_path[:storage_port]]' format.
• SCHEMA_REGISTRY_PORT: Confluent Schema Registry port (optional).
        Parameters:
        datasourceUpdatesMap - The new value for datasourceUpdatesMap.
        Returns:
        this to mimic the builder pattern.
      • getOptions

public Map<String,String> getOptions()
        Optional parameters.
        Returns:
        The current value of options.
      • setOptions

public AlterDatasourceRequest setOptions(Map<String,String> options)
        Optional parameters.
        Parameters:
        options - The new value for options.
        Returns:
        this to mimic the builder pattern.
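
Because each setter returns this, a request can also be assembled with chained calls and then submitted through GPUdb.alterDatasource. A minimal sketch follows, assuming a reachable GPUdb endpoint; the URL, data source name, and lowercase map key string are placeholders, and the com.gpudb package paths are assumptions.

    import java.util.HashMap;
    import java.util.Map;

    import com.gpudb.GPUdb;                            // assumed package path
    import com.gpudb.protocol.AlterDatasourceRequest;  // assumed package path

    public class AlterDatasourceSetterExample {
        public static void main(String[] args) throws Exception {
            GPUdb gpudb = new GPUdb("http://localhost:9191");  // placeholder endpoint

            Map<String,String> updates = new HashMap<>();
            updates.put("wait_timeout", "60");  // assumed lowercase key string

            // Each setter returns this, mimicking the builder pattern.
            AlterDatasourceRequest request = new AlterDatasourceRequest()
                    .setName("example_datasource")
                    .setDatasourceUpdatesMap(updates)
                    .setOptions(new HashMap<String,String>());

            gpudb.alterDatasource(request);  // submit the alteration
        }
    }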
      • getSchema

        public org.apache.avro.Schema getSchema()
        This method supports the Avro framework and is not intended to be called directly by the user.
        Specified by:
        getSchema in interface org.apache.avro.generic.GenericContainer
        Returns:
        The schema object describing this class.
      • get

public Object get(int index)
        This method supports the Avro framework and is not intended to be called directly by the user.
        Specified by:
        get in interface org.apache.avro.generic.IndexedRecord
        Parameters:
        index - the position of the field to get
        Returns:
        value of the field with the given index.
        Throws:
        IndexOutOfBoundsException
      • put

public void put(int index,
                Object value)
        This method supports the Avro framework and is not intended to be called directly by the user.
        Specified by:
        put in interface org.apache.avro.generic.IndexedRecord
        Parameters:
        index - the position of the field to set
        value - the value to set
        Throws:
        IndexOutOfBoundsException
      • hashCode

        public int hashCode()
        Overrides:
        hashCode in class Object