public class AlterDatasourceRequest extends Object implements org.apache.avro.generic.IndexedRecord

A set of parameters for GPUdb.alterDatasource.

Alters the properties of an existing data source.
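A minimal end-to-end sketch follows, assuming the usual com.gpudb import paths for the GPUdb client and this class (not shown on this page) and that the DatasourceUpdatesMap constants mirror the documented property keys; the connection URL and data source name are placeholders.

```java
import java.util.HashMap;
import java.util.Map;

import com.gpudb.GPUdb;                            // assumed client import path
import com.gpudb.GPUdbException;                   // assumed exception type
import com.gpudb.protocol.AlterDatasourceRequest;  // assumed import path for this class

public class AlterDatasourceExample {
    public static void main(String[] args) throws GPUdbException {
        // Placeholder connection URL for the database server.
        GPUdb gpudb = new GPUdb("http://localhost:9191");

        // Raise the read timeout on an existing data source.
        Map<String, String> updates = new HashMap<>();
        updates.put(AlterDatasourceRequest.DatasourceUpdatesMap.WAIT_TIMEOUT, "120");

        AlterDatasourceRequest request = new AlterDatasourceRequest(
                "example_datasource",            // must be an existing data source
                updates,
                new HashMap<String, String>());  // no optional parameters

        gpudb.alterDatasource(request);
    }
}
```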
| Modifier and Type | Class and Description |
|---|---|
| static class | AlterDatasourceRequest.DatasourceUpdatesMap: A set of string constants for the AlterDatasourceRequest parameter datasourceUpdatesMap. |
| Constructor and Description |
|---|
| AlterDatasourceRequest(): Constructs an AlterDatasourceRequest object with default parameters. |
| AlterDatasourceRequest(String name, Map<String,String> datasourceUpdatesMap, Map<String,String> options): Constructs an AlterDatasourceRequest object with the specified parameters. |
| Modifier and Type | Method and Description |
|---|---|
| boolean | equals(Object obj) |
| Object | get(int index): This method supports the Avro framework and is not intended to be called directly by the user. |
| static org.apache.avro.Schema | getClassSchema(): This method supports the Avro framework and is not intended to be called directly by the user. |
| Map<String,String> | getDatasourceUpdatesMap(): Map containing the properties of the data source to be updated. |
| String | getName(): Name of the data source to be altered. |
| Map<String,String> | getOptions(): Optional parameters. |
| org.apache.avro.Schema | getSchema(): This method supports the Avro framework and is not intended to be called directly by the user. |
| int | hashCode() |
| void | put(int index, Object value): This method supports the Avro framework and is not intended to be called directly by the user. |
| AlterDatasourceRequest | setDatasourceUpdatesMap(Map<String,String> datasourceUpdatesMap): Map containing the properties of the data source to be updated. |
| AlterDatasourceRequest | setName(String name): Name of the data source to be altered. |
| AlterDatasourceRequest | setOptions(Map<String,String> options): Optional parameters. |
| String | toString() |
public AlterDatasourceRequest()

Constructs an AlterDatasourceRequest object with default parameters.

public AlterDatasourceRequest(String name, Map<String,String> datasourceUpdatesMap, Map<String,String> options)

Constructs an AlterDatasourceRequest object with the specified parameters.
Parameters:

name - Name of the data source to be altered. Must be an existing data source.

datasourceUpdatesMap - Map containing the properties of the data source to be updated. Error if empty. Valid keys are listed below; a construction sketch follows the parameter list.

- LOCATION: Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'jdbc', 'kafka', 'confluent', and 's3'.
- USER_NAME: Name of the remote system user; may be an empty string.
- PASSWORD: Password for the remote system user; may be an empty string.
- SKIP_VALIDATION: Bypass validation of connection to remote source. Supported values: TRUE, FALSE. The default value is FALSE.
- CONNECTION_TIMEOUT: Timeout in seconds for connecting to this storage provider.
- WAIT_TIMEOUT: Timeout in seconds for reading from this storage provider.
- CREDENTIAL: Name of the credential object to be used in the data source.
- S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data source.
- S3_REGION: Name of the Amazon S3 region where the given bucket is located.
- S3_VERIFY_SSL: Whether to verify SSL connections. Supported values: TRUE (connect with SSL verification); FALSE (connect without verifying the SSL connection, e.g., for testing purposes, bypassing TLS errors, or self-signed certificates). The default value is TRUE.
- S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 source. Supported values: TRUE (the request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL); FALSE (use path-style URIs for requests). The default value is TRUE.
- S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has the required S3 permissions and can be assumed by the given S3 IAM user.
- S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used for encrypting data.
- S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data.
- HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
- HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user.
- HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
- AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data source; this is valid only if tenant_id is specified.
- AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data source.
- AZURE_TENANT_ID: Active Directory tenant ID (or directory ID).
- AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data source.
- AZURE_OAUTH_TOKEN: OAuth token to access the given storage container.
- GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data source.
- GCS_PROJECT_ID: Name of the Google Cloud project to use as the data source.
- GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data source.
- JDBC_DRIVER_JAR_PATH: JDBC driver jar file location. This may be a KIFS file.
- JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class.
- KAFKA_URL: The publicly accessible full path URL to the Kafka broker, e.g., 'http://172.123.45.67:9300'.
- KAFKA_TOPIC_NAME: Name of the Kafka topic to use as the data source.
- ANONYMOUS: Create an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify USE_MANAGED_CREDENTIALS for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
- USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, cloud provider user settings will be used instead. Supported values: TRUE, FALSE. The default value is FALSE.
- USE_HTTPS: Use HTTPS to connect to the data source if true; otherwise use HTTP. Supported values: TRUE, FALSE. The default value is TRUE.
- SCHEMA_NAME: Updates the schema name. If SCHEMA_NAME doesn't exist, an error will be thrown. If SCHEMA_NAME is empty, the user's default schema will be used.
- SCHEMA_REGISTRY_CONNECTION_RETRIES: Number of retries for the Confluent Schema Registry connection.
- SCHEMA_REGISTRY_CONNECTION_TIMEOUT: Confluent Schema Registry connection timeout (in seconds).
- SCHEMA_REGISTRY_CREDENTIAL: Confluent Schema Registry credential object name.
- SCHEMA_REGISTRY_LOCATION: Location of the Confluent Schema Registry in '[storage_path[:storage_port]]' format.
- SCHEMA_REGISTRY_PORT: Confluent Schema Registry port (optional).

options - Optional parameters.
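As referenced above, here is a sketch of assembling a datasourceUpdatesMap for an S3-backed source, using the string constants from AlterDatasourceRequest.DatasourceUpdatesMap. The constant names are assumed to match the keys documented above, the bucket and region values are placeholders, and boolean-valued properties are assumed to take "true"/"false" strings.

```java
import java.util.HashMap;
import java.util.Map;

import com.gpudb.protocol.AlterDatasourceRequest.DatasourceUpdatesMap;  // assumed import path

public class S3UpdateMapExample {
    // Builds an update map that repoints an existing data source at a
    // different S3 bucket, skipping remote connection validation.
    static Map<String, String> build() {
        Map<String, String> updates = new HashMap<>();
        updates.put(DatasourceUpdatesMap.S3_BUCKET_NAME, "example-bucket");  // placeholder bucket
        updates.put(DatasourceUpdatesMap.S3_REGION, "us-east-1");            // placeholder region
        updates.put(DatasourceUpdatesMap.SKIP_VALIDATION, "true");           // bypass connection validation
        return updates;
    }
}
```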
public static org.apache.avro.Schema getClassSchema()

This method supports the Avro framework and is not intended to be called directly by the user.
public String getName()

Returns: The current value of name.

public AlterDatasourceRequest setName(String name)

Parameters: name - The new value for name.
Returns: this to mimic the builder pattern.

public Map<String,String> getDatasourceUpdatesMap()
Map containing the properties of the data source to be updated; error if empty. The supported keys and their semantics are listed in the constructor documentation above and as string constants in AlterDatasourceRequest.DatasourceUpdatesMap.
Returns: The current value of datasourceUpdatesMap.

public AlterDatasourceRequest setDatasourceUpdatesMap(Map<String,String> datasourceUpdatesMap)
Map containing the properties of the data source to be updated; error if empty. The supported keys are listed in the constructor documentation above.
Parameters: datasourceUpdatesMap - The new value for datasourceUpdatesMap.
Returns: this to mimic the builder pattern.

public Map<String,String> getOptions()

Returns: The current value of options.

public AlterDatasourceRequest setOptions(Map<String,String> options)

Parameters: options - The new value for options.
Returns: this to mimic the builder pattern.
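Because every setter returns this, a request can also be assembled by chaining the setters instead of using the three-argument constructor. A short sketch with placeholder values (import path assumed):

```java
import java.util.HashMap;
import java.util.Map;

import com.gpudb.protocol.AlterDatasourceRequest;  // assumed import path

public class FluentRequestExample {
    static AlterDatasourceRequest build(Map<String, String> updates) {
        // Each setter returns the request itself, so the calls chain.
        return new AlterDatasourceRequest()
                .setName("example_datasource")               // placeholder data source name
                .setDatasourceUpdatesMap(updates)
                .setOptions(new HashMap<String, String>());  // no optional parameters
    }
}
```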
public org.apache.avro.Schema getSchema()

This method supports the Avro framework and is not intended to be called directly by the user.

Specified by: getSchema in interface org.apache.avro.generic.GenericContainer

public Object get(int index)

This method supports the Avro framework and is not intended to be called directly by the user.

Specified by: get in interface org.apache.avro.generic.IndexedRecord
Parameters: index - the position of the field to get
Throws: IndexOutOfBoundsException

public void put(int index, Object value)

This method supports the Avro framework and is not intended to be called directly by the user.

Specified by: put in interface org.apache.avro.generic.IndexedRecord
Parameters: index - the position of the field to set; value - the value to set
Throws: IndexOutOfBoundsException