public class AlterDatasourceRequest extends Object implements org.apache.avro.generic.IndexedRecord

A set of parameters for GPUdb.alterDatasource(AlterDatasourceRequest).

Alters the properties of an existing data source.
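A minimal usage sketch follows, assuming the standard com.gpudb client packages and that the DatasourceUpdatesMap inner class exposes the property keys listed below as String constants; the endpoint URL, data source name, and property values are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.AlterDatasourceRequest;

public class AlterDatasourceExample {
    public static void main(String[] args) throws GPUdbException {
        // Connect to a hypothetical Kinetica instance
        GPUdb gpudb = new GPUdb("http://localhost:9191");

        // Properties of the data source to change; the keys are the
        // constants from AlterDatasourceRequest.DatasourceUpdatesMap
        Map<String, String> updates = new HashMap<>();
        updates.put(AlterDatasourceRequest.DatasourceUpdatesMap.LOCATION,
                    "s3://example-bucket");
        updates.put(AlterDatasourceRequest.DatasourceUpdatesMap.CREDENTIAL,
                    "example_s3_credential");

        // Alter the (hypothetical) existing data source "example_ds";
        // no optional parameters are passed
        AlterDatasourceRequest request = new AlterDatasourceRequest(
                "example_ds", updates, new HashMap<String, String>());
        gpudb.alterDatasource(request);
    }
}
```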
Modifier and Type | Class and Description |
---|---|
static class | AlterDatasourceRequest.DatasourceUpdatesMap: Map containing the properties of the data source to be updated. |
Constructor and Description |
---|
AlterDatasourceRequest(): Constructs an AlterDatasourceRequest object with default parameters. |
AlterDatasourceRequest(String name, Map<String,String> datasourceUpdatesMap, Map<String,String> options): Constructs an AlterDatasourceRequest object with the specified parameters. |
Modifier and Type | Method and Description |
---|---|
boolean | equals(Object obj) |
Object | get(int index): This method supports the Avro framework and is not intended to be called directly by the user. |
static org.apache.avro.Schema | getClassSchema(): This method supports the Avro framework and is not intended to be called directly by the user. |
Map<String,String> | getDatasourceUpdatesMap() |
String | getName() |
Map<String,String> | getOptions() |
org.apache.avro.Schema | getSchema(): This method supports the Avro framework and is not intended to be called directly by the user. |
int | hashCode() |
void | put(int index, Object value): This method supports the Avro framework and is not intended to be called directly by the user. |
AlterDatasourceRequest | setDatasourceUpdatesMap(Map<String,String> datasourceUpdatesMap) |
AlterDatasourceRequest | setName(String name) |
AlterDatasourceRequest | setOptions(Map<String,String> options) |
String | toString() |
public AlterDatasourceRequest()
public AlterDatasourceRequest(String name, Map<String,String> datasourceUpdatesMap, Map<String,String> options)
Parameters:
name - Name of the data source to be altered. Must be an existing data source.
datasourceUpdatesMap - Map containing the properties of the data source to be updated. Error if empty. An example map is shown after this parameter list. Supported entries:
- LOCATION: Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'kafka' and 's3'.
- USER_NAME: Name of the remote system user; may be an empty string.
- PASSWORD: Password for the remote system user; may be an empty string.
- SKIP_VALIDATION: Bypass validation of the connection to the remote source. Supported values: TRUE, FALSE. The default value is FALSE.
- CONNECTION_TIMEOUT: Timeout in seconds for connecting to this storage provider.
- WAIT_TIMEOUT: Timeout in seconds for reading from this storage provider.
- CREDENTIAL: Name of the credential object to be used in the data source.
- S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data source.
- S3_REGION: Name of the Amazon S3 region where the given bucket is located.
- S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has required S3 permissions that can be assumed for the given S3 IAM user.
- S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used for encrypting data.
- S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data.
- HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
- HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user.
- HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
- AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data source; this is valid only if tenant_id is specified.
- AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data source.
- AZURE_TENANT_ID: Active Directory tenant ID (or directory ID).
- AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data source.
- AZURE_OAUTH_TOKEN: OAuth token to access the given storage container.
- GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data source.
- GCS_PROJECT_ID: Name of the Google Cloud project to use as the data source.
- GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data source.
- KAFKA_URL: The publicly-accessible full path URL to the Kafka broker, e.g., 'http://172.123.45.67:9300'.
- KAFKA_TOPIC_NAME: Name of the Kafka topic to use as the data source.
- JDBC_DRIVER_JAR_PATH: JDBC driver jar file location. This may be a KIFS file.
- JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class.
- ANONYMOUS: Create an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify use_managed_credentials for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
- USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, cloud provider user settings will be used. Supported values: TRUE, FALSE. The default value is FALSE.
- USE_HTTPS: Use https to connect to the data source if true, otherwise use http. Supported values: TRUE, FALSE. The default value is TRUE.
- SCHEMA_NAME: Updates the schema name. If schema_name doesn't exist, an error will be thrown. If schema_name is empty, then the user's default schema will be used.
options - Optional parameters.
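For illustration, a datasourceUpdatesMap that repoints an existing data source at a different Amazon S3 bucket might be built as follows (a fragment continuing the example in the class description; the bucket, region, and credential names are hypothetical, and the DatasourceUpdatesMap constants are assumed to hold the key strings above):

```java
// Hypothetical property values; keys come from the
// AlterDatasourceRequest.DatasourceUpdatesMap constants
Map<String, String> updates = new HashMap<>();
updates.put(AlterDatasourceRequest.DatasourceUpdatesMap.LOCATION, "s3://example-bucket");
updates.put(AlterDatasourceRequest.DatasourceUpdatesMap.S3_BUCKET_NAME, "example-bucket");
updates.put(AlterDatasourceRequest.DatasourceUpdatesMap.S3_REGION, "us-east-1");
updates.put(AlterDatasourceRequest.DatasourceUpdatesMap.CREDENTIAL, "example_s3_credential");
```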
public static org.apache.avro.Schema getClassSchema()

public String getName()
public AlterDatasourceRequest setName(String name)
Parameters:
name - Name of the data source to be altered. Must be an existing data source.
Returns:
this to mimic the builder pattern.

public Map<String,String> getDatasourceUpdatesMap()
Returns:
Map containing the properties of the data source to be updated. Supported entries:
- LOCATION: Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'kafka' and 's3'.
- USER_NAME: Name of the remote system user; may be an empty string.
- PASSWORD: Password for the remote system user; may be an empty string.
- SKIP_VALIDATION: Bypass validation of the connection to the remote source. Supported values: TRUE, FALSE. The default value is FALSE.
- CONNECTION_TIMEOUT: Timeout in seconds for connecting to this storage provider.
- WAIT_TIMEOUT: Timeout in seconds for reading from this storage provider.
- CREDENTIAL: Name of the credential object to be used in the data source.
- S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data source.
- S3_REGION: Name of the Amazon S3 region where the given bucket is located.
- S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has required S3 permissions that can be assumed for the given S3 IAM user.
- S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used for encrypting data.
- S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data.
- HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
- HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user.
- HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
- AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data source; this is valid only if tenant_id is specified.
- AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data source.
- AZURE_TENANT_ID: Active Directory tenant ID (or directory ID).
- AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data source.
- AZURE_OAUTH_TOKEN: OAuth token to access the given storage container.
- GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data source.
- GCS_PROJECT_ID: Name of the Google Cloud project to use as the data source.
- GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data source.
- KAFKA_URL: The publicly-accessible full path URL to the Kafka broker, e.g., 'http://172.123.45.67:9300'.
- KAFKA_TOPIC_NAME: Name of the Kafka topic to use as the data source.
- JDBC_DRIVER_JAR_PATH: JDBC driver jar file location. This may be a KIFS file.
- JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class.
- ANONYMOUS: Create an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify use_managed_credentials for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
- USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, cloud provider user settings will be used. Supported values: TRUE, FALSE. The default value is FALSE.
- USE_HTTPS: Use https to connect to the data source if true, otherwise use http. Supported values: TRUE, FALSE. The default value is TRUE.
- SCHEMA_NAME: Updates the schema name. If schema_name doesn't exist, an error will be thrown. If schema_name is empty, then the user's default schema will be used.
public AlterDatasourceRequest setDatasourceUpdatesMap(Map<String,String> datasourceUpdatesMap)
Parameters:
datasourceUpdatesMap - Map containing the properties of the data source to be updated. Error if empty. Supported entries:
- LOCATION: Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'kafka' and 's3'.
- USER_NAME: Name of the remote system user; may be an empty string.
- PASSWORD: Password for the remote system user; may be an empty string.
- SKIP_VALIDATION: Bypass validation of the connection to the remote source. Supported values: TRUE, FALSE. The default value is FALSE.
- CONNECTION_TIMEOUT: Timeout in seconds for connecting to this storage provider.
- WAIT_TIMEOUT: Timeout in seconds for reading from this storage provider.
- CREDENTIAL: Name of the credential object to be used in the data source.
- S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data source.
- S3_REGION: Name of the Amazon S3 region where the given bucket is located.
- S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has required S3 permissions that can be assumed for the given S3 IAM user.
- S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used for encrypting data.
- S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data.
- HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
- HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user.
- HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
- AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data source; this is valid only if tenant_id is specified.
- AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data source.
- AZURE_TENANT_ID: Active Directory tenant ID (or directory ID).
- AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data source.
- AZURE_OAUTH_TOKEN: OAuth token to access the given storage container.
- GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data source.
- GCS_PROJECT_ID: Name of the Google Cloud project to use as the data source.
- GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data source.
- KAFKA_URL: The publicly-accessible full path URL to the Kafka broker, e.g., 'http://172.123.45.67:9300'.
- KAFKA_TOPIC_NAME: Name of the Kafka topic to use as the data source.
- JDBC_DRIVER_JAR_PATH: JDBC driver jar file location. This may be a KIFS file.
- JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class.
- ANONYMOUS: Create an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify use_managed_credentials for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
- USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, cloud provider user settings will be used. Supported values: TRUE, FALSE. The default value is FALSE.
- USE_HTTPS: Use https to connect to the data source if true, otherwise use http. Supported values: TRUE, FALSE. The default value is TRUE.
- SCHEMA_NAME: Updates the schema name. If schema_name doesn't exist, an error will be thrown. If schema_name is empty, then the user's default schema will be used.
Returns:
this to mimic the builder pattern.

public AlterDatasourceRequest setOptions(Map<String,String> options)
Parameters:
options - Optional parameters.
Returns:
this to mimic the builder pattern.
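Since each setter returns this, a request can also be assembled fluently instead of through the three-argument constructor; a brief sketch with hypothetical names, reusing the updates map built earlier:

```java
AlterDatasourceRequest request = new AlterDatasourceRequest()
        .setName("example_ds")
        .setDatasourceUpdatesMap(updates)           // map built as shown above
        .setOptions(new HashMap<String, String>()); // no optional parameters
```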
public org.apache.avro.Schema getSchema()
Specified by:
getSchema in interface org.apache.avro.generic.GenericContainer
public Object get(int index)
Specified by:
get in interface org.apache.avro.generic.IndexedRecord
Parameters:
index - the position of the field to get
Throws:
IndexOutOfBoundsException
public void put(int index, Object value)
Specified by:
put in interface org.apache.avro.generic.IndexedRecord
Parameters:
index - the position of the field to set
value - the value to set
Throws:
IndexOutOfBoundsException