public class AlterDatasinkRequest extends Object implements org.apache.avro.generic.IndexedRecord

A set of parameters for GPUdb.alterDatasink.

Alters the properties of an existing data sink.
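For orientation, here is a minimal, hypothetical usage sketch; it is not taken from the official docs. The endpoint URL, sink name, and destination value are placeholders, and the import paths assume the standard com.gpudb client packages.

```java
import java.util.HashMap;
import java.util.Map;

import com.gpudb.GPUdb;
import com.gpudb.protocol.AlterDatasinkRequest;

public class AlterDatasinkExample {
    public static void main(String[] args) throws Exception {
        // Connect to a hypothetical Kinetica instance
        GPUdb gpudb = new GPUdb("http://localhost:9191");

        // Point the existing sink "my_sink" at a new destination,
        // using the 'destination_type://path[:port]' format
        Map<String, String> updates = new HashMap<>();
        updates.put(AlterDatasinkRequest.DatasinkUpdatesMap.DESTINATION,
                    "kafka://172.123.45.67:9300");  // hypothetical destination

        AlterDatasinkRequest request = new AlterDatasinkRequest(
                "my_sink", updates, new HashMap<String, String>());
        gpudb.alterDatasink(request);
    }
}
```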
Modifier and Type | Class and Description |
---|---|
static class | AlterDatasinkRequest.DatasinkUpdatesMap: A set of string constants for the AlterDatasinkRequest parameter datasinkUpdatesMap. |
Constructor and Description |
---|
AlterDatasinkRequest(): Constructs an AlterDatasinkRequest object with default parameters. |
AlterDatasinkRequest(String name, Map<String,String> datasinkUpdatesMap, Map<String,String> options): Constructs an AlterDatasinkRequest object with the specified parameters. |
Modifier and Type | Method and Description |
---|---|
boolean | equals(Object obj) |
Object | get(int index): This method supports the Avro framework and is not intended to be called directly by the user. |
static org.apache.avro.Schema | getClassSchema(): This method supports the Avro framework and is not intended to be called directly by the user. |
Map<String,String> | getDatasinkUpdatesMap(): Map containing the properties of the data sink to be updated. |
String | getName(): Name of the data sink to be altered. |
Map<String,String> | getOptions(): Optional parameters. |
org.apache.avro.Schema | getSchema(): This method supports the Avro framework and is not intended to be called directly by the user. |
int | hashCode() |
void | put(int index, Object value): This method supports the Avro framework and is not intended to be called directly by the user. |
AlterDatasinkRequest | setDatasinkUpdatesMap(Map<String,String> datasinkUpdatesMap): Map containing the properties of the data sink to be updated. |
AlterDatasinkRequest | setName(String name): Name of the data sink to be altered. |
AlterDatasinkRequest | setOptions(Map<String,String> options): Optional parameters. |
String | toString() |
public AlterDatasinkRequest()
Constructs an AlterDatasinkRequest object with default parameters.

public AlterDatasinkRequest(String name, Map<String,String> datasinkUpdatesMap, Map<String,String> options)
Constructs an AlterDatasinkRequest object with the specified parameters.

Parameters:
name - Name of the data sink to be altered. Must be an existing data sink.
datasinkUpdatesMap - Map containing the properties of the data sink to be updated. Error if empty. Supported keys (see the usage sketch after the getClassSchema() section below):
- DESTINATION: Destination for the output data, in the format 'destination_type://path[:port]'. Supported destination types are 'azure', 'gcs', 'hdfs', 'http', 'https', 'jdbc', 'kafka', and 's3'.
- CONNECTION_TIMEOUT: Timeout in seconds for connecting to this sink.
- WAIT_TIMEOUT: Timeout in seconds for waiting for a response from this sink.
- CREDENTIAL: Name of the credential object to be used in this data sink.
- S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data sink.
- S3_REGION: Name of the Amazon S3 region where the given bucket is located.
- S3_VERIFY_SSL: Whether to verify SSL connections. Supported values:
  - TRUE: Connect with SSL verification.
  - FALSE: Connect without verifying the SSL connection; useful for testing purposes, bypassing TLS errors, self-signed certificates, etc.
  The default value is TRUE.
- S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 sink. Supported values:
  - TRUE: The request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL.
  - FALSE: Use path-style URIs for requests.
  The default value is TRUE.
- S3_AWS_ROLE_ARN: Amazon IAM Role ARN that has the required S3 permissions and can be assumed for the given S3 IAM user.
- S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used to encrypt the data.
- S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key used to encrypt or decrypt the data.
- S3_ENCRYPTION_TYPE: Server-side encryption type.
- S3_KMS_KEY_ID: KMS key.
- HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
- HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user.
- HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
- AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data sink; this is valid only if tenant_id is specified.
- AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data sink.
- AZURE_TENANT_ID: Active Directory tenant ID (or directory ID).
- AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data sink.
- AZURE_OAUTH_TOKEN: OAuth token to access the given storage container.
- GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data sink.
- GCS_PROJECT_ID: Name of the Google Cloud project to use as the data sink.
- GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data sink.
- JDBC_DRIVER_JAR_PATH: JDBC driver jar file location. This may be a KIFS file.
- JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class.
- KAFKA_URL: The publicly accessible full-path URL of the Kafka broker, e.g., 'http://172.123.45.67:9300'.
- KAFKA_TOPIC_NAME: Name of the Kafka topic to use for this data sink, if it references a Kafka broker.
- ANONYMOUS: Create an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify use_managed_credentials for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
- USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, the cloud provider's user settings will be used instead. Supported values: TRUE, FALSE. The default value is FALSE.
- USE_HTTPS: Use HTTPS to connect to the data sink if true, otherwise use HTTP. Supported values: TRUE, FALSE. The default value is TRUE.
- MAX_BATCH_SIZE: Maximum number of records per notification message. The default value is '1'.
- MAX_MESSAGE_SIZE: Maximum size in bytes of each notification message. The default value is '1000000'.
- JSON_FORMAT: The desired format of JSON-encoded notification messages. Supported values: FLAT, NESTED. The default value is FLAT.
- SKIP_VALIDATION: Bypass validation of the connection to this data sink. Supported values: TRUE, FALSE. The default value is FALSE.
- SCHEMA_NAME: Updates the schema name. If SCHEMA_NAME doesn't exist, an error will be thrown. If SCHEMA_NAME is empty, the user's default schema will be used.
options - Optional parameters.

public static org.apache.avro.Schema getClassSchema()
This method supports the Avro framework and is not intended to be called directly by the user.
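Returning to the constructor's datasinkUpdatesMap parameter: the following sketch populates the map using the string constants from AlterDatasinkRequest.DatasinkUpdatesMap. It assumes the imports from the earlier sketch; the sink name, bucket, region, and timeout values are hypothetical, and the TRUE value constant is assumed to be defined in DatasinkUpdatesMap alongside the key constants.

```java
Map<String, String> updates = new HashMap<>();

// Retarget the sink at an S3 bucket (hypothetical bucket and region)
updates.put(AlterDatasinkRequest.DatasinkUpdatesMap.DESTINATION, "s3://example-bucket");
updates.put(AlterDatasinkRequest.DatasinkUpdatesMap.S3_REGION, "us-east-1");

// Timeouts are passed as string-encoded seconds
updates.put(AlterDatasinkRequest.DatasinkUpdatesMap.CONNECTION_TIMEOUT, "30");
updates.put(AlterDatasinkRequest.DatasinkUpdatesMap.WAIT_TIMEOUT, "60");

// Boolean-valued keys take the TRUE/FALSE value constants (assumed to be
// defined in DatasinkUpdatesMap, per the supported values listed above)
updates.put(AlterDatasinkRequest.DatasinkUpdatesMap.S3_VERIFY_SSL,
            AlterDatasinkRequest.DatasinkUpdatesMap.TRUE);

AlterDatasinkRequest request =
        new AlterDatasinkRequest("my_sink", updates, new HashMap<String, String>());
```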
public String getName()
Name of the data sink to be altered. Must be an existing data sink.
Returns: The current value of name.

public AlterDatasinkRequest setName(String name)
Name of the data sink to be altered. Must be an existing data sink.
Parameters:
name - The new value for name.
Returns: this to mimic the builder pattern.

public Map<String,String> getDatasinkUpdatesMap()
Map containing the properties of the data sink to be updated. Error if empty. See the constructor parameter datasinkUpdatesMap above for the full list of supported keys and values.
Returns: The current value of datasinkUpdatesMap.

public AlterDatasinkRequest setDatasinkUpdatesMap(Map<String,String> datasinkUpdatesMap)
Map containing the properties of the data sink to be updated. Error if empty. See the constructor parameter datasinkUpdatesMap above for the full list of supported keys and values.
Parameters:
datasinkUpdatesMap - The new value for datasinkUpdatesMap.
Returns: this to mimic the builder pattern.

public Map<String,String> getOptions()
Optional parameters.
Returns: The current value of options.

public AlterDatasinkRequest setOptions(Map<String,String> options)
Optional parameters.
Parameters:
options - The new value for options.
Returns: this to mimic the builder pattern.
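Since each setter returns this, a request can also be assembled fluently from the no-arg constructor. A brief sketch, reusing the hypothetical sink name and updates map from the earlier examples:

```java
AlterDatasinkRequest request = new AlterDatasinkRequest()
        .setName("my_sink")                           // existing sink to alter
        .setDatasinkUpdatesMap(updates)               // map built as shown above
        .setOptions(new HashMap<String, String>());   // no optional parameters
```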
public org.apache.avro.Schema getSchema()
This method supports the Avro framework and is not intended to be called directly by the user.
Specified by: getSchema in interface org.apache.avro.generic.GenericContainer
public Object get(int index)
This method supports the Avro framework and is not intended to be called directly by the user.
Specified by: get in interface org.apache.avro.generic.IndexedRecord
Parameters:
index - the position of the field to get
Throws:
IndexOutOfBoundsException

public void put(int index, Object value)
This method supports the Avro framework and is not intended to be called directly by the user.
Specified by: put in interface org.apache.avro.generic.IndexedRecord
Parameters:
index - the position of the field to set
value - the value to set
Throws:
IndexOutOfBoundsException