public class AlterDatasinkRequest extends Object implements org.apache.avro.generic.IndexedRecord
A set of parameters for GPUdb.alterDatasink(AlterDatasinkRequest).

Alters the properties of an existing data sink.
| Modifier and Type | Class and Description |
|---|---|
| static class | AlterDatasinkRequest.DatasinkUpdatesMap: Map containing the properties of the data sink to be updated. |
| Constructor and Description |
|---|
| AlterDatasinkRequest(): Constructs an AlterDatasinkRequest object with default parameters. |
| AlterDatasinkRequest(String name, Map<String,String> datasinkUpdatesMap, Map<String,String> options): Constructs an AlterDatasinkRequest object with the specified parameters. |
| Modifier and Type | Method and Description |
|---|---|
| boolean | equals(Object obj) |
| Object | get(int index): This method supports the Avro framework and is not intended to be called directly by the user. |
| static org.apache.avro.Schema | getClassSchema(): This method supports the Avro framework and is not intended to be called directly by the user. |
| Map<String,String> | getDatasinkUpdatesMap() |
| String | getName() |
| Map<String,String> | getOptions() |
| org.apache.avro.Schema | getSchema(): This method supports the Avro framework and is not intended to be called directly by the user. |
| int | hashCode() |
| void | put(int index, Object value): This method supports the Avro framework and is not intended to be called directly by the user. |
| AlterDatasinkRequest | setDatasinkUpdatesMap(Map<String,String> datasinkUpdatesMap) |
| AlterDatasinkRequest | setName(String name) |
| AlterDatasinkRequest | setOptions(Map<String,String> options) |
| String | toString() |
public AlterDatasinkRequest()
Constructs an AlterDatasinkRequest object with default parameters.
public AlterDatasinkRequest(String name, Map<String,String> datasinkUpdatesMap, Map<String,String> options)
Constructs an AlterDatasinkRequest object with the specified parameters.
Parameters:
- name - Name of the data sink to be altered. Must be an existing data sink.
- datasinkUpdatesMap - Map containing the properties of the data sink to be updated. Error if empty.
  - DESTINATION: Destination for the output data, in the format 'destination_type://path[:port]'. Supported destination types are 'http', 'https', and 'kafka'.
  - CONNECTION_TIMEOUT: Timeout in seconds for connecting to this sink.
  - WAIT_TIMEOUT: Timeout in seconds for waiting for a response from this sink.
  - CREDENTIAL: Name of the credential object to be used in this data sink.
  - S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data sink.
  - S3_REGION: Name of the Amazon S3 region where the given bucket is located.
  - S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has the required S3 permissions and can be assumed for the given S3 IAM user.
  - HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
  - HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user.
  - HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
  - AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data sink; valid only if tenant_id is specified.
  - AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data sink.
  - AZURE_TENANT_ID: Active Directory tenant ID (or directory ID).
  - AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data sink.
  - AZURE_OAUTH_TOKEN: OAuth token to access the given storage container.
  - GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data sink.
  - GCS_PROJECT_ID: Name of the Google Cloud project to use as the data sink.
  - GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data sink.
  - KAFKA_URL: The publicly accessible full-path URL to the Kafka broker, e.g., 'http://172.123.45.67:9300'.
  - KAFKA_TOPIC_NAME: Name of the Kafka topic to use for this data sink, if it references a Kafka broker.
  - ANONYMOUS: Create an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify use_managed_credentials for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
  - USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, cloud provider user settings are used instead. Supported values: TRUE, FALSE. The default value is FALSE.
  - USE_HTTPS: Use HTTPS to connect to the data sink if true, otherwise use HTTP. Supported values: TRUE, FALSE. The default value is TRUE.
  - MAX_BATCH_SIZE: Maximum number of records per notification message. The default value is '1'.
  - MAX_MESSAGE_SIZE: Maximum size in bytes of each notification message. The default value is '1000000'.
  - JSON_FORMAT: The desired format of JSON-encoded notification messages. If nested, records are returned as an array; otherwise, only a single record per message is returned. Supported values: FLAT, NESTED. The default value is FLAT.
  - SKIP_VALIDATION: Bypass validation of the connection to this data sink. Supported values: TRUE, FALSE. The default value is FALSE.
  - SCHEMA_NAME: Updates the schema name. If schema_name doesn't exist, an error will be thrown. If schema_name is empty, then the user's default schema will be used.
- options - Optional parameters.
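A minimal construction-and-submit sketch. The endpoint URL and sink name are placeholders, and the lower-case map keys are assumed to be the wire-level forms of the property names listed above; alterDatasink itself is the documented entry point for this request.

```java
import java.util.LinkedHashMap;
import java.util.Map;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.AlterDatasinkRequest;

public class AlterDatasinkExample {
    public static void main(String[] args) throws GPUdbException {
        // Hypothetical endpoint; replace with a real database URL.
        GPUdb gpudb = new GPUdb("http://localhost:9191");

        // Properties of the data sink to update (error if empty).
        // Keys are assumed to be the lower-case wire forms of the
        // property names documented above.
        Map<String, String> updates = new LinkedHashMap<>();
        updates.put("destination", "https://example.com/notify");
        updates.put("connection_timeout", "30");
        updates.put("wait_timeout", "60");

        // No optional parameters for this call.
        Map<String, String> options = new LinkedHashMap<>();

        AlterDatasinkRequest request =
                new AlterDatasinkRequest("my_sink", updates, options);
        gpudb.alterDatasink(request);
    }
}
```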
public static org.apache.avro.Schema getClassSchema()
This method supports the Avro framework and is not intended to be called directly by the user.
public String getName()
Returns:
Name of the data sink to be altered. Must be an existing data sink.

public AlterDatasinkRequest setName(String name)
Parameters:
name - Name of the data sink to be altered. Must be an existing data sink.
Returns:
this to mimic the builder pattern.

public Map<String,String> getDatasinkUpdatesMap()
Returns:
Map containing the properties of the data sink to be updated. See the datasinkUpdatesMap parameter of AlterDatasinkRequest(String, Map, Map) above for the full list of supported properties and their defaults.
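If, as is the usual pattern for these request classes, the nested AlterDatasinkRequest.DatasinkUpdatesMap class exposes each property name as a public static final String constant holding its wire-level key, the updates map can be built without raw string literals. The exact constant names below are assumptions:

```java
import java.util.LinkedHashMap;
import java.util.Map;

import com.gpudb.protocol.AlterDatasinkRequest;

class DatasinkUpdates {
    // Assumed constant names; each is expected to hold the
    // lower-case wire key for its property.
    static Map<String, String> timeouts() {
        Map<String, String> updates = new LinkedHashMap<>();
        updates.put(AlterDatasinkRequest.DatasinkUpdatesMap.CONNECTION_TIMEOUT, "30");
        updates.put(AlterDatasinkRequest.DatasinkUpdatesMap.WAIT_TIMEOUT, "60");
        return updates;
    }
}
```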
public AlterDatasinkRequest setDatasinkUpdatesMap(Map<String,String> datasinkUpdatesMap)
Parameters:
datasinkUpdatesMap - Map containing the properties of the data sink to be updated. Error if empty. See the datasinkUpdatesMap parameter of AlterDatasinkRequest(String, Map, Map) above for the full list of supported properties and their defaults.
Returns:
this to mimic the builder pattern.

public AlterDatasinkRequest setOptions(Map<String,String> options)
Parameters:
options - Optional parameters.
Returns:
this to mimic the builder pattern.
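Because every setter returns this to mimic the builder pattern, a request can also be assembled fluently. A sketch with placeholder values:

```java
import java.util.Collections;

import com.gpudb.protocol.AlterDatasinkRequest;

class FluentExample {
    static AlterDatasinkRequest build() {
        return new AlterDatasinkRequest()
                .setName("my_sink") // placeholder sink name
                .setDatasinkUpdatesMap(
                        // assumed lower-case wire key
                        Collections.singletonMap("connection_timeout", "30"))
                .setOptions(Collections.emptyMap());
    }
}
```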
public org.apache.avro.Schema getSchema()
This method supports the Avro framework and is not intended to be called directly by the user.
Specified by:
getSchema in interface org.apache.avro.generic.GenericContainer

public Object get(int index)
This method supports the Avro framework and is not intended to be called directly by the user.
Specified by:
get in interface org.apache.avro.generic.IndexedRecord
Parameters:
index - the position of the field to get
Throws:
IndexOutOfBoundsException

public void put(int index, Object value)
This method supports the Avro framework and is not intended to be called directly by the user.
Specified by:
put in interface org.apache.avro.generic.IndexedRecord
Parameters:
index - the position of the field to set
value - the value to set
Throws:
IndexOutOfBoundsException