public class CreateDatasinkRequest extends Object implements org.apache.avro.generic.IndexedRecord
A set of parameters for GPUdb.createDatasink.

Creates a data sink, which contains the destination information for a data sink that is external to the database.
Modifier and Type | Class and Description |
---|---|
static class | CreateDatasinkRequest.Options: A set of string constants for the CreateDatasinkRequest parameter options. |
Constructor and Description |
---|
CreateDatasinkRequest(): Constructs a CreateDatasinkRequest object with default parameters. |
CreateDatasinkRequest(String name, String destination, Map<String,String> options): Constructs a CreateDatasinkRequest object with the specified parameters. |
Modifier and Type | Method and Description |
---|---|
boolean | equals(Object obj) |
Object | get(int index): This method supports the Avro framework and is not intended to be called directly by the user. |
static org.apache.avro.Schema | getClassSchema(): This method supports the Avro framework and is not intended to be called directly by the user. |
String | getDestination(): Destination for the output data in format 'storage_provider_type://path[:port]'. |
String | getName(): Name of the data sink to be created. |
Map<String,String> | getOptions(): Optional parameters. |
org.apache.avro.Schema | getSchema(): This method supports the Avro framework and is not intended to be called directly by the user. |
int | hashCode() |
void | put(int index, Object value): This method supports the Avro framework and is not intended to be called directly by the user. |
CreateDatasinkRequest | setDestination(String destination): Destination for the output data in format 'storage_provider_type://path[:port]'. |
CreateDatasinkRequest | setName(String name): Name of the data sink to be created. |
CreateDatasinkRequest | setOptions(Map<String,String> options): Optional parameters. |
String | toString() |
public CreateDatasinkRequest()
Constructs a CreateDatasinkRequest object with default parameters.
public CreateDatasinkRequest(String name, String destination, Map<String,String> options)
Constructs a CreateDatasinkRequest object with the specified parameters.
Parameters:
name - Name of the data sink to be created.
destination - Destination for the output data in format 'storage_provider_type://path[:port]'. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'http', 'https', 'jdbc', 'kafka', and 's3'.
options - Optional parameters.
    CONNECTION_TIMEOUT: Timeout in seconds for connecting to this data sink.
    WAIT_TIMEOUT: Timeout in seconds for waiting for a response from this data sink.
    CREDENTIAL: Name of the credential object to be used in this data sink.
    S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data sink.
    S3_REGION: Name of the Amazon S3 region where the given bucket is located.
    S3_VERIFY_SSL: Whether to verify SSL connections. Supported values: TRUE (connect with SSL verification), FALSE (connect without verifying the SSL connection; for testing purposes, bypassing TLS errors, self-signed certificates, etc.). The default value is TRUE.
    S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 sink. Supported values: TRUE (the request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL), FALSE (use path-style URIs for requests). The default value is TRUE.
    S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has required S3 permissions that can be assumed for the given S3 IAM user.
    S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used for encrypting data.
    S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data.
    S3_ENCRYPTION_TYPE: Server-side encryption type.
    S3_KMS_KEY_ID: KMS key.
    HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
    HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user.
    HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
    AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data sink; this is valid only if tenant_id is specified.
    AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data sink.
    AZURE_TENANT_ID: Active Directory tenant ID (or directory ID).
    AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data sink.
    AZURE_OAUTH_TOKEN: OAuth token to access the given storage container.
    GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data sink.
    GCS_PROJECT_ID: Name of the Google Cloud project to use as the data sink.
    GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data sink.
    JDBC_DRIVER_JAR_PATH: JDBC driver jar file location.
    JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class.
    KAFKA_TOPIC_NAME: Name of the Kafka topic to publish to if destination is a Kafka broker.
    MAX_BATCH_SIZE: Maximum number of records per notification message. The default value is '1'.
    MAX_MESSAGE_SIZE: Maximum size in bytes of each notification message. The default value is '1000000'.
    JSON_FORMAT: The desired format of JSON-encoded notification messages. The default value is FLAT.
    USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, the cloud provider user settings will be used. Supported values: TRUE, FALSE. The default value is FALSE.
    USE_HTTPS: Use https to connect to the data sink if true, otherwise use http. Supported values: TRUE, FALSE. The default value is TRUE.
    SKIP_VALIDATION: Bypass validation of the connection to this data sink. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map.
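For illustration, here is a minimal sketch of building this request for an S3 destination and submitting it through GPUdb.createDatasink. The connection URL, sink name, bucket, region, and credential name are placeholder values, and the com.gpudb package locations, the Options constant field names, and the request-object overload of createDatasink are assumptions based on the option names documented above.

```java
import java.util.HashMap;
import java.util.Map;

import com.gpudb.GPUdb;
import com.gpudb.protocol.CreateDatasinkRequest;

public class CreateS3DatasinkExample {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; substitute a reachable Kinetica instance
        GPUdb gpudb = new GPUdb("http://localhost:9191");

        // Option keys come from the CreateDatasinkRequest.Options constants documented above
        Map<String, String> options = new HashMap<>();
        options.put(CreateDatasinkRequest.Options.S3_BUCKET_NAME, "example-bucket");
        options.put(CreateDatasinkRequest.Options.S3_REGION, "us-east-1");
        options.put(CreateDatasinkRequest.Options.CREDENTIAL, "example_s3_credential");

        // Destination follows the 'storage_provider_type://path[:port]' format
        CreateDatasinkRequest request = new CreateDatasinkRequest(
                "example_s3_sink", "s3://example-bucket", options);

        gpudb.createDatasink(request);
    }
}
```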
public static org.apache.avro.Schema getClassSchema()
public String getName()
Name of the data sink to be created.
Returns: The current value of name.

public CreateDatasinkRequest setName(String name)
Parameters: name - The new value for name.
Returns: this to mimic the builder pattern.

public String getDestination()
Destination for the output data in format 'storage_provider_type://path[:port]'. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'http', 'https', 'jdbc', 'kafka', and 's3'.
Returns: The current value of destination.

public CreateDatasinkRequest setDestination(String destination)
Destination for the output data in format 'storage_provider_type://path[:port]'. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'http', 'https', 'jdbc', 'kafka', and 's3'.
Parameters: destination - The new value for destination.
Returns: this to mimic the builder pattern.

public Map<String,String> getOptions()
Optional parameters. The supported options and their defaults are listed under the constructor CreateDatasinkRequest(String, String, Map) and in CreateDatasinkRequest.Options. The default value is an empty Map.
Returns: The current value of options.
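For illustration, a minimal sketch of populating this options map for a Kafka destination using the string constants from CreateDatasinkRequest.Options. The topic name and sizes are placeholder values, and the package and constant field names are assumed from the option names documented above.

```java
import java.util.HashMap;
import java.util.Map;

import com.gpudb.protocol.CreateDatasinkRequest;

public class DatasinkOptionsExample {
    // Builds an options map for a Kafka destination using the documented option keys
    public static Map<String, String> kafkaOptions() {
        Map<String, String> options = new HashMap<>();
        options.put(CreateDatasinkRequest.Options.KAFKA_TOPIC_NAME, "example_topic");
        options.put(CreateDatasinkRequest.Options.MAX_BATCH_SIZE, "100");       // records per notification message
        options.put(CreateDatasinkRequest.Options.MAX_MESSAGE_SIZE, "1000000"); // bytes per notification message
        return options;
    }
}
```

The resulting map can then be passed to setOptions(Map) or to the three-argument constructor.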
public CreateDatasinkRequest setOptions(Map<String,String> options)
Optional parameters. The supported options and their defaults are listed under the constructor CreateDatasinkRequest(String, String, Map) and in CreateDatasinkRequest.Options. The default value is an empty Map.
Parameters: options - The new value for options.
Returns: this to mimic the builder pattern.
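Since each setter returns this, the request can be populated fluently. Below is a minimal sketch for an Azure destination; the account, container, tenant, sink name, and destination values are placeholders, and the package and Options constant field names are assumed from the option names documented above.

```java
import java.util.HashMap;
import java.util.Map;

import com.gpudb.protocol.CreateDatasinkRequest;

public class AzureDatasinkRequestExample {
    public static CreateDatasinkRequest build() {
        Map<String, String> options = new HashMap<>();
        options.put(CreateDatasinkRequest.Options.AZURE_STORAGE_ACCOUNT_NAME, "examplestorage");
        options.put(CreateDatasinkRequest.Options.AZURE_CONTAINER_NAME, "example-container");
        options.put(CreateDatasinkRequest.Options.AZURE_TENANT_ID, "00000000-0000-0000-0000-000000000000");

        // Each setter returns this, so the calls chain like a builder
        return new CreateDatasinkRequest()
                .setName("example_azure_sink")
                .setDestination("azure://example-container")
                .setOptions(options);
    }
}
```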
public org.apache.avro.Schema getSchema()
Specified by: getSchema in interface org.apache.avro.generic.GenericContainer
public Object get(int index)
Specified by: get in interface org.apache.avro.generic.IndexedRecord
Parameters: index - the position of the field to get
Throws: IndexOutOfBoundsException
public void put(int index, Object value)
Specified by: put in interface org.apache.avro.generic.IndexedRecord
Parameters: index - the position of the field to set
value - the value to set
Throws: IndexOutOfBoundsException