public class CreateDatasinkRequest extends Object implements org.apache.avro.generic.IndexedRecord
A set of parameters for GPUdb.createDatasink(CreateDatasinkRequest).

Creates a data sink, which contains the destination information for a data sink that is external to the database.
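For reference, a minimal usage sketch, assuming the standard com.gpudb and com.gpudb.protocol package layout; the database URL, sink name, and Kafka destination are placeholders:

```java
import java.util.HashMap;
import java.util.Map;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.CreateDatasinkRequest;
import com.gpudb.protocol.CreateDatasinkResponse;

public class CreateDatasinkExample {
    public static void main(String[] args) throws GPUdbException {
        // Connect to the database (placeholder URL).
        GPUdb gpudb = new GPUdb("http://localhost:9191");

        // Destination uses the 'storage_provider_type://path[:port]' format.
        CreateDatasinkRequest request = new CreateDatasinkRequest(
                "example_kafka_sink",              // name of the data sink
                "kafka://kafka.example.com:9092",  // a Kafka broker as the destination
                new HashMap<String, String>());    // no optional parameters

        // Issue the request through the endpoint referenced above.
        CreateDatasinkResponse response = gpudb.createDatasink(request);
        System.out.println("Created data sink: " + request.getName());
    }
}
```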
Modifier and Type | Class and Description
---|---
static class | CreateDatasinkRequest.Options: Optional parameters.
Constructor and Description
---
CreateDatasinkRequest(): Constructs a CreateDatasinkRequest object with default parameters.
CreateDatasinkRequest(String name, String destination, Map<String,String> options): Constructs a CreateDatasinkRequest object with the specified parameters.
Modifier and Type | Method and Description
---|---
boolean | equals(Object obj)
Object | get(int index): This method supports the Avro framework and is not intended to be called directly by the user.
static org.apache.avro.Schema | getClassSchema(): This method supports the Avro framework and is not intended to be called directly by the user.
String | getDestination()
String | getName()
Map<String,String> | getOptions()
org.apache.avro.Schema | getSchema(): This method supports the Avro framework and is not intended to be called directly by the user.
int | hashCode()
void | put(int index, Object value): This method supports the Avro framework and is not intended to be called directly by the user.
CreateDatasinkRequest | setDestination(String destination)
CreateDatasinkRequest | setName(String name)
CreateDatasinkRequest | setOptions(Map<String,String> options)
String | toString()
public CreateDatasinkRequest()
Constructs a CreateDatasinkRequest object with default parameters.

public CreateDatasinkRequest(String name, String destination, Map<String,String> options)
Constructs a CreateDatasinkRequest object with the specified parameters.
Parameters:
name - Name of the data sink to be created.
destination - Destination for the output data in format 'storage_provider_type://path[:port]'. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'http', 'https', 'jdbc', 'kafka' and 's3'.
options - Optional parameters:
- CONNECTION_TIMEOUT: Timeout in seconds for connecting to this data sink.
- WAIT_TIMEOUT: Timeout in seconds for waiting for a response from this data sink.
- CREDENTIAL: Name of the credential object to be used in this data sink.
- S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data sink.
- S3_REGION: Name of the Amazon S3 region where the given bucket is located.
- S3_VERIFY_SSL: Set to false for testing purposes or when necessary to bypass TLS errors (e.g. self-signed certificates). Supported values: TRUE, FALSE. The default value is TRUE.
- S3_USE_VIRTUAL_ADDRESSING: When true (default), the request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL. Otherwise, set to false to use path-style URIs for requests. Supported values: TRUE, FALSE. The default value is TRUE.
- S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has the required S3 permissions and can be assumed for the given S3 IAM user.
- S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used for encrypting data.
- S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data.
- S3_ENCRYPTION_TYPE: Server-side encryption type.
- S3_KMS_KEY_ID: KMS key.
- HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
- HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user.
- HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
- AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data sink; this is valid only if tenant_id is specified.
- AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data sink.
- AZURE_TENANT_ID: Active Directory tenant ID (or directory ID).
- AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data sink.
- AZURE_OAUTH_TOKEN: OAuth token to access the given storage container.
- GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data sink.
- GCS_PROJECT_ID: Name of the Google Cloud project to use as the data sink.
- GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data sink.
- JDBC_DRIVER_JAR_PATH: JDBC driver jar file location.
- JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class.
- KAFKA_TOPIC_NAME: Name of the Kafka topic to publish to if destination is a Kafka broker.
- MAX_BATCH_SIZE: Maximum number of records per notification message. The default value is '1'.
- MAX_MESSAGE_SIZE: Maximum size in bytes of each notification message. The default value is '1000000'.
- JSON_FORMAT: The desired format of JSON-encoded notification messages. If NESTED, records are returned as an array; otherwise, only a single record per message is returned. Supported values: FLAT, NESTED. The default value is FLAT.
- USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, cloud provider user settings are used instead. Supported values: TRUE, FALSE. The default value is FALSE.
- USE_HTTPS: Use HTTPS to connect to the data sink if true; otherwise, use HTTP. Supported values: TRUE, FALSE. The default value is TRUE.
- SKIP_VALIDATION: Bypass validation of the connection to this data sink. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map.
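The option keys above appear to be exposed as String constants on the nested CreateDatasinkRequest.Options class listed earlier; a sketch of building an S3 sink request under that assumption, with the bucket, region, and credential names as placeholders:

```java
import java.util.HashMap;
import java.util.Map;

import com.gpudb.protocol.CreateDatasinkRequest;

public class S3SinkOptionsExample {
    // Builds a request for an S3 data sink; all names are placeholders.
    static CreateDatasinkRequest buildS3SinkRequest() {
        Map<String, String> options = new HashMap<>();
        // Option keys taken from the CreateDatasinkRequest.Options constants.
        options.put(CreateDatasinkRequest.Options.S3_BUCKET_NAME, "example-bucket");
        options.put(CreateDatasinkRequest.Options.S3_REGION, "us-east-1");
        options.put(CreateDatasinkRequest.Options.CREDENTIAL, "example_s3_credential");
        options.put(CreateDatasinkRequest.Options.CONNECTION_TIMEOUT, "30"); // seconds

        return new CreateDatasinkRequest(
                "example_s3_sink",                    // data sink name
                "s3://example-bucket/notifications",  // 'storage_provider_type://path' format
                options);
    }
}
```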
public static org.apache.avro.Schema getClassSchema()
This method supports the Avro framework and is not intended to be called directly by the user.

public String getName()
Returns:
Name of the data sink to be created.

public CreateDatasinkRequest setName(String name)
Parameters:
name - Name of the data sink to be created.
Returns:
this to mimic the builder pattern.

public String getDestination()
Returns:
Destination for the output data in format 'storage_provider_type://path[:port]'.

public CreateDatasinkRequest setDestination(String destination)
Parameters:
destination - Destination for the output data in format 'storage_provider_type://path[:port]'. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'http', 'https', 'jdbc', 'kafka' and 's3'.
Returns:
this to mimic the builder pattern.
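Because each setter returns this, a request can also be assembled fluently from the default constructor; a short sketch with placeholder values:

```java
import java.util.HashMap;

import com.gpudb.protocol.CreateDatasinkRequest;

public class BuilderStyleExample {
    static CreateDatasinkRequest build() {
        // Each setter returns this, so the calls can be chained.
        return new CreateDatasinkRequest()
                .setName("example_webhook_sink")                    // placeholder name
                .setDestination("https://hooks.example.com/notify") // placeholder URL
                .setOptions(new HashMap<String, String>());         // no optional parameters
    }
}
```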
public Map<String,String> getOptions()
Returns:
Optional parameters, as described for the CreateDatasinkRequest(String, String, Map) constructor above. The default value is an empty Map.

public CreateDatasinkRequest setOptions(Map<String,String> options)
Parameters:
options - Optional parameters, as described for the CreateDatasinkRequest(String, String, Map) constructor above. The default value is an empty Map.
Returns:
this to mimic the builder pattern.
public org.apache.avro.Schema getSchema()
This method supports the Avro framework and is not intended to be called directly by the user.
Specified by:
getSchema in interface org.apache.avro.generic.GenericContainer
public Object get(int index)
This method supports the Avro framework and is not intended to be called directly by the user.
Specified by:
get in interface org.apache.avro.generic.IndexedRecord
Parameters:
index - the position of the field to get
Throws:
IndexOutOfBoundsException
public void put(int index, Object value)
This method supports the Avro framework and is not intended to be called directly by the user.
Specified by:
put in interface org.apache.avro.generic.IndexedRecord
Parameters:
index - the position of the field to set
value - the value to set
Throws:
IndexOutOfBoundsException
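Although get and put are meant for the Avro framework rather than user code, the schema-driven positional access they provide can be illustrated with a small sketch; this only shows how an Avro consumer might walk the record's fields, not a recommended way to read the request:

```java
import org.apache.avro.Schema;

import com.gpudb.protocol.CreateDatasinkRequest;

public class AvroFieldDumpExample {
    static void dumpFields(CreateDatasinkRequest request) {
        // The record's Avro schema drives positional access via get(int).
        Schema schema = request.getSchema();
        for (Schema.Field field : schema.getFields()) {
            System.out.println(field.name() + " = " + request.get(field.pos()));
        }
    }
}
```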