public class CreateDatasourceRequest extends Object implements org.apache.avro.generic.IndexedRecord
A set of parameters for GPUdb.createDatasource.
Creates a data source, which contains the location and connection information for a data store that is external to the database.
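For orientation, the sketch below shows how this request is typically built and submitted through the GPUdb Java client, assuming the standard com.gpudb and com.gpudb.protocol packages; the connection URL, data source name, location, and option values are illustrative placeholders rather than values taken from this reference.

```java
import java.util.HashMap;
import java.util.Map;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.CreateDatasourceRequest;
import com.gpudb.protocol.CreateDatasourceResponse;

public class CreateDatasourceExample {
    public static void main(String[] args) throws GPUdbException {
        // Connect to the database; the URL is a placeholder.
        GPUdb gpudb = new GPUdb("http://localhost:9191");

        // Optional parameters; see CreateDatasourceRequest.Options for the full list.
        Map<String, String> options = new HashMap<>();
        options.put(CreateDatasourceRequest.Options.CONNECTION_TIMEOUT, "30");

        // Build the request: name, location, user name, password, options.
        // The data source name and S3 location below are placeholders.
        CreateDatasourceRequest request = new CreateDatasourceRequest(
                "example_s3_source",
                "s3://example-bucket",
                "",   // userName; may be an empty string
                "",   // password; may be an empty string
                options);

        CreateDatasourceResponse response = gpudb.createDatasource(request);
        System.out.println("Data source created: " + response);
    }
}
```

The same request can also be assembled with the no-argument constructor and the fluent setters described later on this page.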
Modifier and Type | Class and Description |
---|---|
static class | CreateDatasourceRequest.Options: A set of string constants for the CreateDatasourceRequest parameter options. |
Constructor and Description |
---|
CreateDatasourceRequest(): Constructs a CreateDatasourceRequest object with default parameters. |
CreateDatasourceRequest(String name, String location, String userName, String password, Map<String,String> options): Constructs a CreateDatasourceRequest object with the specified parameters. |
Modifier and Type | Method and Description |
---|---|
boolean | equals(Object obj) |
Object | get(int index): This method supports the Avro framework and is not intended to be called directly by the user. |
static org.apache.avro.Schema | getClassSchema(): This method supports the Avro framework and is not intended to be called directly by the user. |
String | getLocation(): Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. |
String | getName(): Name of the data source to be created. |
Map<String,String> | getOptions(): Optional parameters. |
String | getPassword(): Password for the remote system user; may be an empty string. |
org.apache.avro.Schema | getSchema(): This method supports the Avro framework and is not intended to be called directly by the user. |
String | getUserName(): Name of the remote system user; may be an empty string. |
int | hashCode() |
void | put(int index, Object value): This method supports the Avro framework and is not intended to be called directly by the user. |
CreateDatasourceRequest | setLocation(String location): Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. |
CreateDatasourceRequest | setName(String name): Name of the data source to be created. |
CreateDatasourceRequest | setOptions(Map<String,String> options): Optional parameters. |
CreateDatasourceRequest | setPassword(String password): Password for the remote system user; may be an empty string. |
CreateDatasourceRequest | setUserName(String userName): Name of the remote system user; may be an empty string. |
String | toString() |
public CreateDatasourceRequest()
Constructs a CreateDatasourceRequest object with default parameters.
public CreateDatasourceRequest(String name, String location, String userName, String password, Map<String,String> options)
Constructs a CreateDatasourceRequest object with the specified parameters.
Parameters:
name - Name of the data source to be created.
location - Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'jdbc', 'kafka', 'confluent', and 's3'.
userName - Name of the remote system user; may be an empty string.
password - Password for the remote system user; may be an empty string.
options - Optional parameters.
SKIP_VALIDATION: Bypass validation of connection to remote source. Supported values: TRUE, FALSE. The default value is FALSE.
CONNECTION_TIMEOUT: Timeout in seconds for connecting to this storage provider.
WAIT_TIMEOUT: Timeout in seconds for reading from this storage provider.
CREDENTIAL: Name of the credential object to be used in data source.
S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data source.
S3_REGION: Name of the Amazon S3 region where the given bucket is located.
S3_VERIFY_SSL: Whether to verify SSL connections. Supported values: TRUE (connect with SSL verification), FALSE (connect without verifying the SSL connection; for testing purposes, bypassing TLS errors, self-signed certificates, etc.). The default value is TRUE.
S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 source. Supported values: TRUE (the request's URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL), FALSE (use path-style URI for requests). The default value is TRUE.
S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has required S3 permissions that can be assumed for the given S3 IAM user.
S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used for encrypting data.
S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data.
HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user.
HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data source; this is valid only if tenant_id is specified.
AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data source.
AZURE_TENANT_ID: Active Directory tenant ID (or directory ID).
AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data source.
AZURE_OAUTH_TOKEN: OAuth token to access the given storage container.
GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data source.
GCS_PROJECT_ID: Name of the Google Cloud project to use as the data source.
GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data source.
IS_STREAM: Whether to load from Azure/GCS/S3 as a continuous stream. Supported values: TRUE, FALSE. The default value is FALSE.
KAFKA_TOPIC_NAME: Name of the Kafka topic to use as the data source.
JDBC_DRIVER_JAR_PATH: JDBC driver jar file location. This may be a KIFS file.
JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class.
ANONYMOUS: Use anonymous connection to storage provider. DEPRECATED: this is now the default; specify use_managed_credentials for non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, cloud provider user settings will be used instead. Supported values: TRUE, FALSE. The default value is FALSE.
USE_HTTPS: Use HTTPS to connect to the data source if true, otherwise use HTTP. Supported values: TRUE, FALSE. The default value is TRUE.
SCHEMA_REGISTRY_LOCATION: Location of the Confluent Schema Registry in '[storage_path[:storage_port]]' format.
SCHEMA_REGISTRY_CREDENTIAL: Confluent Schema Registry credential object name.
SCHEMA_REGISTRY_PORT: Confluent Schema Registry port (optional).
The default value is an empty Map.
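To make the parameter and option descriptions above concrete, here is a hedged sketch of assembling a request for an S3-backed data source via this constructor; the bucket, region, credential, and data source names are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

import com.gpudb.protocol.CreateDatasourceRequest;

public class S3DatasourceRequestSketch {
    // Assembles a request for an S3-backed data source.
    // All names and values below are illustrative placeholders.
    static CreateDatasourceRequest build() {
        Map<String, String> options = new HashMap<>();
        options.put(CreateDatasourceRequest.Options.S3_BUCKET_NAME, "example-bucket");
        options.put(CreateDatasourceRequest.Options.S3_REGION, "us-east-1");
        options.put(CreateDatasourceRequest.Options.CREDENTIAL, "example_s3_credential");

        return new CreateDatasourceRequest(
                "example_s3_source",     // name
                "s3://example-bucket",   // location
                "",                      // userName; empty when a credential object is used
                "",                      // password; empty when a credential object is used
                options);
    }
}
```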
public static org.apache.avro.Schema getClassSchema()
This method supports the Avro framework and is not intended to be called directly by the user.
public String getName()
Name of the data source to be created.
Returns:
The current value of name.

public CreateDatasourceRequest setName(String name)
Name of the data source to be created.
Parameters:
name - The new value for name.
Returns:
this to mimic the builder pattern.

public String getLocation()
Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'jdbc', 'kafka', 'confluent', and 's3'.
Returns:
The current value of location.

public CreateDatasourceRequest setLocation(String location)
Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'jdbc', 'kafka', 'confluent', and 's3'.
Parameters:
location - The new value for location.
Returns:
this to mimic the builder pattern.

public String getUserName()
Name of the remote system user; may be an empty string.
Returns:
The current value of userName.

public CreateDatasourceRequest setUserName(String userName)
Name of the remote system user; may be an empty string.
Parameters:
userName - The new value for userName.
Returns:
this to mimic the builder pattern.

public String getPassword()
Password for the remote system user; may be an empty string.
Returns:
The current value of password.

public CreateDatasourceRequest setPassword(String password)
Password for the remote system user; may be an empty string.
Parameters:
password - The new value for password.
Returns:
this to mimic the builder pattern.
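Because each setter returns this, a request can also be assembled fluently from the default constructor; a brief sketch follows, in which every name and value is an illustrative placeholder.

```java
import java.util.HashMap;
import java.util.Map;

import com.gpudb.protocol.CreateDatasourceRequest;

public class FluentRequestSketch {
    // Builds a request via the no-argument constructor and chained setters.
    static CreateDatasourceRequest build() {
        Map<String, String> options = new HashMap<>();
        options.put(CreateDatasourceRequest.Options.CONNECTION_TIMEOUT, "30");

        return new CreateDatasourceRequest()
                .setName("example_hdfs_source")               // placeholder name
                .setLocation("hdfs://namenode.example:8020")  // placeholder location
                .setUserName("hdfs_user")                     // may be an empty string
                .setPassword("")                              // may be an empty string
                .setOptions(options);
    }
}
```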
public Map<String,String> getOptions()
Optional parameters.
SKIP_VALIDATION: Bypass validation of connection to remote source. Supported values: TRUE, FALSE. The default value is FALSE.
CONNECTION_TIMEOUT: Timeout in seconds for connecting to this storage provider.
WAIT_TIMEOUT: Timeout in seconds for reading from this storage provider.
CREDENTIAL: Name of the credential object to be used in data source.
S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data source.
S3_REGION: Name of the Amazon S3 region where the given bucket is located.
S3_VERIFY_SSL: Whether to verify SSL connections. Supported values: TRUE (connect with SSL verification), FALSE (connect without verifying the SSL connection; for testing purposes, bypassing TLS errors, self-signed certificates, etc.). The default value is TRUE.
S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 source. Supported values: TRUE (the request's URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL), FALSE (use path-style URI for requests). The default value is TRUE.
S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has required S3 permissions that can be assumed for the given S3 IAM user.
S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used for encrypting data.
S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data.
HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user.
HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data source; this is valid only if tenant_id is specified.
AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data source.
AZURE_TENANT_ID: Active Directory tenant ID (or directory ID).
AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data source.
AZURE_OAUTH_TOKEN: OAuth token to access the given storage container.
GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data source.
GCS_PROJECT_ID: Name of the Google Cloud project to use as the data source.
GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data source.
IS_STREAM: Whether to load from Azure/GCS/S3 as a continuous stream. Supported values: TRUE, FALSE. The default value is FALSE.
KAFKA_TOPIC_NAME: Name of the Kafka topic to use as the data source.
JDBC_DRIVER_JAR_PATH: JDBC driver jar file location. This may be a KIFS file.
JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class.
ANONYMOUS: Use anonymous connection to storage provider. DEPRECATED: this is now the default; specify use_managed_credentials for non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, cloud provider user settings will be used instead. Supported values: TRUE, FALSE. The default value is FALSE.
USE_HTTPS: Use HTTPS to connect to the data source if true, otherwise use HTTP. Supported values: TRUE, FALSE. The default value is TRUE.
SCHEMA_REGISTRY_LOCATION: Location of the Confluent Schema Registry in '[storage_path[:storage_port]]' format.
SCHEMA_REGISTRY_CREDENTIAL: Confluent Schema Registry credential object name.
SCHEMA_REGISTRY_PORT: Confluent Schema Registry port (optional).
The default value is an empty Map.
Returns:
The current value of options.
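As an illustration of the Kafka and Confluent Schema Registry options listed above, the following hedged sketch assembles an options map for a Confluent-backed Kafka data source; the topic, registry location, and credential names are invented for this example.

```java
import java.util.HashMap;
import java.util.Map;

import com.gpudb.protocol.CreateDatasourceRequest;

public class KafkaOptionsSketch {
    // Assembles options for a Confluent Kafka data source with a schema registry.
    // The topic, registry location, and credential names are placeholders.
    static Map<String, String> kafkaOptions() {
        Map<String, String> options = new HashMap<>();
        options.put(CreateDatasourceRequest.Options.KAFKA_TOPIC_NAME, "example_topic");
        options.put(CreateDatasourceRequest.Options.CREDENTIAL, "example_kafka_credential");
        options.put(CreateDatasourceRequest.Options.SCHEMA_REGISTRY_LOCATION, "registry.example.com:8081");
        options.put(CreateDatasourceRequest.Options.SCHEMA_REGISTRY_CREDENTIAL, "example_registry_credential");
        return options;
    }
}
```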
public CreateDatasourceRequest setOptions(Map<String,String> options)
Optional parameters.
SKIP_VALIDATION: Bypass validation of connection to remote source. Supported values: TRUE, FALSE. The default value is FALSE.
CONNECTION_TIMEOUT: Timeout in seconds for connecting to this storage provider.
WAIT_TIMEOUT: Timeout in seconds for reading from this storage provider.
CREDENTIAL: Name of the credential object to be used in data source.
S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data source.
S3_REGION: Name of the Amazon S3 region where the given bucket is located.
S3_VERIFY_SSL: Whether to verify SSL connections. Supported values: TRUE (connect with SSL verification), FALSE (connect without verifying the SSL connection; for testing purposes, bypassing TLS errors, self-signed certificates, etc.). The default value is TRUE.
S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 source. Supported values: TRUE (the request's URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL), FALSE (use path-style URI for requests). The default value is TRUE.
S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has required S3 permissions that can be assumed for the given S3 IAM user.
S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used for encrypting data.
S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data.
HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user.
HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data source; this is valid only if tenant_id is specified.
AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data source.
AZURE_TENANT_ID: Active Directory tenant ID (or directory ID).
AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data source.
AZURE_OAUTH_TOKEN: OAuth token to access the given storage container.
GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data source.
GCS_PROJECT_ID: Name of the Google Cloud project to use as the data source.
GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data source.
IS_STREAM: Whether to load from Azure/GCS/S3 as a continuous stream. Supported values: TRUE, FALSE. The default value is FALSE.
KAFKA_TOPIC_NAME: Name of the Kafka topic to use as the data source.
JDBC_DRIVER_JAR_PATH: JDBC driver jar file location. This may be a KIFS file.
JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class.
ANONYMOUS: Use anonymous connection to storage provider. DEPRECATED: this is now the default; specify use_managed_credentials for non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, cloud provider user settings will be used instead. Supported values: TRUE, FALSE. The default value is FALSE.
USE_HTTPS: Use HTTPS to connect to the data source if true, otherwise use HTTP. Supported values: TRUE, FALSE. The default value is TRUE.
SCHEMA_REGISTRY_LOCATION: Location of the Confluent Schema Registry in '[storage_path[:storage_port]]' format.
SCHEMA_REGISTRY_CREDENTIAL: Confluent Schema Registry credential object name.
SCHEMA_REGISTRY_PORT: Confluent Schema Registry port (optional).
The default value is an empty Map.
Parameters:
options - The new value for options.
Returns:
this to mimic the builder pattern.

public org.apache.avro.Schema getSchema()
This method supports the Avro framework and is not intended to be called directly by the user.
Specified by:
getSchema in interface org.apache.avro.generic.GenericContainer
public Object get(int index)
This method supports the Avro framework and is not intended to be called directly by the user.
Specified by:
get in interface org.apache.avro.generic.IndexedRecord
Parameters:
index - the position of the field to get
Throws:
IndexOutOfBoundsException

public void put(int index, Object value)
This method supports the Avro framework and is not intended to be called directly by the user.
Specified by:
put in interface org.apache.avro.generic.IndexedRecord
Parameters:
index - the position of the field to set
value - the value to set
Throws:
IndexOutOfBoundsException