public class CreateDatasourceRequest extends Object implements org.apache.avro.generic.IndexedRecord

A set of parameters for GPUdb.createDatasource(CreateDatasourceRequest).

Creates a data source, which contains the location and connection information for a data store that is external to the database.
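For orientation, here is a minimal, hedged sketch of how this request class is typically used with GPUdb.createDatasource. The endpoint URL, data source name, credential name, bucket, and region are placeholders, and the import paths assume the standard com.gpudb / com.gpudb.protocol packages of the Kinetica Java API; option keys are taken from the constants documented under CreateDatasourceRequest.Options.

```java
import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.CreateDatasourceRequest;

import java.util.HashMap;
import java.util.Map;

public class CreateDatasourceExample {
    public static void main(String[] args) throws GPUdbException {
        // Placeholder endpoint; point this at your own database.
        GPUdb gpudb = new GPUdb("http://localhost:9191");

        // Option keys mirror the constants documented in CreateDatasourceRequest.Options.
        Map<String, String> options = new HashMap<>();
        options.put(CreateDatasourceRequest.Options.CREDENTIAL, "example_s3_credential");
        options.put(CreateDatasourceRequest.Options.S3_BUCKET_NAME, "example-bucket");
        options.put(CreateDatasourceRequest.Options.S3_REGION, "us-east-1");

        // Arguments: name, location, userName, password, options
        CreateDatasourceRequest request = new CreateDatasourceRequest(
                "example_s3_datasource",   // name of the data source to be created
                "s3://example-bucket",     // 'storage_provider_type://[storage_path[:storage_port]]'
                "",                        // userName may be an empty string
                "",                        // password may be an empty string
                options);

        gpudb.createDatasource(request);
    }
}
```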
| Modifier and Type | Class and Description |
|---|---|
| static class | CreateDatasourceRequest.Options: Optional parameters. |
| Constructor and Description |
|---|
| CreateDatasourceRequest(): Constructs a CreateDatasourceRequest object with default parameters. |
| CreateDatasourceRequest(String name, String location, String userName, String password, Map<String,String> options): Constructs a CreateDatasourceRequest object with the specified parameters. |
| Modifier and Type | Method and Description |
|---|---|
| boolean | equals(Object obj) |
| Object | get(int index): This method supports the Avro framework and is not intended to be called directly by the user. |
| static org.apache.avro.Schema | getClassSchema(): This method supports the Avro framework and is not intended to be called directly by the user. |
| String | getLocation() |
| String | getName() |
| Map<String,String> | getOptions() |
| String | getPassword() |
| org.apache.avro.Schema | getSchema(): This method supports the Avro framework and is not intended to be called directly by the user. |
| String | getUserName() |
| int | hashCode() |
| void | put(int index, Object value): This method supports the Avro framework and is not intended to be called directly by the user. |
| CreateDatasourceRequest | setLocation(String location) |
| CreateDatasourceRequest | setName(String name) |
| CreateDatasourceRequest | setOptions(Map<String,String> options) |
| CreateDatasourceRequest | setPassword(String password) |
| CreateDatasourceRequest | setUserName(String userName) |
| String | toString() |
public CreateDatasourceRequest()

Constructs a CreateDatasourceRequest object with default parameters.

public CreateDatasourceRequest(String name, String location, String userName, String password, Map<String,String> options)

Constructs a CreateDatasourceRequest object with the specified parameters.
Parameters:

name - Name of the data source to be created.

location - Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'jdbc', 'kafka', 'confluent', and 's3'.

userName - Name of the remote system user; may be an empty string.

password - Password for the remote system user; may be an empty string.

options - Optional parameters.

- SKIP_VALIDATION: Bypass validation of connection to remote source. Supported values: TRUE, FALSE. The default value is FALSE.
- CONNECTION_TIMEOUT: Timeout in seconds for connecting to this storage provider.
- WAIT_TIMEOUT: Timeout in seconds for reading from this storage provider.
- CREDENTIAL: Name of the credential object to be used in the data source.
- S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data source.
- S3_REGION: Name of the Amazon S3 region where the given bucket is located.
- S3_VERIFY_SSL: Set to false for testing purposes or when necessary to bypass TLS errors (e.g., self-signed certificates). Supported values: TRUE, FALSE. The default value is TRUE.
- S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 source. Supported values: TRUE (the request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL), FALSE (use path-style URIs for requests). The default value is TRUE.
- S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has the required S3 permissions and can be assumed for the given S3 IAM user.
- S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used to encrypt data.
- S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data.
- HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
- HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user.
- HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
- AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data source; this is valid only if tenant_id is specified.
- AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data source.
- AZURE_TENANT_ID: Active Directory tenant ID (or directory ID).
- AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data source.
- AZURE_OAUTH_TOKEN: OAuth token to access the given storage container.
- GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data source.
- GCS_PROJECT_ID: Name of the Google Cloud project to use as the data source.
- GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data source.
- IS_STREAM: To load from Azure/GCS/S3 as a stream continuously. Supported values: TRUE, FALSE. The default value is FALSE.
- KAFKA_TOPIC_NAME: Name of the Kafka topic to use as the data source.
- JDBC_DRIVER_JAR_PATH: JDBC driver jar file location. This may be a KIFS file.
- JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class.
- ANONYMOUS: Use an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify use_managed_credentials for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
- USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default; if this is set, the cloud provider's user settings are used instead. Supported values: TRUE, FALSE. The default value is FALSE.
- USE_HTTPS: Use https to connect to the data source if true; otherwise use http. Supported values: TRUE, FALSE. The default value is TRUE.
- SCHEMA_REGISTRY_LOCATION: Location of the Confluent Schema Registry in '[storage_path[:storage_port]]' format.
- SCHEMA_REGISTRY_CREDENTIAL: Confluent Schema Registry credential object name.
- SCHEMA_REGISTRY_PORT: Confluent Schema Registry port (optional).

The default value is an empty Map.
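As an illustration of how these option keys combine for a particular provider, the following hedged sketch assembles a Confluent (Kafka) source that reads through a Confluent Schema Registry. The broker address, topic, credential names, and the 'confluent://' location are placeholders chosen to match the documented location format, and the snippet assumes the same imports as the sketch above.

```java
// Placeholder values throughout; option keys mirror CreateDatasourceRequest.Options constants.
Map<String, String> kafkaOptions = new HashMap<>();
kafkaOptions.put(CreateDatasourceRequest.Options.CREDENTIAL, "example_kafka_credential");
kafkaOptions.put(CreateDatasourceRequest.Options.KAFKA_TOPIC_NAME, "example_topic");
kafkaOptions.put(CreateDatasourceRequest.Options.SCHEMA_REGISTRY_LOCATION, "schema-registry.example.com:8081");
kafkaOptions.put(CreateDatasourceRequest.Options.SCHEMA_REGISTRY_CREDENTIAL, "example_registry_credential");

CreateDatasourceRequest kafkaRequest = new CreateDatasourceRequest(
        "example_kafka_datasource",           // name
        "confluent://broker.example.com:9092", // location: 'storage_provider_type://[storage_path[:storage_port]]'
        "",                                   // userName may be an empty string
        "",                                   // password may be an empty string
        kafkaOptions);
```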
public static org.apache.avro.Schema getClassSchema()

This method supports the Avro framework and is not intended to be called directly by the user.

public String getName()
public CreateDatasourceRequest setName(String name)
Parameters: name - Name of the data source to be created.

Returns: this to mimic the builder pattern.

public String getLocation()
public CreateDatasourceRequest setLocation(String location)
Parameters: location - Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'jdbc', 'kafka', 'confluent', and 's3'.

Returns: this to mimic the builder pattern.

public String getUserName()
public CreateDatasourceRequest setUserName(String userName)
Parameters: userName - Name of the remote system user; may be an empty string.

Returns: this to mimic the builder pattern.

public String getPassword()
public CreateDatasourceRequest setPassword(String password)
Parameters: password - Password for the remote system user; may be an empty string.

Returns: this to mimic the builder pattern.

public Map<String,String> getOptions()
Returns:

Optional parameters.

- SKIP_VALIDATION: Bypass validation of connection to remote source. Supported values: TRUE, FALSE. The default value is FALSE.
- CONNECTION_TIMEOUT: Timeout in seconds for connecting to this storage provider.
- WAIT_TIMEOUT: Timeout in seconds for reading from this storage provider.
- CREDENTIAL: Name of the credential object to be used in the data source.
- S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data source.
- S3_REGION: Name of the Amazon S3 region where the given bucket is located.
- S3_VERIFY_SSL: Set to false for testing purposes or when necessary to bypass TLS errors (e.g., self-signed certificates). Supported values: TRUE, FALSE. The default value is TRUE.
- S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 source. Supported values: TRUE (the request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL), FALSE (use path-style URIs for requests). The default value is TRUE.
- S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has the required S3 permissions and can be assumed for the given S3 IAM user.
- S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used to encrypt data.
- S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data.
- HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
- HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user.
- HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
- AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data source; this is valid only if tenant_id is specified.
- AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data source.
- AZURE_TENANT_ID: Active Directory tenant ID (or directory ID).
- AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data source.
- AZURE_OAUTH_TOKEN: OAuth token to access the given storage container.
- GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data source.
- GCS_PROJECT_ID: Name of the Google Cloud project to use as the data source.
- GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data source.
- IS_STREAM: To load from Azure/GCS/S3 as a stream continuously. Supported values: TRUE, FALSE. The default value is FALSE.
- KAFKA_TOPIC_NAME: Name of the Kafka topic to use as the data source.
- JDBC_DRIVER_JAR_PATH: JDBC driver jar file location. This may be a KIFS file.
- JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class.
- ANONYMOUS: Use an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify use_managed_credentials for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
- USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default; if this is set, the cloud provider's user settings are used instead. Supported values: TRUE, FALSE. The default value is FALSE.
- USE_HTTPS: Use https to connect to the data source if true; otherwise use http. Supported values: TRUE, FALSE. The default value is TRUE.
- SCHEMA_REGISTRY_LOCATION: Location of the Confluent Schema Registry in '[storage_path[:storage_port]]' format.
- SCHEMA_REGISTRY_CREDENTIAL: Confluent Schema Registry credential object name.
- SCHEMA_REGISTRY_PORT: Confluent Schema Registry port (optional).

The default value is an empty Map.

public CreateDatasourceRequest setOptions(Map<String,String> options)
Parameters:

options - Optional parameters.

- SKIP_VALIDATION: Bypass validation of connection to remote source. Supported values: TRUE, FALSE. The default value is FALSE.
- CONNECTION_TIMEOUT: Timeout in seconds for connecting to this storage provider.
- WAIT_TIMEOUT: Timeout in seconds for reading from this storage provider.
- CREDENTIAL: Name of the credential object to be used in the data source.
- S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data source.
- S3_REGION: Name of the Amazon S3 region where the given bucket is located.
- S3_VERIFY_SSL: Set to false for testing purposes or when necessary to bypass TLS errors (e.g., self-signed certificates). Supported values: TRUE, FALSE. The default value is TRUE.
- S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 source. Supported values: TRUE (the request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL), FALSE (use path-style URIs for requests). The default value is TRUE.
- S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has the required S3 permissions and can be assumed for the given S3 IAM user.
- S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used to encrypt data.
- S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data.
- HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
- HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user.
- HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
- AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data source; this is valid only if tenant_id is specified.
- AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data source.
- AZURE_TENANT_ID: Active Directory tenant ID (or directory ID).
- AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data source.
- AZURE_OAUTH_TOKEN: OAuth token to access the given storage container.
- GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data source.
- GCS_PROJECT_ID: Name of the Google Cloud project to use as the data source.
- GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data source.
- IS_STREAM: To load from Azure/GCS/S3 as a stream continuously. Supported values: TRUE, FALSE. The default value is FALSE.
- KAFKA_TOPIC_NAME: Name of the Kafka topic to use as the data source.
- JDBC_DRIVER_JAR_PATH: JDBC driver jar file location. This may be a KIFS file.
- JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class.
- ANONYMOUS: Use an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify use_managed_credentials for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
- USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default; if this is set, the cloud provider's user settings are used instead. Supported values: TRUE, FALSE. The default value is FALSE.
- USE_HTTPS: Use https to connect to the data source if true; otherwise use http. Supported values: TRUE, FALSE. The default value is TRUE.
- SCHEMA_REGISTRY_LOCATION: Location of the Confluent Schema Registry in '[storage_path[:storage_port]]' format.
- SCHEMA_REGISTRY_CREDENTIAL: Confluent Schema Registry credential object name.
- SCHEMA_REGISTRY_PORT: Confluent Schema Registry port (optional).
The default value is an empty Map.

Returns: this to mimic the builder pattern.
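Because every setter returns this, a request can also be configured fluently instead of through the five-argument constructor. A hedged sketch, reusing the placeholder names and imports from the earlier examples:

```java
// Placeholder names and values; equivalent to passing the same arguments to the constructor.
Map<String, String> options = new HashMap<>();
options.put(CreateDatasourceRequest.Options.CREDENTIAL, "example_s3_credential");
options.put(CreateDatasourceRequest.Options.S3_BUCKET_NAME, "example-bucket");

CreateDatasourceRequest request = new CreateDatasourceRequest()
        .setName("example_s3_datasource")
        .setLocation("s3://example-bucket")
        .setUserName("")          // may be an empty string
        .setPassword("")          // may be an empty string
        .setOptions(options);
```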
public org.apache.avro.Schema getSchema()

This method supports the Avro framework and is not intended to be called directly by the user.

Specified by: getSchema in interface org.apache.avro.generic.GenericContainer

public Object get(int index)
This method supports the Avro framework and is not intended to be called directly by the user.

Specified by: get in interface org.apache.avro.generic.IndexedRecord

Parameters: index - the position of the field to get

Throws: IndexOutOfBoundsException

public void put(int index, Object value)

This method supports the Avro framework and is not intended to be called directly by the user.

Specified by: put in interface org.apache.avro.generic.IndexedRecord

Parameters: index - the position of the field to set; value - the value to set

Throws: IndexOutOfBoundsException