A set of parameters for Kinetica.createDatasource.

string name [get, set]
    Name of the data source to be created.

string location [get, set]
    Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format.

string user_name [get, set]
    Name of the remote system user; may be an empty string.

string password [get, set]
    Password for the remote system user; may be an empty string.

IDictionary<string, string> options = new Dictionary<string, string>() [get, set]
    Optional parameters.

Schema Schema [get]
    Avro Schema for this class.
A set of parameters for Kinetica.createDatasource.
Creates a data source, which contains the location and connection information for a data store that is external to the database.
Definition at line 18 of file CreateDatasource.cs.
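For orientation, a minimal end-to-end sketch follows. The cluster URL, data source name, and bucket are hypothetical, and the Kinetica.createDatasource overload that accepts this request object is assumed to behave like the other request-object overloads in this API.

    using kinetica;

    public class CreateDatasourceExample
    {
        public static void Main()
        {
            // Connect to a (hypothetical) Kinetica instance; adjust the URL as needed.
            Kinetica db = new Kinetica("http://localhost:9191");

            // Describe an external S3 store; the source name and bucket are hypothetical.
            var request = new CreateDatasourceRequest(
                "example_s3_source",    // name of the data source to create
                "s3://example-bucket",  // location: storage_provider_type://storage_path
                "",                     // user_name: may be empty (e.g. anonymous or credential-based access)
                "",                     // password:  may be empty
                null                    // options: use the defaults
            );

            // Submit the request (overload taking the request object is assumed).
            var response = db.createDatasource(request);
        }
    }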
◆ CreateDatasourceRequest() [1/2]

kinetica.CreateDatasourceRequest.CreateDatasourceRequest()  [inline]
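Because name, location, user_name, password, and options are all settable, the parameterless constructor can be used to build the request incrementally. A sketch with hypothetical values; the lowercase option key string is an assumption (see the key list under the second constructor below).

    using kinetica;

    // Build the request field by field after default construction.
    var request = new CreateDatasourceRequest();
    request.name      = "example_hdfs_source";
    request.location  = "hdfs://namenode.example.com:8020";
    request.user_name = "etl_user";
    request.password  = "";   // may be an empty string

    // options is pre-initialized to an empty Dictionary, so keys can be added directly;
    // the lowercase key string is assumed to correspond to CONNECTION_TIMEOUT.
    request.options["connection_timeout"] = "30";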
◆ CreateDatasourceRequest() [2/2]

kinetica.CreateDatasourceRequest.CreateDatasourceRequest(string name, string location, string user_name, string password, IDictionary<string, string> options = null)  [inline]
Constructs a CreateDatasourceRequest object with the specified parameters.

Parameters
    name       Name of the data source to be created.
    location   Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'jdbc', 'kafka', 'confluent', and 's3'.
    user_name  Name of the remote system user; may be an empty string.
    password   Password for the remote system user; may be an empty string.
    options    Optional parameters.
        - SKIP_VALIDATION: Bypass validation of the connection to the remote source. Supported values: TRUE, FALSE. The default value is FALSE.
        - CONNECTION_TIMEOUT: Timeout in seconds for connecting to this storage provider.
        - WAIT_TIMEOUT: Timeout in seconds for reading from this storage provider.
        - CREDENTIAL: Name of the credential object to be used in the data source.
        - S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data source.
        - S3_REGION: Name of the Amazon S3 region where the given bucket is located.
        - S3_VERIFY_SSL: Whether to verify SSL connections. Supported values:
            - TRUE: Connect with SSL verification.
            - FALSE: Connect without verifying the SSL connection; for testing purposes, bypassing TLS errors, self-signed certificates, etc.
          The default value is TRUE.
        - S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 source. Supported values:
            - TRUE: The request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL.
            - FALSE: Use path-style URIs for requests.
          The default value is TRUE.
        - S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has the required S3 permissions and can be assumed for the given S3 IAM user.
        - S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used for encrypting data.
        - S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data.
        - HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
        - HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user.
        - HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
        - AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data source; this is valid only if tenant_id is specified.
        - AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data source.
        - AZURE_TENANT_ID: Active Directory tenant ID (or directory ID).
        - AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data source.
        - AZURE_OAUTH_TOKEN: OAuth token to access the given storage container.
        - GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data source.
        - GCS_PROJECT_ID: Name of the Google Cloud project to use as the data source.
        - GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data source.
        - IS_STREAM: Load from Azure/GCS/S3 continuously as a stream. Supported values: TRUE, FALSE. The default value is FALSE.
        - KAFKA_TOPIC_NAME: Name of the Kafka topic to use as the data source.
        - JDBC_DRIVER_JAR_PATH: JDBC driver jar file location. This may be a KIFS file.
        - JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class.
        - ANONYMOUS: Use an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify use_managed_credentials for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
        - USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, the cloud provider user settings are used instead. Supported values: TRUE, FALSE. The default value is FALSE.
        - USE_HTTPS: Use HTTPS to connect to the data source if true, otherwise use HTTP. Supported values: TRUE, FALSE. The default value is TRUE.
        - SCHEMA_REGISTRY_LOCATION: Location of the Confluent Schema Registry in '[storage_path[:storage_port]]' format.
        - SCHEMA_REGISTRY_CREDENTIAL: Confluent Schema Registry credential object name.
        - SCHEMA_REGISTRY_PORT: Confluent Schema Registry port (optional).
      The default value is an empty Dictionary.
Definition at line 930 of file CreateDatasource.cs.
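As an illustration of the options parameter, a sketch of an S3-backed data source follows. The bucket, region, and credential names are hypothetical, and the lowercase key and value strings are assumed to correspond to the uppercase option names listed above.

    using System.Collections.Generic;
    using kinetica;

    // Hypothetical S3 data source using a pre-created credential object.
    var options = new Dictionary<string, string>
    {
        { "credential",     "example_s3_credential" },  // CREDENTIAL
        { "s3_bucket_name", "example-bucket" },         // S3_BUCKET_NAME
        { "s3_region",      "us-east-1" },              // S3_REGION
        { "s3_verify_ssl",  "true" }                    // S3_VERIFY_SSL
    };

    var request = new CreateDatasourceRequest(
        "example_s3_source",    // name
        "s3://example-bucket",  // location
        "",                     // user_name (empty; the credential object is used)
        "",                     // password
        options);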
◆ location
string kinetica.CreateDatasourceRequest.location  [get, set]
Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format.
Supported storage provider types are 'azure', 'gcs', 'hdfs', 'jdbc', 'kafka', 'confluent', and 's3'.
Definition at line 262 of file CreateDatasource.cs.
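A few illustrative location strings in that format (host, port, and bucket names are hypothetical):

    // Hypothetical examples of 'storage_provider_type://[storage_path[:storage_port]]'
    string s3Location    = "s3://example-bucket";
    string hdfsLocation  = "hdfs://namenode.example.com:8020";
    string kafkaLocation = "kafka://broker.example.com:9092";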
◆ name

string kinetica.CreateDatasourceRequest.name  [get, set]

Name of the data source to be created.
◆ options

IDictionary<string, string> kinetica.CreateDatasourceRequest.options = new Dictionary<string, string>()  [get, set]
Optional parameters. The supported keys and their defaults are the same as those listed for the options parameter of the constructor above. The default value is an empty Dictionary.
Definition at line 592 of file CreateDatasource.cs.
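The property can also be populated after construction. A hypothetical Kafka example with Confluent Schema Registry settings follows; the lowercase key strings are assumed to correspond to the uppercase option names documented for the constructor.

    using kinetica;

    // Hypothetical Kafka data source with Confluent Schema Registry options.
    var request = new CreateDatasourceRequest
    {
        name     = "example_kafka_source",
        location = "kafka://broker.example.com:9092"
    };

    // KAFKA_TOPIC_NAME, SCHEMA_REGISTRY_LOCATION, SCHEMA_REGISTRY_CREDENTIAL
    request.options["kafka_topic_name"]           = "orders";
    request.options["schema_registry_location"]   = "schema-registry.example.com:8081";
    request.options["schema_registry_credential"] = "example_registry_credential";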
◆ password
string kinetica.CreateDatasourceRequest.password  [get, set]

Password for the remote system user; may be an empty string.
Definition at line 270 of file CreateDatasource.cs.
◆ user_name

string kinetica.CreateDatasourceRequest.user_name  [get, set]

Name of the remote system user; may be an empty string.

The documentation for this class was generated from the following file:
    CreateDatasource.cs