location | Location of the remote storage, in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'kafka', and 's3' (example usage follows this table). |
user_name | Name of the remote system user; may be an empty string |
password | Password for the remote system user; may be an empty string |
skip_validation | Bypass validation of the connection to the remote source. The default value is false. The supported values are 'true' and 'false'. |
connection_timeout | Timeout in seconds for connecting to this storage provider |
wait_timeout | Timeout in seconds for reading from this storage provider |
credential | Name of the credential object to be used in the data source |
s3_bucket_name | Name of the Amazon S3 bucket to use as the data source |
s3_region | Name of the Amazon S3 region where the given bucket is located |
s3_verify_ssl | Set to false for testing purposes or when necessary to bypass TLS errors (e.g., self-signed certificates). The default value is true. The supported values are 'true' and 'false'. |
s3_use_virtual_addressing | Whether to use virtual addressing when referencing the Amazon S3 source. The default value is true. The supported values are 'true' (the request URI is specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL) and 'false' (use path-style URIs for requests). |
s3_aws_role_arn | Amazon IAM Role ARN which has the required S3 permissions and can be assumed by the given S3 IAM user |
s3_encryption_customer_algorithm | Customer encryption algorithm used for encrypting data |
s3_encryption_customer_key | Customer encryption key to encrypt or decrypt data |
hdfs_kerberos_keytab | Kerberos keytab file location for the given HDFS user. This may be a KIFS file. |
hdfs_delegation_token | Delegation token for the given HDFS user |
hdfs_use_kerberos | Use Kerberos authentication for the given HDFS cluster. The default value is false. The supported values are 'true' and 'false'. |
azure_storage_account_name | Name of the Azure storage account to use as the data source; this is valid only if tenant_id is specified |
azure_container_name | Name of the Azure storage container to use as the data source |
azure_tenant_id | Active Directory tenant ID (or directory ID) |
azure_sas_token | Shared access signature token for Azure storage account to use as the data source |
azure_oauth_token | OAuth token to access given storage container |
gcs_bucket_name | Name of the Google Cloud Storage bucket to use as the data source |
gcs_project_id | Name of the Google Cloud project to use as the data source |
gcs_service_account_keys | Google Cloud service account keys to use for authenticating the data source |
kafka_url | The publicly-accessible full path URL to the Kafka broker, e.g., 'http://172.123.45.67:9300'. |
kafka_topic_name | Name of the Kafka topic to use as the data source |
jdbc_driver_jar_path | JDBC driver jar file location. This may be a KIFS file. |
jdbc_driver_class_name | Name of the JDBC driver class |
anonymous | Create an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify use_managed_credentials for a non-anonymous connection. The default value is true. The supported values are 'true' and 'false'. |
use_managed_credentials | When no credentials are supplied, anonymous access is used by default. If this is set to true, the cloud provider user settings are used instead. The default value is false. The supported values are 'true' and 'false'. |
use_https | Use HTTPS to connect to the data source if true; otherwise, use HTTP. The default value is true. The supported values are 'true' and 'false'. |
schema_name | Updates the schema name. If schema_name doesn't exist, an error will be thrown. If schema_name is empty, then the user's default schema will be used. |
schema_registry_location | Location of the Confluent Schema Registry, in '[storage_path[:storage_port]]' format. |
schema_registry_credential | Confluent Schema Registry credential object name. |
schema_registry_port | Confluent Schema Registry port (optional). |
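To make the table concrete, below is a minimal sketch of creating an Amazon S3 data source with the Kinetica Python client. The host URL, data source name, bucket, region, and credential name are all illustrative assumptions, and the `create_datasource` call is assumed to match the client's auto-generated endpoint wrapper; check the API reference for the exact signature.

```python
# A minimal sketch, assuming the Kinetica Python client ('gpudb') and its
# create_datasource endpoint wrapper; all names and URLs are placeholders.
import gpudb

db = gpudb.GPUdb(host="http://localhost:9191")  # hypothetical cluster URL

response = db.create_datasource(
    name="example_s3_ds",
    location="s3://example-bucket",        # 'storage_provider_type://storage_path'
    user_name="",                          # empty when a credential object is used
    password="",
    options={                              # string-to-string map of table options
        "s3_bucket_name": "example-bucket",
        "s3_region": "us-east-1",
        "credential": "example_s3_credential",   # pre-created credential object
        "s3_use_virtual_addressing": "true",     # virtual-hosted-style URIs (default)
    },
)
print(response)
```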
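An HDFS source with Kerberos combines hdfs_use_kerberos with a keytab, which per the table may be a KiFS file. The principal, namenode address, and keytab path below, including the 'kifs://' prefix, are hypothetical.

```python
# Sketch: HDFS data source with Kerberos; names, paths, and the keytab
# location format are assumptions, not verified values.
import gpudb

db = gpudb.GPUdb(host="http://localhost:9191")  # hypothetical cluster URL

response = db.create_datasource(
    name="example_hdfs_ds",
    location="hdfs://namenode.example.com:8020",
    user_name="hdfs_user@EXAMPLE.COM",     # Kerberos principal for the HDFS user
    password="",
    options={
        "hdfs_use_kerberos": "true",
        # Keytab uploaded to KiFS beforehand (assumed path format)
        "hdfs_kerberos_keytab": "kifs://keytabs/hdfs_user.keytab",
    },
)
```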
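For Azure, one common pattern is authenticating with a shared access signature token; the storage account is assumed to be carried in the location string here, and the container name and token are placeholders.

```python
# Sketch: Azure Blob Storage data source using a SAS token; the location
# format and all names are placeholder assumptions.
import gpudb

db = gpudb.GPUdb(host="http://localhost:9191")  # hypothetical cluster URL

response = db.create_datasource(
    name="example_azure_ds",
    location="azure://examplestorageaccount",
    user_name="",
    password="",
    options={
        "azure_container_name": "example-container",
        "azure_sas_token": "<sas-token>",  # shared access signature for the account
    },
)
```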
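A Google Cloud Storage source can authenticate with service account keys; reading the keys from a local JSON key file and passing them as a string is an assumption about the expected format.

```python
# Sketch: GCS data source authenticated with service account keys; the
# key-file handling and all names are assumptions.
import gpudb

db = gpudb.GPUdb(host="http://localhost:9191")  # hypothetical cluster URL

with open("service_account.json") as f:    # hypothetical key file
    sa_keys = f.read()

response = db.create_datasource(
    name="example_gcs_ds",
    location="gcs://example-bucket",
    user_name="",
    password="",
    options={
        "gcs_bucket_name": "example-bucket",
        "gcs_project_id": "example-project",
        "gcs_service_account_keys": sa_keys,   # assumed to be the JSON key contents
    },
)
```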
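Finally, a Kafka data source pairs a broker location with a topic and, optionally, a Confluent Schema Registry; the broker and registry addresses and the credential name below are placeholders.

```python
# Sketch: Kafka data source with an attached Confluent Schema Registry;
# addresses and credential names are placeholders.
import gpudb

db = gpudb.GPUdb(host="http://localhost:9191")  # hypothetical cluster URL

response = db.create_datasource(
    name="example_kafka_ds",
    location="kafka://172.123.45.67:9300",   # broker in 'host:port' form
    user_name="",
    password="",
    options={
        "kafka_topic_name": "example_topic",
        "schema_registry_location": "172.123.45.68:8081",  # '[storage_path[:storage_port]]'
        "schema_registry_credential": "example_sr_credential",
    },
)
```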