connection_timeout | Timeout in seconds for connecting to this data sink |
wait_timeout | Timeout in seconds for waiting for a response from this data sink |
credential | Name of the credential object to be used in this data sink |
s3_bucket_name | Name of the Amazon S3 bucket to use as the data sink |
s3_region | Name of the Amazon S3 region where the given bucket is located |
s3_verify_ssl | Whether to verify SSL connections. The default value is true. Supported values: true (connect with SSL verification); false (connect without verifying the SSL connection, e.g. for testing, bypassing TLS errors, self-signed certificates, etc.) |
s3_use_virtual_addressing | Whether to use virtual addressing when referencing the Amazon S3 sink. The default value is true. Supported values: true (the request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL); false (use path-style URIs for requests) |
s3_aws_role_arn | Amazon IAM role ARN with the required S3 permissions, which can be assumed by the given S3 IAM user |
s3_encryption_customer_algorithm | Customer encryption algorithm used for encrypting data |
s3_encryption_customer_key | Customer encryption key to encrypt or decrypt data |
s3_encryption_type | Server-side encryption type |
s3_kms_key_id | KMS key ID |
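Taken together, the S3-related settings above form an options map. The following is a minimal sketch, assuming string-valued options; the credential name, bucket, and region are hypothetical placeholders, not real resources:

```python
# Sketch of an options map for an Amazon S3 data sink, using the
# option names from the table above. All values are illustrative.
s3_sink_options = {
    "credential": "my_s3_credential",     # hypothetical credential object name
    "s3_bucket_name": "example-bucket",   # hypothetical bucket
    "s3_region": "us-east-1",
    "s3_verify_ssl": "true",              # default: true
    "s3_use_virtual_addressing": "true",  # default: true
}
```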
hdfs_kerberos_keytab | Kerberos keytab file location for the given HDFS user. This may be a KIFS file. |
hdfs_delegation_token | Delegation token for the given HDFS user |
hdfs_use_kerberos | Use Kerberos authentication for the given HDFS cluster. The default value is false. Supported values: true, false |
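As a sketch, a Kerberos-secured HDFS sink would combine the three HDFS options above; the keytab path below is a hypothetical KIFS location:

```python
# Sketch of an options map for a Kerberos-secured HDFS data sink,
# using the option names from the table. Values are illustrative.
hdfs_sink_options = {
    "hdfs_use_kerberos": "true",                             # default: false
    "hdfs_kerberos_keytab": "kifs://keytabs/hdfs_user.keytab",  # hypothetical KIFS path
}
```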
azure_storage_account_name | Name of the Azure storage account to use as the data sink; only valid if tenant_id is specified |
azure_container_name | Name of the Azure storage container to use as the data sink |
azure_tenant_id | Active Directory tenant ID (or directory ID) |
azure_sas_token | Shared access signature token for Azure storage account to use as the data sink |
azure_oauth_token | OAuth token to access the given storage container |
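A minimal sketch of the Azure options above, assuming SAS-token authentication; the account, container, and tenant ID are hypothetical placeholders:

```python
# Sketch of an options map for an Azure storage data sink, using the
# option names from the table. Note that azure_storage_account_name is
# only valid when azure_tenant_id is also specified.
azure_sink_options = {
    "azure_storage_account_name": "examplestorageacct",              # hypothetical
    "azure_container_name": "example-container",                     # hypothetical
    "azure_tenant_id": "00000000-0000-0000-0000-000000000000",       # placeholder
    "azure_sas_token": "<sas-token>",                                # placeholder
}
```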
gcs_bucket_name | Name of the Google Cloud Storage bucket to use as the data sink |
gcs_project_id | Name of the Google Cloud project to use as the data sink |
gcs_service_account_keys | Google Cloud service account keys to use for authenticating the data sink |
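Likewise, a sketch of the Google Cloud Storage options above; the bucket, project, and key contents are hypothetical placeholders:

```python
# Sketch of an options map for a Google Cloud Storage data sink,
# using the option names from the table. Values are illustrative.
gcs_sink_options = {
    "gcs_bucket_name": "example-bucket",                   # hypothetical bucket
    "gcs_project_id": "example-project",                   # hypothetical project
    "gcs_service_account_keys": "<service-account-key-json>",  # placeholder
}
```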
jdbc_driver_jar_path | JDBC driver jar file location |
jdbc_driver_class_name | Name of the JDBC driver class |
kafka_topic_name | Name of the Kafka topic to publish to, if the input parameter destination is a Kafka broker |
max_batch_size | Maximum number of records per notification message. The default value is '1'. |
max_message_size | Maximum size in bytes of each notification message. The default value is '1000000'. |
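To illustrate how the two limits above interact, the following hypothetical helper packs records into messages honoring both the record-count cap (max_batch_size) and the byte-size cap (max_message_size). The packing logic is a sketch, not the sink's actual implementation:

```python
import json

def batch_records(records, max_batch_size=1, max_message_size=1_000_000):
    """Group records into notification messages, honoring both the
    record-count cap (max_batch_size) and the encoded byte-size cap
    (max_message_size). Hypothetical helper for illustration only."""
    batches, current, current_size = [], [], 2  # 2 bytes for enclosing "[]"
    for rec in records:
        encoded = json.dumps(rec)
        size = len(encoded) + (1 if current else 0)  # +1 for comma separator
        if current and (len(current) >= max_batch_size
                        or current_size + size > max_message_size):
            batches.append(current)        # flush the full message
            current, current_size = [], 2
            size = len(encoded)
        current.append(rec)
        current_size += size
    if current:
        batches.append(current)
    return batches
```

With the defaults (max_batch_size of 1), every record becomes its own message; raising max_batch_size lets records share a message until either cap is hit.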
json_format | The desired format of JSON-encoded notification messages. The default value is flat. Supported values: flat (a single record is returned per message); nested (records are returned as an array per message) |
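The difference between the two json_format values can be sketched with a couple of hypothetical sample records:

```python
import json

records = [{"id": 1}, {"id": 2}]

# json_format = "flat": a single record is returned per message
flat_messages = [json.dumps(r) for r in records]

# json_format = "nested": records are returned as an array per message
nested_message = json.dumps(records)
```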
use_managed_credentials | When no credentials are supplied, anonymous access is used by default. If this is set to true, cloud provider user settings will be used instead. The default value is false. Supported values: true, false |
use_https | Use HTTPS to connect to the data sink if true; otherwise, use HTTP. The default value is true. Supported values: true, false |
skip_validation | Bypass validation of the connection to this data sink. The default value is false. Supported values: true, false |
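Finally, a sketch combining the general connection flags with a Kafka destination; the topic name is a hypothetical placeholder, and the numeric options are given as strings per the quoted defaults in the table:

```python
# Sketch of an options map for a Kafka data sink, combining the
# topic/sizing options with the general connection flags from the
# table. All values are illustrative.
kafka_sink_options = {
    "kafka_topic_name": "example-topic",  # hypothetical topic
    "max_batch_size": "10",               # default: '1'
    "max_message_size": "1000000",        # default: '1000000'
    "use_https": "true",                  # default: true
    "skip_validation": "false",           # default: false
}
```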