destination | Destination for the output data in format 'destination_type://path[:port]'. Supported destination types are 'http', 'https' and 'kafka'. |
connection_timeout | Timeout in seconds for connecting to this sink |
wait_timeout | Timeout in seconds for waiting for a response from this sink |
credential | Name of the credential object to be used in this data sink |
s3_bucket_name | Name of the Amazon S3 bucket to use as the data sink |
s3_region | Name of the Amazon S3 region where the given bucket is located |
s3_verify_ssl | Set to false for testing purposes or when necessary to bypass TLS errors (e.g., self-signed certificates). The default value is true. The supported values are true and false. |
s3_use_virtual_addressing | When true (the default), request URIs use virtual-hosted-style addressing, where the bucket name is part of the domain name in the URL. Set to false to use path-style URIs for requests. The default value is true. The supported values are true and false. |
s3_aws_role_arn | Amazon IAM Role ARN that has the required S3 permissions and can be assumed by the given S3 IAM user |
s3_encryption_customer_algorithm | Customer encryption algorithm used to encrypt data |
s3_encryption_customer_key | Customer encryption key to encrypt or decrypt data |
s3_encryption_type | Server-side encryption type |
s3_kms_key_id | ID of the KMS key to use for server-side encryption |
hdfs_kerberos_keytab | Kerberos keytab file location for the given HDFS user. This may be a KIFS file. |
hdfs_delegation_token | Delegation token for the given HDFS user |
hdfs_use_kerberos | Use Kerberos authentication for the given HDFS cluster. The default value is false. The supported values are true and false. |
azure_storage_account_name | Name of the Azure storage account to use as the data sink; this is valid only if tenant_id is specified |
azure_container_name | Name of the Azure storage container to use as the data sink |
azure_tenant_id | Active Directory tenant ID (or directory ID) |
azure_sas_token | Shared access signature token for Azure storage account to use as the data sink |
azure_oauth_token | OAuth token to access the given storage container |
gcs_bucket_name | Name of the Google Cloud Storage bucket to use as the data sink |
gcs_project_id | Name of the Google Cloud project to use as the data sink |
gcs_service_account_keys | Google Cloud service account keys to use for authenticating the data sink |
kafka_url | The publicly-accessible full path URL to the kafka broker, e.g., 'http://172.123.45.67:9300'. |
kafka_topic_name | Name of the Kafka topic to use for this data sink, if it references a Kafka broker |
anonymous | Create an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify use_managed_credentials for a non-anonymous connection. The default value is true. The supported values are true and false. |
use_managed_credentials | When no credentials are supplied, anonymous access is used by default. If this is set to true, the cloud provider's user settings are used instead. The default value is false. The supported values are true and false. |
use_https | Use HTTPS to connect to the data sink if true; otherwise, use HTTP. The default value is true. The supported values are true and false. |
max_batch_size | Maximum number of records per notification message. The default value is '1'. |
max_message_size | Maximum size in bytes of each notification message. The default value is '1000000'. |
json_format | The desired format of JSON-encoded notification messages. If nested, records are returned as an array; otherwise, only a single record is returned per message. The default value is flat. The supported values are flat and nested. |
skip_validation | Bypass validation of the connection to this data sink. The default value is false. The supported values are true and false. |
schema_name | Updates the schema name. If schema_name doesn't exist, an error will be thrown. If schema_name is empty, then the user's default schema will be used. |
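The options above can be collected into a single map when configuring a data sink. The following is a minimal, hypothetical client-side sketch of assembling such an options map with the documented defaults and rejecting obviously invalid values before submission; the helper function and its validation logic are illustrative assumptions, not part of any official API.

```python
# Hypothetical helper that merges user-supplied data-sink options onto the
# documented defaults and sanity-checks boolean and enum values.  Option
# names and defaults are taken from the table above; everything else is
# an illustrative assumption.

BOOLEAN_DEFAULTS = {
    "s3_verify_ssl": "true",
    "s3_use_virtual_addressing": "true",
    "hdfs_use_kerberos": "false",
    "anonymous": "true",
    "use_managed_credentials": "false",
    "use_https": "true",
    "skip_validation": "false",
}

def build_datasink_options(**overrides):
    """Return a complete options map, raising ValueError on bad values."""
    options = dict(BOOLEAN_DEFAULTS)
    options["max_batch_size"] = "1"        # records per notification message
    options["max_message_size"] = "1000000"  # bytes per notification message
    options["json_format"] = "flat"

    for key, value in overrides.items():
        # Normalize Python booleans to the string form the table documents.
        value = str(value).lower() if isinstance(value, bool) else str(value)
        if key in BOOLEAN_DEFAULTS and value not in ("true", "false"):
            raise ValueError(f"{key} must be 'true' or 'false', got {value!r}")
        if key == "json_format" and value not in ("flat", "nested"):
            raise ValueError("json_format must be 'flat' or 'nested'")
        options[key] = value
    return options

# Example: an HTTP sink with batching enabled.
opts = build_datasink_options(use_https=False, max_batch_size=100)
```

Keeping all values as strings mirrors how such option maps are typically transmitted, and centralizing the defaults in one place makes it easy to see which settings a given sink overrides.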