Alter Data Sink

Alters the properties of an existing data sink.

Input Parameter Description

name (string)
  Name of the data sink to be altered. Must be an existing data sink.

datasink_updates_map (map of string to strings)
  Map containing the properties of the data sink to be updated. Error if empty.

Supported Parameters (keys) for datasink_updates_map:

destination
  Destination for the output data, in the format 'destination_type://path[:port]'. Supported destination types are 'http', 'https', and 'kafka'.

connection_timeout
  Timeout in seconds for connecting to this sink.

wait_timeout
  Timeout in seconds for waiting for a response from this sink.

credential
  Name of the credential object to be used in this data sink.

s3_bucket_name
  Name of the Amazon S3 bucket to use as the data sink.

s3_region
  Name of the Amazon S3 region where the given bucket is located.

s3_verify_ssl
  Set to false for testing purposes or when necessary to bypass TLS errors (e.g., self-signed certificates). The default value is true. The supported values are:
    • true
    • false

s3_use_virtual_addressing
  When true, the request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL. Set to false to use path-style URIs for requests. The default value is true. The supported values are:
    • true
    • false

s3_aws_role_arn
  Amazon IAM Role ARN which has the required S3 permissions and can be assumed for the given S3 IAM user.

s3_encryption_customer_algorithm
  Customer encryption algorithm used for encrypting data.

s3_encryption_customer_key
  Customer encryption key to encrypt or decrypt data.

s3_encryption_type
  Server-side encryption type.

s3_kms_key_id
  KMS key.

hdfs_kerberos_keytab
  Kerberos keytab file location for the given HDFS user. This may be a KIFS file.

hdfs_delegation_token
  Delegation token for the given HDFS user.

hdfs_use_kerberos
  Use Kerberos authentication for the given HDFS cluster. The default value is false. The supported values are:
    • true
    • false

azure_storage_account_name
  Name of the Azure storage account to use as the data sink; this is valid only if tenant_id is specified.

azure_container_name
  Name of the Azure storage container to use as the data sink.

azure_tenant_id
  Active Directory tenant ID (or directory ID).

azure_sas_token
  Shared access signature token for the Azure storage account to use as the data sink.

azure_oauth_token
  OAuth token to access the given storage container.

gcs_bucket_name
  Name of the Google Cloud Storage bucket to use as the data sink.

gcs_project_id
  Name of the Google Cloud project to use as the data sink.

gcs_service_account_keys
  Google Cloud service account keys to use for authenticating the data sink.

kafka_url
  The publicly accessible full path URL to the Kafka broker, e.g., 'http://172.123.45.67:9300'.

kafka_topic_name
  Name of the Kafka topic to use for this data sink, if it references a Kafka broker.

anonymous
  Create an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify use_managed_credentials for a non-anonymous connection. The default value is true. The supported values are:
    • true
    • false

use_managed_credentials
  When no credentials are supplied, anonymous access is used by default. If this is set, cloud provider user settings are used instead. The default value is false. The supported values are:
    • true
    • false

use_https
  Use https to connect to the data sink if true; otherwise use http. The default value is true. The supported values are:
    • true
    • false

max_batch_size
  Maximum number of records per notification message. The default value is '1'.

max_message_size
  Maximum size in bytes of each notification message. The default value is '1000000'.

json_format
  The desired format of JSON-encoded notification messages. If nested, records are returned as an array; otherwise, only a single record per message is returned. The default value is flat. The supported values are:
    • flat
    • nested

skip_validation
  Bypass validation of the connection to this data sink. The default value is false. The supported values are:
    • true
    • false

schema_name
  Updates the schema name. If schema_name doesn't exist, an error will be thrown. If schema_name is empty, then the user's default schema will be used.
options (map of string to strings)
  Optional parameters.
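
For illustration, the following is a minimal sketch of altering an existing data sink through this endpoint. It assumes a Python client (the gpudb module) that exposes the endpoint as alter_datasink(name, datasink_updates_map, options); the connection URL, sink name, topic, and credential name are placeholders rather than values taken from this documentation.

    # Minimal sketch; the client API, connection URL, sink name, and credential
    # name are assumptions for illustration only.
    import gpudb

    db = gpudb.GPUdb(host="http://localhost:9191")  # placeholder connection URL

    response = db.alter_datasink(
        name="my_kafka_sink",  # must be an existing data sink
        datasink_updates_map={
            "destination": "kafka://172.123.45.67:9300",  # new broker destination
            "kafka_topic_name": "alerts",                 # topic on that broker
            "connection_timeout": "30",                   # seconds allowed to connect
            "credential": "kafka_cred",                   # existing credential object
        },
        options={},  # no optional parameters
    )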

Output Parameter Description

updated_properties_map (map of string to strings)
  Map of values updated.

info (map of string to strings)
  Additional information.
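
Continuing the sketch above, the response can then be inspected for the output fields described here, assuming the client returns the endpoint response as a dict-like object keyed by these field names.

    # Report which properties the server actually updated.
    for key, value in response["updated_properties_map"].items():
        print(f"updated {key} -> {value}")

    print(response["info"])  # any additional information returned by the server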