Kinetica C# API Version 7.1.10.0
kinetica.AlterDatasinkRequest Class Reference

A set of parameters for Kinetica.alterDatasink(string,IDictionary{string, string},IDictionary{string, string}).

Inheritance diagram for kinetica.AlterDatasinkRequest
Collaboration diagram for kinetica.AlterDatasinkRequest

Classes

struct  DatasinkUpdatesMap
 Map containing the properties of the data sink to be updated.
 

Public Member Functions

 AlterDatasinkRequest ()
 Constructs an AlterDatasinkRequest object with default parameters.

 AlterDatasinkRequest (string name, IDictionary< string, string > datasink_updates_map, IDictionary< string, string > options)
 Constructs an AlterDatasinkRequest object with the specified parameters.

Public Member Functions inherited from kinetica.KineticaData
 KineticaData (KineticaType type)
 Constructor from a KineticaType.

 KineticaData (System.Type type=null)
 Default constructor, with an optional System.Type.

object Get (int fieldPos)
 Retrieve a specific property from this object.

void Put (int fieldPos, object fieldValue)
 Write a specific property to this object.
 

Properties

string name [get, set]
 Name of the data sink to be altered.

IDictionary< string, string > datasink_updates_map [get, set]
 Map containing the properties of the data sink to be updated.

IDictionary< string, string > options = new Dictionary<string, string>() [get, set]
 Optional parameters.
 
Properties inherited from kinetica.KineticaData
Schema Schema [get]
 Avro Schema for this class.
 

Additional Inherited Members

Static Public Member Functions inherited from kinetica.KineticaData
static RecordSchema SchemaFromType (System.Type t, KineticaType ktype=null)
 Create an Avro Schema from a System.Type and a KineticaType.
 

Detailed Description

A set of parameters for Kinetica.alterDatasink(string,IDictionary{string, string},IDictionary{string, string}).


Alters the properties of an existing data sink.

Definition at line 21 of file AlterDatasink.cs.
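
For orientation, a minimal usage sketch follows. The endpoint URL and the sink name "my_datasink" are hypothetical placeholders, and the map keys assume the string constants documented in DatasinkUpdatesMap below, as is the convention throughout this API.

// Minimal sketch: alter an existing data sink's timeouts via
// Kinetica.alterDatasink(). Endpoint and sink name are placeholders.
using System.Collections.Generic;
using kinetica;

public static class AlterDatasinkExample
{
    public static void Main()
    {
        // Connect to a Kinetica instance (placeholder URL).
        Kinetica kdb = new Kinetica("http://localhost:9191");

        // Keys are DatasinkUpdatesMap constants documented below.
        var updates = new Dictionary<string, string>
        {
            { AlterDatasinkRequest.DatasinkUpdatesMap.CONNECTION_TIMEOUT, "30" },
            { AlterDatasinkRequest.DatasinkUpdatesMap.WAIT_TIMEOUT, "60" }
        };

        // No optional parameters for this call.
        var options = new Dictionary<string, string>();

        kdb.alterDatasink("my_datasink", updates, options);
    }
}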

Constructor & Destructor Documentation

kinetica.AlterDatasinkRequest.AlterDatasinkRequest () [inline]

Constructs an AlterDatasinkRequest object with default parameters.

Definition at line 774 of file AlterDatasink.cs.

kinetica.AlterDatasinkRequest.AlterDatasinkRequest (string name, IDictionary< string, string > datasink_updates_map, IDictionary< string, string > options) [inline]

Constructs an AlterDatasinkRequest object with the specified parameters.

Parameters
name: Name of the data sink to be altered. Must be an existing data sink.
datasink_updates_map: Map containing the properties of the data sink to be updated. Error if empty.
  • DESTINATION: Destination for the output data, in the format 'destination_type://path[:port]'. Supported destination types are 'http', 'https', and 'kafka'.
  • CONNECTION_TIMEOUT: Timeout in seconds for connecting to this sink.
  • WAIT_TIMEOUT: Timeout in seconds for waiting for a response from this sink.
  • CREDENTIAL: Name of the credential object to be used in this data sink.
  • S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data sink.
  • S3_REGION: Name of the Amazon S3 region where the given bucket is located.
  • S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has the required S3 permissions and which can be assumed by the given S3 IAM user.
  • HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
  • HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user.
  • HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
  • AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data sink; this is valid only if tenant_id is specified.
  • AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data sink.
  • AZURE_TENANT_ID: Active Directory tenant ID (or directory ID).
  • AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data sink.
  • AZURE_OAUTH_TOKEN: OAuth token to access the given storage container.
  • GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data sink.
  • GCS_PROJECT_ID: Name of the Google Cloud project to use as the data sink.
  • GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data sink.
  • KAFKA_URL: The publicly-accessible full path URL to the Kafka broker, e.g., 'http://172.123.45.67:9300'.
  • KAFKA_TOPIC_NAME: Name of the Kafka topic to use for this data sink, if it references a Kafka broker.
  • ANONYMOUS: Create an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify USE_MANAGED_CREDENTIALS for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
  • USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, cloud provider user settings are used instead. Supported values: TRUE, FALSE. The default value is FALSE.
  • USE_HTTPS: Use https to connect to the data sink if true, otherwise use http. Supported values: TRUE, FALSE. The default value is TRUE.
  • MAX_BATCH_SIZE: Maximum number of records per notification message. The default value is '1'.
  • MAX_MESSAGE_SIZE: Maximum size in bytes of each notification message. The default value is '1000000'.
  • JSON_FORMAT: The desired format of JSON-encoded notification messages. If NESTED, records are returned as an array; otherwise, only a single record per message is returned. Supported values: FLAT, NESTED. The default value is FLAT.
  • SKIP_VALIDATION: Bypass validation of the connection to this data sink. Supported values: TRUE, FALSE. The default value is FALSE.
  • SCHEMA_NAME: Updates the schema name. If schema_name doesn't exist, an error will be thrown. If schema_name is empty, the user's default schema will be used.
options: Optional parameters.

Definition at line 1046 of file AlterDatasink.cs.
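
For illustration, a hedged sketch of constructing this request directly follows; the sink name, Kafka destination, and topic name are hypothetical placeholder values.

// Sketch: build the request with the documented three-argument constructor.
// "my_datasink", the Kafka destination, and "alerts" are hypothetical.
using System.Collections.Generic;
using kinetica;

var request = new AlterDatasinkRequest(
    "my_datasink",
    new Dictionary<string, string>
    {
        { AlterDatasinkRequest.DatasinkUpdatesMap.DESTINATION,
          "kafka://172.123.45.67:9300" },
        { AlterDatasinkRequest.DatasinkUpdatesMap.KAFKA_TOPIC_NAME, "alerts" }
    },
    new Dictionary<string, string>()  // no optional parameters
);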

Property Documentation

IDictionary<string, string> kinetica.AlterDatasinkRequest.datasink_updates_map [get, set]

Map containing the properties of the data sink to be updated.

Error if empty.

  • DESTINATION: Destination for the output data, in the format 'destination_type://path[:port]'. Supported destination types are 'http', 'https', and 'kafka'.
  • CONNECTION_TIMEOUT: Timeout in seconds for connecting to this sink.
  • WAIT_TIMEOUT: Timeout in seconds for waiting for a response from this sink.
  • CREDENTIAL: Name of the credential object to be used in this data sink.
  • S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data sink.
  • S3_REGION: Name of the Amazon S3 region where the given bucket is located.
  • S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has the required S3 permissions and which can be assumed by the given S3 IAM user.
  • HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
  • HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user.
  • HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
  • AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data sink; this is valid only if tenant_id is specified.
  • AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data sink.
  • AZURE_TENANT_ID: Active Directory tenant ID (or directory ID).
  • AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data sink.
  • AZURE_OAUTH_TOKEN: OAuth token to access the given storage container.
  • GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data sink.
  • GCS_PROJECT_ID: Name of the Google Cloud project to use as the data sink.
  • GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data sink.
  • KAFKA_URL: The publicly-accessible full path URL to the Kafka broker, e.g., 'http://172.123.45.67:9300'.
  • KAFKA_TOPIC_NAME: Name of the Kafka topic to use for this data sink, if it references a Kafka broker.
  • ANONYMOUS: Create an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify USE_MANAGED_CREDENTIALS for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
  • USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, cloud provider user settings are used instead. Supported values: TRUE, FALSE. The default value is FALSE.
  • USE_HTTPS: Use https to connect to the data sink if true, otherwise use http. Supported values: TRUE, FALSE. The default value is TRUE.
  • MAX_BATCH_SIZE: Maximum number of records per notification message. The default value is '1'.
  • MAX_MESSAGE_SIZE: Maximum size in bytes of each notification message. The default value is '1000000'.
  • JSON_FORMAT: The desired format of JSON-encoded notification messages. If NESTED, records are returned as an array; otherwise, only a single record per message is returned. Supported values: FLAT, NESTED. The default value is FLAT.
  • SKIP_VALIDATION: Bypass validation of the connection to this data sink. Supported values: TRUE, FALSE. The default value is FALSE.
  • SCHEMA_NAME: Updates the schema name. If schema_name doesn't exist, an error will be thrown. If schema_name is empty, the user's default schema will be used.

Definition at line 766 of file AlterDatasink.cs.
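
A short sketch of populating this property on a default-constructed request is below; the sink name and the batch-size value are assumptions for illustration, and the key uses a DatasinkUpdatesMap constant from the list above.

// Sketch: set datasink_updates_map after default construction.
using System.Collections.Generic;
using kinetica;

var request = new AlterDatasinkRequest();
request.name = "my_datasink";  // hypothetical; must be an existing sink
request.datasink_updates_map = new Dictionary<string, string>
{
    // Raise the per-message record cap; key is a DatasinkUpdatesMap constant.
    { AlterDatasinkRequest.DatasinkUpdatesMap.MAX_BATCH_SIZE, "100" }
};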

string kinetica.AlterDatasinkRequest.name [get, set]

Name of the data sink to be altered.

Must be an existing data sink.

Definition at line 499 of file AlterDatasink.cs.

IDictionary<string, string> kinetica.AlterDatasinkRequest.options = new Dictionary<string, string>() [get, set]

Optional parameters.

Definition at line 769 of file AlterDatasink.cs.


The documentation for this class was generated from the following file:
AlterDatasink.cs