A set of parameters for Kinetica.alterDatasource(string, IDictionary{string, string}, IDictionary{string, string}).

Properties

string name [get, set]
    Name of the data source to be altered.

IDictionary<string, string> datasource_updates_map [get, set]
    Map containing the properties of the data source to be updated.

IDictionary<string, string> options = new Dictionary<string, string>() [get, set]
    Optional parameters.

Schema Schema [get]
    Avro Schema for this class.
Detailed Description

A set of parameters for Kinetica.alterDatasource(string, IDictionary{string, string}, IDictionary{string, string}).

Alters the properties of an existing data source.

Definition at line 21 of file AlterDatasource.cs.
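A minimal sketch of invoking this endpoint through the Kinetica API; the connection URL, data source name, and map values below are illustrative assumptions, not taken from this reference:

    using System.Collections.Generic;
    using kinetica;

    // Hypothetical connection and data source name; adjust for your cluster.
    Kinetica db = new Kinetica("http://localhost:9191");

    // Update map keys are the lowercase forms of the constants documented
    // below (e.g., LOCATION -> "location"); assumed, verify against the API.
    IDictionary<string, string> updates = new Dictionary<string, string>
    {
        { "location", "kafka://172.123.45.67:9300" },
        { "kafka_topic_name", "orders" }   // illustrative topic name
    };

    // Alter the data source, passing an empty map for the optional parameters.
    db.alterDatasource("my_kafka_source", updates,
                       new Dictionary<string, string>());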
Constructor & Destructor Documentation

kinetica.AlterDatasourceRequest.AlterDatasourceRequest ()  [inline]

Constructs an AlterDatasourceRequest object with default parameters.
kinetica.AlterDatasourceRequest.AlterDatasourceRequest (string name, IDictionary<string, string> datasource_updates_map, IDictionary<string, string> options)  [inline]
Constructs an AlterDatasourceRequest object with the specified parameters.

Parameters

    name: Name of the data source to be altered. Must be an existing data source.

    datasource_updates_map: Map containing the properties of the data source to be updated. Error if empty. Valid keys are:

    - LOCATION: Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'kafka', and 's3'.
    - USER_NAME: Name of the remote system user; may be an empty string.
    - PASSWORD: Password for the remote system user; may be an empty string.
    - SKIP_VALIDATION: Bypass validation of the connection to the remote source. Supported values: TRUE, FALSE. The default value is FALSE.
    - CONNECTION_TIMEOUT: Timeout in seconds for connecting to this storage provider.
    - WAIT_TIMEOUT: Timeout in seconds for reading from this storage provider.
    - CREDENTIAL: Name of the credential object to be used in the data source.
    - S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data source.
    - S3_REGION: Name of the Amazon S3 region where the given bucket is located.
    - S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has the required S3 permissions and can be assumed for the given S3 IAM user.
    - S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used for encrypting data.
    - S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data.
    - HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
    - HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user.
    - HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
    - AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data source; valid only if tenant_id is specified.
    - AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data source.
    - AZURE_TENANT_ID: Active Directory tenant ID (or directory ID).
    - AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data source.
    - AZURE_OAUTH_TOKEN: OAuth token to access the given storage container.
    - GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data source.
    - GCS_PROJECT_ID: Name of the Google Cloud project to use as the data source.
    - GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data source.
    - KAFKA_URL: The publicly accessible full path URL to the Kafka broker, e.g., 'http://172.123.45.67:9300'.
    - KAFKA_TOPIC_NAME: Name of the Kafka topic to use as the data source.
    - JDBC_DRIVER_JAR_PATH: JDBC driver jar file location. This may be a KIFS file.
    - JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class.
    - ANONYMOUS: Create an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify USE_MANAGED_CREDENTIALS for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
    - USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, the cloud provider's user settings are used instead. Supported values: TRUE, FALSE. The default value is FALSE.
    - USE_HTTPS: Use HTTPS to connect to the data source if true; otherwise, use HTTP. Supported values: TRUE, FALSE. The default value is TRUE.
    - SCHEMA_NAME: Updates the schema name. If schema_name does not exist, an error will be thrown. If schema_name is empty, the user's default schema will be used.

    options: Optional parameters.
Definition at line 1043 of file AlterDatasource.cs.
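A brief sketch of using this constructor; the data source name and timeout values are illustrative assumptions:

    using System.Collections.Generic;
    using kinetica;

    // Hypothetical update map: tighten the storage provider timeouts.
    IDictionary<string, string> updates = new Dictionary<string, string>
    {
        { "connection_timeout", "30" },   // seconds allowed to connect
        { "wait_timeout", "120" }         // seconds allowed for reads
    };

    // "my_datasource" must already exist; the empty options map means
    // no optional parameters are supplied.
    AlterDatasourceRequest request = new AlterDatasourceRequest(
        "my_datasource", updates, new Dictionary<string, string>());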
IDictionary<string, string> kinetica.AlterDatasourceRequest.datasource_updates_map |
|
getset |
Map containing the properties of the data source to be updated.
Error if empty.
The valid keys are the same as those documented for the datasource_updates_map parameter of the constructor above.
Definition at line 761 of file AlterDatasource.cs.
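A sketch of populating this property directly via the parameterless constructor; the data source, bucket, region, and credential names are hypothetical:

    using System.Collections.Generic;
    using kinetica;

    AlterDatasourceRequest request = new AlterDatasourceRequest();
    request.name = "my_s3_source";   // hypothetical existing data source

    // Point the data source at a different S3 bucket and credential;
    // keys assumed to be the lowercase forms of the documented constants.
    request.datasource_updates_map = new Dictionary<string, string>
    {
        { "s3_bucket_name", "analytics-bucket" },
        { "s3_region", "us-east-1" },
        { "credential", "s3_cred" }
    };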
string kinetica.AlterDatasourceRequest.name  [get, set]
Name of the data source to be altered.
Must be an existing data source.
Definition at line 493 of file AlterDatasource.cs.
IDictionary<string, string> kinetica.AlterDatasourceRequest.options = new Dictionary<string, string>()  [get, set]

Optional parameters.
The documentation for this class was generated from the following file: AlterDatasource.cs