A set of parameters for GPUdb::alterDatasource.
#include <gpudb/protocol/alter_datasource.h>
Public Attributes

std::string name
    Name of the data source to be altered.

std::map< std::string, std::string > datasourceUpdatesMap
    Map containing the properties of the data source to be updated.

std::map< std::string, std::string > options
    Optional parameters.
A set of parameters for GPUdb::alterDatasource.
Alters the properties of an existing data source.
Definition at line 19 of file alter_datasource.h.
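A typical call constructs this request and passes it to GPUdb::alterDatasource. The snippet below is a minimal sketch, not an authoritative example: it assumes the usual <gpudb/GPUdb.h> entry point, a server at localhost:9191, and that GPUdb::alterDatasource returns an AlterDatasourceResponse. The data source name, credential name, and literal map key strings ('credential', 'connection_timeout') are illustrative assumptions and should be checked against the alter_datasource constants in alter_datasource.h.

    #include <gpudb/GPUdb.h>

    #include <map>
    #include <string>

    int main()
    {
        // Assumed server endpoint; replace with your GPUdb URL.
        gpudb::GPUdb db("http://localhost:9191");

        // Properties to change on an existing data source.  The key strings
        // are assumptions mirroring the documented property names.
        std::map<std::string, std::string> updates;
        updates["credential"] = "s3_cred_rotated";   // hypothetical credential object
        updates["connection_timeout"] = "30";        // seconds

        gpudb::AlterDatasourceRequest request(
            "my_s3_source",                          // hypothetical data source name
            updates,
            std::map<std::string, std::string>());   // no optional parameters

        // The response reports which properties the server actually updated.
        gpudb::AlterDatasourceResponse response = db.alterDatasource(request);
        (void)response;
        return 0;
    }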
◆ AlterDatasourceRequest() [1/2]
gpudb::AlterDatasourceRequest::AlterDatasourceRequest( )

inline

Constructs an AlterDatasourceRequest object with default parameters.
◆ AlterDatasourceRequest() [2/2]
gpudb::AlterDatasourceRequest::AlterDatasourceRequest( const std::string &                          name_,
                                                       const std::map< std::string, std::string > & datasourceUpdatesMap_,
                                                       const std::map< std::string, std::string > & options_ )

inline
Constructs an AlterDatasourceRequest object with the specified parameters.
Parameters

    [in] name_
        Name of the data source to be altered. Must be an existing data source.

    [in] datasourceUpdatesMap_
        Map containing the properties of the data source to be updated. Error if empty.
        - alter_datasource_location: Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'jdbc', 'kafka', 'confluent', and 's3'.
        - alter_datasource_user_name: Name of the remote system user; may be an empty string.
        - alter_datasource_password: Password for the remote system user; may be an empty string.
        - alter_datasource_skip_validation: Bypass validation of the connection to the remote source. Supported values: alter_datasource_true, alter_datasource_false. The default value is alter_datasource_false.
        - alter_datasource_connection_timeout: Timeout in seconds for connecting to this storage provider.
        - alter_datasource_wait_timeout: Timeout in seconds for reading from this storage provider.
        - alter_datasource_credential: Name of the credential object to be used in the data source.
        - alter_datasource_s3_bucket_name: Name of the Amazon S3 bucket to use as the data source.
        - alter_datasource_s3_region: Name of the Amazon S3 region where the given bucket is located.
        - alter_datasource_s3_verify_ssl: Whether to verify SSL connections. Supported values: alter_datasource_true, alter_datasource_false. The default value is alter_datasource_true.
        - alter_datasource_s3_use_virtual_addressing: Whether to use virtual addressing when referencing the Amazon S3 source. Supported values: alter_datasource_true, alter_datasource_false. The default value is alter_datasource_true.
        - alter_datasource_s3_aws_role_arn: Amazon IAM Role ARN which has the required S3 permissions and can be assumed for the given S3 IAM user.
        - alter_datasource_s3_encryption_customer_algorithm: Customer encryption algorithm used for encrypting data.
        - alter_datasource_s3_encryption_customer_key: Customer encryption key to encrypt or decrypt data.
        - alter_datasource_hdfs_kerberos_keytab: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
        - alter_datasource_hdfs_delegation_token: Delegation token for the given HDFS user.
        - alter_datasource_hdfs_use_kerberos: Use Kerberos authentication for the given HDFS cluster. Supported values: alter_datasource_true, alter_datasource_false. The default value is alter_datasource_false.
        - alter_datasource_azure_storage_account_name: Name of the Azure storage account to use as the data source; this is valid only if tenant_id is specified.
        - alter_datasource_azure_container_name: Name of the Azure storage container to use as the data source.
        - alter_datasource_azure_tenant_id: Active Directory tenant ID (or directory ID).
        - alter_datasource_azure_sas_token: Shared access signature token for the Azure storage account to use as the data source.
        - alter_datasource_azure_oauth_token: OAuth token to access the given storage container.
        - alter_datasource_gcs_bucket_name: Name of the Google Cloud Storage bucket to use as the data source.
        - alter_datasource_gcs_project_id: Name of the Google Cloud project to use as the data source.
        - alter_datasource_gcs_service_account_keys: Google Cloud service account keys to use for authenticating the data source.
        - alter_datasource_jdbc_driver_jar_path: JDBC driver jar file location. This may be a KIFS file.
        - alter_datasource_jdbc_driver_class_name: Name of the JDBC driver class.
        - alter_datasource_kafka_url: The publicly accessible full path URL to the Kafka broker, e.g., 'http://172.123.45.67:9300'.
        - alter_datasource_kafka_topic_name: Name of the Kafka topic to use as the data source.
        - alter_datasource_anonymous: Create an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify use_managed_credentials for a non-anonymous connection. Supported values: alter_datasource_true, alter_datasource_false. The default value is alter_datasource_true.
        - alter_datasource_use_managed_credentials: When no credentials are supplied, we use anonymous access by default. If this is set, we will use cloud provider user settings. Supported values: alter_datasource_true, alter_datasource_false. The default value is alter_datasource_false.
        - alter_datasource_use_https: Use HTTPS to connect to the data source if true; otherwise use HTTP. Supported values: alter_datasource_true, alter_datasource_false. The default value is alter_datasource_true.
        - alter_datasource_schema_name: Updates the schema name. If schema_name doesn't exist, an error will be thrown. If schema_name is empty, then the user's default schema will be used.
        - alter_datasource_schema_registry_location: Location of the Confluent Schema Registry in '[storage_path[:storage_port]]' format.
        - alter_datasource_schema_registry_credential: Confluent Schema Registry credential object name.
        - alter_datasource_schema_registry_port: Confluent Schema Registry port (optional).

    [in] options_
        Optional parameters.
Definition at line 389 of file alter_datasource.h.
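As an illustration of how the documented map keys combine, the sketch below builds a request that points a hypothetical Kafka data source at a new broker and bypasses connection validation. The data source name and the literal key strings ('kafka_url', 'kafka_topic_name', 'skip_validation') are assumptions and should be verified against the alter_datasource constants in alter_datasource.h.

    #include <gpudb/protocol/alter_datasource.h>

    #include <map>
    #include <string>

    // Builds a request that repoints a hypothetical Kafka data source and
    // skips the connection check.  Key strings are assumptions for illustration.
    gpudb::AlterDatasourceRequest makeKafkaUpdateRequest()
    {
        std::map<std::string, std::string> updates;
        updates["kafka_url"] = "http://172.123.45.67:9300";  // example broker URL from the docs
        updates["kafka_topic_name"] = "orders";              // hypothetical topic
        updates["skip_validation"] = "true";                 // bypass remote-source validation

        return gpudb::AlterDatasourceRequest(
            "my_kafka_source",                               // hypothetical data source name
            updates,
            std::map<std::string, std::string>());           // no optional parameters
    }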
◆ datasourceUpdatesMap
std::map<std::string, std::string> gpudb::AlterDatasourceRequest::datasourceUpdatesMap
Map containing the properties of the data source to be updated.
Error if empty.
- alter_datasource_location: Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'jdbc', 'kafka', 'confluent', and 's3'.
- alter_datasource_user_name: Name of the remote system user; may be an empty string.
- alter_datasource_password: Password for the remote system user; may be an empty string.
- alter_datasource_skip_validation: Bypass validation of the connection to the remote source. Supported values: alter_datasource_true, alter_datasource_false. The default value is alter_datasource_false.
- alter_datasource_connection_timeout: Timeout in seconds for connecting to this storage provider.
- alter_datasource_wait_timeout: Timeout in seconds for reading from this storage provider.
- alter_datasource_credential: Name of the credential object to be used in the data source.
- alter_datasource_s3_bucket_name: Name of the Amazon S3 bucket to use as the data source.
- alter_datasource_s3_region: Name of the Amazon S3 region where the given bucket is located.
- alter_datasource_s3_verify_ssl: Whether to verify SSL connections. Supported values: alter_datasource_true, alter_datasource_false. The default value is alter_datasource_true.
- alter_datasource_s3_use_virtual_addressing: Whether to use virtual addressing when referencing the Amazon S3 source. Supported values: alter_datasource_true, alter_datasource_false. The default value is alter_datasource_true.
- alter_datasource_s3_aws_role_arn: Amazon IAM Role ARN which has the required S3 permissions and can be assumed for the given S3 IAM user.
- alter_datasource_s3_encryption_customer_algorithm: Customer encryption algorithm used for encrypting data.
- alter_datasource_s3_encryption_customer_key: Customer encryption key to encrypt or decrypt data.
- alter_datasource_hdfs_kerberos_keytab: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
- alter_datasource_hdfs_delegation_token: Delegation token for the given HDFS user.
- alter_datasource_hdfs_use_kerberos: Use Kerberos authentication for the given HDFS cluster. Supported values: alter_datasource_true, alter_datasource_false. The default value is alter_datasource_false.
- alter_datasource_azure_storage_account_name: Name of the Azure storage account to use as the data source; this is valid only if tenant_id is specified.
- alter_datasource_azure_container_name: Name of the Azure storage container to use as the data source.
- alter_datasource_azure_tenant_id: Active Directory tenant ID (or directory ID).
- alter_datasource_azure_sas_token: Shared access signature token for the Azure storage account to use as the data source.
- alter_datasource_azure_oauth_token: OAuth token to access the given storage container.
- alter_datasource_gcs_bucket_name: Name of the Google Cloud Storage bucket to use as the data source.
- alter_datasource_gcs_project_id: Name of the Google Cloud project to use as the data source.
- alter_datasource_gcs_service_account_keys: Google Cloud service account keys to use for authenticating the data source.
- alter_datasource_jdbc_driver_jar_path: JDBC driver jar file location. This may be a KIFS file.
- alter_datasource_jdbc_driver_class_name: Name of the JDBC driver class.
- alter_datasource_kafka_url: The publicly accessible full path URL to the Kafka broker, e.g., 'http://172.123.45.67:9300'.
- alter_datasource_kafka_topic_name: Name of the Kafka topic to use as the data source.
- alter_datasource_anonymous: Create an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify use_managed_credentials for a non-anonymous connection. Supported values: alter_datasource_true, alter_datasource_false. The default value is alter_datasource_true.
- alter_datasource_use_managed_credentials: When no credentials are supplied, we use anonymous access by default. If this is set, we will use cloud provider user settings. Supported values: alter_datasource_true, alter_datasource_false. The default value is alter_datasource_false.
- alter_datasource_use_https: Use HTTPS to connect to the data source if true; otherwise use HTTP. Supported values: alter_datasource_true, alter_datasource_false. The default value is alter_datasource_true.
- alter_datasource_schema_name: Updates the schema name. If schema_name doesn't exist, an error will be thrown. If schema_name is empty, then the user's default schema will be used.
- alter_datasource_schema_registry_location: Location of the Confluent Schema Registry in '[storage_path[:storage_port]]' format.
- alter_datasource_schema_registry_credential: Confluent Schema Registry credential object name.
- alter_datasource_schema_registry_port: Confluent Schema Registry port (optional).
Definition at line 604 of file alter_datasource.h.
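Because the request is a plain struct, this map can also be filled in after default construction rather than through the three-argument constructor. A minimal sketch, again with assumed key strings and names:

    #include <gpudb/protocol/alter_datasource.h>

    // Populate a default-constructed request field by field.  The data source
    // name and key strings are assumptions for illustration.
    gpudb::AlterDatasourceRequest makeTimeoutUpdateRequest()
    {
        gpudb::AlterDatasourceRequest request;
        request.name = "my_s3_source";
        request.datasourceUpdatesMap["wait_timeout"] = "120";    // read timeout, seconds
        request.datasourceUpdatesMap["s3_verify_ssl"] = "true";  // keep SSL verification on
        // request.options stays empty; no optional parameters are needed here.
        return request;
    }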
◆ name
std::string gpudb::AlterDatasourceRequest::name
Name of the data source to be altered.
Must be an existing data source.
Definition at line 400 of file alter_datasource.h.
◆ options
std::map<std::string, std::string> gpudb::AlterDatasourceRequest::options

Optional parameters.

The documentation for this struct was generated from the following file:
    gpudb/protocol/alter_datasource.h