Kinetica C# API Version 7.1.10.0
kinetica.Kinetica Class Reference

API to talk to the Kinetica Database. More...

Classes

class  Options
 Connection Options More...
 

Public Member Functions

 Kinetica (string url_str, Options options=null)
 API Constructor More...
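 A minimal connection sketch; the URL is a placeholder, the Username/Password members of Options are assumptions not confirmed by this page, and using System / System.Collections.Generic directives are assumed here and in the later sketches, which reuse this db instance.

    // Connect to a Kinetica instance; URL and credentials are placeholders.
    Kinetica.Options opts = new Kinetica.Options();
    opts.Username = "admin";      // assumed Options member
    opts.Password = "password";   // assumed Options member
    Kinetica db = new Kinetica("http://localhost:9191", opts);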
 
void AddTableType (string table_name, Type obj_type)
 Given a table name, add its record type to enable proper encoding of records for insertion or updates. More...
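 For example, given a record class whose fields mirror the table's columns (the MyRecord class and table name are hypothetical; MyRecord is reused by later sketches):

    // Hypothetical record class matching the table's column layout.
    public class MyRecord
    {
        public int id;
        public string name;
    }

    // Register the class so records for this table can be encoded properly.
    db.AddTableType("my_table", typeof(MyRecord));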
 
void SetKineticaSourceClassToTypeMapping (Type objectType, KineticaType kineticaType)
 Saves an object class type to a KineticaType association. More...
 
void DecodeRawBinaryDataUsingRecordType< T > (KineticaType record_type, IList< byte[]> records_binary, IList< T > records)
 Given a KineticaType object for a certain record type, decode binary data into distinct records (objects). More...
 
void DecodeRawBinaryDataUsingSchemaString< T > (string schema_string, IList< byte[]> records_binary, IList< T > records)
 Given a schema string for a certain record type, decode binary data into distinct records (objects). More...
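 A sketch of decoding Avro-encoded payloads with a known schema string (the schema text and byte arrays are placeholders; MyRecord is the hypothetical class above):

    // Placeholder inputs: a record schema string and raw Avro payloads.
    string schemaString = "...";  // Avro schema of the record type
    IList<byte[]> recordsBinary = new List<byte[]>();
    IList<MyRecord> records = new List<MyRecord>();
    db.DecodeRawBinaryDataUsingSchemaString<MyRecord>(schemaString, recordsBinary, records);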
 
void DecodeRawBinaryDataUsingSchemaString< T > (IList< string > schema_strings, IList< IList< byte[]>> lists_records_binary, IList< IList< T >> record_lists)
 Given a list of schema strings, decode binary data into distinct records (objects). More...
 
void DecodeRawBinaryDataUsingTypeIDs< T > (IList< string > type_ids, IList< byte[]> records_binary, IList< T > records)
 Given IDs of record types registered with Kinetica, decode binary data into distinct records (objects). More...
 
void DecodeRawBinaryDataUsingTypeIDs< T > (IList< string > type_ids, IList< IList< byte[]>> lists_records_binary, IList< IList< T >> record_lists)
 Given IDs of record types registered with Kinetica, decode binary data into distinct records (objects). More...
 
AdminAddHostResponse adminAddHost (AdminAddHostRequest request_)
 Adds a host to an existing cluster. More...
 
AdminAddHostResponse adminAddHost (string host_address, IDictionary< string, string > options=null)
 Adds a host to an existing cluster. More...
 
AdminAddRanksResponse adminAddRanks (AdminAddRanksRequest request_)
 Add one or more ranks to an existing Kinetica cluster. More...
 
AdminAddRanksResponse adminAddRanks (IList< string > hosts, IList< IDictionary< string, string >> config_params, IDictionary< string, string > options=null)
 Add one or more ranks to an existing Kinetica cluster. More...
 
AdminAlterHostResponse adminAlterHost (AdminAlterHostRequest request_)
 Alter properties on an existing host in the cluster. More...
 
AdminAlterHostResponse adminAlterHost (string host, IDictionary< string, string > options=null)
 Alter properties on an existing host in the cluster. More...
 
AdminAlterJobsResponse adminAlterJobs (AdminAlterJobsRequest request_)
 Perform the requested action on a list of one or more jobs. More...
 
AdminAlterJobsResponse adminAlterJobs (IList< long > job_ids, string action, IDictionary< string, string > options=null)
 Perform the requested action on a list of one or more jobs. More...
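 For instance, cancelling a job by ID (the job ID is a placeholder; the "cancel" action name follows Kinetica's documented job actions and is an assumption here):

    // Request cancellation of one running job.
    IList<long> jobIds = new List<long> { 1234567L };
    AdminAlterJobsResponse resp = db.adminAlterJobs(jobIds, "cancel");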
 
AdminBackupBeginResponse adminBackupBegin (AdminBackupBeginRequest request_)
 Prepares the system for a backup by closing all open file handles after allowing current active jobs to complete. More...
 
AdminBackupBeginResponse adminBackupBegin (IDictionary< string, string > options=null)
 Prepares the system for a backup by closing all open file handles after allowing current active jobs to complete. More...
 
AdminBackupEndResponse adminBackupEnd (AdminBackupEndRequest request_)
 Restores the system to normal operating mode after a backup has completed, allowing any queries that were blocked to complete. More...
 
AdminBackupEndResponse adminBackupEnd (IDictionary< string, string > options=null)
 Restores the system to normal operating mode after a backup has completed, allowing any queries that were blocked to complete. More...
 
AdminHaRefreshResponse adminHaRefresh (AdminHaRefreshRequest request_)
 Restarts the HA processing on the given cluster as a mechanism of accepting breaking HA conf changes. More...
 
AdminHaRefreshResponse adminHaRefresh (IDictionary< string, string > options=null)
 Restarts the HA processing on the given cluster as a mechanism of accepting breaking HA conf changes. More...
 
AdminOfflineResponse adminOffline (AdminOfflineRequest request_)
 Take the system offline. More...
 
AdminOfflineResponse adminOffline (bool offline, IDictionary< string, string > options=null)
 Take the system offline. More...
 
AdminRebalanceResponse adminRebalance (AdminRebalanceRequest request_)
 Rebalance the data in the cluster so that all nodes contain approximately the same number of records and/or rebalance the shards to be as equally distributed as possible across all the ranks. More...
 
AdminRebalanceResponse adminRebalance (IDictionary< string, string > options=null)
 Rebalance the data in the cluster so that all nodes contain approximately the same number of records and/or rebalance the shards to be as equally distributed as possible across all the ranks. More...
 
AdminRemoveHostResponse adminRemoveHost (AdminRemoveHostRequest request_)
 Removes a host from an existing cluster. More...
 
AdminRemoveHostResponse adminRemoveHost (string host, IDictionary< string, string > options=null)
 Removes a host from an existing cluster. More...
 
AdminRemoveRanksResponse adminRemoveRanks (AdminRemoveRanksRequest request_)
 Remove one or more ranks from an existing Kinetica cluster. More...
 
AdminRemoveRanksResponse adminRemoveRanks (IList< string > ranks, IDictionary< string, string > options=null)
 Remove one or more ranks from an existing Kinetica cluster. More...
 
AdminShowAlertsResponse adminShowAlerts (AdminShowAlertsRequest request_)
 Requests a list of the most recent alerts. More...
 
AdminShowAlertsResponse adminShowAlerts (int num_alerts, IDictionary< string, string > options=null)
 Requests a list of the most recent alerts. More...
 
AdminShowClusterOperationsResponse adminShowClusterOperations (AdminShowClusterOperationsRequest request_)
 Requests the detailed status of the current operation (by default) or a prior cluster operation specified by history_index . More...
 
AdminShowClusterOperationsResponse adminShowClusterOperations (int history_index=0, IDictionary< string, string > options=null)
 Requests the detailed status of the current operation (by default) or a prior cluster operation specified by history_index . More...
 
AdminShowJobsResponse adminShowJobs (AdminShowJobsRequest request_)
 Get a list of the current jobs in GPUdb. More...
 
AdminShowJobsResponse adminShowJobs (IDictionary< string, string > options=null)
 Get a list of the current jobs in GPUdb. More...
 
AdminShowShardsResponse adminShowShards (AdminShowShardsRequest request_)
 Show the mapping of shards to the corresponding rank and tom. More...
 
AdminShowShardsResponse adminShowShards (IDictionary< string, string > options=null)
 Show the mapping of shards to the corresponding rank and tom. More...
 
AdminShutdownResponse adminShutdown (AdminShutdownRequest request_)
 Exits the database server application. More...
 
AdminShutdownResponse adminShutdown (string exit_type, string authorization, IDictionary< string, string > options=null)
 Exits the database server application. More...
 
AdminSwitchoverResponse adminSwitchover (AdminSwitchoverRequest request_)
 Manually switch over one or more processes to another host. More...
 
AdminSwitchoverResponse adminSwitchover (IList< string > processes, IList< string > destinations, IDictionary< string, string > options=null)
 Manually switch over one or more processes to another host. More...
 
AdminVerifyDbResponse adminVerifyDb (AdminVerifyDbRequest request_)
 Verify database is in a consistent state. More...
 
AdminVerifyDbResponse adminVerifyDb (IDictionary< string, string > options=null)
 Verify database is in a consistent state. More...
 
AggregateConvexHullResponse aggregateConvexHull (AggregateConvexHullRequest request_)
 Calculates and returns the convex hull for the values in a table specified by table_name . More...
 
AggregateConvexHullResponse aggregateConvexHull (string table_name, string x_column_name, string y_column_name, IDictionary< string, string > options=null)
 Calculates and returns the convex hull for the values in a table specified by table_name . More...
 
AggregateGroupByResponse aggregateGroupBy (AggregateGroupByRequest request_)
 Calculates unique combinations (groups) of values for the given columns in a given table or view and computes aggregates on each unique combination. More...
 
AggregateGroupByResponse aggregateGroupBy (string table_name, IList< string > column_names, long offset=0, long limit=-9999, IDictionary< string, string > options=null)
 Calculates unique combinations (groups) of values for the given columns in a given table or view and computes aggregates on each unique combination. More...
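 A grouping sketch; the table and column names are placeholders, and the count(*) aggregate in the column list follows Kinetica's group-by conventions:

    // Group the "sales" table by "region" and count records per group.
    IList<string> columns = new List<string> { "region", "count(*)" };
    AggregateGroupByResponse resp = db.aggregateGroupBy("sales", columns, 0, 100);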
 
AggregateHistogramResponse aggregateHistogram (AggregateHistogramRequest request_)
 Performs a histogram calculation given a table, a column, and an interval function. More...
 
AggregateHistogramResponse aggregateHistogram (string table_name, string column_name, double start, double end, double interval, IDictionary< string, string > options=null)
 Performs a histogram calculation given a table, a column, and an interval function. More...
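 For example, bucketing a numeric column into ten-unit bins over [0, 100] (table and column names are placeholders):

    // Histogram of the "value" column with bin width 10.
    AggregateHistogramResponse resp =
        db.aggregateHistogram("measurements", "value", 0.0, 100.0, 10.0);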
 
AggregateKMeansResponse aggregateKMeans (AggregateKMeansRequest request_)
 This endpoint runs the k-means algorithm - a heuristic algorithm that attempts to do k-means clustering. More...
 
AggregateKMeansResponse aggregateKMeans (string table_name, IList< string > column_names, int k, double tolerance, IDictionary< string, string > options=null)
 This endpoint runs the k-means algorithm - a heuristic algorithm that attempts to do k-means clustering. More...
 
AggregateMinMaxResponse aggregateMinMax (AggregateMinMaxRequest request_)
 Calculates and returns the minimum and maximum values of a particular column in a table. More...
 
AggregateMinMaxResponse aggregateMinMax (string table_name, string column_name, IDictionary< string, string > options=null)
 Calculates and returns the minimum and maximum values of a particular column in a table. More...
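 A minimal sketch (names are placeholders; the min and max members are assumed to mirror the response schema):

    AggregateMinMaxResponse resp = db.aggregateMinMax("sales", "price");
    Console.WriteLine($"min={resp.min}, max={resp.max}");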
 
AggregateMinMaxGeometryResponse aggregateMinMaxGeometry (AggregateMinMaxGeometryRequest request_)
 Calculates and returns the minimum and maximum x- and y-coordinates of a particular geospatial geometry column in a table. More...
 
AggregateMinMaxGeometryResponse aggregateMinMaxGeometry (string table_name, string column_name, IDictionary< string, string > options=null)
 Calculates and returns the minimum and maximum x- and y-coordinates of a particular geospatial geometry column in a table. More...
 
AggregateStatisticsResponse aggregateStatistics (AggregateStatisticsRequest request_)
 Calculates the requested statistics of the given column(s) in a given table. More...
 
AggregateStatisticsResponse aggregateStatistics (string table_name, string column_name, string stats, IDictionary< string, string > options=null)
 Calculates the requested statistics of the given column(s) in a given table. More...
 
AggregateStatisticsByRangeResponse aggregateStatisticsByRange (AggregateStatisticsByRangeRequest request_)
 Divides the given set into bins and calculates statistics of the values of a value-column in each bin. More...
 
AggregateStatisticsByRangeResponse aggregateStatisticsByRange (string table_name, string select_expression, string column_name, string value_column_name, string stats, double start, double end, double interval, IDictionary< string, string > options=null)
 Divides the given set into bins and calculates statistics of the values of a value-column in each bin. More...
 
AggregateUniqueResponse aggregateUnique (AggregateUniqueRequest request_)
 Returns all the unique values from a particular column (specified by column_name ) of a particular table or view (specified by table_name ). More...
 
AggregateUniqueResponse aggregateUnique (string table_name, string column_name, long offset=0, long limit=-9999, IDictionary< string, string > options=null)
 Returns all the unique values from a particular column (specified by column_name ) of a particular table or view (specified by table_name ). More...
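 For example, listing distinct values of one column (names are placeholders):

    // Unique values of "category" in "products", up to 1000 of them.
    AggregateUniqueResponse resp = db.aggregateUnique("products", "category", 0, 1000);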
 
AggregateUnpivotResponse aggregateUnpivot (AggregateUnpivotRequest request_)
 Rotate the column values into row values. More...
 
AggregateUnpivotResponse aggregateUnpivot (string table_name, IList< string > column_names, string variable_column_name, string value_column_name, IList< string > pivoted_columns, IDictionary< string, string > options=null)
 Rotate the column values into row values. More...
 
AlterCredentialResponse alterCredential (AlterCredentialRequest request_)
 Alter the properties of an existing credential. More...
 
AlterCredentialResponse alterCredential (string credential_name, IDictionary< string, string > credential_updates_map, IDictionary< string, string > options)
 Alter the properties of an existing credential. More...
 
AlterDatasinkResponse alterDatasink (AlterDatasinkRequest request_)
 Alters the properties of an existing data sink. More...
 
AlterDatasinkResponse alterDatasink (string name, IDictionary< string, string > datasink_updates_map, IDictionary< string, string > options)
 Alters the properties of an existing data sink. More...
 
AlterDatasourceResponse alterDatasource (AlterDatasourceRequest request_)
 Alters the properties of an existing data source. More...
 
AlterDatasourceResponse alterDatasource (string name, IDictionary< string, string > datasource_updates_map, IDictionary< string, string > options)
 Alters the properties of an existing data source. More...
 
AlterDirectoryResponse alterDirectory (AlterDirectoryRequest request_)
 Alters an existing directory in KiFS. More...
 
AlterDirectoryResponse alterDirectory (string directory_name, IDictionary< string, string > directory_updates_map, IDictionary< string, string > options=null)
 Alters an existing directory in KiFS. More...
 
AlterEnvironmentResponse alterEnvironment (AlterEnvironmentRequest request_)
 Alters an existing environment which can be referenced by a user-defined function (UDF). More...
 
AlterEnvironmentResponse alterEnvironment (string environment_name, string action, string _value, IDictionary< string, string > options=null)
 Alters an existing environment which can be referenced by a user-defined function (UDF). More...
 
AlterResourceGroupResponse alterResourceGroup (AlterResourceGroupRequest request_)
 Alters the properties of an existing resource group to facilitate resource management. More...
 
AlterResourceGroupResponse alterResourceGroup (string name, IDictionary< string, IDictionary< string, string >> tier_attributes=null, string ranking=AlterResourceGroupRequest.Ranking.EMPTY_STRING, string adjoining_resource_group="", IDictionary< string, string > options=null)
 Alters the properties of an existing resource group to facilitate resource management. More...
 
AlterRoleResponse alterRole (AlterRoleRequest request_)
 Alters a Role. More...
 
AlterRoleResponse alterRole (string name, string action, string _value, IDictionary< string, string > options=null)
 Alters a Role. More...
 
AlterSchemaResponse alterSchema (AlterSchemaRequest request_)
 Used to change the name of a SQL-style schema, specified in schema_name . More...
 
AlterSchemaResponse alterSchema (string schema_name, string action, string _value, IDictionary< string, string > options=null)
 Used to change the name of a SQL-style schema, specified in schema_name . More...
 
AlterSystemPropertiesResponse alterSystemProperties (AlterSystemPropertiesRequest request_)
 The Kinetica.alterSystemProperties(IDictionary{string, string},IDictionary{string, string}) endpoint is primarily used to simplify the testing of the system and is not expected to be used during normal execution. More...
 
AlterSystemPropertiesResponse alterSystemProperties (IDictionary< string, string > property_updates_map, IDictionary< string, string > options=null)
 The Kinetica.alterSystemProperties(IDictionary{string, string},IDictionary{string, string}) endpoint is primarily used to simplify the testing of the system and is not expected to be used during normal execution. More...
 
AlterTableResponse alterTable (AlterTableRequest request_)
 Apply various modifications to a table or view. More...
 
AlterTableResponse alterTable (string table_name, string action, string _value, IDictionary< string, string > options=null)
 Apply various modifications to a table or view. More...
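 For instance, creating a column index (the "create_index" action follows Kinetica's alter-table vocabulary; names are placeholders):

    // Index the "region" column of the "sales" table.
    AlterTableResponse resp = db.alterTable("sales", "create_index", "region");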
 
AlterTableColumnsResponse alterTableColumns (AlterTableColumnsRequest request_)
 Apply various modifications to columns in a table or view. More...
 
AlterTableColumnsResponse alterTableColumns (string table_name, IList< IDictionary< string, string >> column_alterations, IDictionary< string, string > options)
 Apply various modifications to columns in a table or view. More...
 
AlterTableMetadataResponse alterTableMetadata (AlterTableMetadataRequest request_)
 Updates (adds or changes) metadata for tables. More...
 
AlterTableMetadataResponse alterTableMetadata (IList< string > table_names, IDictionary< string, string > metadata_map, IDictionary< string, string > options=null)
 Updates (adds or changes) metadata for tables. More...
 
AlterTableMonitorResponse alterTableMonitor (AlterTableMonitorRequest request_)
 Alters a table monitor previously created with Kinetica.createTableMonitor(string,IDictionary{string, string}). More...
 
AlterTableMonitorResponse alterTableMonitor (string topic_id, IDictionary< string, string > monitor_updates_map, IDictionary< string, string > options)
 Alters a table monitor previously created with Kinetica.createTableMonitor(string,IDictionary{string, string}). More...
 
AlterTierResponse alterTier (AlterTierRequest request_)
 Alters properties of an existing tier to facilitate resource management. More...
 
AlterTierResponse alterTier (string name, IDictionary< string, string > options=null)
 Alters properties of an existing tier to facilitate resource management. More...
 
AlterUserResponse alterUser (AlterUserRequest request_)
 Alters a user. More...
 
AlterUserResponse alterUser (string name, string action, string _value, IDictionary< string, string > options=null)
 Alters a user. More...
 
AlterVideoResponse alterVideo (AlterVideoRequest request_)
 Alters a video. More...
 
AlterVideoResponse alterVideo (string path, IDictionary< string, string > options=null)
 Alters a video. More...
 
AppendRecordsResponse appendRecords (AppendRecordsRequest request_)
 Append (or insert) all records from a source table (specified by source_table_name ) to a particular target table (specified by table_name ). More...
 
AppendRecordsResponse appendRecords (string table_name, string source_table_name, IDictionary< string, string > field_map, IDictionary< string, string > options=null)
 Append (or insert) all records from a source table (specified by source_table_name ) to a particular target table (specified by table_name ). More...
 
ClearStatisticsResponse clearStatistics (ClearStatisticsRequest request_)
 Clears statistics (cardinality, mean value, etc.) for a column in a specified table. More...
 
ClearStatisticsResponse clearStatistics (string table_name="", string column_name="", IDictionary< string, string > options=null)
 Clears statistics (cardinality, mean value, etc.) for a column in a specified table. More...
 
ClearTableResponse clearTable (ClearTableRequest request_)
 Clears (drops) one or all tables in the database cluster. More...
 
ClearTableResponse clearTable (string table_name="", string authorization="", IDictionary< string, string > options=null)
 Clears (drops) one or all tables in the database cluster. More...
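 A minimal sketch dropping a single table (the name is a placeholder; the authorization argument is only needed when so configured):

    ClearTableResponse resp = db.clearTable("old_table");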
 
ClearTableMonitorResponse clearTableMonitor (ClearTableMonitorRequest request_)
 Deactivates a table monitor previously created with Kinetica.createTableMonitor(string,IDictionary{string, string}). More...
 
ClearTableMonitorResponse clearTableMonitor (string topic_id, IDictionary< string, string > options=null)
 Deactivates a table monitor previously created with Kinetica.createTableMonitor(string,IDictionary{string, string}). More...
 
ClearTriggerResponse clearTrigger (ClearTriggerRequest request_)
 Clears or cancels the trigger identified by the specified handle. More...
 
ClearTriggerResponse clearTrigger (string trigger_id, IDictionary< string, string > options=null)
 Clears or cancels the trigger identified by the specified handle. More...
 
CollectStatisticsResponse collectStatistics (CollectStatisticsRequest request_)
 Collect statistics for one or more columns in a specified table. More...
 
CollectStatisticsResponse collectStatistics (string table_name, IList< string > column_names, IDictionary< string, string > options=null)
 Collect statistics for one or more columns in a specified table. More...
 
CreateCredentialResponse createCredential (CreateCredentialRequest request_)
 Create a new credential. More...
 
CreateCredentialResponse createCredential (string credential_name, string type, string identity, string secret, IDictionary< string, string > options=null)
 Create a new credential. More...
 
CreateDatasinkResponse createDatasink (CreateDatasinkRequest request_)
 Creates a data sink, which contains the destination information for a data sink that is external to the database. More...
 
CreateDatasinkResponse createDatasink (string name, string destination, IDictionary< string, string > options=null)
 Creates a data sink, which contains the destination information for a data sink that is external to the database. More...
 
CreateDatasourceResponse createDatasource (CreateDatasourceRequest request_)
 Creates a data source, which contains the location and connection information for a data store that is external to the database. More...
 
CreateDatasourceResponse createDatasource (string name, string location, string user_name, string password, IDictionary< string, string > options=null)
 Creates a data source, which contains the location and connection information for a data store that is external to the database. More...
 
CreateDirectoryResponse createDirectory (CreateDirectoryRequest request_)
 Creates a new directory in KiFS. More...
 
CreateDirectoryResponse createDirectory (string directory_name, IDictionary< string, string > options=null)
 Creates a new directory in KiFS. More...
 
CreateEnvironmentResponse createEnvironment (CreateEnvironmentRequest request_)
 Creates a new environment which can be used by user-defined functions (UDF). More...
 
CreateEnvironmentResponse createEnvironment (string environment_name, IDictionary< string, string > options=null)
 Creates a new environment which can be used by user-defined functions (UDF). More...
 
CreateGraphResponse createGraph (CreateGraphRequest request_)
 Creates a new graph network using given nodes, edges, weights, and restrictions. More...
 
CreateGraphResponse createGraph (string graph_name, bool directed_graph, IList< string > nodes, IList< string > edges, IList< string > weights, IList< string > restrictions, IDictionary< string, string > options=null)
 Creates a new graph network using given nodes, edges, weights, and restrictions. More...
 
CreateJobResponse createJob (CreateJobRequest request_)
 Create a job which will run asynchronously. More...
 
CreateJobResponse createJob (string endpoint, string request_encoding, byte[] data, string data_str, IDictionary< string, string > options=null)
 Create a job which will run asynchronously. More...
 
CreateJoinTableResponse createJoinTable (CreateJoinTableRequest request_)
 Creates a table that is the result of a SQL JOIN. More...
 
CreateJoinTableResponse createJoinTable (string join_table_name, IList< string > table_names, IList< string > column_names, IList< string > expressions=null, IDictionary< string, string > options=null)
 Creates a table that is the result of a SQL JOIN. More...
 
CreateMaterializedViewResponse createMaterializedView (CreateMaterializedViewRequest request_)
 Initiates the process of creating a materialized view, reserving the view's name to prevent other views or tables from being created with that name. More...
 
CreateMaterializedViewResponse createMaterializedView (string table_name, IDictionary< string, string > options=null)
 Initiates the process of creating a materialized view, reserving the view's name to prevent other views or tables from being created with that name. More...
 
CreateProcResponse createProc (CreateProcRequest request_)
 Creates an instance (proc) of the user-defined functions (UDF) specified by the given command, options, and files, and makes it available for execution. More...
 
CreateProcResponse createProc (string proc_name, string execution_mode=CreateProcRequest.ExecutionMode.DISTRIBUTED, IDictionary< string, byte[]> files=null, string command="", IList< string > args=null, IDictionary< string, string > options=null)
 Creates an instance (proc) of the user-defined functions (UDF) specified by the given command, options, and files, and makes it available for execution. More...
 
CreateProjectionResponse createProjection (CreateProjectionRequest request_)
 Creates a new projection of an existing table. More...
 
CreateProjectionResponse createProjection (string table_name, string projection_name, IList< string > column_names, IDictionary< string, string > options=null)
 Creates a new projection of an existing table. More...
 
CreateResourceGroupResponse createResourceGroup (CreateResourceGroupRequest request_)
 Creates a new resource group to facilitate resource management. More...
 
CreateResourceGroupResponse createResourceGroup (string name, IDictionary< string, IDictionary< string, string >> tier_attributes, string ranking, string adjoining_resource_group="", IDictionary< string, string > options=null)
 Creates a new resource group to facilitate resource management. More...
 
CreateRoleResponse createRole (CreateRoleRequest request_)
 Creates a new role. More...
 
CreateRoleResponse createRole (string name, IDictionary< string, string > options=null)
 Creates a new role. More...
 
CreateSchemaResponse createSchema (CreateSchemaRequest request_)
 Creates a SQL-style schema. More...
 
CreateSchemaResponse createSchema (string schema_name, IDictionary< string, string > options=null)
 Creates a SQL-style schema. More...
 
CreateTableResponse createTable (CreateTableRequest request_)
 Creates a new table. More...
 
CreateTableResponse createTable (string table_name, string type_id, IDictionary< string, string > options=null)
 Creates a new table. More...
 
CreateTableExternalResponse createTableExternal (CreateTableExternalRequest request_)
 Creates a new external table, which is a local database object whose source data is located externally to the database. More...
 
CreateTableExternalResponse createTableExternal (string table_name, IList< string > filepaths, IDictionary< string, IDictionary< string, string >> modify_columns=null, IDictionary< string, string > create_table_options=null, IDictionary< string, string > options=null)
 Creates a new external table, which is a local database object whose source data is located externally to the database. More...
 
CreateTableMonitorResponse createTableMonitor (CreateTableMonitorRequest request_)
 Creates a monitor that watches for a single table modification event type (insert, update, or delete) on a particular table (identified by table_name ) and forwards event notifications to subscribers via ZMQ. More...
 
CreateTableMonitorResponse createTableMonitor (string table_name, IDictionary< string, string > options=null)
 Creates a monitor that watches for a single table modification event type (insert, update, or delete) on a particular table (identified by table_name ) and forwards event notifications to subscribers via ZMQ. More...
 
CreateTriggerByAreaResponse createTriggerByArea (CreateTriggerByAreaRequest request_)
 Sets up an area trigger mechanism for two column_names for one or more tables. More...
 
CreateTriggerByAreaResponse createTriggerByArea (string request_id, IList< string > table_names, string x_column_name, IList< double > x_vector, string y_column_name, IList< double > y_vector, IDictionary< string, string > options=null)
 Sets up an area trigger mechanism for two column_names for one or more tables. More...
 
CreateTriggerByRangeResponse createTriggerByRange (CreateTriggerByRangeRequest request_)
 Sets up a simple range trigger for a column_name for one or more tables. More...
 
CreateTriggerByRangeResponse createTriggerByRange (string request_id, IList< string > table_names, string column_name, double min, double max, IDictionary< string, string > options=null)
 Sets up a simple range trigger for a column_name for one or more tables. More...
 
CreateTypeResponse createType (CreateTypeRequest request_)
 Creates a new type describing the layout of a table. More...
 
CreateTypeResponse createType (string type_definition, string label, IDictionary< string, IList< string >> properties=null, IDictionary< string, string > options=null)
 Creates a new type describing the layout of a table. More...
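 A sketch of defining a two-column type with an Avro-style record schema and then creating a table from the returned type ID (the schema, label, and table name are placeholders; type_id is assumed to be the response member carrying the new type's ID):

    // Define the type; the definition string follows Avro record syntax.
    string typeDef = @"{
        ""type"": ""record"",
        ""name"": ""my_type"",
        ""fields"": [
            { ""name"": ""id"",   ""type"": ""int"" },
            { ""name"": ""name"", ""type"": ""string"" }
        ]
    }";
    CreateTypeResponse typeResp = db.createType(typeDef, "my_label");

    // Create a table laid out according to the new type (assumed member type_id).
    CreateTableResponse tableResp = db.createTable("my_table", typeResp.type_id);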
 
CreateUnionResponse createUnion (CreateUnionRequest request_)
 Merges data from one or more tables with comparable data types into a new table. More...
 
CreateUnionResponse createUnion (string table_name, IList< string > table_names, IList< IList< string >> input_column_names, IList< string > output_column_names, IDictionary< string, string > options=null)
 Merges data from one or more tables with comparable data types into a new table. More...
 
CreateUserExternalResponse createUserExternal (CreateUserExternalRequest request_)
 Creates a new external user (a user whose credentials are managed by an external LDAP). More...
 
CreateUserExternalResponse createUserExternal (string name, IDictionary< string, string > options=null)
 Creates a new external user (a user whose credentials are managed by an external LDAP). More...
 
CreateUserInternalResponse createUserInternal (CreateUserInternalRequest request_)
 Creates a new internal user (a user whose credentials are managed by the database system). More...
 
CreateUserInternalResponse createUserInternal (string name, string password, IDictionary< string, string > options=null)
 Creates a new internal user (a user whose credentials are managed by the database system). More...
 
CreateVideoResponse createVideo (CreateVideoRequest request_)
 Creates a job to generate a sequence of raster images that visualize data over a specified time. More...
 
CreateVideoResponse createVideo (string attribute, string begin, double duration_seconds, string end, double frames_per_second, string style, string path, string style_parameters, IDictionary< string, string > options=null)
 Creates a job to generate a sequence of raster images that visualize data over a specified time. More...
 
DeleteDirectoryResponse deleteDirectory (DeleteDirectoryRequest request_)
 Deletes a directory from KiFS. More...
 
DeleteDirectoryResponse deleteDirectory (string directory_name, IDictionary< string, string > options=null)
 Deletes a directory from KiFS. More...
 
DeleteFilesResponse deleteFiles (DeleteFilesRequest request_)
 Deletes one or more files from KiFS. More...
 
DeleteFilesResponse deleteFiles (IList< string > file_names, IDictionary< string, string > options=null)
 Deletes one or more files from KiFS. More...
 
DeleteGraphResponse deleteGraph (DeleteGraphRequest request_)
 Deletes an existing graph from the graph server and/or from persistence. More...
 
DeleteGraphResponse deleteGraph (string graph_name, IDictionary< string, string > options=null)
 Deletes an existing graph from the graph server and/or from persistence. More...
 
DeleteProcResponse deleteProc (DeleteProcRequest request_)
 Deletes a proc. More...
 
DeleteProcResponse deleteProc (string proc_name, IDictionary< string, string > options=null)
 Deletes a proc. More...
 
DeleteRecordsResponse deleteRecords (DeleteRecordsRequest request_)
 Deletes record(s) matching the provided criteria from the given table. More...
 
DeleteRecordsResponse deleteRecords (string table_name, IList< string > expressions, IDictionary< string, string > options=null)
 Deletes record(s) matching the provided criteria from the given table. More...
 
DeleteResourceGroupResponse deleteResourceGroup (DeleteResourceGroupRequest request_)
 Deletes a resource group. More...
 
DeleteResourceGroupResponse deleteResourceGroup (string name, IDictionary< string, string > options=null)
 Deletes a resource group. More...
 
DeleteRoleResponse deleteRole (DeleteRoleRequest request_)
 Deletes an existing role. More...
 
DeleteRoleResponse deleteRole (string name, IDictionary< string, string > options=null)
 Deletes an existing role. More...
 
DeleteUserResponse deleteUser (DeleteUserRequest request_)
 Deletes an existing user. More...
 
DeleteUserResponse deleteUser (string name, IDictionary< string, string > options=null)
 Deletes an existing user. More...
 
DownloadFilesResponse downloadFiles (DownloadFilesRequest request_)
 Downloads one or more files from KiFS. More...
 
DownloadFilesResponse downloadFiles (IList< string > file_names, IList< long > read_offsets, IList< long > read_lengths, IDictionary< string, string > options=null)
 Downloads one or more files from KiFS. More...
 
DropCredentialResponse dropCredential (DropCredentialRequest request_)
 Drop an existing credential. More...
 
DropCredentialResponse dropCredential (string credential_name, IDictionary< string, string > options=null)
 Drop an existing credential. More...
 
DropDatasinkResponse dropDatasink (DropDatasinkRequest request_)
 Drops an existing data sink. More...
 
DropDatasinkResponse dropDatasink (string name, IDictionary< string, string > options=null)
 Drops an existing data sink. More...
 
DropDatasourceResponse dropDatasource (DropDatasourceRequest request_)
 Drops an existing data source. More...
 
DropDatasourceResponse dropDatasource (string name, IDictionary< string, string > options=null)
 Drops an existing data source. More...
 
DropEnvironmentResponse dropEnvironment (DropEnvironmentRequest request_)
 Drop an existing user-defined function (UDF) environment. More...
 
DropEnvironmentResponse dropEnvironment (string environment_name, IDictionary< string, string > options=null)
 Drop an existing user-defined function (UDF) environment. More...
 
DropSchemaResponse dropSchema (DropSchemaRequest request_)
 Drops an existing SQL-style schema, specified in schema_name . More...
 
DropSchemaResponse dropSchema (string schema_name, IDictionary< string, string > options=null)
 Drops an existing SQL-style schema, specified in schema_name . More...
 
ExecuteProcResponse executeProc (ExecuteProcRequest request_)
 Executes a proc. More...
 
ExecuteProcResponse executeProc (string proc_name, IDictionary< string, string > _params=null, IDictionary< string, byte[]> bin_params=null, IList< string > input_table_names=null, IDictionary< string, IList< string >> input_column_names=null, IList< string > output_table_names=null, IDictionary< string, string > options=null)
 Executes a proc. More...
 
ExecuteSqlResponse executeSql (ExecuteSqlRequest request_)
 Execute a SQL statement (query, DML, or DDL). More...
 
ExecuteSqlResponse executeSql (string statement, long offset=0, long limit=-9999, string request_schema_str="", IList< byte[]> data=null, IDictionary< string, string > options=null)
 Execute a SQL statement (query, DML, or DDL). More...
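 For example, running a query with an explicit offset and limit (the statement is a placeholder; total_number_of_records is assumed to mirror the response schema):

    ExecuteSqlResponse resp =
        db.executeSql("SELECT * FROM sales WHERE price > 10", 0, 100);
    Console.WriteLine($"records: {resp.total_number_of_records}");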
 
ExportRecordsToFilesResponse exportRecordsToFiles (ExportRecordsToFilesRequest request_)
 Export records from a table to files. More...
 
ExportRecordsToFilesResponse exportRecordsToFiles (string table_name, string filepath, IDictionary< string, string > options=null)
 Export records from a table to files. More...
 
ExportRecordsToTableResponse exportRecordsToTable (ExportRecordsToTableRequest request_)
 Exports records from a source table to the specified target table in an external database. More...
 
ExportRecordsToTableResponse exportRecordsToTable (string table_name, string remote_query="", IDictionary< string, string > options=null)
 Exports records from a source table to the specified target table in an external database. More...
 
FilterResponse filter (FilterRequest request_)
 Filters data based on the specified expression. More...
 
FilterResponse filter (string table_name, string view_name, string expression, IDictionary< string, string > options=null)
 Filters data based on the specified expression. More...
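 A sketch materializing matching rows into a view (names and expression are placeholders; count is assumed to mirror the response schema):

    FilterResponse resp = db.filter("sales", "expensive_sales", "price > 100");
    Console.WriteLine($"matched {resp.count} records");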
 
FilterByAreaResponse filterByArea (FilterByAreaRequest request_)
 Calculates which objects from a table are within a named area of interest (NAI/polygon). More...
 
FilterByAreaResponse filterByArea (string table_name, string view_name, string x_column_name, IList< double > x_vector, string y_column_name, IList< double > y_vector, IDictionary< string, string > options=null)
 Calculates which objects from a table are within a named area of interest (NAI/polygon). More...
 
FilterByAreaGeometryResponse filterByAreaGeometry (FilterByAreaGeometryRequest request_)
 Calculates which geospatial geometry objects from a table intersect a named area of interest (NAI/polygon). More...
 
FilterByAreaGeometryResponse filterByAreaGeometry (string table_name, string view_name, string column_name, IList< double > x_vector, IList< double > y_vector, IDictionary< string, string > options=null)
 Calculates which geospatial geometry objects from a table intersect a named area of interest (NAI/polygon). More...
 
FilterByBoxResponse filterByBox (FilterByBoxRequest request_)
 Calculates how many objects within the given table lie in a rectangular box. More...
 
FilterByBoxResponse filterByBox (string table_name, string view_name, string x_column_name, double min_x, double max_x, string y_column_name, double min_y, double max_y, IDictionary< string, string > options=null)
 Calculates how many objects within the given table lie in a rectangular box. More...
 
FilterByBoxGeometryResponse filterByBoxGeometry (FilterByBoxGeometryRequest request_)
 Calculates which geospatial geometry objects from a table intersect a rectangular box. More...
 
FilterByBoxGeometryResponse filterByBoxGeometry (string table_name, string view_name, string column_name, double min_x, double max_x, double min_y, double max_y, IDictionary< string, string > options=null)
 Calculates which geospatial geometry objects from a table intersect a rectangular box. More...
 
FilterByGeometryResponse filterByGeometry (FilterByGeometryRequest request_)
 Applies a geometry filter against a geospatial geometry column in a given table or view. More...
 
FilterByGeometryResponse filterByGeometry (string table_name, string view_name, string column_name, string input_wkt, string operation, IDictionary< string, string > options=null)
 Applies a geometry filter against a geospatial geometry column in a given table or view. More...
 
FilterByListResponse filterByList (FilterByListRequest request_)
 Calculates which records from a table have values in the given list for the corresponding column. More...
 
FilterByListResponse filterByList (string table_name, string view_name, IDictionary< string, IList< string >> column_values_map, IDictionary< string, string > options=null)
 Calculates which records from a table have values in the given list for the corresponding column. More...
 
FilterByRadiusResponse filterByRadius (FilterByRadiusRequest request_)
 Calculates which objects from a table lie within a circle with the given radius and center point (i.e., a circular NAI). More...
 
FilterByRadiusResponse filterByRadius (string table_name, string view_name, string x_column_name, double x_center, string y_column_name, double y_center, double radius, IDictionary< string, string > options=null)
 Calculates which objects from a table lie within a circle with the given radius and center point (i.e., a circular NAI). More...
 
FilterByRadiusGeometryResponse filterByRadiusGeometry (FilterByRadiusGeometryRequest request_)
 Calculates which geospatial geometry objects from a table intersect a circle with the given radius and center point (i.e., a circular NAI). More...
 
FilterByRadiusGeometryResponse filterByRadiusGeometry (string table_name, string view_name, string column_name, double x_center, double y_center, double radius, IDictionary< string, string > options=null)
 Calculates which geospatial geometry objects from a table intersect a circle with the given radius and center point (i.e., a circular NAI). More...
 
FilterByRangeResponse filterByRange (FilterByRangeRequest request_)
 Calculates which objects from a table have a column that is within the given bounds. More...
 
FilterByRangeResponse filterByRange (string table_name, string view_name, string column_name, double lower_bound, double upper_bound, IDictionary< string, string > options=null)
 Calculates which objects from a table have a column that is within the given bounds. More...
 
FilterBySeriesResponse filterBySeries (FilterBySeriesRequest request_)
 Filters objects matching all points of the given track (works only on track type data). More...
 
FilterBySeriesResponse filterBySeries (string table_name, string view_name, string track_id, IList< string > target_track_ids, IDictionary< string, string > options=null)
 Filters objects matching all points of the given track (works only on track type data). More...
 
FilterByStringResponse filterByString (FilterByStringRequest request_)
 Calculates which objects from a table or view match a string expression for the given string columns. More...
 
FilterByStringResponse filterByString (string table_name, string view_name, string expression, string mode, IList< string > column_names, IDictionary< string, string > options=null)
 Calculates which objects from a table or view match a string expression for the given string columns. More...
 
FilterByTableResponse filterByTable (FilterByTableRequest request_)
 Filters objects in one table based on objects in another table. More...
 
FilterByTableResponse filterByTable (string table_name, string view_name, string column_name, string source_table_name, string source_table_column_name, IDictionary< string, string > options=null)
 Filters objects in one table based on objects in another table. More...
 
FilterByValueResponse filterByValue (FilterByValueRequest request_)
 Calculates which objects from a table have a particular value for a particular column. More...
 
FilterByValueResponse filterByValue (string table_name, string view_name, bool is_string, double _value, string value_str, string column_name, IDictionary< string, string > options=null)
 Calculates which objects from a table have a particular value for a particular column. More...
 
GetJobResponse getJob (GetJobRequest request_)
 Get the status and result of an asynchronously running job. More...
 
GetJobResponse getJob (long job_id, IDictionary< string, string > options=null)
 Get the status and result of an asynchronously running job. More...
 
GetRecordsResponse< T > getRecords< T > (GetRecordsRequest request_)
 Retrieves records from a given table, optionally filtered by an expression and/or sorted by a column. More...
 
GetRecordsResponse< T > getRecords< T > (string table_name, long offset=0, long limit=-9999, IDictionary< string, string > options=null)
 Retrieves records from a given table, optionally filtered by an expression and/or sorted by a column. More...
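 For example, fetching up to 100 decoded records (MyRecord is the hypothetical class above, registered via AddTableType; the data member is assumed to hold the decoded objects):

    GetRecordsResponse<MyRecord> resp = db.getRecords<MyRecord>("my_table", 0, 100);
    foreach (MyRecord r in resp.data)
        Console.WriteLine($"{r.id}: {r.name}");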
 
GetRecordsByColumnResponse getRecordsByColumn (GetRecordsByColumnRequest request_)
 For a given table, retrieves the values from the requested column(s). More...
 
GetRecordsByColumnResponse getRecordsByColumn (string table_name, IList< string > column_names, long offset=0, long limit=-9999, IDictionary< string, string > options=null)
 For a given table, retrieves the values from the requested column(s). More...
 
GetRecordsBySeriesResponse< T > getRecordsBySeries< T > (GetRecordsBySeriesRequest request_)
 Retrieves the complete series/track records from the given world_table_name based on the partial track information contained in the table_name . More...
 
GetRecordsBySeriesResponse< T > getRecordsBySeries< T > (string table_name, string world_table_name, int offset=0, int limit=250, IDictionary< string, string > options=null)
 Retrieves the complete series/track records from the given world_table_name based on the partial track information contained in the table_name . More...
 
GetRecordsFromCollectionResponse< T > getRecordsFromCollection< T > (GetRecordsFromCollectionRequest request_)
 Retrieves records from a collection. More...
 
GetRecordsFromCollectionResponse< T > getRecordsFromCollection< T > (string table_name, long offset=0, long limit=-9999, IDictionary< string, string > options=null)
 Retrieves records from a collection. More...
 
GrantPermissionResponse grantPermission (GrantPermissionRequest request_)
 Grant user or role the specified permission on the specified object. More...
 
GrantPermissionResponse grantPermission (string principal, string _object, string object_type, string permission, IDictionary< string, string > options=null)
 Grant user or role the specified permission on the specified object. More...
 
GrantPermissionCredentialResponse grantPermissionCredential (GrantPermissionCredentialRequest request_)
 Grants a credential-level permission to a user or role. More...
 
GrantPermissionCredentialResponse grantPermissionCredential (string name, string permission, string credential_name, IDictionary< string, string > options=null)
 Grants a credential-level permission to a user or role. More...
 
GrantPermissionDatasourceResponse grantPermissionDatasource (GrantPermissionDatasourceRequest request_)
 Grants a data source permission to a user or role. More...
 
GrantPermissionDatasourceResponse grantPermissionDatasource (string name, string permission, string datasource_name, IDictionary< string, string > options=null)
 Grants a data source permission to a user or role. More...
 
GrantPermissionDirectoryResponse grantPermissionDirectory (GrantPermissionDirectoryRequest request_)
 Grants a KiFS directory-level permission to a user or role. More...
 
GrantPermissionDirectoryResponse grantPermissionDirectory (string name, string permission, string directory_name, IDictionary< string, string > options=null)
 Grants a KiFS directory-level permission to a user or role. More...
 
GrantPermissionProcResponse grantPermissionProc (GrantPermissionProcRequest request_)
 Grants a proc-level permission to a user or role. More...
 
GrantPermissionProcResponse grantPermissionProc (string name, string permission, string proc_name, IDictionary< string, string > options=null)
 Grants a proc-level permission to a user or role. More...
 
GrantPermissionSystemResponse grantPermissionSystem (GrantPermissionSystemRequest request_)
 Grants a system-level permission to a user or role. More...
 
GrantPermissionSystemResponse grantPermissionSystem (string name, string permission, IDictionary< string, string > options=null)
 Grants a system-level permission to a user or role. More...
 
GrantPermissionTableResponse grantPermissionTable (GrantPermissionTableRequest request_)
 Grants a table-level permission to a user or role. More...
 
GrantPermissionTableResponse grantPermissionTable (string name, string permission, string table_name, string filter_expression="", IDictionary< string, string > options=null)
 Grants a table-level permission to a user or role. More...
 
GrantRoleResponse grantRole (GrantRoleRequest request_)
 Grants membership in a role to a user or role. More...
 
GrantRoleResponse grantRole (string role, string member, IDictionary< string, string > options=null)
 Grants membership in a role to a user or role. More...
 
HasPermissionResponse hasPermission (HasPermissionRequest request_)
 Checks if the specified user has the specified permission on the specified object. More...
 
HasPermissionResponse hasPermission (string principal, string _object, string object_type, string permission, IDictionary< string, string > options=null)
 Checks if the specified user has the specified permission on the specified object. More...
 
HasProcResponse hasProc (HasProcRequest request_)
 Checks the existence of a proc with the given name. More...
 
HasProcResponse hasProc (string proc_name, IDictionary< string, string > options=null)
 Checks the existence of a proc with the given name. More...
 
HasRoleResponse hasRole (HasRoleRequest request_)
 Checks if the specified user has the specified role. More...
 
HasRoleResponse hasRole (string principal, string role, IDictionary< string, string > options=null)
 Checks if the specified user has the specified role. More...
 
HasSchemaResponse hasSchema (HasSchemaRequest request_)
 Checks for the existence of a schema with the given name. More...
 
HasSchemaResponse hasSchema (string schema_name, IDictionary< string, string > options=null)
 Checks for the existence of a schema with the given name. More...
 
HasTableResponse hasTable (HasTableRequest request_)
 Checks for the existence of a table with the given name. More...
 
HasTableResponse hasTable (string table_name, IDictionary< string, string > options=null)
 Checks for the existence of a table with the given name. More...
 
HasTypeResponse hasType (HasTypeRequest request_)
 Check for the existence of a type. More...
 
HasTypeResponse hasType (string type_id, IDictionary< string, string > options=null)
 Check for the existence of a type. More...
 
InsertRecordsResponse insertRecordsRaw (RawInsertRecordsRequest request_)
 Adds multiple records to the specified table. More...
 
InsertRecordsResponse insertRecords< T > (InsertRecordsRequest< T > request_)
 Adds multiple records to the specified table. More...
 
InsertRecordsResponse insertRecords< T > (string table_name, IList< T > data, IDictionary< string, string > options=null)
 Adds multiple records to the specified table. More...
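 A sketch inserting two records (MyRecord is the hypothetical class above and must match the table's registered type):

    IList<MyRecord> rows = new List<MyRecord>
    {
        new MyRecord { id = 1, name = "a" },
        new MyRecord { id = 2, name = "b" }
    };
    InsertRecordsResponse resp = db.insertRecords("my_table", rows);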
 
InsertRecordsFromFilesResponse insertRecordsFromFiles (InsertRecordsFromFilesRequest request_)
 Reads from one or more files and inserts the data into a new or existing table. More...
 
InsertRecordsFromFilesResponse insertRecordsFromFiles (string table_name, IList< string > filepaths, IDictionary< string, IDictionary< string, string >> modify_columns=null, IDictionary< string, string > create_table_options=null, IDictionary< string, string > options=null)
 Reads from one or more files and inserts the data into a new or existing table. More...
 
InsertRecordsFromPayloadResponse insertRecordsFromPayload (InsertRecordsFromPayloadRequest request_)
 Reads from the given text-based or binary payload and inserts the data into a new or existing table. More...
 
InsertRecordsFromPayloadResponse insertRecordsFromPayload (string table_name, string data_text, byte[] data_bytes, IDictionary< string, IDictionary< string, string >> modify_columns=null, IDictionary< string, string > create_table_options=null, IDictionary< string, string > options=null)
 Reads from the given text-based or binary payload and inserts the data into a new or existing table. More...
 
InsertRecordsFromQueryResponse insertRecordsFromQuery (InsertRecordsFromQueryRequest request_)
 Computes a remote query result and inserts the result data into a new or existing table. More...
 
InsertRecordsFromQueryResponse insertRecordsFromQuery (string table_name, string remote_query, IDictionary< string, IDictionary< string, string >> modify_columns=null, IDictionary< string, string > create_table_options=null, IDictionary< string, string > options=null)
 Computes a remote query result and inserts the result data into a new or existing table. More...
 
InsertRecordsRandomResponse insertRecordsRandom (InsertRecordsRandomRequest request_)
 Generates a specified number of random records and adds them to the given table. More...
 
InsertRecordsRandomResponse insertRecordsRandom (string table_name, long count, IDictionary< string, IDictionary< string, double >> options=null)
 Generates a specified number of random records and adds them to the given table. More...
 
InsertSymbolResponse insertSymbol (InsertSymbolRequest request_)
 Adds a symbol or icon (i.e., an image) to represent data points when data is rendered visually. More...
 
InsertSymbolResponse insertSymbol (string symbol_id, string symbol_format, byte[] symbol_data, IDictionary< string, string > options=null)
 Adds a symbol or icon (i.e., an image) to represent data points when data is rendered visually. More...
 
KillProcResponse killProc (KillProcRequest request_)
 Kills a running proc instance. More...
 
KillProcResponse killProc (string run_id="", IDictionary< string, string > options=null)
 Kills a running proc instance. More...
 
LockTableResponse lockTable (LockTableRequest request_)
 Manages global access to a table's data. More...
 
LockTableResponse lockTable (string table_name, string lock_type=LockTableRequest.LockType.STATUS, IDictionary< string, string > options=null)
 Manages global access to a table's data. More...
 
MatchGraphResponse matchGraph (MatchGraphRequest request_)
 Matches a directed route implied by a given set of latitude/longitude points to an existing underlying road network graph using a given solution type. More...
 
MatchGraphResponse matchGraph (string graph_name, IList< string > sample_points, string solve_method=MatchGraphRequest.SolveMethod.MARKOV_CHAIN, string solution_table="", IDictionary< string, string > options=null)
 Matches a directed route implied by a given set of latitude/longitude points to an existing underlying road network graph using a given solution type. More...
 
MergeRecordsResponse mergeRecords (MergeRecordsRequest request_)
 Create a new empty result table (specified by table_name ), and insert all records from source tables (specified by source_table_names ) based on the field mapping information (specified by field_maps ). More...
 
MergeRecordsResponse mergeRecords (string table_name, IList< string > source_table_names, IList< IDictionary< string, string >> field_maps, IDictionary< string, string > options=null)
 Create a new empty result table (specified by table_name ), and insert all records from source tables (specified by source_table_names ) based on the field mapping information (specified by field_maps ). More...
 
ModifyGraphResponse modifyGraph (ModifyGraphRequest request_)
 Update an existing graph network using given nodes, edges, weights, restrictions, and options. More...
 
ModifyGraphResponse modifyGraph (string graph_name, IList< string > nodes, IList< string > edges, IList< string > weights, IList< string > restrictions, IDictionary< string, string > options=null)
 Update an existing graph network using given nodes, edges, weights, restrictions, and options. More...
 
QueryGraphResponse queryGraph (QueryGraphRequest request_)
 Employs a topological query on a network graph generated a-priori by Kinetica.createGraph(string,bool,IList{string},IList{string},IList{string},IList{string},IDictionary{string, string}) and returns a list of adjacent edge(s) or node(s), also known as an adjacency list, depending on what's been provided to the endpoint; providing edges will return nodes and providing nodes will return edges. More...
 
QueryGraphResponse queryGraph (string graph_name, IList< string > queries, IList< string > restrictions=null, string adjacency_table="", int rings=1, IDictionary< string, string > options=null)
 Employs a topological query on a network graph generated a-priori by Kinetica.createGraph(string,bool,IList{string},IList{string},IList{string},IList{string},IDictionary{string, string}) and returns a list of adjacent edge(s) or node(s), also known as an adjacency list, depending on what's been provided to the endpoint; providing edges will return nodes and providing nodes will return edges. More...
 
RepartitionGraphResponse repartitionGraph (RepartitionGraphRequest request_)
 Rebalances an existing partitioned graph. More...
 
RepartitionGraphResponse repartitionGraph (string graph_name, IDictionary< string, string > options=null)
 Rebalances an existing partitioned graph. More...
 
RevokePermissionResponse revokePermission (RevokePermissionRequest request_)
 Revokes the specified permission on the specified object from a user or role. More...
 
RevokePermissionResponse revokePermission (string principal, string _object, string object_type, string permission, IDictionary< string, string > options=null)
 Revokes the specified permission on the specified object from a user or role. More...
 
RevokePermissionCredentialResponse revokePermissionCredential (RevokePermissionCredentialRequest request_)
 Revokes a credential-level permission from a user or role. More...
 
RevokePermissionCredentialResponse revokePermissionCredential (string name, string permission, string credential_name, IDictionary< string, string > options=null)
 Revokes a credential-level permission from a user or role. More...
 
RevokePermissionDatasourceResponse revokePermissionDatasource (RevokePermissionDatasourceRequest request_)
 Revokes a data source permission from a user or role. More...
 
RevokePermissionDatasourceResponse revokePermissionDatasource (string name, string permission, string datasource_name, IDictionary< string, string > options=null)
 Revokes a data source permission from a user or role. More...
 
RevokePermissionDirectoryResponse revokePermissionDirectory (RevokePermissionDirectoryRequest request_)
 Revokes a KiFS directory-level permission from a user or role. More...
 
RevokePermissionDirectoryResponse revokePermissionDirectory (string name, string permission, string directory_name, IDictionary< string, string > options=null)
 Revokes a KiFS directory-level permission from a user or role. More...
 
RevokePermissionProcResponse revokePermissionProc (RevokePermissionProcRequest request_)
 Revokes a proc-level permission from a user or role. More...
 
RevokePermissionProcResponse revokePermissionProc (string name, string permission, string proc_name, IDictionary< string, string > options=null)
 Revokes a proc-level permission from a user or role. More...
 
RevokePermissionSystemResponse revokePermissionSystem (RevokePermissionSystemRequest request_)
 Revokes a system-level permission from a user or role. More...
 
RevokePermissionSystemResponse revokePermissionSystem (string name, string permission, IDictionary< string, string > options=null)
 Revokes a system-level permission from a user or role. More...
 
RevokePermissionTableResponse revokePermissionTable (RevokePermissionTableRequest request_)
 Revokes a table-level permission from a user or role. More...
 
RevokePermissionTableResponse revokePermissionTable (string name, string permission, string table_name, IDictionary< string, string > options=null)
 Revokes a table-level permission from a user or role. More...
 
RevokeRoleResponse revokeRole (RevokeRoleRequest request_)
 Revokes membership in a role from a user or role. More...
 
RevokeRoleResponse revokeRole (string role, string member, IDictionary< string, string > options=null)
 Revokes membership in a role from a user or role. More...
 
ShowCredentialResponse showCredential (ShowCredentialRequest request_)
 Shows information about a specified credential or all credentials. More...
 
ShowCredentialResponse showCredential (string credential_name, IDictionary< string, string > options=null)
 Shows information about a specified credential or all credentials. More...
 
ShowDatasinkResponse showDatasink (ShowDatasinkRequest request_)
 Shows information about a specified data sink or all data sinks. More...
 
ShowDatasinkResponse showDatasink (string name, IDictionary< string, string > options=null)
 Shows information about a specified data sink or all data sinks. More...
 
ShowDatasourceResponse showDatasource (ShowDatasourceRequest request_)
 Shows information about a specified data source or all data sources. More...
 
ShowDatasourceResponse showDatasource (string name, IDictionary< string, string > options=null)
 Shows information about a specified data source or all data sources. More...
 
ShowDirectoriesResponse showDirectories (ShowDirectoriesRequest request_)
 Shows information about directories in KiFS. More...
 
ShowDirectoriesResponse showDirectories (string directory_name="", IDictionary< string, string > options=null)
 Shows information about directories in KiFS. More...
 
ShowEnvironmentResponse showEnvironment (ShowEnvironmentRequest request_)
 Shows information about a specified user-defined function (UDF) environment or all environments. More...
 
ShowEnvironmentResponse showEnvironment (string environment_name="", IDictionary< string, string > options=null)
 Shows information about a specified user-defined function (UDF) environment or all environments. More...
 
ShowFilesResponse showFiles (ShowFilesRequest request_)
 Shows information about files in KiFS. More...
 
ShowFilesResponse showFiles (IList< string > paths, IDictionary< string, string > options=null)
 Shows information about files in KiFS. More...
 
ShowGraphResponse showGraph (ShowGraphRequest request_)
 Shows information and characteristics of graphs that exist on the graph server. More...
 
ShowGraphResponse showGraph (string graph_name="", IDictionary< string, string > options=null)
 Shows information and characteristics of graphs that exist on the graph server. More...
 
ShowProcResponse showProc (ShowProcRequest request_)
 Shows information about a proc. More...
 
ShowProcResponse showProc (string proc_name="", IDictionary< string, string > options=null)
 Shows information about a proc. More...
 
ShowProcStatusResponse showProcStatus (ShowProcStatusRequest request_)
 Shows the statuses of running or completed proc instances. More...
 
ShowProcStatusResponse showProcStatus (string run_id="", IDictionary< string, string > options=null)
 Shows the statuses of running or completed proc instances. More...
 
ShowResourceObjectsResponse showResourceObjects (ShowResourceObjectsRequest request_)
 Returns information about the internal sub-components (tiered objects) which use resources of the system. More...
 
ShowResourceObjectsResponse showResourceObjects (IDictionary< string, string > options=null)
 Returns information about the internal sub-components (tiered objects) which use resources of the system. More...
 
ShowResourceStatisticsResponse showResourceStatistics (ShowResourceStatisticsRequest request_)
 Requests various statistics for storage/memory tiers and resource groups. More...
 
ShowResourceStatisticsResponse showResourceStatistics (IDictionary< string, string > options=null)
 Requests various statistics for storage/memory tiers and resource groups. More...
 
ShowResourceGroupsResponse showResourceGroups (ShowResourceGroupsRequest request_)
 Requests resource group properties. More...
 
ShowResourceGroupsResponse showResourceGroups (IList< string > names, IDictionary< string, string > options=null)
 Requests resource group properties. More...
 
ShowSchemaResponse showSchema (ShowSchemaRequest request_)
 Retrieves information about a schema (or all schemas), as specified in schema_name. More...
 
ShowSchemaResponse showSchema (string schema_name, IDictionary< string, string > options=null)
 Retrieves information about a schema (or all schemas), as specified in schema_name . More...
 
ShowSecurityResponse showSecurity (ShowSecurityRequest request_)
 Shows security information relating to users and/or roles. More...
 
ShowSecurityResponse showSecurity (IList< string > names, IDictionary< string, string > options=null)
 Shows security information relating to users and/or roles. More...
 
ShowSqlProcResponse showSqlProc (ShowSqlProcRequest request_)
 Shows information about SQL procedures, including the full definition of each requested procedure. More...
 
ShowSqlProcResponse showSqlProc (string procedure_name="", IDictionary< string, string > options=null)
 Shows information about SQL procedures, including the full definition of each requested procedure. More...
 
ShowStatisticsResponse showStatistics (ShowStatisticsRequest request_)
 Retrieves the collected column statistics for the specified table(s). More...
 
ShowStatisticsResponse showStatistics (IList< string > table_names, IDictionary< string, string > options=null)
 Retrieves the collected column statistics for the specified table(s). More...
 
ShowSystemPropertiesResponse showSystemProperties (ShowSystemPropertiesRequest request_)
 Returns server configuration and version related information to the caller. More...
 
ShowSystemPropertiesResponse showSystemProperties (IDictionary< string, string > options=null)
 Returns server configuration and version related information to the caller. More...
 
ShowSystemStatusResponse showSystemStatus (ShowSystemStatusRequest request_)
 Provides server configuration and health related status to the caller. More...
 
ShowSystemStatusResponse showSystemStatus (IDictionary< string, string > options=null)
 Provides server configuration and health related status to the caller. More...
 
ShowSystemTimingResponse showSystemTiming (ShowSystemTimingRequest request_)
 Returns the last 100 database requests along with the request timing and internal job id. More...
 
ShowSystemTimingResponse showSystemTiming (IDictionary< string, string > options=null)
 Returns the last 100 database requests along with the request timing and internal job id. More...
 
ShowTableResponse showTable (ShowTableRequest request_)
 Retrieves detailed information about a table, view, or schema, specified in table_name. More...
 
ShowTableResponse showTable (string table_name, IDictionary< string, string > options=null)
 Retrieves detailed information about a table, view, or schema, specified in table_name . More...
 
ShowTableMetadataResponse showTableMetadata (ShowTableMetadataRequest request_)
 Retrieves the user provided metadata for the specified tables. More...
 
ShowTableMetadataResponse showTableMetadata (IList< string > table_names, IDictionary< string, string > options=null)
 Retrieves the user provided metadata for the specified tables. More...
 
ShowTableMonitorsResponse showTableMonitors (ShowTableMonitorsRequest request_)
 Show table monitors and their properties. More...
 
ShowTableMonitorsResponse showTableMonitors (IList< string > monitor_ids, IDictionary< string, string > options=null)
 Show table monitors and their properties. More...
 
ShowTablesByTypeResponse showTablesByType (ShowTablesByTypeRequest request_)
 Gets names of the tables whose type matches the given criteria. More...
 
ShowTablesByTypeResponse showTablesByType (string type_id, string label, IDictionary< string, string > options=null)
 Gets names of the tables whose type matches the given criteria. More...
 
ShowTriggersResponse showTriggers (ShowTriggersRequest request_)
 Retrieves information regarding the specified triggers or all existing triggers currently active. More...
 
ShowTriggersResponse showTriggers (IList< string > trigger_ids, IDictionary< string, string > options=null)
 Retrieves information regarding the specified triggers or all existing triggers currently active. More...
 
ShowTypesResponse showTypes (ShowTypesRequest request_)
 Retrieves information for the specified data type ID or type label. More...
 
ShowTypesResponse showTypes (string type_id, string label, IDictionary< string, string > options=null)
 Retrieves information for the specified data type ID or type label. More...
 
ShowVideoResponse showVideo (ShowVideoRequest request_)
 Retrieves information about rendered videos. More...
 
ShowVideoResponse showVideo (IList< string > paths, IDictionary< string, string > options=null)
 Retrieves information about rendered videos. More...
 
SolveGraphResponse solveGraph (SolveGraphRequest request_)
 Solves an existing graph for a type of problem (e.g., shortest path, page rank, travelling salesman, etc.) using source nodes, destination nodes, and additional, optional weights and restrictions. More...
 
SolveGraphResponse solveGraph (string graph_name, IList< string > weights_on_edges=null, IList< string > restrictions=null, string solver_type=SolveGraphRequest.SolverType.SHORTEST_PATH, IList< string > source_nodes=null, IList< string > destination_nodes=null, string solution_table="graph_solutions", IDictionary< string, string > options=null)
 Solves an existing graph for a type of problem (e.g., shortest path, page rank, travelling salesman, etc.) using source nodes, destination nodes, and additional, optional weights and restrictions. More...
 
UpdateRecordsResponse updateRecordsRaw (RawUpdateRecordsRequest request_)
 Runs multiple predicate-based updates in a single call. More...
 
UpdateRecordsResponse updateRecords< T > (UpdateRecordsRequest< T > request_)
 Runs multiple predicate-based updates in a single call. More...
 
UpdateRecordsResponse updateRecords< T > (string table_name, IList< string > expressions, IList< IDictionary< string, string >> new_values_maps, IList< T > data=null, IDictionary< string, string > options=null)
 Runs multiple predicate-based updates in a single call. More...
 
UpdateRecordsBySeriesResponse updateRecordsBySeries (UpdateRecordsBySeriesRequest request_)
 Updates the view specified by table_name to include full series (track) information from the world_table_name for the series (tracks) present in the view_name. More...
 
UpdateRecordsBySeriesResponse updateRecordsBySeries (string table_name, string world_table_name, string view_name="", IList< string > reserved=null, IDictionary< string, string > options=null)
 Updates the view specified by table_name to include full series (track) information from the world_table_name for the series (tracks) present in the view_name . More...
 
UploadFilesResponse uploadFiles (UploadFilesRequest request_)
 Uploads one or more files to KiFS. More...
 
UploadFilesResponse uploadFiles (IList< string > file_names, IList< byte[]> file_data, IDictionary< string, string > options=null)
 Uploads one or more files to KiFS. More...
 
UploadFilesFromurlResponse uploadFilesFromurl (UploadFilesFromurlRequest request_)
 Uploads one or more files to KiFS. More...
 
UploadFilesFromurlResponse uploadFilesFromurl (IList< string > file_names, IList< string > urls, IDictionary< string, string > options=null)
 Uploads one or more files to KiFS. More...
 
VisualizeImageChartResponse visualizeImageChart (VisualizeImageChartRequest request_)
 Scatter plot is the only plot type currently supported. More...
 
VisualizeImageChartResponse visualizeImageChart (string table_name, IList< string > x_column_names, IList< string > y_column_names, double min_x, double max_x, double min_y, double max_y, int width, int height, string bg_color, IDictionary< string, IList< string >> style_options, IDictionary< string, string > options=null)
 Scatter plot is the only plot type currently supported. More...
 
VisualizeIsochroneResponse visualizeIsochrone (VisualizeIsochroneRequest request_)
 Generate an image containing isolines for travel results using an existing graph. More...
 
VisualizeIsochroneResponse visualizeIsochrone (string graph_name, string source_node, double max_solution_radius, IList< string > weights_on_edges, IList< string > restrictions, int num_levels, bool generate_image, string levels_table, IDictionary< string, string > style_options, IDictionary< string, string > solve_options=null, IDictionary< string, string > contour_options=null, IDictionary< string, string > options=null)
 Generate an image containing isolines for travel results using an existing graph. More...
 

Static Public Member Functions

static string GetApiVersion ()
 API Version More...
 

Public Attributes

const int END_OF_SET = -9999
 No Limit More...
 
const string API_VERSION = "7.1.10.0"
 

Properties

string Url [get, set]
 URL for Kinetica Server (including "http:" and port) as a string More...
 
Uri URL [get, set]
 URL for Kinetica Server (including "http:" and port) More...
 
string Username [get, set]
 Optional: User Name for Kinetica security More...
 
bool UseSnappy [get, set]
 Use Snappy More...
 
int ThreadCount [get, set]
 Thread Count More...
 

Detailed Description

API to talk to Kinetica Database

Definition at line 40 of file Kinetica.cs.

Constructor & Destructor Documentation

kinetica.Kinetica.Kinetica ( string  url_str,
Options  options = null 
)
inline

API Constructor

Parameters
url_strURL for Kinetica Server (including "http:" and port)
optionsOptional connection options

Definition at line 128 of file Kinetica.cs.
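
For illustration, a minimal connection sketch; the server URL below is a placeholder, not a value taken from this reference:

    using kinetica;

    // Connect with default options; an Options object with credentials
    // could be passed as the second argument instead.
    Kinetica db = new Kinetica("http://172.123.45.67:9191");

The examples that follow assume db is a connected Kinetica instance like this one.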

Member Function Documentation

void kinetica.Kinetica.AddTableType ( string  table_name,
Type  obj_type 
)
inline

Given a table name, add its record type to enable proper encoding of records for insertion or updates.

Parameters
table_nameName of the table.
obj_typeThe type associated with the table.

Definition at line 158 of file Kinetica.cs.
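
A brief usage sketch; the record class and table name are hypothetical:

    // A plain record class whose public fields mirror the table's columns
    // (hypothetical; define it to match your actual table type).
    public class MyRecord
    {
        public int id;
        public string name;
    }

    // Register the type so records of this table can be encoded properly
    // for subsequent insert or update calls.
    db.AddTableType("my_table", typeof(MyRecord));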

AdminAddHostResponse kinetica.Kinetica.adminAddHost ( AdminAddHostRequest  request_)
inline

Adds a host to an existing cluster.

This method should be used for on-premise deployments only.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 33 of file KineticaFunctions.cs.

AdminAddHostResponse kinetica.Kinetica.adminAddHost ( string  host_address,
IDictionary< string, string >  options = null 
)
inline

Adds a host to an existing cluster.

This method should be used for on-premise deployments only.

Parameters
host_addressIP address of the host that will be added to the cluster. This host must have installed the same version of Kinetica as the cluster to which it is being added.
optionsOptional parameters.
  • DRY_RUN: If set to true, only validation checks will be performed. No host is added. Supported values: TRUE, FALSE. The default value is FALSE.
  • ACCEPTS_FAILOVER: If set to true, the host will accept processes (ranks, graph server, etc.) in the event of a failover on another node in the cluster. Supported values: TRUE, FALSE. The default value is FALSE.
  • PUBLIC_ADDRESS: The publicly-accessible IP address for the host being added, typically specified for clients using multi-head operations. This setting is required if any other host(s) in the cluster specify a public address.
  • HOST_MANAGER_PUBLIC_URL: The publicly-accessible full path URL to the host manager on the host being added, e.g., 'http://172.123.45.67:9300'. The default host manager port can be found in the list of ports used by Kinetica.
  • RAM_LIMIT: The desired RAM limit for the host being added, i.e. the sum of RAM usage for all processes on the host will not be able to exceed this value. Supported units: K (thousand), KB (kilobytes), M (million), MB (megabytes), G (billion), GB (gigabytes); if no unit is provided, the value is assumed to be in bytes. For example, if ram_limit is set to 10M, the resulting RAM limit is 10 million bytes. Set ram_limit to -1 to have no RAM limit.
  • GPUS: Comma-delimited list of GPU indices (starting at 1) that are eligible for running worker processes. If left blank, all GPUs on the host being added will be eligible.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 133 of file KineticaFunctions.cs.
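
A hedged sketch of a dry-run validation; the host address is a placeholder, and the option keys are assumed to be the lower-case forms of the constants listed above:

    using System.Collections.Generic;

    var addHostOptions = new Dictionary<string, string>
    {
        { "dry_run", "true" },   // validate only; no host is added
        { "ram_limit", "10M" }   // desired RAM limit for the new host
    };
    AdminAddHostResponse addHostResp =
        db.adminAddHost("172.123.45.69", addHostOptions);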

AdminAddRanksResponse kinetica.Kinetica.adminAddRanks ( AdminAddRanksRequest  request_)
inline

Add one or more ranks to an existing Kinetica cluster.

The new ranks will not contain any data initially (other than replicated tables) and will not be assigned any shards. To rebalance data and shards across the cluster, use Kinetica.adminRebalance(IDictionary{string, string}).
The database must be offline for this operation, see Kinetica.adminOffline(bool,IDictionary{string, string})
For example, if attempting to add three new ranks (two ranks on host 172.123.45.67 and one rank on host 172.123.45.68) to a Kinetica cluster with additional configuration parameters:

  • hosts would be an array including 172.123.45.67 in the first two indices (signifying two ranks being added to host 172.123.45.67) and 172.123.45.68 in the last index (signifying one rank being added to host 172.123.45.68)
  • config_params would be an array of maps, with each map corresponding to the ranks being added in hosts. The key of each map would be the configuration parameter name and the value would be the parameter's value, e.g. '{"rank.gpu":"1"}'
    This endpoint's processing includes copying all replicated table data to the new rank(s) and therefore could take a long time. The API call may time out if run directly. It is recommended to run this endpoint asynchronously via Kinetica.createJob(string,string,byte[],string,IDictionary{string, string}).

This method should be used for on-premise deployments only.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 182 of file KineticaFunctions.cs.

AdminAddRanksResponse kinetica.Kinetica.adminAddRanks ( IList< string >  hosts,
IList< IDictionary< string, string >>  config_params,
IDictionary< string, string >  options = null 
)
inline

Add one or more ranks to an existing Kinetica cluster.

The new ranks will not contain any data initially (other than replicated tables) and will not be assigned any shards. To rebalance data and shards across the cluster, use Kinetica.adminRebalance(IDictionary{string, string}).
The database must be offline for this operation, see Kinetica.adminOffline(bool,IDictionary{string, string})
For example, if attempting to add three new ranks (two ranks on host 172.123.45.67 and one rank on host 172.123.45.68) to a Kinetica cluster with additional configuration parameters:

  • hosts would be an array including 172.123.45.67 in the first two indices (signifying two ranks being added to host 172.123.45.67) and 172.123.45.68 in the last index (signifying one rank being added to host 172.123.45.68)
  • config_params would be an array of maps, with each map corresponding to the ranks being added in hosts . The key of each map would be the configuration parameter name and the value would be the parameter's value, e.g. '{"rank.gpu":"1"}'
    This endpoint's processing includes copying all replicated table data to the new rank(s) and therefore could take a long time. The API call may time out if run directly. It is recommended to run this endpoint asynchronously via Kinetica.createJob(string,string,byte[],string,IDictionary{string, string}).

This method should be used for on-premise deployments only.

Parameters
hostsArray of host IP addresses (matching a hostN.address from the gpudb.conf file), or host identifiers (e.g. 'host0' from the gpudb.conf file), on which to add ranks to the cluster. The hosts must already be in the cluster. If needed beforehand, to add a new host to the cluster use /admin/add/host. Include the same entry as many times as there are ranks to add to the cluster, e.g., if two ranks on host 172.123.45.67 should be added, hosts could look like '["172.123.45.67", "172.123.45.67"]'. All ranks will be added simultaneously, i.e. they're not added in the order of this array. Each entry in this array corresponds to the entry at the same index in the config_params.
config_paramsArray of maps containing configuration parameters to apply to the new ranks found in hosts. For example, '{"rank.gpu":"2", "tier.ram.rank.limit":"10000000000"}'. Currently, the available parameters are rank-specific parameters in the Network, Hardware, Text Search, and RAM Tiered Storage sections in the gpudb.conf file, with the key exception of the 'rankN.host' settings in the Network section that will be determined by hosts instead. Though many of these configuration parameters typically are affixed with 'rankN' in the gpudb.conf file (where N is the rank number), the 'N' should be omitted in config_params as the new rank number(s) are not allocated until the ranks have been added to the cluster. Each entry in this array corresponds to the entry at the same index in the hosts. This array must either be completely empty or have the same number of elements as the hosts. An empty array will result in the new ranks being set with default parameters.
optionsOptional parameters.
  • DRY_RUN: If true, only validation checks will be performed. No ranks are added. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 297 of file KineticaFunctions.cs.
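
Following the example described above (two ranks on host 172.123.45.67 and one on host 172.123.45.68), a sketch of the call; the rank.gpu values are illustrative:

    using System.Collections.Generic;

    var hosts = new List<string>
    {
        "172.123.45.67", "172.123.45.67", "172.123.45.68"
    };
    var configParams = new List<IDictionary<string, string>>
    {
        new Dictionary<string, string> { { "rank.gpu", "1" } },
        new Dictionary<string, string> { { "rank.gpu", "2" } },
        new Dictionary<string, string>()  // default parameters for the third rank
    };
    AdminAddRanksResponse addRanksResp = db.adminAddRanks(hosts, configParams);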

AdminAlterHostResponse kinetica.Kinetica.adminAlterHost ( AdminAlterHostRequest  request_)
inline

Alter properties on an existing host in the cluster.

Currently, the only property that can be altered is a host's ability to accept failover processes.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 315 of file KineticaFunctions.cs.

AdminAlterHostResponse kinetica.Kinetica.adminAlterHost ( string  host,
IDictionary< string, string >  options = null 
)
inline

Alter properties on an existing host in the cluster.

Currently, the only property that can be altered is a host's ability to accept failover processes.

Parameters
hostIdentifies the host this applies to. Can be the host address, or formatted as 'hostN' where N is the host number as specified in gpudb.conf
optionsOptional parameters
  • ACCEPTS_FAILOVER: If set to true, the host will accept processes (ranks, graph server, etc.) in the event of a failover on another node in the cluster. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 358 of file KineticaFunctions.cs.

AdminAlterJobsResponse kinetica.Kinetica.adminAlterJobs ( AdminAlterJobsRequest  request_)
inline

Perform the requested action on a list of one or more job(s).

Based on the type of job and the current state of execution, the action may not be successfully executed. The final result of the attempted actions for each specified job is returned in the status array of the response. See Job Manager for more information.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 381 of file KineticaFunctions.cs.

AdminAlterJobsResponse kinetica.Kinetica.adminAlterJobs ( IList< long >  job_ids,
string  action,
IDictionary< string, string >  options = null 
)
inline

Perform the requested action on a list of one or more job(s).

Based on the type of job and the current state of execution, the action may not be successfully executed. The final result of the attempted actions for each specified job is returned in the status array of the response. See Job Manager for more information.

Parameters
job_idsJobs to be modified.
actionAction to be performed on the jobs specified by job_ids.
optionsOptional parameters.
  • JOB_TAG: Job tag returned in call to create the job
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 423 of file KineticaFunctions.cs.

AdminBackupBeginResponse kinetica.Kinetica.adminBackupBegin ( AdminBackupBeginRequest  request_)
inline

Prepares the system for a backup by closing all open file handles after allowing current active jobs to complete.

When the database is in backup mode, queries that result in a disk write operation will be blocked until backup mode has been completed by using Kinetica.adminBackupEnd(IDictionary{string, string}).

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 445 of file KineticaFunctions.cs.

AdminBackupBeginResponse kinetica.Kinetica.adminBackupBegin ( IDictionary< string, string >  options = null)
inline

Prepares the system for a backup by closing all open file handles after allowing current active jobs to complete.

When the database is in backup mode, queries that result in a disk write operation will be blocked until backup mode has been completed by using Kinetica.adminBackupEnd(IDictionary{string, string}).

Parameters
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 467 of file KineticaFunctions.cs.

AdminBackupEndResponse kinetica.Kinetica.adminBackupEnd ( AdminBackupEndRequest  request_)
inline

Restores the system to normal operating mode after a backup has completed, allowing any queries that were blocked to complete.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 483 of file KineticaFunctions.cs.

AdminBackupEndResponse kinetica.Kinetica.adminBackupEnd ( IDictionary< string, string >  options = null)
inline

Restores the system to normal operating mode after a backup has completed, allowing any queries that were blocked to complete.

Parameters
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 501 of file KineticaFunctions.cs.
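
A sketch of the begin/end pairing; the actual backup of the persisted data is performed outside the API between the two calls:

    db.adminBackupBegin();   // enter backup mode; disk writes are blocked
    // ... perform the external backup here ...
    db.adminBackupEnd();     // resume normal operation; blocked queries complete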

AdminHaRefreshResponse kinetica.Kinetica.adminHaRefresh ( AdminHaRefreshRequest  request_)
inline

Restarts the HA processing on the given cluster as a mechanism for accepting breaking HA configuration changes.

Additionally, the cluster is put into read-only mode while HA is restarting.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 517 of file KineticaFunctions.cs.

AdminHaRefreshResponse kinetica.Kinetica.adminHaRefresh ( IDictionary< string, string >  options = null)
inline

Restarts the HA processing on the given cluster as a mechanism for accepting breaking HA configuration changes.

Additionally, the cluster is put into read-only mode while HA is restarting.

Parameters
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 535 of file KineticaFunctions.cs.

AdminOfflineResponse kinetica.Kinetica.adminOffline ( AdminOfflineRequest  request_)
inline

Take the system offline.

When the system is offline, no user operations can be performed with the exception of a system shutdown.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 551 of file KineticaFunctions.cs.

AdminOfflineResponse kinetica.Kinetica.adminOffline ( bool  offline,
IDictionary< string, string >  options = null 
)
inline

Take the system offline.

When the system is offline, no user operations can be performed with the exception of a system shutdown.

Parameters
offlineSet to true if desired state is offline. Supported values: TRUE, FALSE.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 599 of file KineticaFunctions.cs.
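
A sketch of a typical offline maintenance window:

    db.adminOffline(true);    // take the system offline
    // ... perform offline-only operations, e.g. adminAddRanks or adminRebalance ...
    db.adminOffline(false);   // bring the system back online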

AdminRebalanceResponse kinetica.Kinetica.adminRebalance ( AdminRebalanceRequest  request_)
inline

Rebalance the data in the cluster so that all nodes contain an approximately equal number of records, and/or rebalance the shards to be equally distributed (as much as possible) across all the ranks.


The database must be offline for this operation, see Kinetica.adminOffline(bool,IDictionary{string, string})

  • If Kinetica.adminRebalance(IDictionary{string, string}) is invoked after a change is made to the cluster, e.g., a host was added or removed, sharded data will be evenly redistributed across the cluster by number of shards per rank while unsharded data will be redistributed across the cluster by data size per rank
  • If Kinetica.adminRebalance(IDictionary{string, string}) is invoked at some point when unsharded data (a.k.a. randomly-sharded) in the cluster is unevenly distributed over time, sharded data will not move while unsharded data will be redistributed across the cluster by data size per rank
    NOTE: Replicated data will not move as a result of this call
    This endpoint's processing time depends on the amount of data in the system, thus the API call may time out if run directly. It is recommended to run this endpoint asynchronously via Kinetica.createJob(string,string,byte[],string,IDictionary{string, string}).
Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 652 of file KineticaFunctions.cs.

AdminRebalanceResponse kinetica.Kinetica.adminRebalance ( IDictionary< string, string >  options = null)
inline

Rebalance the data in the cluster so that all nodes contain an approximately equal number of records, and/or rebalance the shards to be equally distributed (as much as possible) across all the ranks.


The database must be offline for this operation, see Kinetica.adminOffline(bool,IDictionary{string, string})

  • If Kinetica.adminRebalance(IDictionary{string, string}) is invoked after a change is made to the cluster, e.g., a host was added or removed, sharded data will be evenly redistributed across the cluster by number of shards per rank while unsharded data will be redistributed across the cluster by data size per rank
  • If Kinetica.adminRebalance(IDictionary{string, string}) is invoked at some point when unsharded data (a.k.a. randomly-sharded) in the cluster is unevenly distributed over time, sharded data will not move while unsharded data will be redistributed across the cluster by data size per rank
    NOTE: Replicated data will not move as a result of this call
    This endpoint's processing time depends on the amount of data in the system, thus the API call may time out if run directly. It is recommended to run this endpoint asynchronously via Kinetica.createJob(string,string,byte[],string,IDictionary{string, string}).
Parameters
optionsOptional parameters.
  • REBALANCE_SHARDED_DATA: If true, sharded data will be rebalanced approximately equally across the cluster. Note that for clusters with large amounts of sharded data, this data transfer could be time consuming and result in delayed query responses. Supported values: TRUE, FALSE. The default value is TRUE.
  • REBALANCE_UNSHARDED_DATA: If true, unsharded data (a.k.a. randomly-sharded) will be rebalanced approximately equally across the cluster. Note that for clusters with large amounts of unsharded data, this data transfer could be time consuming and result in delayed query responses. Supported values: TRUE, FALSE. The default value is TRUE.
  • TABLE_INCLUDES: Comma-separated list of unsharded table names to rebalance. Not applicable to sharded tables because they are always rebalanced. Cannot be used simultaneously with table_excludes. This parameter is ignored if rebalance_unsharded_data is false.
  • TABLE_EXCLUDES: Comma-separated list of unsharded table names to not rebalance. Not applicable to sharded tables because they are always rebalanced. Cannot be used simultaneously with table_includes. This parameter is ignored if rebalance_unsharded_data is false.
  • AGGRESSIVENESS: Influences how much data is moved at a time during rebalance. A higher aggressiveness will complete the rebalance faster. A lower aggressiveness will take longer but allow for better interleaving between the rebalance and other queries. Valid values are constants from 1 (lowest) to 10 (highest). The default value is '10'.
  • COMPACT_AFTER_REBALANCE: Perform compaction of deleted records once the rebalance completes to reclaim memory and disk space. Default is true, unless repair_incorrectly_sharded_data is set to true. Supported values: TRUE, FALSE. The default value is TRUE.
  • COMPACT_ONLY: If set to true, ignore rebalance options and attempt to perform compaction of deleted records to reclaim memory and disk space without rebalancing first. Supported values: TRUE, FALSE. The default value is FALSE.
  • REPAIR_INCORRECTLY_SHARDED_DATA: Scans for any data sharded incorrectly and re-routes the data to the correct location. Only necessary if /admin/verifydb reports an error in sharding alignment. This can be done as part of a typical rebalance after expanding the cluster or in a standalone fashion when it is believed that data is sharded incorrectly somewhere in the cluster. Compaction will not be performed by default when this is enabled. If this option is set to true, the time necessary to rebalance and the memory used by the rebalance may increase. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 849 of file KineticaFunctions.cs.
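
A hedged sketch, assuming the option keys are the lower-case forms of the constants above:

    using System.Collections.Generic;

    var rebalanceOptions = new Dictionary<string, string>
    {
        { "rebalance_sharded_data", "true" },
        { "rebalance_unsharded_data", "true" },
        { "aggressiveness", "5" }  // balance rebalance speed against query latency
    };
    AdminRebalanceResponse rebalanceResp = db.adminRebalance(rebalanceOptions);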

AdminRemoveHostResponse kinetica.Kinetica.adminRemoveHost ( AdminRemoveHostRequest  request_)
inline

Removes a host from an existing cluster.

If the host to be removed has any ranks running on it, the ranks must be removed using Kinetica.adminRemoveRanks(IList{string},IDictionary{string, string}) or manually switched over to a new host using Kinetica.adminSwitchover(IList{string},IList{string},IDictionary{string, string}) prior to host removal. If the host to be removed has the graph server or SQL planner running on it, these must be manually switched over to a new host using Kinetica.adminSwitchover(IList{string},IList{string},IDictionary{string, string}).

This method should be used for on-premise deployments only.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 875 of file KineticaFunctions.cs.

AdminRemoveHostResponse kinetica.Kinetica.adminRemoveHost ( string  host,
IDictionary< string, string >  options = null 
)
inline

Removes a host from an existing cluster.

If the host to be removed has any ranks running on it, the ranks must be removed using Kinetica.adminRemoveRanks(IList{string},IDictionary{string, string}) or manually switched over to a new host using Kinetica.adminSwitchover(IList{string},IList{string},IDictionary{string, string}) prior to host removal. If the host to be removed has the graph server or SQL planner running on it, these must be manually switched over to a new host using Kinetica.adminSwitchover(IList{string},IList{string},IDictionary{string, string}).

This method should be used for on-premise deployments only.

Parameters
hostIdentifies the host this applies to. Can be the host address, or formatted as 'hostN' where N is the host number as specified in gpudb.conf
optionsOptional parameters.
  • DRY_RUN: If set to true, only validation checks will be performed. No host is removed. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 927 of file KineticaFunctions.cs.
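
A dry-run sketch; the host identifier is illustrative and the option key is assumed to be the lower-case form of DRY_RUN:

    using System.Collections.Generic;

    var removeHostOptions = new Dictionary<string, string> { { "dry_run", "true" } };
    // Validate only; no host is actually removed.
    db.adminRemoveHost("host2", removeHostOptions);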

AdminRemoveRanksResponse kinetica.Kinetica.adminRemoveRanks ( AdminRemoveRanksRequest  request_)
inline

Remove one or more ranks from an existing Kinetica cluster.

All data will be rebalanced to other ranks before the rank(s) is removed unless the rebalance_sharded_data or rebalance_unsharded_data parameters are set to false in the options, in which case the corresponding sharded data and/or unsharded data (a.k.a. randomly-sharded) will be deleted.
The database must be offline for this operation, see Kinetica.adminOffline(bool,IDictionary{string, string})
This endpoint's processing time depends on the amount of data in the system, thus the API call may time out if run directly. It is recommended to run this endpoint asynchronously via Kinetica.createJob(string,string,byte[],string,IDictionary{string, string}).

This method should be used for on-premise deployments only.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 967 of file KineticaFunctions.cs.

AdminRemoveRanksResponse kinetica.Kinetica.adminRemoveRanks ( IList< string >  ranks,
IDictionary< string, string >  options = null 
)
inline

Remove one or more ranks from an existing Kinetica cluster.

All data will be rebalanced to other ranks before the rank(s) is removed unless the rebalance_sharded_data or rebalance_unsharded_data parameters are set to false in the options , in which case the corresponding sharded data and/or unsharded data (a.k.a. randomly-sharded) will be deleted.
The database must be offline for this operation, see Kinetica.adminOffline(bool,IDictionary{string, string})
This endpoint's processing time depends on the amount of data in the system, thus the API call may time out if run directly. It is recommended to run this endpoint asynchronously via Kinetica.createJob(string,string,byte[],string,IDictionary{string, string}).

This method should be used for on-premise deployments only.

Parameters
ranksEach array value designates one or more ranks to remove from the cluster. Values can be formatted as 'rankN' for a specific rank, 'hostN' (from the gpudb.conf file) to remove all ranks on that host, or the host IP address (hostN.address from the gpudb.conf file) which also removes all ranks on that host. Rank 0 (the head rank) cannot be removed (but can be moved to another host using /admin/switchover). At least one worker rank must be left in the cluster after the operation.
optionsOptional parameters.
  • REBALANCE_SHARDED_DATA: If true, sharded data will be rebalanced approximately equally across the cluster. Note that for clusters with large amounts of sharded data, this data transfer could be time consuming and result in delayed query responses. Supported values: TRUE, FALSE. The default value is TRUE.
  • REBALANCE_UNSHARDED_DATA: If true, unsharded data (a.k.a. randomly-sharded) will be rebalanced approximately equally across the cluster. Note that for clusters with large amounts of unsharded data, this data transfer could be time consuming and result in delayed query responses. Supported values: TRUE, FALSE. The default value is TRUE.
  • AGGRESSIVENESS: Influences how much data is moved at a time during rebalance. A higher aggressiveness will complete the rebalance faster. A lower aggressiveness will take longer but allow for better interleaving between the rebalance and other queries. Valid values are constants from 1 (lowest) to 10 (highest). The default value is '10'.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 1073 of file KineticaFunctions.cs.
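
A sketch using the rank/host formats described above; the identifiers are illustrative:

    using System.Collections.Generic;

    // Remove one specific rank and all ranks on one host.
    var ranks = new List<string> { "rank2", "host1" };
    AdminRemoveRanksResponse removeRanksResp = db.adminRemoveRanks(ranks);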

AdminShowAlertsResponse kinetica.Kinetica.adminShowAlerts ( AdminShowAlertsRequest  request_)
inline

Requests a list of the most recent alerts.

Returns lists of alert data, including timestamp and type.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 1090 of file KineticaFunctions.cs.

AdminShowAlertsResponse kinetica.Kinetica.adminShowAlerts ( int  num_alerts,
IDictionary< string, string >  options = null 
)
inline

Requests a list of the most recent alerts.

Returns lists of alert data, including timestamp and type.

Parameters
num_alertsNumber of most recent alerts to request. The response will include up to num_alerts depending on how many alerts there are in the system. A value of 0 returns all stored alerts.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 1113 of file KineticaFunctions.cs.
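
A minimal sketch requesting the five most recent alerts:

    AdminShowAlertsResponse alertsResp = db.adminShowAlerts(5);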

AdminShowClusterOperationsResponse kinetica.Kinetica.adminShowClusterOperations ( AdminShowClusterOperationsRequest  request_)
inline

Requests the detailed status of the current operation (by default) or a prior cluster operation specified by history_index.

Returns details on the requested cluster operation.
The response will also indicate how many cluster operations are stored in the history.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 1134 of file KineticaFunctions.cs.

AdminShowClusterOperationsResponse kinetica.Kinetica.adminShowClusterOperations ( int  history_index = 0,
IDictionary< string, string >  options = null 
)
inline

Requests the detailed status of the current operation (by default) or a prior cluster operation specified by history_index .

Returns details on the requested cluster operation.
The response will also indicate how many cluster operations are stored in the history.

Parameters
history_indexIndicates which cluster operation to retrieve. Use 0 for the most recent. The default value is 0.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 1159 of file KineticaFunctions.cs.

AdminShowJobsResponse kinetica.Kinetica.adminShowJobs ( AdminShowJobsRequest  request_)
inline

Get a list of the current jobs in GPUdb.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 1176 of file KineticaFunctions.cs.

AdminShowJobsResponse kinetica.Kinetica.adminShowJobs ( IDictionary< string, string >  options = null)
inline

Get a list of the current jobs in GPUdb.

Parameters
optionsOptional parameters.
  • SHOW_ASYNC_JOBS: If true, then the completed async jobs are also included in the response. By default, once the async jobs are completed they are no longer included in the jobs list. Supported values: TRUE, FALSE. The default value is FALSE.
  • SHOW_WORKER_INFO: If true, then information is also returned from worker ranks. By default only status from the head rank is returned. Supported values: TRUE, FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 1232 of file KineticaFunctions.cs.
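
A hedged sketch that also lists completed async jobs; the option key is assumed to be the lower-case form of SHOW_ASYNC_JOBS:

    using System.Collections.Generic;

    var jobOptions = new Dictionary<string, string> { { "show_async_jobs", "true" } };
    AdminShowJobsResponse jobsResp = db.adminShowJobs(jobOptions);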

AdminShowShardsResponse kinetica.Kinetica.adminShowShards ( AdminShowShardsRequest  request_)
inline

Show the mapping of shards to the corresponding rank and tom.

The response message contains a list of 16384 (total number of shards in the system) Rank and TOM numbers corresponding to each shard.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 1249 of file KineticaFunctions.cs.

AdminShowShardsResponse kinetica.Kinetica.adminShowShards ( IDictionary< string, string >  options = null)
inline

Show the mapping of shards to the corresponding rank and tom.

The response message contains a list of 16384 (total number of shards in the system) Rank and TOM numbers corresponding to each shard.

Parameters
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 1268 of file KineticaFunctions.cs.

AdminShutdownResponse kinetica.Kinetica.adminShutdown ( AdminShutdownRequest  request_)
inline

Exits the database server application.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 1282 of file KineticaFunctions.cs.

AdminShutdownResponse kinetica.Kinetica.adminShutdown ( string  exit_type,
string  authorization,
IDictionary< string, string >  options = null 
)
inline

Exits the database server application.

Parameters
exit_typeReserved for future use. User can pass an empty string.
authorizationNo longer used. User can pass an empty string.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 1302 of file KineticaFunctions.cs.
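
Per the parameter notes above, both string arguments may be empty:

    db.adminShutdown("", "");   // exits the database server application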

AdminSwitchoverResponse kinetica.Kinetica.adminSwitchover ( AdminSwitchoverRequest  request_)
inline

Manually switch over one or more processes to another host.

Individual ranks or entire hosts may be moved to another host.

This method should be used for on-premise deployments only.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 1323 of file KineticaFunctions.cs.

AdminSwitchoverResponse kinetica.Kinetica.adminSwitchover ( IList< string >  processes,
IList< string >  destinations,
IDictionary< string, string >  options = null 
)
inline

Manually switch over one or more processes to another host.

Individual ranks or entire hosts may be moved to another host.

This method should be used for on-premise deployments only.

Parameters
processesIndicates the process identifier to switch over to another host. Options are 'hostN' and 'rankN' where 'N' corresponds to the number associated with a host or rank in the Network section of the gpudb.conf file; e.g., 'host[N].address' or 'rank[N].host'. If 'hostN' is provided, all processes on that host will be moved to another host. Each entry in this array will be switched over to the corresponding host entry at the same index in destinations.
destinationsIndicates to which host to switch over each corresponding process given in processes. Each index must be specified as 'hostN' where 'N' corresponds to the number associated with a host or rank in the Network section of the gpudb.conf file; e.g., 'host[N].address'. Each entry in this array will receive the corresponding process entry at the same index in processes.
optionsOptional parameters.
  • DRY_RUN: If set to true, only validation checks will be performed. Nothing is switched over. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 1387 of file KineticaFunctions.cs.
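
A sketch moving one rank to another host; the identifiers are illustrative:

    using System.Collections.Generic;

    var processes = new List<string> { "rank2" };      // process to move
    var destinations = new List<string> { "host3" };   // target host at the same index
    db.adminSwitchover(processes, destinations);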

AdminVerifyDbResponse kinetica.Kinetica.adminVerifyDb ( AdminVerifyDbRequest  request_)
inline

Verify database is in a consistent state.

When inconsistencies or errors are found, the verified_ok flag in the response is set to false and the list of errors found is provided in the error_list.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 1407 of file KineticaFunctions.cs.

AdminVerifyDbResponse kinetica.Kinetica.adminVerifyDb ( IDictionary< string, string >  options = null)
inline

Verify database is in a consistent state.

When inconsistencies or errors are found, the verified_ok flag in the response is set to false and the list of errors found is provided in the error_list.

Parameters
optionsOptional parameters.
  • REBUILD_ON_ERROR: [DEPRECATED – Use the Rebuild DB feature of GAdmin instead.] Supported values: TRUE, FALSE. The default value is FALSE.
  • VERIFY_NULLS: When true, verifies that null values are set to zero. Supported values: TRUE, FALSE. The default value is FALSE.
  • VERIFY_PERSIST: When true, persistent objects will be compared against their state in memory and workers will be checked for orphaned table data in persist. To check for orphaned worker data, either set concurrent_safe in options to true or place the database offline. Supported values: TRUE, FALSE. The default value is FALSE.
  • CONCURRENT_SAFE: When true, allows this endpoint to be run safely with other concurrent database operations. Other operations may be slower while this is running. Supported values: TRUE, FALSE. The default value is TRUE.
  • VERIFY_RANK0: If true, compare rank0 table metadata against workers' metadata. Supported values: TRUE, FALSE. The default value is FALSE.
  • DELETE_ORPHANED_TABLES: If true, orphaned table directories found on workers for which there is no corresponding metadata will be deleted. Must set verify_persist in options to true. It is recommended to run this while the database is offline OR set concurrent_safe in options to true. Supported values: TRUE, FALSE. The default value is FALSE.
  • VERIFY_ORPHANED_TABLES_ONLY: If true, only the presence of orphaned table directories will be checked, all persistence checks will be skipped. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 1572 of file KineticaFunctions.cs.
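
A hedged sketch of a concurrent-safe verification, assuming the option key is the lower-case form of CONCURRENT_SAFE and that the response exposes the verified_ok and error_list members referenced above:

    using System;
    using System.Collections.Generic;

    var verifyOptions = new Dictionary<string, string> { { "concurrent_safe", "true" } };
    AdminVerifyDbResponse verifyResp = db.adminVerifyDb(verifyOptions);
    if (!verifyResp.verified_ok)
        foreach (var err in verifyResp.error_list)
            Console.WriteLine(err);   // print each inconsistency found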

AggregateConvexHullResponse kinetica.Kinetica.aggregateConvexHull ( AggregateConvexHullRequest  request_)
inline

Calculates and returns the convex hull for the values in a table specified by table_name.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 1588 of file KineticaFunctions.cs.

AggregateConvexHullResponse kinetica.Kinetica.aggregateConvexHull ( string  table_name,
string  x_column_name,
string  y_column_name,
IDictionary< string, string >  options = null 
)
inline

Calculates and returns the convex hull for the values in a table specified by table_name .

Parameters
table_nameName of table on which the operation will be performed. Must be an existing table, in [schema_name.]table_name format, using standard name resolution rules.
x_column_nameName of the column containing the x coordinates of the points for the operation being performed.
y_column_nameName of the column containing the y coordinates of the points for the operation being performed.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 1616 of file KineticaFunctions.cs.
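
A minimal sketch; the table and column names are placeholders:

    AggregateConvexHullResponse hullResp =
        db.aggregateConvexHull("my_schema.my_table", "x", "y");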

AggregateGroupByResponse kinetica.Kinetica.aggregateGroupBy ( AggregateGroupByRequest  request_)
inline

Calculates unique combinations (groups) of values for the given columns in a given table or view and computes aggregates on each unique combination.

This is somewhat analogous to an SQL-style SELECT...GROUP BY.
For aggregation details and examples, see Aggregation. For limitations, see Aggregation Limitations.
Any column(s) can be grouped on, and all column types except unrestricted-length strings may be used for computing applicable aggregates; columns marked as store-only are unable to be used in grouping or aggregation.
The results can be paged via the offset and limit parameters. For example, to get 10 groups with the largest counts the inputs would be: limit=10, options={"sort_order":"descending", "sort_by":"value"}.
options can be used to customize behavior of this call e.g. filtering or sorting the results.
To group by columns 'x' and 'y' and compute the number of objects within each group, use: column_names=['x','y','count(*)'].
To also compute the sum of 'z' over each group, use: column_names=['x','y','count(*)','sum(z)'].
Available aggregation functions are: count(*), sum, min, max, avg, mean, stddev, stddev_pop, stddev_samp, var, var_pop, var_samp, arg_min, arg_max and count_distinct.
Available grouping functions are Rollup, Cube, and Grouping Sets.
This service also provides support for Pivot operations.
Filtering on aggregates is supported via expressions using aggregation functions supplied to having.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
If a result_table name is specified in the options, the results are stored in a new table with that name–no results are returned in the response. Both the table name and resulting column names must adhere to standard naming conventions; column/aggregation expressions will need to be aliased. If the source table's shard key is used as the grouping column(s) and all result records are selected (offset is 0 and limit is -9999), the result table will be sharded, in all other cases it will be replicated. Sorting will properly function only if the result table is replicated or if there is only one processing node and should not be relied upon in other cases. Not available when any of the values of column_names is an unrestricted-length string.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 1710 of file KineticaFunctions.cs.

AggregateGroupByResponse kinetica.Kinetica.aggregateGroupBy ( string  table_name,
IList< string >  column_names,
long  offset = 0,
long  limit = -9999,
IDictionary< string, string >  options = null 
)
inline

Calculates unique combinations (groups) of values for the given columns in a given table or view and computes aggregates on each unique combination.

This is somewhat analogous to an SQL-style SELECT...GROUP BY.
For aggregation details and examples, see Aggregation. For limitations, see Aggregation Limitations.
Any column(s) can be grouped on, and all column types except unrestricted-length strings may be used for computing applicable aggregates; columns marked as store-only are unable to be used in grouping or aggregation.
The results can be paged via the offset and limit parameters. For example, to get 10 groups with the largest counts the inputs would be: limit=10, options={"sort_order":"descending", "sort_by":"value"}.
options can be used to customize behavior of this call e.g. filtering or sorting the results.
To group by columns 'x' and 'y' and compute the number of objects within each group, use: column_names=['x','y','count(*)'].
To also compute the sum of 'z' over each group, use: column_names=['x','y','count(*)','sum(z)'].
Available aggregation functions are: count(*), sum, min, max, avg, mean, stddev, stddev_pop, stddev_samp, var, var_pop, var_samp, arg_min, arg_max and count_distinct.
Available grouping functions are Rollup, Cube, and Grouping Sets.
This service also provides support for Pivot operations.
Filtering on aggregates is supported via expressions using aggregation functions supplied to having.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
If a result_table name is specified in the options, the results are stored in a new table with that name; no results are returned in the response. Both the table name and resulting column names must adhere to standard naming conventions; column/aggregation expressions will need to be aliased. If the source table's shard key is used as the grouping column(s) and all result records are selected (offset is 0 and limit is -9999), the result table will be sharded; in all other cases it will be replicated. Sorting will function properly only if the result table is replicated or if there is only one processing node, and should not be relied upon in other cases. Not available when any of the values of column_names is an unrestricted-length string.

Parameters
table_nameName of an existing table or view on which the operation will be performed, in [schema_name.]table_name format, using standard name resolution rules.
column_namesList of one or more column names, expressions, and aggregate expressions.
offsetA positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
limitA positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results allowed by the server should be returned. The number of records returned will never exceed the server's own limit, defined by the max_get_records_size parameter in the server configuration. Use has_more_records to see if more records exist in the result to be fetched, and offset & limit to request subsequent pages of results. The default value is -9999.
optionsOptional parameters.
  • CREATE_TEMP_TABLE: If true, a unique temporary table name will be generated in the sys_temp schema and used in place of result_table. If result_table_persist is false (or unspecified), then this is always allowed even if the caller does not have permission to create tables. The generated name is returned in qualified_result_table_name. Supported values: TRUE, FALSE. The default value is FALSE.
  • COLLECTION_NAME: [DEPRECATED: please specify the containing schema as part of result_table and use /create/schema to create the schema if non-existent] Name of a schema which is to contain the table specified in result_table. If the schema provided is non-existent, it will be automatically created.
  • EXPRESSION: Filter expression to apply to the table prior to computing the aggregate group by.
  • HAVING: Filter expression to apply to the aggregated results.
  • SORT_ORDER: String indicating how the returned values should be sorted - ascending or descending. Supported values:
    • ASCENDING: Indicates that the returned values should be sorted in ascending order.
    • DESCENDING: Indicates that the returned values should be sorted in descending order.
    The default value is ASCENDING.
  • SORT_BY: String determining how the results are sorted. Supported values:
    • KEY: Indicates that the returned values should be sorted by key, which corresponds to the grouping columns. If you have multiple grouping columns (and are sorting by key), it will first sort the first grouping column, then the second grouping column, etc.
    • VALUE: Indicates that the returned values should be sorted by value, which corresponds to the aggregates. If you have multiple aggregates (and are sorting by value), it will first sort by the first aggregate, then the second aggregate, etc.
    The default value is VALUE.
  • STRATEGY_DEFINITION: The tier strategy for the table and its columns.
  • RESULT_TABLE: The name of a table used to store the results, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. Column names (group-by and aggregate fields) need to be given aliases, e.g. ["FChar256 as fchar256", "sum(FDouble) as sfd"]. If present, no results are returned in the response. This option is not available if one of the grouping attributes is an unrestricted string (i.e., not charN) type.
  • RESULT_TABLE_PERSIST: If true, then the result table specified in result_table will be persisted and will not expire unless a ttl is specified. If false, then the result table will be an in-memory table and will expire unless a ttl is specified. Supported values: TRUE, FALSE. The default value is FALSE.
  • RESULT_TABLE_FORCE_REPLICATED: Force the result table to be replicated (ignores any sharding). Must be used in combination with the result_table option. Supported values: TRUE, FALSE. The default value is FALSE.
  • RESULT_TABLE_GENERATE_PK: If true then set a primary key for the result table. Must be used in combination with the result_table option. Supported values: TRUE, FALSE. The default value is FALSE.
  • TTL: Sets the TTL of the table specified in result_table.
  • CHUNK_SIZE: Indicates the number of records per chunk to be used for the result table. Must be used in combination with the result_table option.
  • CREATE_INDEXES: Comma-separated list of columns on which to create indexes on the result table. Must be used in combination with the result_table option.
  • VIEW_ID: ID of view of which the result table will be a member. The default value is ''.
  • PIVOT: pivot column
  • PIVOT_VALUES: The value list provided will become the column headers in the output. Should be the values from the pivot_column.
  • GROUPING_SETS: Customize the grouping attribute sets to compute the aggregates. These sets can include ROLLUP or CUBE operators. The attribute sets should be enclosed in parentheses and can include composite attributes. All attributes specified in the grouping sets must be present in the group-by attributes.
  • ROLLUP: This option is used to specify the multilevel aggregates.
  • CUBE: This option is used to specify the multidimensional aggregates.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 2069 of file KineticaFunctions.cs.
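As an illustrative sketch (not part of the official documentation), the following fetches the 10 largest groups using this overload; the cluster URL, table, and column names (example_schema.events, x, y, z) are hypothetical:

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");  // hypothetical cluster URL

    var options = new Dictionary<string, string>
    {
        { "sort_order", "descending" },  // largest counts first
        { "sort_by", "value" }           // sort by the aggregate, not the key
    };

    // Group by x and y; compute count(*) and sum(z) per group; return the top 10.
    AggregateGroupByResponse response = db.aggregateGroupBy(
        "example_schema.events",
        new List<string> { "x", "y", "count(*)", "sum(z)" },
        0,    // offset
        10,   // limit
        options);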

AggregateHistogramResponse kinetica.Kinetica.aggregateHistogram ( AggregateHistogramRequest  request_)
inline

Performs a histogram calculation given a table, a column, and an interval function.

The interval is used to produce bins of that size and the result, computed over the records falling within each bin, is returned. For each bin, the start value is inclusive, but the end value is exclusive, except for the very last bin, for which the end value is also inclusive. The value returned for each bin is the number of records in it, except when a column name is provided as a value_column. In this latter case the sum of the values corresponding to the value_column is used as the result instead. The total number of bins requested cannot exceed 10,000.
NOTE: The Kinetica instance being accessed must be running a CUDA (GPU-based) build to service a request that specifies a value_column.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 2110 of file KineticaFunctions.cs.

AggregateHistogramResponse kinetica.Kinetica.aggregateHistogram ( string  table_name,
string  column_name,
double  start,
double  end,
double  interval,
IDictionary< string, string >  options = null 
)
inline

Performs a histogram calculation given a table, a column, and an interval function.

The interval is used to produce bins of that size and the result, computed over the records falling within each bin, is returned. For each bin, the start value is inclusive, but the end value is exclusive, except for the very last bin, for which the end value is also inclusive. The value returned for each bin is the number of records in it, except when a column name is provided as a value_column. In this latter case the sum of the values corresponding to the value_column is used as the result instead. The total number of bins requested cannot exceed 10,000.
NOTE: The Kinetica instance being accessed must be running a CUDA (GPU-based) build to service a request that specifies a value_column.

Parameters
table_nameName of the table on which the operation will be performed. Must be an existing table, in [schema_name.]table_name format, using standard name resolution rules.
column_nameName of a column or an expression of one or more column names over which the histogram will be calculated.
startLower end value of the histogram interval, inclusive.
endUpper end value of the histogram interval, inclusive.
intervalThe size of each bin within the start and end parameters.
optionsOptional parameters.
  • VALUE_COLUMN: The name of the column to use when calculating the bin values (values are summed). The column must be a numerical type (int, double, long, float).
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 2169 of file KineticaFunctions.cs.
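A minimal sketch of this overload, assuming a hypothetical table example_schema.events with a numeric column z; ten equal-width bins are requested:

    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");  // hypothetical cluster URL

    // Bins cover [0,10), [10,20), ..., [90,100]; the last bin's end is inclusive.
    AggregateHistogramResponse response = db.aggregateHistogram(
        "example_schema.events",  // hypothetical table
        "z",                      // hypothetical numeric column
        0.0,                      // start of the first bin (inclusive)
        100.0,                    // end of the last bin (inclusive)
        10.0);                    // interval: width of each bin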

AggregateKMeansResponse kinetica.Kinetica.aggregateKMeans ( AggregateKMeansRequest  request_)
inline

This endpoint runs the k-means algorithm - a heuristic algorithm that attempts to do k-means clustering.

An ideal k-means clustering algorithm selects k points such that the sum of the mean squared distances of each member of the set to the nearest of the k points is minimized. The k-means algorithm however does not necessarily produce such an ideal cluster. It begins with a randomly selected set of k points and then refines the location of the points iteratively and settles to a local minimum. Various parameters and options are provided to control the heuristic search.
NOTE: The Kinetica instance being accessed must be running a CUDA (GPU-based) build to service this request.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 2209 of file KineticaFunctions.cs.

AggregateKMeansResponse kinetica.Kinetica.aggregateKMeans ( string  table_name,
IList< string >  column_names,
int  k,
double  tolerance,
IDictionary< string, string >  options = null 
)
inline

This endpoint runs the k-means algorithm - a heuristic algorithm that attempts to do k-means clustering.

An ideal k-means clustering algorithm selects k points such that the sum of the mean squared distances of each member of the set to the nearest of the k points is minimized. The k-means algorithm however does not necessarily produce such an ideal cluster. It begins with a randomly selected set of k points and then refines the location of the points iteratively and settles to a local minimum. Various parameters and options are provided to control the heuristic search.
NOTE: The Kinetica instance being accessed must be running a CUDA (GPU-based) build to service this request.

Parameters
table_nameName of the table on which the operation will be performed. Must be an existing table, in [schema_name.]table_name format, using standard name resolution rules.
column_namesList of column names on which the operation would be performed. If n columns are provided then each of the k result points will have n dimensions corresponding to the n columns.
kThe number of mean points to be determined by the algorithm.
toleranceStop iterating when the distances between successive points are less than the given tolerance.
optionsOptional parameters.
  • WHITEN: When set to 1, each of the columns is first normalized by its standard deviation; the default is not to whiten.
  • MAX_ITERS: Number of times to try to hit the tolerance limit before giving up; the default is 10.
  • NUM_TRIES: Number of times to run the k-means algorithm with different randomly selected starting points; helps avoid poor local minima. The default is 1.
  • CREATE_TEMP_TABLE: If true, a unique temporary table name will be generated in the sys_temp schema and used in place of result_table. If result_table_persist is false (or unspecified), then this is always allowed even if the caller does not have permission to create tables. The generated name is returned in qualified_result_table_name. Supported values: TRUE, FALSE. The default value is FALSE.
  • RESULT_TABLE: The name of a table used to store the results, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. If this option is specified, the results are not returned in the response.
  • RESULT_TABLE_PERSIST: If true, then the result table specified in result_table will be persisted and will not expire unless a ttl is specified. If false, then the result table will be an in-memory table and will expire unless a ttl is specified. Supported values: TRUE, FALSE. The default value is FALSE.
  • TTL: Sets the TTL of the table specified in result_table.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 2341 of file KineticaFunctions.cs.
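A sketch of a k-means call under the same hypothetical table and columns as above; max_iters and num_tries correspond to the options documented above:

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");  // hypothetical cluster URL

    var options = new Dictionary<string, string>
    {
        { "max_iters", "50" },  // cap refinement iterations per attempt
        { "num_tries", "3" }    // restart with new seeds to avoid poor local minima
    };

    // Cluster on two hypothetical numeric columns into k = 4 groups.
    AggregateKMeansResponse response = db.aggregateKMeans(
        "example_schema.events",
        new List<string> { "x", "y" },
        4,      // k: number of mean points
        1e-4,   // tolerance: stop when successive points move less than this
        options);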

AggregateMinMaxResponse kinetica.Kinetica.aggregateMinMax ( AggregateMinMaxRequest  request_)
inline

Calculates and returns the minimum and maximum values of a particular column in a table.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 2361 of file KineticaFunctions.cs.

AggregateMinMaxResponse kinetica.Kinetica.aggregateMinMax ( string  table_name,
string  column_name,
IDictionary< string, string >  options = null 
)
inline

Calculates and returns the minimum and maximum values of a particular column in a table.

Parameters
table_nameName of the table on which the operation will be performed. Must be an existing table, in [schema_name.]table_name format, using standard name resolution rules.
column_nameName of a column or an expression composed of one or more columns on which the min-max will be calculated.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 2385 of file KineticaFunctions.cs.
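A sketch with hypothetical table and column names; the min/max member names on the response object are assumptions:

    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");  // hypothetical cluster URL

    AggregateMinMaxResponse response = db.aggregateMinMax(
        "example_schema.events",  // hypothetical table
        "z");                     // hypothetical numeric column

    // min and max are assumed member names on the response object.
    System.Console.WriteLine($"min={response.min}, max={response.max}");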

AggregateMinMaxGeometryResponse kinetica.Kinetica.aggregateMinMaxGeometry ( AggregateMinMaxGeometryRequest  request_)
inline

Calculates and returns the minimum and maximum x- and y-coordinates of a particular geospatial geometry column in a table.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 2404 of file KineticaFunctions.cs.

AggregateMinMaxGeometryResponse kinetica.Kinetica.aggregateMinMaxGeometry ( string  table_name,
string  column_name,
IDictionary< string, string >  options = null 
)
inline

Calculates and returns the minimum and maximum x- and y-coordinates of a particular geospatial geometry column in a table.

Parameters
table_nameName of the table on which the operation will be performed. Must be an existing table, in [schema_name.]table_name format, using standard name resolution rules.
column_nameName of a geospatial geometry column on which the min-max will be calculated.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 2429 of file KineticaFunctions.cs.

AggregateStatisticsResponse kinetica.Kinetica.aggregateStatistics ( AggregateStatisticsRequest  request_)
inline

Calculates the requested statistics of the given column(s) in a given table.


The available statistics are: count (number of total objects), mean, stdv (standard deviation), variance, skew, kurtosis, sum, min, max, weighted_average, cardinality (unique count), estimated_cardinality, percentile, and percentile_rank.
Estimated cardinality is calculated by using the hyperloglog approximation technique.
Percentiles and percentile ranks are approximate and are calculated using the t-digest algorithm. They must include the desired percentile/percentile_rank. To compute multiple percentiles, each value must be specified separately (e.g. 'percentile(75.0),percentile(99.0),percentile_rank(1234.56),percentile_rank(-5)').
A second, comma-separated value can be added to the percentile statistic to calculate percentile resolution, e.g., a 50th percentile with 200 resolution would be 'percentile(50,200)'.
The weighted average statistic requires a weight column to be specified in weight_column_name. The weighted average is then defined as the sum of the products of the column_name values times the weight_column_name values divided by the sum of the weight_column_name values.
Additional columns can be used in the calculation of statistics via additional_column_names. Values in these columns will be included in the overall aggregate calculation; individual aggregates will not be calculated per additional column. For instance, requesting the count & mean of column_name x and additional_column_names y & z, where x holds the numbers 1-10, y holds 11-20, and z holds 21-30, would return the total number of x, y, & z values (30), and the single average value across all x, y, & z values (15.5).
The response includes a list of key/value pairs of each statistic requested and its corresponding value.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 2510 of file KineticaFunctions.cs.

AggregateStatisticsResponse kinetica.Kinetica.aggregateStatistics ( string  table_name,
string  column_name,
string  stats,
IDictionary< string, string >  options = null 
)
inline

Calculates the requested statistics of the given column(s) in a given table.


The available statistics are: count (number of total objects), mean, stdv (standard deviation), variance, skew, kurtosis, sum, min, max, weighted_average, cardinality (unique count), estimated_cardinality, percentile, and percentile_rank.
Estimated cardinality is calculated by using the hyperloglog approximation technique.
Percentiles and percentile ranks are approximate and are calculated using the t-digest algorithm. They must include the desired percentile/percentile_rank. To compute multiple percentiles, each value must be specified separately (e.g. 'percentile(75.0),percentile(99.0),percentile_rank(1234.56),percentile_rank(-5)').
A second, comma-separated value can be added to the percentile statistic to calculate percentile resolution, e.g., a 50th percentile with 200 resolution would be 'percentile(50,200)'.
The weighted average statistic requires a weight column to be specified in weight_column_name. The weighted average is then defined as the sum of the products of the column_name values times the weight_column_name values divided by the sum of the weight_column_name values.
Additional columns can be used in the calculation of statistics via additional_column_names. Values in these columns will be included in the overall aggregate calculation; individual aggregates will not be calculated per additional column. For instance, requesting the count & mean of column_name x and additional_column_names y & z, where x holds the numbers 1-10, y holds 11-20, and z holds 21-30, would return the total number of x, y, & z values (30), and the single average value across all x, y, & z values (15.5).
The response includes a list of key/value pairs of each statistic requested and its corresponding value.

Parameters
table_nameName of the table on which the statistics operation will be performed, in [schema_name.]table_name format, using standard name resolution rules.
column_nameName of the primary column for which the statistics are to be calculated.
statsComma separated list of the statistics to calculate, e.g. "sum,mean". Supported values:
  • COUNT: Number of objects (independent of the given column(s)).
  • MEAN: Arithmetic mean (average), equivalent to sum/count.
  • STDV: Sample standard deviation (denominator is count-1).
  • VARIANCE: Unbiased sample variance (denominator is count-1).
  • SKEW: Skewness (third standardized moment).
  • KURTOSIS: Kurtosis (fourth standardized moment).
  • SUM: Sum of all values in the column(s).
  • MIN: Minimum value of the column(s).
  • MAX: Maximum value of the column(s).
  • WEIGHTED_AVERAGE: Weighted arithmetic mean (using the option weight_column_name as the weighting column).
  • CARDINALITY: Number of unique values in the column(s).
  • ESTIMATED_CARDINALITY: Estimate (via hyperloglog technique) of the number of unique values in the column(s).
  • PERCENTILE: Estimate (via t-digest) of the given percentile of the column(s) (percentile(50.0) will be an approximation of the median). Add a second, comma-separated value to calculate percentile resolution, e.g., 'percentile(75,150)'
  • PERCENTILE_RANK: Estimate (via t-digest) of the percentile rank of the given value in the column(s) (if the given value is the median of the column(s), percentile_rank(<median>) will return approximately 50.0).
optionsOptional parameters.
  • ADDITIONAL_COLUMN_NAMES: A list of comma separated column names over which statistics can be accumulated along with the primary column. All columns listed must be of the same type. Must not include the column specified in column_name, and no column can be listed twice.
  • WEIGHT_COLUMN_NAME: Name of column used as weighting attribute for the weighted average statistic.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 2706 of file KineticaFunctions.cs.
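A sketch requesting several statistics at once, including an approximate median at a resolution of 200; the table and column names are hypothetical, and the stats member name on the response is an assumption:

    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");  // hypothetical cluster URL

    AggregateStatisticsResponse response = db.aggregateStatistics(
        "example_schema.events",                // hypothetical table
        "z",                                    // hypothetical numeric column
        "count,mean,stdv,percentile(50,200)");  // comma-separated stats list

    // stats is assumed to map each requested statistic to its value.
    foreach (var kv in response.stats)
        System.Console.WriteLine($"{kv.Key} = {kv.Value}");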

AggregateStatisticsByRangeResponse kinetica.Kinetica.aggregateStatisticsByRange ( AggregateStatisticsByRangeRequest  request_)
inline

Divides the given set into bins and calculates statistics of the values of a value-column in each bin.

The bins are based on the values of a given binning-column. The statistics that may be requested are mean, stdv (standard deviation), variance, skew, kurtosis, sum, min, max, first, last and weighted average. In addition to the requested statistics, the count of total samples in each bin is returned. This counts vector is just the histogram of the column used to divide the set members into bins. The weighted average statistic requires a weight column to be specified in weight_column_name. The weighted average is then defined as the sum of the products of the value column times the weight column divided by the sum of the weight column.
There are two methods for binning the set members. In the first, which can be used for numeric valued binning-columns, a min, max and interval are specified. The number of bins, nbins, is the integer upper bound of (max-min)/interval. Values that fall in the range [min+n*interval, min+(n+1)*interval) are placed in the nth bin, where n ranges from 0..nbins-2. The final bin is [min+(nbins-1)*interval, max]. In the second method, bin_values specifies a list of binning column values. Binning-columns whose value matches the nth member of the bin_values list are placed in the nth bin. When a list is provided, the binning-column must be of type string or int.
NOTE: The Kinetica instance being accessed must be running a CUDA (GPU-based) build to service this request.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 2762 of file KineticaFunctions.cs.

AggregateStatisticsByRangeResponse kinetica.Kinetica.aggregateStatisticsByRange ( string  table_name,
string  select_expression,
string  column_name,
string  value_column_name,
string  stats,
double  start,
double  end,
double  interval,
IDictionary< string, string >  options = null 
)
inline

Divides the given set into bins and calculates statistics of the values of a value-column in each bin.

The bins are based on the values of a given binning-column. The statistics that may be requested are mean, stdv (standard deviation), variance, skew, kurtosis, sum, min, max, first, last and weighted average. In addition to the requested statistics, the count of total samples in each bin is returned. This counts vector is just the histogram of the column used to divide the set members into bins. The weighted average statistic requires a weight column to be specified in weight_column_name. The weighted average is then defined as the sum of the products of the value column times the weight column divided by the sum of the weight column.
There are two methods for binning the set members. In the first, which can be used for numeric valued binning-columns, a min, max and interval are specified. The number of bins, nbins, is the integer upper bound of (max-min)/interval. Values that fall in the range [min+n*interval, min+(n+1)*interval) are placed in the nth bin, where n ranges from 0..nbins-2. The final bin is [min+(nbins-1)*interval, max]. In the second method, bin_values specifies a list of binning column values. Binning-columns whose value matches the nth member of the bin_values list are placed in the nth bin. When a list is provided, the binning-column must be of type string or int.
NOTE: The Kinetica instance being accessed must be running a CUDA (GPU-based) build to service this request.

Parameters
table_nameName of the table on which the ranged-statistics operation will be performed, in [schema_name.]table_name format, using standard name resolution rules.
select_expressionFor a non-empty expression, statistics are calculated for those records for which the expression is true. The default value is ''.
column_nameName of the binning-column used to divide the set samples into bins.
value_column_nameName of the value-column for which statistics are to be computed.
statsA comma-separated list of the statistics to calculate, e.g. 'sum,mean'. Available statistics: mean, stdv (standard deviation), variance, skew, kurtosis, sum.
startThe lower bound of the binning-column.
endThe upper bound of the binning-column.
intervalThe interval of a bin. Set members fall into bin i if the binning-column falls in the range [start+interval*i, start+interval*(i+1)).
optionsMap of optional parameters:
  • ADDITIONAL_COLUMN_NAMES: A list of comma separated value-column names over which statistics can be accumulated along with the primary value_column.
  • BIN_VALUES: A list of comma separated binning-column values. Values that match the nth bin_values value are placed in the nth bin.
  • WEIGHT_COLUMN_NAME: Name of the column used as weighting column for the weighted_average statistic.
  • ORDER_COLUMN_NAME: Name of the column used for candlestick charting techniques.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 2865 of file KineticaFunctions.cs.
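A sketch of the first (numeric) binning method described above, using hypothetical names; here nbins = ceil((end - start) / interval) = 10:

    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");  // hypothetical cluster URL

    AggregateStatisticsByRangeResponse response = db.aggregateStatisticsByRange(
        "example_schema.events",  // hypothetical table
        "",                       // select_expression: no pre-filter
        "x",                      // hypothetical binning column
        "z",                      // hypothetical value column
        "mean,sum",               // statistics computed per bin
        0.0,                      // start: lower bound of the binning column
        100.0,                    // end: upper bound of the binning column
        10.0);                    // interval: width of each bin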

AggregateUniqueResponse kinetica.Kinetica.aggregateUnique ( AggregateUniqueRequest  request_)
inline

Returns all the unique values from a particular column (specified by column_name) of a particular table or view (specified by table_name).

If column_name is a numeric column, the values will be returned in the binary-encoded response; otherwise, if column_name is a string column, the values will be returned in the JSON-encoded response. The results can be paged via the offset and limit parameters.
Columns marked as store-only are unable to be used with this function.
To get the first 10 unique values sorted in descending order, options would be: {"limit":"10","sort_order":"descending"}.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
If a result_table name is specified in the options, the results are stored in a new table with that name; no results are returned in the response. Both the table name and resulting column name must adhere to standard naming conventions; any column expression will need to be aliased. If the source table's shard key is used as the column_name, the result table will be sharded; in all other cases it will be replicated. Sorting will function properly only if the result table is replicated or if there is only one processing node, and should not be relied upon in other cases. Not available if the value of column_name is an unrestricted-length string.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 2946 of file KineticaFunctions.cs.

AggregateUniqueResponse kinetica.Kinetica.aggregateUnique ( string  table_name,
string  column_name,
long  offset = 0,
long  limit = -9999,
IDictionary< string, string >  options = null 
)
inline

Returns all the unique values from a particular column (specified by column_name) of a particular table or view (specified by table_name).

If column_name is a numeric column, the values will be returned in the binary-encoded response; otherwise, if column_name is a string column, the values will be returned in the JSON-encoded response. The results can be paged via the offset and limit parameters.
Columns marked as store-only are unable to be used with this function.
To get the first 10 unique values sorted in descending order, options would be: {"limit":"10","sort_order":"descending"}.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
If a result_table name is specified in the options, the results are stored in a new table with that name; no results are returned in the response. Both the table name and resulting column name must adhere to standard naming conventions; any column expression will need to be aliased. If the source table's shard key is used as the column_name, the result table will be sharded; in all other cases it will be replicated. Sorting will function properly only if the result table is replicated or if there is only one processing node, and should not be relied upon in other cases. Not available if the value of column_name is an unrestricted-length string.

Parameters
table_nameName of an existing table or view on which the operation will be performed, in [schema_name.]table_name format, using standard name resolution rules.
column_nameName of the column or an expression containing one or more column names on which the unique function would be applied.
offsetA positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
limitA positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results allowed by the server should be returned. The number of records returned will never exceed the server's own limit, defined by the max_get_records_size parameter in the server configuration. Use has_more_records to see if more records exist in the result to be fetched, and offset & limit to request subsequent pages of results. The default value is -9999.
optionsOptional parameters.
  • CREATE_TEMP_TABLE: If true, a unique temporary table name will be generated in the sys_temp schema and used in place of result_table. If result_table_persist is false (or unspecified), then this is always allowed even if the caller does not have permission to create tables. The generated name is returned in qualified_result_table_name. Supported values: TRUE, FALSE. The default value is FALSE.
  • COLLECTION_NAME: [DEPRECATED: please specify the containing schema as part of result_table and use /create/schema to create the schema if non-existent] Name of a schema which is to contain the table specified in result_table. If the schema provided is non-existent, it will be automatically created.
  • EXPRESSION: Optional filter expression to apply to the table.
  • SORT_ORDER: String indicating how the returned values should be sorted. Supported values: ASCENDING, DESCENDING. The default value is ASCENDING.
  • RESULT_TABLE: The name of the table used to store the results, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. If present, no results are returned in the response. Not available if column_name is an unrestricted-length string.
  • RESULT_TABLE_PERSIST: If true, then the result table specified in result_table will be persisted and will not expire unless a ttl is specified. If false, then the result table will be an in-memory table and will expire unless a ttl is specified. Supported values: TRUE, FALSE. The default value is FALSE.
  • RESULT_TABLE_FORCE_REPLICATED: Force the result table to be replicated (ignores any sharding). Must be used in combination with the result_table option. Supported values: TRUE, FALSE. The default value is FALSE.
  • RESULT_TABLE_GENERATE_PK: If true then set a primary key for the result table. Must be used in combination with the result_table option. Supported values: TRUE, FALSE. The default value is FALSE.
  • TTL: Sets the TTL of the table specified in result_table.
  • CHUNK_SIZE: Indicates the number of records per chunk to be used for the result table. Must be used in combination with the result_table option.
  • VIEW_ID: ID of view of which the result table will be a member. The default value is ''.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 3196 of file KineticaFunctions.cs.
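A sketch retrieving the first 10 unique values of a hypothetical column in descending order, mirroring the options example above:

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");  // hypothetical cluster URL

    AggregateUniqueResponse response = db.aggregateUnique(
        "example_schema.events",  // hypothetical table
        "x",                      // hypothetical column
        0,                        // offset
        10,                       // limit
        new Dictionary<string, string> { { "sort_order", "descending" } });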

AggregateUnpivotResponse kinetica.Kinetica.aggregateUnpivot ( AggregateUnpivotRequest  request_)
inline

Rotates the column values into row values.


For unpivot details and examples, see Unpivot. For limitations, see Unpivot Limitations.
Unpivot is used to normalize tables that are built for cross tabular reporting purposes. The unpivot operator rotates the column values for all the pivoted columns. A variable column, value column and all columns from the source table except the unpivot columns are projected into the result table. The variable column and value columns in the result table indicate the pivoted column name and values respectively.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 3237 of file KineticaFunctions.cs.

AggregateUnpivotResponse kinetica.Kinetica.aggregateUnpivot ( string  table_name,
IList< string >  column_names,
string  variable_column_name,
string  value_column_name,
IList< string >  pivoted_columns,
IDictionary< string, string >  options = null 
)
inline

Rotates the column values into row values.


For unpivot details and examples, see Unpivot. For limitations, see Unpivot Limitations.
Unpivot is used to normalize tables that are built for cross tabular reporting purposes. The unpivot operator rotates the column values for all the pivoted columns. A variable column, value column and all columns from the source table except the unpivot columns are projected into the result table. The variable column and value columns in the result table indicate the pivoted column name and values respectively.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.

Parameters
table_nameName of the table on which the operation will be performed. Must be an existing table/view, in [schema_name.]table_name format, using standard name resolution rules.
column_namesList of column names or expressions. A wildcard '*' can be used to include all the non-pivoted columns from the source table.
variable_column_nameSpecifies the variable/parameter column name. The default value is ''.
value_column_nameSpecifies the value column name. The default value is ''.
pivoted_columnsList of one or more values, typically the column names of the input table. All the columns in the source table must have the same data type.
optionsOptional parameters.
  • CREATE_TEMP_TABLE: If true, a unique temporary table name will be generated in the sys_temp schema and used in place of result_table. If result_table_persist is false (or unspecified), then this is always allowed even if the caller does not have permission to create tables. The generated name is returned in qualified_result_table_name. Supported values: TRUE, FALSE. The default value is FALSE.
  • COLLECTION_NAME: [DEPRECATED: please specify the containing schema as part of result_table and use /create/schema to create the schema if non-existent] Name of a schema which is to contain the table specified in result_table. If the schema is non-existent, it will be automatically created.
  • RESULT_TABLE: The name of a table used to store the results, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. If present, no results are returned in the response.
  • RESULT_TABLE_PERSIST: If true, then the result table specified in result_table will be persisted and will not expire unless a ttl is specified. If false, then the result table will be an in-memory table and will expire unless a ttl is specified. Supported values: TRUE, FALSE. The default value is FALSE.
  • EXPRESSION: Filter expression to apply to the table prior to unpivot processing.
  • ORDER_BY: Comma-separated list of the columns to be sorted by; e.g. 'timestamp asc, x desc'. The columns specified must be present in input table. If any alias is given for any column name, the alias must be used, rather than the original column name. The default value is ''.
  • CHUNK_SIZE: Indicates the number of records per chunk to be used for the result table. Must be used in combination with the result_table option.
  • LIMIT: The number of records to keep. The default value is ''.
  • TTL: Sets the TTL of the table specified in result_table.
  • VIEW_ID: ID of the view of which this result table will be a member. The default value is ''.
  • CREATE_INDEXES: Comma-separated list of columns on which to create indexes on the table specified in result_table. The columns specified must be present in the output column names. If any alias is given for any column name, the alias must be used, rather than the original column name.
  • RESULT_TABLE_FORCE_REPLICATED: Force the result table to be replicated (ignores any sharding). Must be used in combination with the result_table option. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 3433 of file KineticaFunctions.cs.
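A sketch rotating three hypothetical quarterly columns into rows; the variable column will hold the source column name and the value column its value:

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");  // hypothetical cluster URL

    AggregateUnpivotResponse response = db.aggregateUnpivot(
        "example_schema.revenue",                // hypothetical table
        new List<string> { "*" },                // keep all non-pivoted columns
        "quarter",                               // variable column name
        "sales",                                 // value column name
        new List<string> { "q1", "q2", "q3" });  // hypothetical pivoted columns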

AlterCredentialResponse kinetica.Kinetica.alterCredential ( AlterCredentialRequest  request_)
inline

Alter the properties of an existing credential.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 3458 of file KineticaFunctions.cs.

AlterCredentialResponse kinetica.Kinetica.alterCredential ( string  credential_name,
IDictionary< string, string >  credential_updates_map,
IDictionary< string, string >  options 
)
inline

Alter the properties of an existing credential.

Parameters
credential_nameName of the credential to be altered. Must be an existing credential.
credential_updates_mapMap containing the properties of the credential to be updated. Error if empty.
optionsOptional parameters.
Returns
Response object containing the result of the operation.

Definition at line 3553 of file KineticaFunctions.cs.

AlterDatasinkResponse kinetica.Kinetica.alterDatasink ( AlterDatasinkRequest  request_)
inline

Alters the properties of an existing data sink.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 3573 of file KineticaFunctions.cs.

AlterDatasinkResponse kinetica.Kinetica.alterDatasink ( string  name,
IDictionary< string, string >  datasink_updates_map,
IDictionary< string, string >  options 
)
inline

Alters the properties of an existing data sink.

Parameters
nameName of the data sink to be altered. Must be an existing data sink.
datasink_updates_mapMap containing the properties of the data sink to be updated. Error if empty.
  • DESTINATION: Destination for the output data in format 'destination_type://path[:port]'. Supported destination types are 'http', 'https' and 'kafka'.
  • CONNECTION_TIMEOUT: Timeout in seconds for connecting to this sink
  • WAIT_TIMEOUT: Timeout in seconds for waiting for a response from this sink
  • CREDENTIAL: Name of the credential object to be used in this data sink
  • S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data sink
  • S3_REGION: Name of the Amazon S3 region where the given bucket is located
  • S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has required S3 permissions that can be assumed for the given S3 IAM user
  • HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
  • HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user
  • HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
  • AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data sink, this is valid only if tenant_id is specified
  • AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data sink
  • AZURE_TENANT_ID: Active Directory tenant ID (or directory ID)
  • AZURE_SAS_TOKEN: Shared access signature token for Azure storage account to use as the data sink
  • AZURE_OAUTH_TOKEN: OAuth token to access given storage container
  • GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data sink
  • GCS_PROJECT_ID: Name of the Google Cloud project to use as the data sink
  • GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data sink
  • KAFKA_URL: The publicly-accessible full path URL to the Kafka broker, e.g., 'http://172.123.45.67:9300'.
  • KAFKA_TOPIC_NAME: Name of the Kafka topic to use for this data sink, if it references a Kafka broker
  • ANONYMOUS: Create an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify use_managed_credentials for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
  • USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, cloud provider user settings will be used instead. Supported values: TRUE, FALSE. The default value is FALSE.
  • USE_HTTPS: Use https to connect to the data sink if true, otherwise use http. Supported values: TRUE, FALSE. The default value is TRUE.
  • MAX_BATCH_SIZE: Maximum number of records per notification message. The default value is '1'.
  • MAX_MESSAGE_SIZE: Maximum size in bytes of each notification message. The default value is '1000000'.
  • JSON_FORMAT: The desired format of JSON-encoded notification messages. If nested, records are returned as an array; otherwise, only a single record per message is returned. Supported values: FLAT, NESTED. The default value is FLAT.
  • SKIP_VALIDATION: Bypass validation of connection to this data sink. Supported values: TRUE, FALSE. The default value is FALSE.
  • SCHEMA_NAME: Updates the schema name. If schema_name doesn't exist, an error will be thrown. If schema_name is empty, then the user's default schema will be used.
optionsOptional parameters.
Returns
Response object containing the result of the operation.

Definition at line 3855 of file KineticaFunctions.cs.
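A sketch pointing a hypothetical existing sink at a new Kafka broker and topic; note that options has no default in this overload, so an empty map is passed explicitly:

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");  // hypothetical cluster URL

    var updates = new Dictionary<string, string>
    {
        { "destination", "kafka://172.123.45.67:9300" },  // hypothetical broker
        { "kafka_topic_name", "events_out" }              // hypothetical topic
    };
    AlterDatasinkResponse response = db.alterDatasink(
        "example_sink",                     // hypothetical data sink name
        updates,
        new Dictionary<string, string>());  // options (required, may be empty)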

AlterDatasourceResponse kinetica.Kinetica.alterDatasource ( AlterDatasourceRequest  request_)
inline

Alters the properties of an existing data source.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 3874 of file KineticaFunctions.cs.

AlterDatasourceResponse kinetica.Kinetica.alterDatasource ( string  name,
IDictionary< string, string >  datasource_updates_map,
IDictionary< string, string >  options 
)
inline

Alters the properties of an existing data source.

Parameters
nameName of the data source to be altered. Must be an existing data source.
datasource_updates_mapMap containing the properties of the data source to be updated. Error if empty.
  • LOCATION: Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure','gcs','hdfs','kafka' and 's3'.
  • USER_NAME: Name of the remote system user; may be an empty string
  • PASSWORD: Password for the remote system user; may be an empty string
  • SKIP_VALIDATION: Bypass validation of connection to remote source. Supported values: TRUE, FALSE. The default value is FALSE.
  • CONNECTION_TIMEOUT: Timeout in seconds for connecting to this storage provider
  • WAIT_TIMEOUT: Timeout in seconds for reading from this storage provider
  • CREDENTIAL: Name of the credential object to be used in data source
  • S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data source
  • S3_REGION: Name of the Amazon S3 region where the given bucket is located
  • S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has required S3 permissions that can be assumed for the given S3 IAM user
  • S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used for encrypting data
  • S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data
  • HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
  • HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user
  • HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
  • AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data source, this is valid only if tenant_id is specified
  • AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data source
  • AZURE_TENANT_ID: Active Directory tenant ID (or directory ID)
  • AZURE_SAS_TOKEN: Shared access signature token for Azure storage account to use as the data source
  • AZURE_OAUTH_TOKEN: OAuth token to access given storage container
  • GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data source
  • GCS_PROJECT_ID: Name of the Google Cloud project to use as the data source
  • GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data source
  • KAFKA_URL: The publicly-accessible full path URL to the Kafka broker, e.g., 'http://172.123.45.67:9300'.
  • KAFKA_TOPIC_NAME: Name of the Kafka topic to use as the data source
  • JDBC_DRIVER_JAR_PATH: JDBC driver jar file location. This may be a KIFS file.
  • JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class
  • ANONYMOUS: Create an anonymous connection to the storage provider. DEPRECATED: this is now the default; specify use_managed_credentials for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
  • USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, cloud provider user settings will be used instead. Supported values: TRUE, FALSE. The default value is FALSE.
  • USE_HTTPS: Use https to connect to the data source if true, otherwise use http. Supported values: TRUE, FALSE. The default value is TRUE.
  • SCHEMA_NAME: Updates the schema name. If schema_name doesn't exist, an error will be thrown. If schema_name is empty, then the user's default schema will be used.
optionsOptional parameters.
Returns
Response object containing the result of the operation.

Definition at line 4158 of file KineticaFunctions.cs.

AlterDirectoryResponse kinetica.Kinetica.alterDirectory ( AlterDirectoryRequest  request_)
inline

Alters an existing directory in KiFS.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 4177 of file KineticaFunctions.cs.

AlterDirectoryResponse kinetica.Kinetica.alterDirectory ( string  directory_name,
IDictionary< string, string >  directory_updates_map,
IDictionary< string, string >  options = null 
)
inline

Alters an existing directory in KiFS.

Parameters
directory_nameName of the directory in KiFS to be altered.
directory_updates_mapMap containing the properties of the directory to be altered. Error if empty.
  • DATA_LIMIT: The maximum capacity, in bytes, to apply to the directory. Set to -1 to indicate no upper limit.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 4207 of file KineticaFunctions.cs.
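A sketch capping a hypothetical KiFS directory at 1 GB via the data_limit property:

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");  // hypothetical cluster URL

    AlterDirectoryResponse response = db.alterDirectory(
        "example_dir",  // hypothetical KiFS directory
        new Dictionary<string, string> { { "data_limit", "1000000000" } });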

AlterEnvironmentResponse kinetica.Kinetica.alterEnvironment ( AlterEnvironmentRequest  request_)
inline

Alters an existing environment which can be referenced by a user-defined function (UDF).

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 4227 of file KineticaFunctions.cs.

AlterEnvironmentResponse kinetica.Kinetica.alterEnvironment ( string  environment_name,
string  action,
string  _value,
IDictionary< string, string >  options = null 
)
inline

Alters an existing environment which can be referenced by a user-defined function (UDF).

Parameters
environment_nameName of the environment to be altered.
actionModification operation to be applied. Supported values:
  • INSTALL_PACKAGE: Install a python package from PyPI, an external data source or KiFS
  • INSTALL_REQUIREMENTS: Install packages from a requirements file
  • UNINSTALL_PACKAGE: Uninstall a python package.
  • UNINSTALL_REQUIREMENTS: Uninstall packages from a requirements file
  • RESET: Uninstalls all packages in the environment and resets it to the original state at time of creation
  • REBUILD: Recreates the environment and re-installs all packages, upgrades the packages if necessary based on dependencies
_valueThe value of the modification, depending on action. For example, if action is install_package, this would be the python package name. If action is install_requirements, this would be the path of a requirements file from which to install packages. If an external data source is specified in datasource_name, this can be the path to a wheel file or source archive. Alternatively, if installing from a file (wheel or source archive), the value may be a reference to a file in KiFS.
optionsOptional parameters.
  • DATASOURCE_NAME: Name of an existing external data source from which packages specified in _value can be loaded
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 4309 of file KineticaFunctions.cs.
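A sketch installing a PyPI package into a hypothetical UDF environment via the install_package action:

    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");  // hypothetical cluster URL

    AlterEnvironmentResponse response = db.alterEnvironment(
        "example_env",      // hypothetical environment name
        "install_package",  // action
        "numpy");           // _value: the package to install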

AlterResourceGroupResponse kinetica.Kinetica.alterResourceGroup ( AlterResourceGroupRequest  request_)
inline

Alters the properties of an existing resource group to facilitate resource management.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 4465 of file KineticaFunctions.cs.

AlterResourceGroupResponse kinetica.Kinetica.alterResourceGroup ( string  name,
IDictionary< string, IDictionary< string, string >>  tier_attributes = null,
string  ranking = AlterResourceGroupRequest.Ranking.EMPTY_STRING,
string  adjoining_resource_group = "",
IDictionary< string, string >  options = null 
)
inline

Alters the properties of an existing resource group to facilitate resource management.

Parameters
nameName of the group to be altered. Must be an existing resource group name or an empty string when used in conjunction with the is_default_group option.
tier_attributesOptional map containing tier names and their respective attribute group limits. The only valid attribute limit that can be set is max_memory (in bytes) for the VRAM & RAM tiers. For instance, to set max VRAM capacity to 1GB and max RAM capacity to 10GB, use: {'VRAM':{'max_memory':'1000000000'}, 'RAM':{'max_memory':'10000000000'}}
  • MAX_MEMORY: Maximum amount of memory usable in the given tier at one time for this group.
The default value is an empty Dictionary.
rankingIf the resource group ranking is to be updated, this indicates the relative ranking among existing resource groups where this resource group will be moved; leave blank if not changing the ranking. When using before or after, specify which resource group this one will be inserted before or after in adjoining_resource_group. Supported values: EMPTY_STRING, FIRST, LAST, BEFORE, AFTER. The default value is EMPTY_STRING.
adjoining_resource_groupIf ranking is before or after, this field indicates the resource group before or after which the current group will be placed; otherwise, leave blank. The default value is ''.
optionsOptional parameters.
  • MAX_CPU_CONCURRENCY: Maximum number of simultaneous threads that will be used to execute a request for this group.
  • MAX_DATA: Maximum amount of cumulative ram usage regardless of tier status for this group.
  • MAX_SCHEDULING_PRIORITY: Maximum priority of a scheduled task for this group.
  • MAX_TIER_PRIORITY: Maximum priority of a tiered object for this group.
  • IS_DEFAULT_GROUP: If true, this request applies to the global default resource group. It is an error for this field to be true when the name field is also populated. Supported values: TRUE, FALSE. The default value is FALSE.
  • PERSIST: If true and a system-level change was requested, the system configuration will be written to disk upon successful application of this request. This will commit the changes from this request and any additional in-memory modifications. Supported values: TRUE, FALSE. The default value is TRUE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 4608 of file KineticaFunctions.cs.
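A sketch applying the tier_attributes example from above (1 GB VRAM, 10 GB RAM) to a hypothetical resource group, leaving the ranking unchanged:

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");  // hypothetical cluster URL

    var tiers = new Dictionary<string, IDictionary<string, string>>
    {
        { "VRAM", new Dictionary<string, string> { { "max_memory", "1000000000" } } },
        { "RAM",  new Dictionary<string, string> { { "max_memory", "10000000000" } } }
    };
    AlterResourceGroupResponse response = db.alterResourceGroup(
        "example_group",  // hypothetical resource group
        tiers);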

AlterRoleResponse kinetica.Kinetica.alterRole ( AlterRoleRequest  request_)
inline

Alters a Role.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 4630 of file KineticaFunctions.cs.

AlterRoleResponse kinetica.Kinetica.alterRole ( string  name,
string  action,
string  _value,
IDictionary< string, string >  options = null 
)
inline

Alters a Role.

Parameters
nameName of the role to be altered. Must be an existing role.
actionModification operation to be applied to the role. Supported values:
  • SET_RESOURCE_GROUP: Sets the resource group for an internal role. The resource group must exist; an empty string assigns the role to the default resource group.
_valueThe value of the modification, depending on action.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 4662 of file KineticaFunctions.cs.

AlterSchemaResponse kinetica.Kinetica.alterSchema ( AlterSchemaRequest  request_)
inline

Used to change the name of a SQL-style schema, specified in schema_name.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 4682 of file KineticaFunctions.cs.

AlterSchemaResponse kinetica.Kinetica.alterSchema ( string  schema_name,
string  action,
string  _value,
IDictionary< string, string >  options = null 
)
inline

Used to change the name of a SQL-style schema, specified in schema_name .

Parameters
schema_nameName of the schema to be altered.
actionModification operation to be applied. Supported values:
  • RENAME_SCHEMA: Renames the schema to the value given in _value.
_valueThe value of the modification, depending on action. For now the only value of action is rename_schema; in this case the value is the new name of the schema.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 4719 of file KineticaFunctions.cs.
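A sketch renaming a hypothetical schema with the rename_schema action:

    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");  // hypothetical cluster URL

    AlterSchemaResponse response = db.alterSchema(
        "old_schema",      // schema to alter (hypothetical)
        "rename_schema",   // currently the only supported action
        "new_schema");     // _value: the new schema name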

AlterSystemPropertiesResponse kinetica.Kinetica.alterSystemProperties ( AlterSystemPropertiesRequest  request_)
inline

The Kinetica.alterSystemProperties(IDictionary{string, string},IDictionary{string, string}) endpoint is primarily used to simplify the testing of the system and is not expected to be used during normal execution.

Commands are given through the property_updates_map whose keys are commands and values are strings representing integer values (for example '8000') or boolean values ('true' or 'false').

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 4745 of file KineticaFunctions.cs.

AlterSystemPropertiesResponse kinetica.Kinetica.alterSystemProperties ( IDictionary< string, string >  property_updates_map,
IDictionary< string, string >  options = null 
)
inline

The Kinetica.alterSystemProperties(IDictionary{string, string},IDictionary{string, string}) endpoint is primarily used to simplify the testing of the system and is not expected to be used during normal execution.

Commands are given through the property_updates_map whose keys are commands and values are strings representing integer values (for example '8000') or boolean values ('true' or 'false').

Parameters
property_updates_mapMap containing the properties of the system to be updated. Error if empty.
  • SM_OMP_THREADS: Set the number of OpenMP threads that will be used to service filter & aggregation requests to the specified integer value.
  • KERNEL_OMP_THREADS: Set the number of kernel OpenMP threads to the specified integer value.
  • CONCURRENT_KERNEL_EXECUTION: Enables concurrent kernel execution if the value is true and disables it if the value is false. Supported values: TRUE, FALSE.
  • SUBTASK_CONCURRENCY_LIMIT: Sets the maximum number of simultaneous threads allocated to a given request, on each rank. Note that thread allocation may also be limited by resource group limits and/or system load.
  • CHUNK_SIZE: Sets the number of records per chunk to be used for all new tables.
  • EVICT_COLUMNS: Attempts to evict columns from memory to the persistent store. Value string is a semicolon separated list of entries, each entry being a table name optionally followed by a comma and a comma separated list of column names to attempt to evict. An empty value string will attempt to evict all tables and columns.
  • EXECUTION_MODE: Sets the execution_mode for kernel executions to the specified string value. Possible values are host, device, default (engine decides), or an integer value that indicates the max chunk size to execute on host.
  • EXTERNAL_FILES_DIRECTORY: Sets the root directory path where external table data files are accessed from. The path must exist on the head node.
  • FLUSH_TO_DISK: Flushes any changes to any tables to the persistent store. These changes include updates to the vector store, object store, and text search store. The value string is ignored.
  • CLEAR_CACHE: Clears cached results. Useful to allow repeated timing of endpoints. Value string is the name of the table for which to clear the cached results, or an empty string to clear the cached results for all tables.
  • COMMUNICATOR_TEST: Invoke the communicator test and report timing results. Value string is a semicolon separated list of [key]=[value] expressions. Expressions are: num_transactions=[num] where num is the number of request reply transactions to invoke per test; message_size=[bytes] where bytes is the size in bytes of the messages to send; check_values=[enabled] where if enabled is true the value of the messages received are verified.
  • NETWORK_SPEED: Invoke the network speed test and report timing results. Value string is a semicolon-separated list of [key]=[value] expressions. Valid expressions are: seconds=[time] where time is the time in seconds to run the test; data_size=[bytes] where bytes is the size in bytes of the block to be transferred; threads=[number of threads]; to_ranks=[space-separated list of ranks] where the list of ranks is the ranks that rank 0 will send data to and get data from. If to_ranks is unspecified then all worker ranks are used.
  • REQUEST_TIMEOUT: Number of minutes after which filtering (e.g., /filter) and aggregating (e.g., /aggregate/groupby) queries will timeout. The default value is '20'.
  • MAX_GET_RECORDS_SIZE: The maximum number of records the database will serve for a given data retrieval call. The default value is '20000'.
  • MAX_GRBC_BATCH_SIZE: <DEVELOPER>
  • ENABLE_AUDIT: Enable or disable auditing.
  • AUDIT_HEADERS: Enable or disable auditing of request headers.
  • AUDIT_BODY: Enable or disable auditing of request bodies.
  • AUDIT_DATA: Enable or disable auditing of request data.
  • AUDIT_RESPONSE: Enable or disable auditing of response information.
  • SHADOW_AGG_SIZE: Size of the shadow aggregate chunk cache in bytes. The default value is '10000000'.
  • SHADOW_FILTER_SIZE: Size of the shadow filter chunk cache in bytes. The default value is '10000000'.
  • SYNCHRONOUS_COMPRESSION: compress vector on set_compression (instead of waiting for background thread). The default value is 'false'.
  • ENABLE_OVERLAPPED_EQUI_JOIN: Enable overlapped-equi-join filter. The default value is 'true'.
  • KAFKA_BATCH_SIZE: Maximum number of records to be ingested in a single batch. The default value is '1000'.
  • KAFKA_POLL_TIMEOUT: Maximum time (milliseconds) for each poll to get records from kafka. The default value is '0'.
  • KAFKA_WAIT_TIME: Maximum time (seconds) to buffer records received from kafka before ingestion. The default value is '30'.
  • EGRESS_PARQUET_COMPRESSION: Parquet file compression type. The default value is SNAPPY.
  • EGRESS_SINGLE_FILE_MAX_SIZE: Max file size (in MB) to allow saving to a single file. May be overridden by target limitations. The default value is '10000'.
  • MAX_CONCURRENT_KERNELS: Sets the max_concurrent_kernels value of the conf.
  • TCS_PER_TOM: Sets the tcs_per_tom value of the conf.
  • TPS_PER_TOM: Sets the tps_per_tom value of the conf.
  • AI_API_PROVIDER: AI API provider type
  • AI_API_URL: AI API URL
  • AI_API_KEY: AI API key
  • AI_API_CONNECTION_TIMEOUT: AI API connection timeout in seconds
  • POSTGRES_PROXY_IDLE_CONNECTION_TIMEOUT: Idle connection timeout in seconds
  • POSTGRES_PROXY_KEEP_ALIVE: Enable postgres proxy keep alive. The default value is 'false'.
options: Optional parameters.
  • EVICT_TO_COLD: If true and evict_columns is specified, the given objects will be evicted to cold storage (if such a tier exists). Supported values: TRUE, FALSE.
  • PERSIST: If true, the system configuration will be written to disk upon successful application of this request. This will commit the changes from this request and any additional in-memory modifications. Supported values: TRUE, FALSE. The default value is TRUE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 5096 of file KineticaFunctions.cs.
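
For illustration, a sketch that raises the request timeout and persists the change (the URL is hypothetical; the lowercase keys are assumed to mirror the constants above):

    using System.Collections.Generic;
    using kinetica;

    var db = new Kinetica("http://localhost:9191");
    // Raise the query timeout to 30 minutes and persist the setting to disk.
    var updates = new Dictionary<string, string> { { "request_timeout", "30" } };
    var opts = new Dictionary<string, string> { { "persist", "true" } };
    var resp = db.alterSystemProperties(updates, opts);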

AlterTableResponse kinetica.Kinetica.alterTable ( AlterTableRequest  request_)
inline

Apply various modifications to a table or view.

The available modifications include the following:
  • Manage a table's columns: a column can be added, removed, or have its type and properties modified, including whether it is dictionary encoded or not. External tables cannot be modified except for their refresh method.
  • Create or delete a column (attribute) index, chunk skip index, or geospatial index. This can speed up certain operations when using expressions containing equality or relational operators on indexed columns. This only applies to tables.
  • Create or delete a foreign key on a particular column.
  • Manage a range-partitioned or a manual list-partitioned table's partitions.
  • Set (or reset) the tier strategy of a table or view.
  • Refresh and manage the refresh mode of a materialized view or an external table.
  • Set the time-to-live (TTL). This can be applied to tables or views.
  • Set the global access mode (i.e., locking) for a table. This setting trumps any role-based access controls that may be in place; e.g., a user with write access to a table marked read-only will not be able to insert records into it. The mode can be set to read-only, write-only, read/write, or no access.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 5168 of file KineticaFunctions.cs.

AlterTableResponse kinetica.Kinetica.alterTable ( string  table_name,
string  action,
string  _value,
IDictionary< string, string >  options = null 
)
inline

Apply various modifications to a table or view.

The available modifications include the following:
  • Manage a table's columns: a column can be added, removed, or have its type and properties modified, including whether it is dictionary encoded or not. External tables cannot be modified except for their refresh method.
  • Create or delete a column (attribute) index, chunk skip index, or geospatial index. This can speed up certain operations when using expressions containing equality or relational operators on indexed columns. This only applies to tables.
  • Create or delete a foreign key on a particular column.
  • Manage a range-partitioned or a manual list-partitioned table's partitions.
  • Set (or reset) the tier strategy of a table or view.
  • Refresh and manage the refresh mode of a materialized view or an external table.
  • Set the time-to-live (TTL). This can be applied to tables or views.
  • Set the global access mode (i.e., locking) for a table. This setting trumps any role-based access controls that may be in place; e.g., a user with write access to a table marked read-only will not be able to insert records into it. The mode can be set to read-only, write-only, read/write, or no access.

Parameters
table_name: Table on which the operation will be performed, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table or view.
action: Modification operation to be applied. Supported values:
  • ALLOW_HOMOGENEOUS_TABLES: No longer supported; action will be ignored.
  • CREATE_INDEX: Creates a column (attribute) index, chunk skip index, or geospatial index (depending on the specified index_type), on the column name specified in _value. If this column already has the specified index, an error will be returned.
  • DELETE_INDEX: Deletes a column (attribute) index, chunk skip index, or geospatial index (depending on the specified index_type), on the column name specified in _value. If this column does not have the specified index, an error will be returned.
  • MOVE_TO_COLLECTION: [DEPRECATED–please use move_to_schema and use /create/schema to create the schema if non-existent] Moves a table or view into a schema named _value. If the schema provided is non-existent, it will be automatically created.
  • MOVE_TO_SCHEMA: Moves a table or view into a schema named _value. If the schema provided is nonexistent, an error will be thrown. If _value is empty, then the table or view will be placed in the user's default schema.
  • PROTECTED: No longer used. Previously set whether the given table should be protected or not. The _value would have been either 'true' or 'false'.
  • RENAME_TABLE: Renames a table or view within its current schema to _value. Has the same naming restrictions as tables.
  • TTL: Sets the time-to-live in minutes of the table or view specified in table_name.
  • ADD_COLUMN: Adds the column specified in _value to the table specified in table_name. Use column_type and column_properties in options to set the column's type and properties, respectively.
  • CHANGE_COLUMN: Changes type and properties of the column specified in _value. Use column_type and column_properties in options to set the column's type and properties, respectively. Note that primary key and/or shard key columns cannot be changed. All unchanging column properties must be listed for the change to take place, e.g., to add dictionary encoding to an existing 'char4' column, both 'char4' and 'dict' must be specified in the options map.
  • SET_COLUMN_COMPRESSION: No longer supported; action will be ignored.
  • DELETE_COLUMN: Deletes the column specified in _value from the table specified in table_name.
  • CREATE_FOREIGN_KEY: Creates a foreign key specified in _value using the format '(source_column_name [, ...]) references target_table_name(primary_key_column_name [, ...]) [as foreign_key_name]'.
  • DELETE_FOREIGN_KEY: Deletes a foreign key. The _value should be the foreign_key_name specified when creating the key or the complete string used to define it.
  • ADD_PARTITION: Adds the partition specified in _value, to either a range-partitioned or manual list-partitioned table.
  • REMOVE_PARTITION: Removes the partition specified in _value (and relocates all of its data to the default partition) from either a range-partitioned or manual list-partitioned table.
  • DELETE_PARTITION: Deletes the partition specified in _value (and all of its data) from either a range-partitioned or manual list-partitioned table.
  • SET_GLOBAL_ACCESS_MODE: Sets the global access mode (i.e., locking) for the table specified in table_name. Specify the access mode in _value. Valid modes are 'no_access', 'read_only', 'write_only' and 'read_write'.
  • REFRESH: For a materialized view, replays all the table creation commands required to create the view. For an external table, reloads all data in the table from its associated source files or data source.
  • SET_REFRESH_METHOD: For a materialized view, sets the method by which the view is refreshed to the method specified in _value - one of 'manual', 'periodic', or 'on_change'. For an external table, sets the method by which the table is refreshed to the method specified in _value - either 'manual' or 'on_start'.
  • SET_REFRESH_START_TIME: Sets the time to start periodic refreshes of this materialized view to the datetime string specified in _value with format 'YYYY-MM-DD HH:MM:SS'. Subsequent refreshes occur at the specified time + N * the refresh period.
  • SET_REFRESH_STOP_TIME: Sets the time to stop periodic refreshes of this materialized view to the datetime string specified in _value with format 'YYYY-MM-DD HH:MM:SS'.
  • SET_REFRESH_PERIOD: Sets the time interval in seconds at which to refresh this materialized view to the value specified in _value. Also, sets the refresh method to periodic if not already set.
  • SET_REFRESH_SPAN: Sets the future time-offset (in seconds) at which the view refresh will stop.
  • SET_REFRESH_EXECUTE_AS: Sets the user name used to refresh this materialized view to the value specified in _value.
  • REMOVE_TEXT_SEARCH_ATTRIBUTES: Removes the text search attribute from all columns.
  • REMOVE_SHARD_KEYS: Removes the shard key property from all columns, so that the table will be considered randomly sharded. The data is not moved. The _value is ignored.
  • SET_STRATEGY_DEFINITION: Sets the tier strategy for the table and its columns to the one specified in _value, replacing the existing tier strategy in its entirety.
  • CANCEL_DATASOURCE_SUBSCRIPTION: Permanently unsubscribe a data source that is loading continuously as a stream. The data source can be Kafka / S3 / Azure.
  • PAUSE_DATASOURCE_SUBSCRIPTION: Temporarily unsubscribe a data source that is loading continuously as a stream. The data source can be Kafka / S3 / Azure.
  • RESUME_DATASOURCE_SUBSCRIPTION: Resubscribe to a paused data source subscription. The data source can be Kafka / S3 / Azure.
  • CHANGE_OWNER: Change the owner resource group of the table.
_value: The value of the modification, depending on action. For example, if action is add_column, this would be the column name, while the column's definition would be covered by the column_type, column_properties, column_default_value, and add_column_expression options. If action is ttl, it would be the number of minutes for the new TTL. If action is refresh, this field would be blank.
options: Optional parameters.
  • ACTION:
  • COLUMN_NAME:
  • TABLE_NAME:
  • COLUMN_DEFAULT_VALUE: When adding a column, set a default value for existing records. For nullable columns, the default value will be null, regardless of data type.
  • COLUMN_PROPERTIES: When adding or changing a column, set the column properties (strings, separated by a comma: data, store_only, text_search, char8, int8 etc).
  • COLUMN_TYPE: When adding or changing a column, set the column type (strings, separated by a comma: int, double, string, null etc).
  • COMPRESSION_TYPE: No longer supported; option will be ignored. The default value is SNAPPY.
  • COPY_VALUES_FROM_COLUMN: [DEPRECATED–please use add_column_expression instead.]
  • RENAME_COLUMN: When changing a column, specify new column name.
  • VALIDATE_CHANGE_COLUMN: When changing a column, validate the change before applying it (or not). Supported values:
    • TRUE: Validate all values. A value too large (or too long) for the new type will prevent any change.
    • FALSE: When a value is too large or long, it will be truncated.
    The default value is TRUE.
  • UPDATE_LAST_ACCESS_TIME: Indicates whether the time-to-live (TTL) expiration countdown timer should be reset to the table's TTL. Supported values:
    • TRUE: Reset the expiration countdown timer to the table's configured TTL.
    • FALSE: Don't reset the timer; expiration countdown will continue from where it is, as if the table had not been accessed.
    The default value is TRUE.
  • ADD_COLUMN_EXPRESSION: When adding a column, an optional expression to use for the new column's values. Any valid expression may be used, including one containing references to existing columns in the same table.
  • STRATEGY_DEFINITION: Optional parameter for specifying the tier strategy for the table and its columns when action is set_strategy_definition, replacing the existing tier strategy in its entirety.
  • INDEX_TYPE: Type of index to create, when action is create_index, or to delete, when action is delete_index. The default value is COLUMN.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 5734 of file KineticaFunctions.cs.
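
As an illustrative sketch of the add_column action (table and column names are hypothetical; lowercase option keys are assumed to mirror the constants above):

    using System.Collections.Generic;
    using kinetica;

    var db = new Kinetica("http://localhost:9191");
    // Add a nullable int column "age"; the type and properties ride in the options map.
    var opts = new Dictionary<string, string>
    {
        { "column_type", "int" },
        { "column_properties", "nullable" }
    };
    var resp = db.alterTable("ki_home.users", "add_column", "age", opts);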

AlterTableColumnsResponse kinetica.Kinetica.alterTableColumns ( AlterTableColumnsRequest  request_)
inline

Apply various modifications to columns in a table or view.

The available modifications include the following:
  • Create or delete an index on a particular column. This can speed up certain operations when using expressions containing equality or relational operators on indexed columns. This only applies to tables.
  • Manage a table's columns: a column can be added, removed, or have its type and properties modified, including whether it is dictionary encoded or not.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 5769 of file KineticaFunctions.cs.

AlterTableColumnsResponse kinetica.Kinetica.alterTableColumns ( string  table_name,
IList< IDictionary< string, string >>  column_alterations,
IDictionary< string, string >  options 
)
inline

Apply various modifications to columns in a table or view.

The available modifications include the following:
  • Create or delete an index on a particular column. This can speed up certain operations when using expressions containing equality or relational operators on indexed columns. This only applies to tables.
  • Manage a table's columns: a column can be added, removed, or have its type and properties modified, including whether it is dictionary encoded or not.

Parameters
table_name: Table on which the operation will be performed. Must be an existing table or view, in [schema_name.]table_name format, using standard name resolution rules.
column_alterations: List of alter table add/delete/change column requests, all for the same table. Each request is a map that includes 'column_name', 'action', and the options specific to that action. Note that these are the same options as in alter table requests, but given in the same map as the column name and the action. For example: [{'column_name':'col_1','action':'change_column','rename_column':'col_2'},{'column_name':'col_1','action':'add_column', 'type':'int','default_value':'1'}]
options: Optional parameters.
Returns
Response object containing the result of the operation.

Definition at line 5815 of file KineticaFunctions.cs.
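
A sketch applying alterations like the example above (the table name is hypothetical); note that options has no default in this overload, so an empty map is passed:

    using System.Collections.Generic;
    using kinetica;

    var db = new Kinetica("http://localhost:9191");
    var alterations = new List<IDictionary<string, string>>
    {
        new Dictionary<string, string>
            { { "column_name", "col_1" }, { "action", "change_column" }, { "rename_column", "col_2" } },
        new Dictionary<string, string>
            { { "column_name", "col_3" }, { "action", "add_column" }, { "type", "int" }, { "default_value", "1" } }
    };
    var resp = db.alterTableColumns("ki_home.users", alterations, new Dictionary<string, string>());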

AlterTableMetadataResponse kinetica.Kinetica.alterTableMetadata ( AlterTableMetadataRequest  request_)
inline

Updates (adds or changes) metadata for tables.

The metadata keys and values must both be strings. This is an easy way to annotate whole tables rather than single records within tables. Some examples of metadata are the owner of the table, the table creation timestamp, etc.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 5839 of file KineticaFunctions.cs.

AlterTableMetadataResponse kinetica.Kinetica.alterTableMetadata ( IList< string >  table_names,
IDictionary< string, string >  metadata_map,
IDictionary< string, string >  options = null 
)
inline

Updates (adds or changes) metadata for tables.

The metadata keys and values must both be strings. This is an easy way to annotate whole tables rather than single records within tables. Some examples of metadata are the owner of the table, the table creation timestamp, etc.

Parameters
table_names: Names of the tables whose metadata will be updated, in [schema_name.]table_name format, using standard name resolution rules. All specified tables must exist, or an error will be returned.
metadata_map: A map which contains the metadata of the tables that are to be updated. Note that only one map is provided for all the tables; so the change will be applied to every table. If the provided map is empty, then all existing metadata for the table(s) will be cleared.
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 5871 of file KineticaFunctions.cs.
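
A minimal sketch (table name and metadata entries are illustrative):

    using System.Collections.Generic;
    using kinetica;

    var db = new Kinetica("http://localhost:9191");
    // The same metadata map is applied to every listed table.
    var metadata = new Dictionary<string, string>
    {
        { "owner", "data_team" },
        { "classification", "internal" }
    };
    var resp = db.alterTableMetadata(new List<string> { "ki_home.users" }, metadata);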

AlterTableMonitorResponse kinetica.Kinetica.alterTableMonitor ( AlterTableMonitorRequest  request_)
inline

Alters a table monitor previously created with Kinetica.createTableMonitor(string,IDictionary{string, string}).

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 5891 of file KineticaFunctions.cs.

AlterTableMonitorResponse kinetica.Kinetica.alterTableMonitor ( string  topic_id,
IDictionary< string, string >  monitor_updates_map,
IDictionary< string, string >  options 
)
inline

Alters a table monitor previously created with Kinetica.createTableMonitor(string,IDictionary{string, string}).

Parameters
topic_id: The topic ID returned by /create/tablemonitor.
monitor_updates_map: Map containing the properties of the table monitor to be updated. Error if empty.
  • SCHEMA_NAME: Updates the schema name. If schema_name doesn't exist, an error will be thrown. If schema_name is empty, then the user's default schema will be used.
options: Optional parameters.
Returns
Response object containing the result of the operation.

Definition at line 5924 of file KineticaFunctions.cs.

AlterTierResponse kinetica.Kinetica.alterTier ( AlterTierRequest  request_)
inline

Alters properties of an existing tier to facilitate resource management.


To disable watermark-based eviction, set both high_watermark and low_watermark to 100.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 5952 of file KineticaFunctions.cs.

AlterTierResponse kinetica.Kinetica.alterTier ( string  name,
IDictionary< string, string >  options = null 
)
inline

Alters properties of an existing tier to facilitate resource management.


To disable watermark-based eviction, set both high_watermark and low_watermark to 100.

Parameters
name: Name of the tier to be altered. Must be an existing tier group name.
options: Optional parameters.
  • CAPACITY: Maximum size in bytes this tier may hold at once.
  • HIGH_WATERMARK: Threshold of usage of this tier's resource that once exceeded, will trigger watermark-based eviction from this tier.
  • LOW_WATERMARK: Threshold of resource usage that once fallen below after crossing the high_watermark, will cease watermark-based eviction from this tier.
  • WAIT_TIMEOUT: Timeout in seconds for reading from or writing to this resource. Applies to cold storage tiers only.
  • PERSIST: If true, the system configuration will be written to disk upon successful application of this request. This will commit the changes from this request and any additional in-memory modifications. Supported values: TRUE, FALSE. The default value is TRUE.
  • RANK: Apply the requested change only to a specific rank.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 6035 of file KineticaFunctions.cs.
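
For illustration, a sketch that adjusts eviction thresholds (the tier name and values are hypothetical; lowercase option keys are assumed):

    using System.Collections.Generic;
    using kinetica;

    var db = new Kinetica("http://localhost:9191");
    // Start watermark-based eviction at 90% usage; stop once usage falls to 70%.
    var opts = new Dictionary<string, string>
    {
        { "high_watermark", "90" },
        { "low_watermark", "70" }
    };
    var resp = db.alterTier("RAM", opts);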

AlterUserResponse kinetica.Kinetica.alterUser ( AlterUserRequest  request_)
inline

Alters a user.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 6050 of file KineticaFunctions.cs.

AlterUserResponse kinetica.Kinetica.alterUser ( string  name,
string  action,
string  _value,
IDictionary< string, string >  options = null 
)
inline

Alters a user.

Parameters
name: Name of the user to be altered. Must be an existing user.
action: Modification operation to be applied to the user. Supported values:
  • SET_PASSWORD: Sets the password of the user. The user must be an internal user.
  • SET_RESOURCE_GROUP: Sets the resource group for an internal user. The resource group must exist, otherwise, an empty string assigns the user to the default resource group.
  • SET_DEFAULT_SCHEMA: Sets the default schema for an internal user. An empty string means the user will have no default schema.
_value: The value of the modification, depending on action.
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 6095 of file KineticaFunctions.cs.
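
A minimal sketch (user and schema names are hypothetical; the lowercase action literal is assumed to mirror SET_DEFAULT_SCHEMA above):

    using kinetica;

    var db = new Kinetica("http://localhost:9191");
    // Point an internal user at a new default schema.
    var resp = db.alterUser("jdoe", "set_default_schema", "sales");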

AlterVideoResponse kinetica.Kinetica.alterVideo ( AlterVideoRequest  request_)
inline

Alters a video.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 6112 of file KineticaFunctions.cs.

AlterVideoResponse kinetica.Kinetica.alterVideo ( string  path,
IDictionary< string, string >  options = null 
)
inline

Alters a video.

Parameters
path: Fully-qualified KiFS path to the video to be altered.
options: Optional parameters.
  • TTL: Sets the TTL of the video.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 6138 of file KineticaFunctions.cs.

AppendRecordsResponse kinetica.Kinetica.appendRecords ( AppendRecordsRequest  request_)
inline

Append (or insert) all records from a source table (specified by source_table_name) to a particular target table (specified by table_name).

The field map (specified by field_map) holds the user-specified mapping of target table column names to their mapped source column names.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 6161 of file KineticaFunctions.cs.

AppendRecordsResponse kinetica.Kinetica.appendRecords ( string  table_name,
string  source_table_name,
IDictionary< string, string >  field_map,
IDictionary< string, string >  options = null 
)
inline

Append (or insert) all records from a source table (specified by source_table_name) to a particular target table (specified by table_name).

The field map (specified by field_map) holds the user-specified mapping of target table column names to their mapped source column names.

Parameters
table_name: The table name for the records to be appended, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table.
source_table_name: The source table name to get records from, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table name.
field_map: Contains the mapping of column names from the target table (specified by table_name) as the keys, and corresponding column names or expressions (e.g., 'col_name+1') from the source table (specified by source_table_name). Must be existing column names in the source table and target table, and their types must match. For details on using expressions, see Expressions.
options: Optional parameters.
  • OFFSET: A positive integer indicating the number of initial results to skip from source_table_name. Default is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT. The default value is '0'.
  • LIMIT: A positive integer indicating the maximum number of results to be returned from source_table_name. Or END_OF_SET (-9999) to indicate that the max number of results should be returned. The default value is '-9999'.
  • EXPRESSION: Optional filter expression to apply to source_table_name. The default value is ''.
  • ORDER_BY: Comma-separated list of the columns to be sorted by from the source table (specified by source_table_name), e.g., 'timestamp asc, x desc'. The order_by columns do not have to be present in field_map. The default value is ''.
  • UPDATE_ON_EXISTING_PK: Specifies the record collision policy for inserting source table records (specified by source_table_name) into a target table (specified by table_name) with a primary key. If set to true, any existing table record with primary key values that match those of a source table record being inserted will be replaced by that new record (the new data will be "upserted"). If set to false, any existing table record with primary key values that match those of a source table record being inserted will remain unchanged, while the source record will be rejected and an error handled as determined by ignore_existing_pk. If the specified table does not have a primary key, then this option has no effect. Supported values:
    • TRUE: Upsert new records when primary keys match existing records
    • FALSE: Reject new records when primary keys match existing records
    The default value is FALSE.
  • IGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for inserting source table records (specified by source_table_name) into a target table (specified by table_name) with a primary key, only used when not in upsert mode (upsert mode is disabled when update_on_existing_pk is false). If set to true, any source table record being inserted that is rejected for having primary key values that match those of an existing target table record will be ignored with no error generated. If false, the rejection of any source table record for having primary key values matching an existing target table record will result in an error being raised. If the specified table does not have a primary key or if upsert mode is in effect (update_on_existing_pk is true), then this option has no effect. Supported values:
    • TRUE: Ignore source table records whose primary key values collide with those of target table records
    • FALSE: Raise an error for any source table record whose primary key values collide with those of a target table record
    The default value is FALSE.
  • TRUNCATE_STRINGS: If set to true, it allows inserting longer strings into smaller charN string columns by truncating the longer strings to fit. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 6344 of file KineticaFunctions.cs.
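
A usage sketch (table and column names are hypothetical); keys of the field map are target columns, values are source columns or expressions:

    using System.Collections.Generic;
    using kinetica;

    var db = new Kinetica("http://localhost:9191");
    // Copy filtered source rows into the target table.
    var fieldMap = new Dictionary<string, string>
    {
        { "id", "id" },
        { "total", "amount+tax" }
    };
    var opts = new Dictionary<string, string> { { "expression", "amount > 0" } };
    var resp = db.appendRecords("ki_home.target", "ki_home.source", fieldMap, opts);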

ClearStatisticsResponse kinetica.Kinetica.clearStatistics ( ClearStatisticsRequest  request_)
inline

Clears statistics (cardinality, mean value, etc.) for a column in a specified table.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 6363 of file KineticaFunctions.cs.

ClearStatisticsResponse kinetica.Kinetica.clearStatistics ( string  table_name = "",
string  column_name = "",
IDictionary< string, string >  options = null 
)
inline

Clears statistics (cardinality, mean value, etc.) for a column in a specified table.

Parameters
table_name: Name of a table, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table. The default value is ''.
column_name: Name of the column in table_name for which to clear statistics. The column must be from an existing table. An empty string clears statistics for all columns in the table. The default value is ''.
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 6390 of file KineticaFunctions.cs.

ClearTableResponse kinetica.Kinetica.clearTable ( ClearTableRequest  request_)
inline

Clears (drops) one or all tables in the database cluster.

The operation is synchronous, meaning that the table will be cleared before the function returns. The response payload returns the status of the operation along with the name of the table that was cleared.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 6413 of file KineticaFunctions.cs.

ClearTableResponse kinetica.Kinetica.clearTable ( string  table_name = "",
string  authorization = "",
IDictionary< string, string >  options = null 
)
inline

Clears (drops) one or all tables in the database cluster.

The operation is synchronous, meaning that the table will be cleared before the function returns. The response payload returns the status of the operation along with the name of the table that was cleared.

Parameters
table_name: Name of the table to be cleared, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table. An empty string clears all available tables, though this behavior is prevented by default via the gpudb.conf parameter 'disable_clear_all'. The default value is ''.
authorization: No longer used. User can pass an empty string. The default value is ''.
options: Optional parameters.
  • NO_ERROR_IF_NOT_EXISTS: If true and if the table specified in table_name does not exist, no error is returned. If false and if the table specified in table_name does not exist, then an error is returned. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 6468 of file KineticaFunctions.cs.
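
A minimal sketch (the table name is hypothetical; authorization is unused and passed as an empty string):

    using System.Collections.Generic;
    using kinetica;

    var db = new Kinetica("http://localhost:9191");
    // Drop a single table, tolerating its absence.
    var opts = new Dictionary<string, string> { { "no_error_if_not_exists", "true" } };
    var resp = db.clearTable("ki_home.stale_table", "", opts);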

ClearTableMonitorResponse kinetica.Kinetica.clearTableMonitor ( ClearTableMonitorRequest  request_)
inline

Deactivates a table monitor previously created with Kinetica.createTableMonitor(string,IDictionary{string, string}).

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 6486 of file KineticaFunctions.cs.

ClearTableMonitorResponse kinetica.Kinetica.clearTableMonitor ( string  topic_id,
IDictionary< string, string >  options = null 
)
inline

Deactivates a table monitor previously created with Kinetica.createTableMonitor(string,IDictionary{string, string}).

Parameters
topic_id: The topic ID returned by /create/tablemonitor.
options: Optional parameters.
  • KEEP_AUTOGENERATED_SINK: If true, the auto-generated datasink associated with this monitor, if there is one, will be retained for further use. If false, then the auto-generated sink will be dropped if there are no other monitors referencing it. Supported values: TRUE, FALSE. The default value is FALSE.
  • CLEAR_ALL_REFERENCES: If true, all references that share the same topic_id will be cleared. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 6550 of file KineticaFunctions.cs.

ClearTriggerResponse kinetica.Kinetica.clearTrigger ( ClearTriggerRequest  request_)
inline

Clears or cancels the trigger identified by the specified handle.

The output returns the handle of the trigger cleared as well as indicating success or failure of the trigger deactivation.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 6568 of file KineticaFunctions.cs.

ClearTriggerResponse kinetica.Kinetica.clearTrigger ( string  trigger_id,
IDictionary< string, string >  options = null 
)
inline

Clears or cancels the trigger identified by the specified handle.

The output returns the handle of the trigger cleared as well as indicating success or failure of the trigger deactivation.

Parameters
trigger_id: ID for the trigger to be deactivated.
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 6589 of file KineticaFunctions.cs.

CollectStatisticsResponse kinetica.Kinetica.collectStatistics ( CollectStatisticsRequest  request_)
inline

Collect statistics for one or more columns in a specified table.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 6605 of file KineticaFunctions.cs.

CollectStatisticsResponse kinetica.Kinetica.collectStatistics ( string  table_name,
IList< string >  column_names,
IDictionary< string, string >  options = null 
)
inline

Collect statistics for one or more columns in a specified table.

Parameters
table_name: Name of a table, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table.
column_names: List of one or more column names in table_name for which to collect statistics (cardinality, mean value, etc.).
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 6630 of file KineticaFunctions.cs.
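
A minimal sketch (table and column names are hypothetical):

    using System.Collections.Generic;
    using kinetica;

    var db = new Kinetica("http://localhost:9191");
    // Gather cardinality, mean value, etc. for two columns.
    var resp = db.collectStatistics("ki_home.users", new List<string> { "age", "city" });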

CreateCredentialResponse kinetica.Kinetica.createCredential ( CreateCredentialRequest  request_)
inline

Create a new credential.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 6705 of file KineticaFunctions.cs.

CreateCredentialResponse kinetica.Kinetica.createCredential ( string  credential_name,
string  type,
string  identity,
string  secret,
IDictionary< string, string >  options = null 
)
inline

Create a new credential.

Parameters
credential_name: Name of the credential to be created. Must contain only letters, digits, and underscores, and cannot begin with a digit. Must not match an existing credential name.
type: Type of the credential to be created.
identity: User of the credential to be created.
secret: Password of the credential to be created.
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 6786 of file KineticaFunctions.cs.
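
A usage sketch; the credential type literal, identity, and secret below are placeholders (consult the supported credential type list for real values):

    using kinetica;

    var db = new Kinetica("http://localhost:9191");
    // Register a credential for use by data sources/sinks (type string assumed).
    var resp = db.createCredential("s3_cred", "aws_access_key",
                                   "my_access_key_id", "my_secret_key");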

CreateDatasinkResponse kinetica.Kinetica.createDatasink ( CreateDatasinkRequest  request_)
inline

Creates a data sink, which contains the destination information for a data sink that is external to the database.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 6809 of file KineticaFunctions.cs.

CreateDatasinkResponse kinetica.Kinetica.createDatasink ( string  name,
string  destination,
IDictionary< string, string >  options = null 
)
inline

Creates a data sink, which contains the destination information for a data sink that is external to the database.

Parameters
name: Name of the data sink to be created.
destination: Destination for the output data in 'storage_provider_type://path[:port]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'http', 'https', 'jdbc', 'kafka' and 's3'.
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 7124 of file KineticaFunctions.cs.

CreateDatasourceResponse kinetica.Kinetica.createDatasource ( CreateDatasourceRequest  request_)
inline

Creates a data source, which contains the location and connection information for a data store that is external to the database.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 7143 of file KineticaFunctions.cs.

CreateDatasourceResponse kinetica.Kinetica.createDatasource ( string  name,
string  location,
string  user_name,
string  password,
IDictionary< string, string >  options = null 
)
inline

Creates a data source, which contains the location and connection information for a data store that is external to the database.

Parameters
name: Name of the data source to be created.
location: Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'jdbc', 'kafka', 'confluent' and 's3'.
user_name: Name of the remote system user; may be an empty string.
password: Password for the remote system user; may be an empty string.
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 7480 of file KineticaFunctions.cs.
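
A minimal sketch (the bucket name is illustrative; the user name and password may be empty strings for some providers):

    using kinetica;

    var db = new Kinetica("http://localhost:9191");
    // Register an S3 location as a data source.
    var resp = db.createDatasource("my_s3_source", "s3://my-bucket", "", "");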

CreateDirectoryResponse kinetica.Kinetica.createDirectory ( CreateDirectoryRequest  request_)
inline

Creates a new directory in KiFS.

The new directory serves as a location in which the user can upload files using Kinetica.uploadFiles(IList{string},IList{byte[]},IDictionary{string, string}).

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 7540 of file KineticaFunctions.cs.

CreateDirectoryResponse kinetica.Kinetica.createDirectory ( string  directory_name,
IDictionary< string, string >  options = null 
)
inline

Creates a new directory in KiFS.

The new directory serves as a location in which the user can upload files using Kinetica.uploadFiles(IList{string},IList{byte[]},IDictionary{string, string}).

Parameters
directory_name: Name of the directory in KiFS to be created.
options: Optional parameters.
  • CREATE_HOME_DIRECTORY: When set, a home directory is created for the user name provided in the value. The directory_name must be an empty string in this case. The user must exist.
  • DATA_LIMIT: The maximum capacity, in bytes, to apply to the created directory. Set to -1 to indicate no upper limit. If empty, the system default limit is applied.
  • NO_ERROR_IF_EXISTS: If true, does not return an error if the directory already exists. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 7600 of file KineticaFunctions.cs.
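
A minimal sketch (the directory name is hypothetical):

    using System.Collections.Generic;
    using kinetica;

    var db = new Kinetica("http://localhost:9191");
    // Create a KiFS directory to hold uploaded files; tolerate re-creation.
    var opts = new Dictionary<string, string> { { "no_error_if_exists", "true" } };
    var resp = db.createDirectory("my_uploads", opts);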

CreateEnvironmentResponse kinetica.Kinetica.createEnvironment ( CreateEnvironmentRequest  request_)
inline

Creates a new environment which can be used by user-defined functions (UDF).

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 7617 of file KineticaFunctions.cs.

CreateEnvironmentResponse kinetica.Kinetica.createEnvironment ( string  environment_name,
IDictionary< string, string >  options = null 
)
inline

Creates a new environment which can be used by user-defined functions (UDF).

Parameters
environment_name: Name of the environment to be created.
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 7637 of file KineticaFunctions.cs.

CreateGraphResponse kinetica.Kinetica.createGraph ( CreateGraphRequest  request_)
inline

Creates a new graph network using given nodes, edges, weights, and restrictions.


IMPORTANT: It's highly recommended that you review the Network Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some graph examples before using this endpoint.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 7665 of file KineticaFunctions.cs.

CreateGraphResponse kinetica.Kinetica.createGraph ( string  graph_name,
bool  directed_graph,
IList< string >  nodes,
IList< string >  edges,
IList< string >  weights,
IList< string >  restrictions,
IDictionary< string, string >  options = null 
)
inline

Creates a new graph network using given nodes, edges, weights, and restrictions.


IMPORTANT: It's highly recommended that you review the Network Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some graph examples before using this endpoint.

Parameters
graph_name: Name of the graph resource to generate.
directed_graph: If set to true, the graph will be directed. If set to false, the graph will not be directed. Consult Directed Graphs for more details. Supported values: TRUE, FALSE. The default value is TRUE.
nodes: Nodes represent fundamental topological units of a graph. Nodes must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS NODE_ID', expressions, e.g., 'ST_MAKEPOINT(column1, column2) AS NODE_WKTPOINT', or constant values, e.g., '{9, 10, 11} AS NODE_ID'. If using constant values in an identifier combination, the number of values specified must match across the combination.
edges: Edges represent the required fundamental topological unit of a graph that typically connect nodes. Edges must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS EDGE_ID', expressions, e.g., 'SUBSTR(column, 1, 6) AS EDGE_NODE1_NAME', or constant values, e.g., "{'family', 'coworker'} AS EDGE_LABEL". If using constant values in an identifier combination, the number of values specified must match across the combination.
weights: Weights represent a method of informing the graph solver of the cost of including a given edge in a solution. Weights must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS WEIGHTS_EDGE_ID', expressions, e.g., 'ST_LENGTH(wkt) AS WEIGHTS_VALUESPECIFIED', or constant values, e.g., '{4, 15} AS WEIGHTS_VALUESPECIFIED'. If using constant values in an identifier combination, the number of values specified must match across the combination.
restrictions: Restrictions represent a method of informing the graph solver which edges and/or nodes should be ignored for the solution. Restrictions must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS RESTRICTIONS_EDGE_ID', expressions, e.g., 'column/2 AS RESTRICTIONS_VALUECOMPARED', or constant values, e.g., '{0, 0, 0, 1} AS RESTRICTIONS_ONOFFCOMPARED'. If using constant values in an identifier combination, the number of values specified must match across the combination.
options: Optional parameters.
  • MERGE_TOLERANCE: If node geospatial positions are input (e.g., WKTPOINT, X, Y), determines the minimum separation allowed between unique nodes. If nodes are within the tolerance of each other, they will be merged as a single node. The default value is '1.0E-5'.
  • RECREATE: If set to true and the graph (using graph_name) already exists, the graph is deleted and recreated. Supported values: TRUE, FALSE. The default value is FALSE.
  • SAVE_PERSIST: If set to true, the graph will be saved in the persist directory (see the config reference for more information). If set to false, the graph will be removed when the graph server is shutdown. Supported values: TRUE, FALSE. The default value is FALSE.
  • ADD_TABLE_MONITOR: Adds a table monitor to every table used in the creation of the graph; this table monitor will trigger the graph to update dynamically upon inserts to the source table(s). Note that upon database restart, if save_persist is also set to true, the graph will be fully reconstructed and the table monitors will be reattached. For more details on table monitors, see /create/tablemonitor. Supported values: TRUE, FALSE. The default value is FALSE.
  • GRAPH_TABLE: If specified, the created graph is also created as a table with the given name, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. The table will have the following identifier columns: 'EDGE_ID', 'EDGE_NODE1_ID', 'EDGE_NODE2_ID'. If left blank, no table is created. The default value is ''.
  • ADD_TURNS: Adds dummy 'pillowed' edges around intersection nodes where there are more than three edges so that additional weight penalties can be imposed by the solve endpoints (increases the total number of edges). Supported values: TRUE, FALSE. The default value is FALSE.
  • IS_PARTITIONED: Supported values: TRUE, FALSE. The default value is FALSE.
  • SERVER_ID: Indicates which graph server(s) to send the request to. Default is to send to the server with the most available memory.
  • USE_RTREE: Use a range tree structure to accelerate and improve the accuracy of snapping, especially to edges. Supported values: TRUE, FALSE. The default value is TRUE.
  • LABEL_DELIMITER: If provided, the label string will be split according to this delimiter and each sub-string will be applied as a separate label onto the specified edge. The default value is ''.
  • ALLOW_MULTIPLE_EDGES: Multigraph choice; allowing multiple edges with the same node pairs if set to true; otherwise, new edges with existing same node pairs will not be inserted. Supported values: TRUE, FALSE. The default value is TRUE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 7965 of file KineticaFunctions.cs.
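
For illustration, a sketch that builds a directed graph from hypothetical road-network tables (all table and column names, and the lowercase option key, are assumptions):

    using System.Collections.Generic;
    using kinetica;

    var db = new Kinetica("http://localhost:9191");
    var resp = db.createGraph(
        "road_graph",
        true,  // directed
        new List<string> { "ki_home.road_nodes.id AS NODE_ID" },
        new List<string>
        {
            "ki_home.road_edges.id AS EDGE_ID",
            "ki_home.road_edges.node_a AS EDGE_NODE1_ID",
            "ki_home.road_edges.node_b AS EDGE_NODE2_ID"
        },
        new List<string>
        {
            "ki_home.road_edges.id AS WEIGHTS_EDGE_ID",
            "ki_home.road_edges.cost AS WEIGHTS_VALUESPECIFIED"
        },
        new List<string>(),  // no restrictions
        new Dictionary<string, string> { { "recreate", "true" } });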

CreateJobResponse kinetica.Kinetica.createJob ( CreateJobRequest  request_)
inline

Create a job which will run asynchronously.

The response returns a job ID, which can be used to query the status and result of the job. The status and the result of the job upon completion can be requested by Kinetica.getJob(long,IDictionary{string, string}).

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 7992 of file KineticaFunctions.cs.

CreateJobResponse kinetica.Kinetica.createJob ( string  endpoint,
string  request_encoding,
byte[]  data,
string  data_str,
IDictionary< string, string >  options = null 
)
inline

Create a job which will run asynchronously.

The response returns a job ID, which can be used to query the status and result of the job. The status and the result of the job upon completion can be requested by Kinetica.getJob(long,IDictionary{string, string}).

Parameters
endpoint: Indicates which endpoint to execute, e.g., '/alter/table'.
request_encoding: The encoding of the request payload for the job. Supported values: BINARY, JSON, SNAPPY. The default value is BINARY.
data: Binary-encoded payload for the job to be run asynchronously. The payload must contain the relevant input parameters for the endpoint indicated in endpoint. Please see the documentation for the appropriate endpoint to see what values must (or can) be specified. If this parameter is used, then request_encoding must be binary or snappy.
data_str: JSON-encoded payload for the job to be run asynchronously. The payload must contain the relevant input parameters for the endpoint indicated in endpoint. Please see the documentation for the appropriate endpoint to see what values must (or can) be specified. If this parameter is used, then request_encoding must be json.
options: Optional parameters.
  • REMOVE_JOB_ON_COMPLETE: Supported values: TRUE, FALSE.
  • JOB_TAG: Tag to use for the submitted job. The same tag could be used on a backup cluster to retrieve the response for the job. Tags can use letters, numbers, '_' and '-'.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 8076 of file KineticaFunctions.cs.

CreateJoinTableResponse kinetica.Kinetica.createJoinTable ( CreateJoinTableRequest  request_)
inline

Creates a table that is the result of a SQL JOIN.


For join details and examples see: Joins. For limitations, see Join Limitations and Cautions.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 8101 of file KineticaFunctions.cs.

CreateJoinTableResponse kinetica.Kinetica.createJoinTable ( string  join_table_name,
IList< string >  table_names,
IList< string >  column_names,
IList< string >  expressions = null,
IDictionary< string, string >  options = null 
)
inline

Creates a table that is the result of a SQL JOIN.


For join details and examples see: Joins. For limitations, see Join Limitations and Cautions.

Parameters
join_table_name: Name of the join table to be created, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria.
table_names: The list of table names composing the join, each in [schema_name.]table_name format, using standard name resolution rules. Corresponds to a SQL statement FROM clause.
column_names: List of member table columns or column expressions to be included in the join. Columns can be prefixed with 'table_id.column_name', where 'table_id' is the table name or alias. Columns can be aliased via the syntax 'column_name as alias'. Wild cards '*' can be used to include all columns across member tables or 'table_id.*' for all of a single table's columns. Columns and column expressions composing the join must be uniquely named or aliased; therefore, the '*' wild card cannot be used if column names aren't unique across all tables.
expressions: An optional list of expressions to combine and filter the joined tables. Corresponds to a SQL statement WHERE clause. For details see: expressions. The default value is an empty List.
options: Optional parameters.
  • CREATE_TEMP_TABLE: If true, a unique temporary table name will be generated in the sys_temp schema and used in place of join_table_name. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in qualified_join_table_name. Supported values: TRUE, FALSE. The default value is FALSE.
  • COLLECTION_NAME: [DEPRECATED–please specify the containing schema for the join as part of join_table_name and use /create/schema to create the schema if non-existent] Name of a schema for the join. If the schema is non-existent, it will be automatically created. The default value is ''.
  • MAX_QUERY_DIMENSIONS: No longer used.
  • OPTIMIZE_LOOKUPS: Use more memory to speed up the joining of tables. Supported values: TRUE, FALSE. The default value is FALSE.
  • STRATEGY_DEFINITION: The tier strategy for the table and its columns.
  • TTL: Sets the TTL of the join table specified in join_table_name.
  • VIEW_ID: ID of the view this join table is part of. The default value is ''.
  • NO_COUNT: Return a count of 0 for the join table for logging and for /show/table; optimization needed for large overlapped equi-join stencils. The default value is 'false'.
  • CHUNK_SIZE: Maximum number of records per joined-chunk for this table. Defaults to the gpudb.conf file chunk size.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 8243 of file KineticaFunctions.cs.
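
A usage sketch joining two hypothetical tables on a customer key (all names are illustrative):

    using System.Collections.Generic;
    using kinetica;

    var db = new Kinetica("http://localhost:9191");
    var resp = db.createJoinTable(
        "ki_home.order_join",
        new List<string> { "ki_home.orders as o", "ki_home.customers as c" },
        new List<string> { "o.id", "o.total", "c.name as customer_name" },
        new List<string> { "o.customer_id = c.id" });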

CreateMaterializedViewResponse kinetica.Kinetica.createMaterializedView ( CreateMaterializedViewRequest  request_)
inline

Initiates the process of creating a materialized view, reserving the view's name to prevent other views or tables from being created with that name.


For materialized view details and examples, see Materialized Views.
The response contains view_id, which is used to tag each subsequent operation (projection, union, aggregation, filter, or join) that will compose the view.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 8277 of file KineticaFunctions.cs.

CreateMaterializedViewResponse kinetica.Kinetica.createMaterializedView ( string  table_name,
IDictionary< string, string >  options = null 
)
inline

Initiates the process of creating a materialized view, reserving the view's name to prevent other views or tables from being created with that name.


For materialized view details and examples, see Materialized Views.
The response contains view_id, which is used to tag each subsequent operation (projection, union, aggregation, filter, or join) that will compose the view.

Parameters
table_nameName of the table to be created that is the top-level table of the materialized view, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria.
optionsOptional parameters.
  • COLLECTION_NAME: [DEPRECATED–please specify the containing schema for the materialized view as part of and use /create/schema to create the schema if non-existent] Name of a schema which is to contain the newly created view. If the schema provided is non-existent, it will be automatically created.
  • EXECUTE_AS: User name to use to run the refresh job
  • PERSIST: If true, then the materialized view specified in will be persisted and will not expire unless a ttl is specified. If false, then the materialized view will be an in-memory table and will expire unless a ttl is specified otherwise. Supported values: The default value is FALSE.
  • REFRESH_SPAN: Sets the future time-offset(in seconds) at which periodic refresh stops
  • REFRESH_STOP_TIME: When refresh_method is periodic, specifies the time at which a periodic refresh is stopped. Value is a datetime string with format 'YYYY-MM-DD HH:MM:SS'.
  • REFRESH_METHOD: Method by which the join can be refreshed when the data in underlying member tables have changed. Supported values:
    • MANUAL: Refresh only occurs when manually requested by calling /alter/table with an 'action' of 'refresh'
    • ON_QUERY: Refresh any time the view is queried.
    • ON_CHANGE: If possible, incrementally refresh (refresh just those records added) whenever an insert, update, delete, or refresh of an input table occurs. A full refresh is done if an incremental refresh is not possible.
    • PERIODIC: Refresh table periodically at rate specified by refresh_period
    The default value is MANUAL.
  • REFRESH_PERIOD: When refresh_method is periodic, specifies the period in seconds at which refresh occurs
  • REFRESH_START_TIME: When refresh_method is periodic, specifies the first time at which a refresh is to be done. Value is a datetime string with format 'YYYY-MM-DD HH:MM:SS'.
  • TTL: Sets the TTL of the table specified in table_name.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 8429 of file KineticaFunctions.cs.
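
For illustration, a minimal sketch of reserving a materialized view with a periodic refresh. The connection URL, schema, and view name are hypothetical, and the lowercase option keys are inferred from the option names listed above:

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");
    var mvOptions = new Dictionary<string, string>
    {
        { "refresh_method", "periodic" },  // refresh at a fixed interval
        { "refresh_period", "60" }         // seconds between refreshes
    };
    CreateMaterializedViewResponse mvResp =
        db.createMaterializedView("ki_home.trip_summary_mv", mvOptions);
    // mvResp.view_id tags the subsequent projection/union/aggregation/filter/join
    // calls that compose the view's content.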

CreateProcResponse kinetica.Kinetica.createProc ( CreateProcRequest  request_)
inline

Creates an instance (proc) of the user-defined functions (UDF) specified by the given command, options, and files, and makes it available for execution.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 8449 of file KineticaFunctions.cs.

CreateProcResponse kinetica.Kinetica.createProc ( string  proc_name,
string  execution_mode = CreateProcRequest.ExecutionMode.DISTRIBUTED,
IDictionary< string, byte[]>  files = null,
string  command = "",
IList< string >  args = null,
IDictionary< string, string >  options = null 
)
inline

Creates an instance (proc) of the user-defined functions (UDF) specified by the given command, options, and files, and makes it available for execution.

Parameters
proc_name: Name of the proc to be created. Must not be the name of a currently existing proc.
execution_mode: The execution mode of the proc. Supported values:
  • DISTRIBUTED: Input table data will be divided into data segments that are distributed across all nodes in the cluster, and the proc command will be invoked once per data segment in parallel. Output table data from each invocation will be saved to the same node as the corresponding input data.
  • NONDISTRIBUTED: The proc command will be invoked only once per execution, and will not have direct access to any tables named as input or output table parameters in the call to /execute/proc. It will, however, be able to access the database using native API calls.
The default value is DISTRIBUTED.
files: A map of the files that make up the proc. The keys of the map are file names, and the values are the binary contents of the files. The file names may include subdirectory names (e.g. 'subdir/file') but must not resolve to a directory above the root for the proc. Files may be loaded from existing files in KiFS. Those file names should be prefixed with the uri kifs:// and the values in the map should be empty. The default value is an empty Dictionary.
command: The command (excluding arguments) that will be invoked when the proc is executed. It will be invoked from the directory containing the proc and may be any command that can be resolved from that directory. It need not refer to a file actually in that directory; for example, it could be 'java' if the proc is a Java application; however, any necessary external programs must be preinstalled on every database node. If the command refers to a file in that directory, it must be preceded with './' as per Linux convention. If not specified, and exactly one file is provided in files, that file will be invoked. The default value is ''.
args: An array of command-line arguments that will be passed to command when the proc is executed. The default value is an empty List.
options: Optional parameters.
  • MAX_CONCURRENCY_PER_NODE: The maximum number of concurrent instances of the proc that will be executed per node. 0 allows unlimited concurrency. The default value is '0'.
  • SET_ENVIRONMENT: A python environment to use when executing the proc. Must be an existing environment, else an error will be returned. The default value is ''.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 8548 of file KineticaFunctions.cs.
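
As a sketch, registering a hypothetical distributed Python UDF; the file name, proc name, and option key (the lowercase form of MAX_CONCURRENCY_PER_NODE above) are illustrative:

    using System.Collections.Generic;
    using System.IO;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");
    var procFiles = new Dictionary<string, byte[]>
    {
        // Key: file name within the proc's directory; value: the file's bytes.
        { "my_udf.py", File.ReadAllBytes("my_udf.py") }
    };
    CreateProcResponse procResp = db.createProc(
        "my_udf_proc",
        CreateProcRequest.ExecutionMode.DISTRIBUTED,
        procFiles,
        "python",                          // command resolved on each database node
        new List<string> { "my_udf.py" },  // arguments passed to the command
        new Dictionary<string, string> { { "max_concurrency_per_node", "0" } });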

CreateProjectionResponse kinetica.Kinetica.createProjection ( CreateProjectionRequest  request_)
inline

Creates a new projection of an existing table.

A projection represents a subset of the columns (potentially including derived columns) of a table.
For projection details and examples, see Projections. For limitations, see Projection Limitations and Cautions.
Window functions, which can perform operations like moving averages, are available through this endpoint as well as Kinetica.getRecordsByColumn(string,IList{string},long,long,IDictionary{string, string}).
A projection can be created with a different shard key than the source table. By specifying shard_key, the projection will be sharded according to the specified columns, regardless of how the source table is sharded. The source table can even be unsharded or replicated.
If table_name is empty, selection is performed against a single-row virtual table. This can be useful in executing temporal (NOW()), identity (USER()), or constant-based functions (GEODIST(-77.11, 38.88, -71.06, 42.36)).

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 8605 of file KineticaFunctions.cs.

CreateProjectionResponse kinetica.Kinetica.createProjection ( string  table_name,
string  projection_name,
IList< string >  column_names,
IDictionary< string, string >  options = null 
)
inline

Creates a new projection of an existing table.

A projection represents a subset of the columns (potentially including derived columns) of a table.
For projection details and examples, see Projections. For limitations, see Projection Limitations and Cautions.
Window functions, which can perform operations like moving averages, are available through this endpoint as well as Kinetica.getRecordsByColumn(string,IList{string},long,long,IDictionary{string, string}).
A projection can be created with a different shard key than the source table. By specifying shard_key, the projection will be sharded according to the specified columns, regardless of how the source table is sharded. The source table can even be unsharded or replicated.
If table_name is empty, selection is performed against a single-row virtual table. This can be useful in executing temporal (NOW()), identity (USER()), or constant-based functions (GEODIST(-77.11, 38.88, -71.06, 42.36)).

Parameters
table_name: Name of the existing table on which the projection is to be applied, in [schema_name.]table_name format, using standard name resolution rules. An empty table name creates a projection from a single-row virtual table, where columns specified should be constants or constant expressions.
projection_name: Name of the projection to be created, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria.
column_names: List of columns from table_name to be included in the projection. Can include derived columns. Columns can be aliased via the syntax 'column_name as alias'.
options: Optional parameters.
  • CREATE_TEMP_TABLE: If true, a unique temporary table name will be generated in the sys_temp schema and used in place of projection_name. If persist is false (or unspecified), then this is always allowed even if the caller does not have permission to create tables. The generated name is returned in qualified_projection_name. Supported values: TRUE, FALSE. The default value is FALSE.
  • COLLECTION_NAME: [DEPRECATED: please specify the containing schema for the projection as part of projection_name and use /create/schema to create the schema if non-existent] Name of a schema for the projection. If the schema is non-existent, it will be automatically created. The default value is ''.
  • EXPRESSION: An optional filter expression to be applied to the source table prior to the projection. The default value is ''.
  • IS_REPLICATED: If true then the projection will be replicated even if the source table is not. Supported values: TRUE, FALSE. The default value is FALSE.
  • OFFSET: The number of initial results to skip (this can be useful for paging through the results). The default value is '0'.
  • LIMIT: The number of records to keep. The default value is '-9999'.
  • ORDER_BY: Comma-separated list of the columns to be sorted by; e.g. 'timestamp asc, x desc'. The columns specified must be present in column_names. If any alias is given for any column name, the alias must be used, rather than the original column name. The default value is ''.
  • CHUNK_SIZE: Indicates the number of records per chunk to be used for this projection.
  • CREATE_INDEXES: Comma-separated list of columns on which to create indexes on the projection. The columns specified must be present in column_names. If any alias is given for any column name, the alias must be used, rather than the original column name.
  • TTL: Sets the TTL of the projection specified in projection_name.
  • SHARD_KEY: Comma-separated list of the columns to be sharded on; e.g. 'column1, column2'. The columns specified must be present in column_names. If any alias is given for any column name, the alias must be used, rather than the original column name. The default value is ''.
  • PERSIST: If true, then the projection specified in projection_name will be persisted and will not expire unless a ttl is specified. If false, then the projection will be an in-memory table and will expire unless a ttl is specified otherwise. Supported values: TRUE, FALSE. The default value is FALSE.
  • PRESERVE_DICT_ENCODING: If true, then columns that were dict encoded in the source table will be dict encoded in the projection. Supported values: TRUE, FALSE. The default value is TRUE.
  • RETAIN_PARTITIONS: Determines whether the created projection will retain the partitioning scheme from the source table. Supported values: TRUE, FALSE. The default value is FALSE.
  • PARTITION_TYPE: Partitioning scheme to use. Supported values: RANGE, INTERVAL, LIST, HASH, SERIES.
  • PARTITION_KEYS: Comma-separated list of partition keys, which are the columns or column expressions by which records will be assigned to partitions defined by partition_definitions.
  • PARTITION_DEFINITIONS: Comma-separated list of partition definitions, whose format depends on the choice of partition_type. See range partitioning, interval partitioning, list partitioning, hash partitioning, or series partitioning for example formats.
  • IS_AUTOMATIC_PARTITION: If true, a new partition will be created for values which don't fall into an existing partition. Currently only supported for list partitions. Supported values: TRUE, FALSE. The default value is FALSE.
  • VIEW_ID: ID of view of which this projection is a member. The default value is ''.
  • STRATEGY_DEFINITION: The tier strategy for the table and its columns.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 8958 of file KineticaFunctions.cs.
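
As a sketch, creating a filtered, sorted projection; the table, projection, and column names are hypothetical, and the lowercase option keys are inferred from the option names listed above:

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");
    var projOptions = new Dictionary<string, string>
    {
        { "expression", "fare_amount > 0" },    // filter applied before projecting
        { "order_by", "pickup_datetime asc" },
        { "persist", "true" }                   // persist rather than expire in-memory
    };
    CreateProjectionResponse projResp = db.createProjection(
        "ki_home.taxi_trips",
        "ki_home.taxi_trips_proj",
        new List<string> { "vendor_id", "pickup_datetime", "fare_amount as fare" },
        projOptions);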

CreateResourceGroupResponse kinetica.Kinetica.createResourceGroup ( CreateResourceGroupRequest  request_)
inline

Creates a new resource group to facilitate resource management.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 8978 of file KineticaFunctions.cs.

CreateResourceGroupResponse kinetica.Kinetica.createResourceGroup ( string  name,
IDictionary< string, IDictionary< string, string >>  tier_attributes,
string  ranking,
string  adjoining_resource_group = "",
IDictionary< string, string >  options = null 
)
inline

Creates a new resource group to facilitate resource management.

Parameters
name: Name of the group to be created. Must contain only letters, digits, and underscores, and cannot begin with a digit. Must not match an existing resource group name.
tier_attributes: Optional map containing tier names and their respective attribute group limits. The only valid attribute limit that can be set is max_memory (in bytes) for the VRAM & RAM tiers. For instance, to set max VRAM capacity to 1GB and max RAM capacity to 10GB, use: {'VRAM':{'max_memory':'1000000000'}, 'RAM':{'max_memory':'10000000000'}}
  • MAX_MEMORY: Maximum amount of memory usable in the given tier at one time for this group.
The default value is an empty Dictionary.
ranking: Indicates the relative ranking among existing resource groups where this new resource group will be placed. When using before or after, specify which resource group this one will be inserted before or after in adjoining_resource_group. Supported values: FIRST, LAST, BEFORE, AFTER.
adjoining_resource_group: If ranking is before or after, this field indicates the resource group before or after which the current group will be placed; otherwise, leave blank. The default value is ''.
options: Optional parameters.
  • MAX_CPU_CONCURRENCY: Maximum number of simultaneous threads that will be used to execute a request for this group.
  • MAX_DATA: Maximum amount of cumulative ram usage regardless of tier status for this group.
  • MAX_SCHEDULING_PRIORITY: Maximum priority of a scheduled task for this group.
  • MAX_TIER_PRIORITY: Maximum priority of a tiered object for this group.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 9069 of file KineticaFunctions.cs.
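
A minimal sketch, reusing the VRAM/RAM tier limits from the description above; the group names and CPU limit are illustrative, and the lowercase option key is inferred from MAX_CPU_CONCURRENCY:

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");
    var tierAttributes = new Dictionary<string, IDictionary<string, string>>
    {
        { "VRAM", new Dictionary<string, string> { { "max_memory", "1000000000" } } },
        { "RAM",  new Dictionary<string, string> { { "max_memory", "10000000000" } } }
    };
    CreateResourceGroupResponse rgResp = db.createResourceGroup(
        "analyst_group",
        tierAttributes,
        "before",         // rank this group before an existing one
        "default_group",  // hypothetical adjoining group name
        new Dictionary<string, string> { { "max_cpu_concurrency", "4" } });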

CreateRoleResponse kinetica.Kinetica.createRole ( CreateRoleRequest  request_)
inline

Creates a new role.

This method should be used for on-premise deployments only.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 9093 of file KineticaFunctions.cs.

CreateRoleResponse kinetica.Kinetica.createRole ( string  name,
IDictionary< string, string >  options = null 
)
inline

Creates a new role.

This method should be used for on-premise deployments only.

Parameters
name: Name of the role to be created. Must contain only lowercase letters, digits, and underscores, and cannot begin with a digit. Must not be the same name as an existing user or role.
options: Optional parameters.
  • RESOURCE_GROUP: Name of an existing resource group to associate with this role.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 9123 of file KineticaFunctions.cs.
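
A minimal sketch; the role and resource group names are illustrative:

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");
    CreateRoleResponse roleResp = db.createRole(
        "analyst_role",
        new Dictionary<string, string> { { "resource_group", "analyst_group" } });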

CreateSchemaResponse kinetica.Kinetica.createSchema ( CreateSchemaRequest  request_)
inline

Creates a SQL-style schema.

Schemas are containers for tables and views. Multiple tables and views can be defined with the same name in different schemas.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 9141 of file KineticaFunctions.cs.

CreateSchemaResponse kinetica.Kinetica.createSchema ( string  schema_name,
IDictionary< string, string >  options = null 
)
inline

Creates a SQL-style schema.

Schemas are containers for tables and views. Multiple tables and views can be defined with the same name in different schemas.

Parameters
schema_name: Name of the schema to be created. Has the same naming restrictions as tables.
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 9184 of file KineticaFunctions.cs.
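
A minimal sketch; the schema name is illustrative:

    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");
    CreateSchemaResponse schemaResp = db.createSchema("analytics");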

CreateTableResponse kinetica.Kinetica.createTable ( CreateTableRequest  request_)
inline

Creates a new table.

If a new table is being created, the type of the table is given by type_id, which must be the ID of a currently registered type (i.e. one created via Kinetica.createType(string,string,IDictionary{string, IList{string}},IDictionary{string, string})).
A table may optionally be designated to use a replicated distribution scheme, or be assigned: foreign keys to other tables, a partitioning scheme, and/or a tier strategy.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 9251 of file KineticaFunctions.cs.

CreateTableResponse kinetica.Kinetica.createTable ( string  table_name,
string  type_id,
IDictionary< string, string >  options = null 
)
inline

Creates a new table.

If a new table is being created, the type of the table is given by type_id , which must be the ID of a currently registered type (i.e. one created via Kinetica.createType(string,string,IDictionary{string, IList{string}},IDictionary{string, string})).
A table may optionally be designated to use a replicated distribution scheme, or be assigned: foreign keys to other tables, a partitioning scheme, and/or a tier strategy.

Parameters
table_name: Name of the table to be created, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. The error raised for a request that names an existing table of the same name and type ID may be suppressed by using the no_error_if_exists option.
type_id: ID of a currently registered type. All objects added to the newly created table will be of this type.
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 9573 of file KineticaFunctions.cs.
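
A minimal sketch; the table name and type ID are illustrative (in practice the type ID comes from a createType call, as in the sketch under createType below), and the lowercase option keys are inferred from standard /create/table options:

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");
    string typeId = "12345";  // hypothetical ID returned by a prior createType call
    var tableOptions = new Dictionary<string, string>
    {
        { "no_error_if_exists", "true" },  // tolerate re-creation with the same type
        { "is_replicated", "false" }
    };
    CreateTableResponse tableResp =
        db.createTable("analytics.point_data", typeId, tableOptions);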

CreateTableExternalResponse kinetica.Kinetica.createTableExternal ( CreateTableExternalRequest  request_)
inline

Creates a new external table, which is a local database object whose source data is located externally to the database.

The source data can be located either in KiFS; on the cluster, accessible to the database; or remotely, accessible via a pre-defined external data source.
The external table can have its structure defined explicitly, via create_table_options, which contains many of the options from Kinetica.createTable(string,string,IDictionary{string, string}); or defined implicitly, inferred from the source data.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 9606 of file KineticaFunctions.cs.

CreateTableExternalResponse kinetica.Kinetica.createTableExternal ( string  table_name,
IList< string >  filepaths,
IDictionary< string, IDictionary< string, string >>  modify_columns = null,
IDictionary< string, string >  create_table_options = null,
IDictionary< string, string >  options = null 
)
inline

Creates a new external table, which is a local database object whose source data is located externally to the database.

The source data can be located either in KiFS; on the cluster, accessible to the database; or remotely, accessible via a pre-defined external data source.
The external table can have its structure defined explicitly, via create_table_options , which contains many of the options from Kinetica.createTable(string,string,IDictionary{string, string}); or defined implicitly, inferred from the source data.

Parameters
table_name: Name of the table to be created, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria.
filepaths: A list of file paths from which data will be sourced. For paths in KiFS, use the uri prefix of kifs:// followed by the path to a file or directory. File matching by prefix is supported, e.g. kifs://dir/file would match dir/file_1 and dir/file_2. When prefix matching is used, the path must start with a full, valid KiFS directory name. If an external data source is specified in datasource_name, these file paths must resolve to accessible files at that data source location; prefix matching is supported there as well. If the data source is hdfs, prefixes must be aligned with directories, i.e. partial file names will not match. If no data source is specified, the files are assumed to be local to the database and must all be accessible to the gpudb user, residing on the path (or relative to the path) specified by the external files directory in the Kinetica configuration file. Wildcards (*) can be used to specify a group of files, and prefix matching is supported with the prefixes aligned with directories. If the first path ends in .tsv, the text delimiter will be defaulted to a tab character. If the first path ends in .psv, the text delimiter will be defaulted to a pipe character (|).
modify_columns: Not implemented yet. The default value is an empty Dictionary.
create_table_options: Options from /create/table, allowing the structure of the table to be defined independently of the data source. The default value is an empty Dictionary.
options: Optional parameters.
  • BAD_RECORD_TABLE_NAME: Name of a table to which records that were rejected are written. The bad-record table has the following columns: line_number (long), line_rejected (string), error_message (string). When error_handling is abort, the bad-record table is not populated.
  • BAD_RECORD_TABLE_LIMIT: A positive integer indicating the maximum number of records that can be written to the bad-record-table. The default value is '10000'.
  • BAD_RECORD_TABLE_LIMIT_PER_INPUT: For subscriptions, a positive integer indicating the maximum number of records that can be written to the bad-record table per file/payload. Defaults to the value of bad_record_table_limit; the total size of the table per rank is also limited to bad_record_table_limit.
  • BATCH_SIZE: Number of records to insert per batch when inserting data. The default value is '50000'.
  • COLUMN_FORMATS: For each target column specified, applies the column-property-bound format to the source data loaded into that column. Each column format will contain a mapping of one or more of its column properties to an appropriate format for each property. Currently supported column properties include date, time, & datetime. The parameter value must be formatted as a JSON string of maps of column names to maps of column properties to their corresponding column formats, e.g., '{ "order_date" : { "date" : "%Y.%m.%d" }, "order_time" : { "time" : "%H:%M:%S" } }'. See default_column_formats for valid format syntax.
  • COLUMNS_TO_LOAD: Specifies a comma-delimited list of columns from the source data to load. If more than one file is being loaded, this list applies to all files. Column numbers can be specified discretely or as a range. For example, a value of '5,7,1..3' will insert values from the fifth column in the source data into the first column in the target table, from the seventh column in the source data into the second column in the target table, and from the first through third columns in the source data into the third through fifth columns in the target table. If the source data contains a header, column names matching the file header names may be provided instead of column numbers. If the target table doesn't exist, the table will be created with the columns in this order. If the target table does exist with columns in a different order than the source data, this list can be used to match the order of the target table. For example, a value of 'C, B, A' will create a three column table with column C, followed by column B, followed by column A; or will insert those fields in that order into a table created with columns in that order. If the target table exists, the column names must match the source data field names for a name-mapping to be successful. Mutually exclusive with columns_to_skip.
  • COLUMNS_TO_SKIP: Specifies a comma-delimited list of columns from the source data to skip. Mutually exclusive with columns_to_load.
  • COMPRESSION_TYPE: Source data compression type. Supported values:
    • NONE: No compression.
    • AUTO: Auto detect compression type
    • GZIP: gzip file compression.
    • BZIP2: bzip2 file compression.
    The default value is AUTO.
  • DATASOURCE_NAME: Name of an existing external data source from which the data file(s) specified in filepaths will be loaded.
  • DEFAULT_COLUMN_FORMATS: Specifies the default format to be applied to source data loaded into columns with the corresponding column property. Currently supported column properties include date, time, & datetime. This default column-property-bound format can be overridden by specifying a column property & format for a given target column in column_formats. For each specified annotation, the format will apply to all columns with that annotation unless a custom column_formats for that annotation is specified. The parameter value must be formatted as a JSON string that is a map of column properties to their respective column formats, e.g., '{ "date" : "%Y.%m.%d", "time" : "%H:%M:%S" }'. Column formats are specified as a string of control characters and plain text. The supported control characters are 'Y', 'm', 'd', 'H', 'M', and 'S', which follow the Linux 'strptime()' specification, as well as 's', which specifies seconds and fractional seconds (though the fractional component will be truncated past milliseconds). Formats for the 'date' annotation must include the 'Y', 'm', and 'd' control characters. Formats for the 'time' annotation must include the 'H', 'M', and either 'S' or 's' (but not both) control characters. Formats for the 'datetime' annotation must meet both the 'date' and 'time' control character requirements. For example, '{"datetime" : "%m/%d/%Y %H:%M:%S" }' would be used to interpret text as "05/04/2000 12:12:11".
  • ERROR_HANDLING: Specifies how errors should be handled upon insertion. Supported values:
    • PERMISSIVE: Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped.
    • IGNORE_BAD_RECORDS: Malformed records are skipped.
    • ABORT: Stops current insertion and aborts entire operation when an error is encountered. Primary key collisions are considered abortable errors in this mode.
    The default value is ABORT.
  • EXTERNAL_TABLE_TYPE: Specifies whether the external table holds a local copy of the external data. Supported values:
    • MATERIALIZED: Loads a copy of the external data into the database, refreshed on demand
    • LOGICAL: External data will not be loaded into the database; the data will be retrieved from the source upon servicing each query against the external table
    The default value is MATERIALIZED.
  • FILE_TYPE: Specifies the type of the file(s) whose records will be inserted. Supported values:
    • AVRO: Avro file format
    • DELIMITED_TEXT: Delimited text file format; e.g., CSV, TSV, PSV, etc.
    • GDB: Esri/GDB file format
    • JSON: Json file format
    • PARQUET: Apache Parquet file format
    • SHAPEFILE: ShapeFile file format
    The default value is DELIMITED_TEXT.
  • GDAL_CONFIGURATION_OPTIONS: Comma-separated list of GDAL configuration options for the specific request, each in key=value format.
  • IGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for inserting into a table with a primary key, only used when not in upsert mode (upsert mode is disabled when update_on_existing_pk is false). If set to true, any record being inserted that is rejected for having primary key values that match those of an existing table record will be ignored with no error generated. If false, the rejection of any record for having primary key values matching an existing record will result in an error being reported, as determined by error_handling. If the specified table does not have a primary key or if upsert mode is in effect (update_on_existing_pk is true), then this option has no effect. Supported values:
    • TRUE: Ignore new records whose primary key values collide with those of existing records
    • FALSE: Treat as errors any new records whose primary key values collide with those of existing records
    The default value is FALSE.
  • INGESTION_MODE: Whether to do a full load, dry run, or perform a type inference on the source data. Supported values:
    • FULL: Run a type inference on the source data (if needed) and ingest
    • DRY_RUN: Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of error_handling.
    • TYPE_INFERENCE_ONLY: Infer the type of the source data and return, without ingesting any data. The inferred type is returned in the response.
    The default value is FULL.
  • JDBC_FETCH_SIZE: The JDBC fetch size, which determines how many rows to fetch per round trip. The default value is '50000'.
  • KAFKA_CONSUMERS_PER_RANK: Number of Kafka consumer threads per rank (valid range 1-6). The default value is '1'.
  • KAFKA_GROUP_ID: The group id to be used when consuming data from a Kafka topic (valid only for Kafka datasource subscriptions).
  • KAFKA_OFFSET_RESET_POLICY: Policy to determine whether the Kafka data consumption starts either at earliest offset or latest offset. Supported values: EARLIEST, LATEST. The default value is EARLIEST.
  • KAFKA_OPTIMISTIC_INGEST: Enable optimistic ingestion where Kafka topic offsets and table data are committed independently to achieve parallelism. Supported values: TRUE, FALSE. The default value is FALSE.
  • KAFKA_SUBSCRIPTION_CANCEL_AFTER: Sets the Kafka subscription lifespan (in minutes). Expired subscription will be cancelled automatically.
  • KAFKA_TYPE_INFERENCE_FETCH_TIMEOUT: Maximum time to collect Kafka messages before type inferencing on the set of them.
  • LAYER: Comma-separated list of layer name(s) for geo files.
  • LOADING_MODE: Scheme for distributing the extraction and loading of data from the source data file(s). This option applies only when loading files that are local to the database Supported values:
    • HEAD: The head node loads all data. All files must be available to the head node.
    • DISTRIBUTED_SHARED: The head node coordinates loading data by worker processes across all nodes from shared files available to all workers. NOTE: Instead of existing on a shared source, the files can be duplicated on a source local to each host to improve performance, though the files must appear as the same data set from the perspective of all hosts performing the load.
    • DISTRIBUTED_LOCAL: A single worker process on each node loads all files that are available to it. This option works best when each worker loads files from its own file system, to maximize performance. In order to avoid data duplication, either each worker performing the load needs to have visibility to a set of files unique to it (no file is visible to more than one node) or the target table needs to have a primary key (which will allow the worker to automatically deduplicate data). NOTE: If the target table doesn't exist, the table structure will be determined by the head node. If the head node has no files local to it, it will be unable to determine the structure and the request will fail. If the head node is configured to have no worker processes, no data strictly accessible to the head node will be loaded.
    The default value is HEAD.
  • LOCAL_TIME_OFFSET: Apply an offset to Avro local timestamp columns.
  • MAX_RECORDS_TO_LOAD: Limit the number of records to load in this request: if this number is larger than batch_size, then the number of records loaded will be limited to the next whole number of batch_size (per working thread).
  • NUM_TASKS_PER_RANK: Number of tasks for reading file per rank. Default will be system configuration parameter, external_file_reader_num_tasks.
  • POLL_INTERVAL: Number of seconds between attempts to load external files into the table. If zero, polling will be continuous as long as data is found. If no data is found, the interval will steadily increase to a maximum of 60 seconds. The default value is '0'.
  • PRIMARY_KEYS: Comma separated list of column names to set as primary keys, when not specified in the type.
  • REFRESH_METHOD: Method by which the table can be refreshed from its source data. Supported values:
    • MANUAL: Refresh only occurs when manually requested by invoking the refresh action of /alter/table on this table.
    • ON_START: Refresh table on database startup and when manually requested by invoking the refresh action of /alter/table on this table.
    The default value is MANUAL.
  • SCHEMA_REGISTRY_SCHEMA_NAME: Name of the Avro schema in the schema registry to use when reading Avro records.
  • SHARD_KEYS: Comma separated list of column names to set as shard keys, when not specified in the type.
  • SKIP_LINES: Skip this number of lines from the beginning of each file.
  • SUBSCRIBE: Continuously poll the data source to check for new data and load it into the table. Supported values: TRUE, FALSE. The default value is FALSE.
  • TABLE_INSERT_MODE: Insertion scheme to use when inserting records from multiple shapefiles. Supported values:
    • SINGLE: Insert all records into a single table.
    • TABLE_PER_FILE: Insert records from each file into a new table corresponding to that file.
    The default value is SINGLE.
  • TEXT_COMMENT_STRING: Specifies the character string that should be interpreted as a comment line prefix in the source data. All lines in the data starting with the provided string are ignored. For delimited_text file_type only. The default value is '#'.
  • TEXT_DELIMITER: Specifies the character delimiting field values in the source data and field names in the header (if present). For delimited_text file_type only. The default value is ','.
  • TEXT_ESCAPE_CHARACTER: Specifies the character that is used to escape other characters in the source data. An 'a', 'b', 'f', 'n', 'r', 't', or 'v' preceded by an escape character will be interpreted as the ASCII bell, backspace, form feed, line feed, carriage return, horizontal tab, & vertical tab, respectively. For example, the escape character followed by an 'n' will be interpreted as a newline within a field value. The escape character can also be used to escape the quoting character, and will be treated as an escape character whether it is within a quoted field value or not. For delimited_text file_type only.
  • TEXT_HAS_HEADER: Indicates whether the source data contains a header row. For delimited_text file_type only. Supported values: TRUE, FALSE. The default value is TRUE.
  • TEXT_HEADER_PROPERTY_DELIMITER: Specifies the delimiter for column properties in the header row (if present). Cannot be set to same value as text_delimiter. For delimited_text file_type only. The default value is '|'.
  • TEXT_NULL_STRING: Specifies the character string that should be interpreted as a null value in the source data. For delimited_text file_type only. The default value is '\N'.
  • TEXT_QUOTE_CHARACTER: Specifies the character that should be interpreted as a field value quoting character in the source data. The character must appear at beginning and end of field value to take effect. Delimiters within quoted fields are treated as literals and not delimiters. Within a quoted field, two consecutive quote characters will be interpreted as a single literal quote character, effectively escaping it. To not have a quote character, specify an empty string. For delimited_text file_type only. The default value is '"'.
  • TEXT_SEARCH_COLUMNS: Add the 'text_search' property to internally inferred string columns. Comma-separated list of column names or '*' for all columns. To add the 'text_search' property only to string columns greater than or equal to a minimum size, also set text_search_min_column_length.
  • TEXT_SEARCH_MIN_COLUMN_LENGTH: Set the minimum column size for strings to apply the 'text_search' property to. Used only when text_search_columns has a value.
  • TRUNCATE_STRINGS: If set to true, truncate string values that are longer than the column's type size. Supported values: TRUE, FALSE. The default value is FALSE.
  • TRUNCATE_TABLE: If set to true, truncates the table specified by table_name prior to loading the file(s). Supported values: TRUE, FALSE. The default value is FALSE.
  • TYPE_INFERENCE_MODE: Optimize type inferencing for either speed or accuracy. Supported values:
    • ACCURACY: Scans data to get exactly-typed & sized columns for all data scanned.
    • SPEED: Scans data and picks the widest possible column types so that 'all' values will fit with minimum data scanned
    The default value is SPEED.
  • REMOTE_QUERY: Remote SQL query from which data will be sourced
  • REMOTE_QUERY_FILTER_COLUMN: Name of column to be used for splitting remote_query into multiple sub-queries using the data distribution of given column
  • REMOTE_QUERY_INCREASING_COLUMN: Column on subscribed remote query result that will increase for new records (e.g., TIMESTAMP).
  • REMOTE_QUERY_PARTITION_COLUMN: Alias name for remote_query_filter_column.
  • UPDATE_ON_EXISTING_PK: Specifies the record collision policy for inserting into a table with a primary key. If set to true, any existing table record with primary key values that match those of a record being inserted will be replaced by that new record (the new data will be 'upserted'). If set to false, any existing table record with primary key values that match those of a record being inserted will remain unchanged, while the new record will be rejected and the error handled as determined by ignore_existing_pk & error_handling. If the specified table does not have a primary key, then this option has no effect. Supported values:
    • TRUE: Upsert new records when primary keys match existing records
    • FALSE: Reject new records when primary keys match existing records
    The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 10738 of file KineticaFunctions.cs.
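
As a sketch, creating an external table over delimited text files in KiFS; all names are illustrative, and the lowercase option keys are inferred from the option names listed above:

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");
    var extOptions = new Dictionary<string, string>
    {
        { "file_type", "delimited_text" },
        { "error_handling", "ignore_bad_records" },  // skip malformed rows
        { "external_table_type", "materialized" }    // load a refreshable local copy
    };
    CreateTableExternalResponse extResp = db.createTableExternal(
        "analytics.ext_sales",
        new List<string> { "kifs://data/sales_2023" },  // prefix-matched KiFS path
        null,        // modify_columns: not implemented yet
        null,        // create_table_options: infer the structure from the data
        extOptions);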

CreateTableMonitorResponse kinetica.Kinetica.createTableMonitor ( CreateTableMonitorRequest  request_)
inline

Creates a monitor that watches for a single table modification event type (insert, update, or delete) on a particular table (identified by table_name) and forwards event notifications to subscribers via ZMQ.

After this call completes, subscribe to the returned topic_id on the ZMQ table monitor port (default 9002). Each time an operation of the given type on the table completes, a multipart message is published for that topic; the first part contains only the topic ID, and each subsequent part contains one binary-encoded Avro object that corresponds to the event and can be decoded using the returned type_schema. The monitor will continue to run (regardless of whether or not there are any subscribers) until deactivated with Kinetica.clearTableMonitor(string,IDictionary{string, string}).
For more information on table monitors, see Table Monitors.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 10785 of file KineticaFunctions.cs.

CreateTableMonitorResponse kinetica.Kinetica.createTableMonitor ( string  table_name,
IDictionary< string, string >  options = null 
)
inline

Creates a monitor that watches for a single table modification event type (insert, update, or delete) on a particular table (identified by table_name ) and forwards event notifications to subscribers via ZMQ.

After this call completes, subscribe to the returned topic_id on the ZMQ table monitor port (default 9002). Each time an operation of the given type on the table completes, a multipart message is published for that topic; the first part contains only the topic ID, and each subsequent part contains one binary-encoded Avro object that corresponds to the event and can be decoded using the returned type_schema. The monitor will continue to run (regardless of whether or not there are any subscribers) until deactivated with Kinetica.clearTableMonitor(string,IDictionary{string, string}).
For more information on table monitors, see Table Monitors.

Parameters
table_name: Name of the table to monitor, in [schema_name.]table_name format, using standard name resolution rules.
options: Optional parameters.
  • EVENT: Type of modification event on the target table to be monitored by this table monitor. Supported values:
    • INSERT: Get notifications of new record insertions. The new row images are forwarded to the subscribers.
    • UPDATE: Get notifications of update operations. The modified row count information is forwarded to the subscribers.
    • DELETE: Get notifications of delete operations. The deleted row count information is forwarded to the subscribers.
    The default value is INSERT.
  • MONITOR_ID: ID to use for this monitor instead of a randomly generated one
  • DATASINK_NAME: Name of an existing data sink to send change data notifications to
  • DESTINATION: Destination for the output data in format 'destination_type://path[:port]'. Supported destination types are 'http', 'https' and 'kafka'.
  • KAFKA_TOPIC_NAME: Name of the Kafka topic to publish to, if the destination specified in datasink_name is a Kafka broker.
  • INCREASING_COLUMN: Column on subscribed table that will increase for new records (e.g., TIMESTAMP).
  • EXPRESSION: Filter expression to limit records for notification
  • REFRESH_METHOD: Method controlling when the table monitor reports changes to table_name. Supported values:
    • ON_CHANGE: Report changes as they occur.
    • PERIODIC: Report changes periodically at rate specified by refresh_period.
    The default value is ON_CHANGE.
  • REFRESH_PERIOD: When refresh_method is periodic, specifies the period in seconds at which changes are reported.
  • REFRESH_START_TIME: When refresh_method is periodic, specifies the first time at which changes are reported. Value is a datetime string with format 'YYYY-MM-DD HH:MM:SS'.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 10940 of file KineticaFunctions.cs.
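
A minimal sketch of monitoring inserts on a table; the names are illustrative, and consuming the notification stream requires a separate ZMQ client:

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");
    CreateTableMonitorResponse monResp = db.createTableMonitor(
        "analytics.point_data",
        new Dictionary<string, string> { { "event", "insert" } });
    // Subscribe a ZMQ client to monResp.topic_id on port 9002 of the head node;
    // decode each message part with the Avro schema in monResp.type_schema.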

CreateTriggerByAreaResponse kinetica.Kinetica.createTriggerByArea ( CreateTriggerByAreaRequest  request_)
inline

Sets up an area trigger mechanism for two columns for one or more tables.

(This function is essentially the two-dimensional version of Kinetica.createTriggerByRange(string,IList{string},string,double,double,IDictionary{string, string}).) Once the trigger has been activated, any record added to the listed table(s) via Kinetica.insertRecords{T}(string,IList{T},IDictionary{string, string}) with the chosen columns' values falling within the specified region will trip the trigger. All such records will be queued at the trigger port (by default '9001', but discoverable via Kinetica.showSystemStatus(IDictionary{string, string})) for any listening client to collect. Active triggers can be cancelled by using the Kinetica.clearTrigger(string,IDictionary{string, string}) endpoint or by clearing all relevant tables.
The response returns the trigger handle and indicates the success or failure of the trigger activation.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 10979 of file KineticaFunctions.cs.

CreateTriggerByAreaResponse kinetica.Kinetica.createTriggerByArea ( string  request_id,
IList< string >  table_names,
string  x_column_name,
IList< double >  x_vector,
string  y_column_name,
IList< double >  y_vector,
IDictionary< string, string >  options = null 
)
inline

Sets up an area trigger mechanism for two columns for one or more tables.

(This function is essentially the two-dimensional version of Kinetica.createTriggerByRange(string,IList{string},string,double,double,IDictionary{string, string}).) Once the trigger has been activated, any record added to the listed table(s) via Kinetica.insertRecords{T}(string,IList{T},IDictionary{string, string}) with the chosen columns' values falling within the specified region will trip the trigger. All such records will be queued at the trigger port (by default '9001', but discoverable via Kinetica.showSystemStatus(IDictionary{string, string})) for any listening client to collect. Active triggers can be cancelled by using the Kinetica.clearTrigger(string,IDictionary{string, string}) endpoint or by clearing all relevant tables.
The response returns the trigger handle and indicates the success or failure of the trigger activation.

Parameters
request_id: User-created ID for the trigger. The ID can be alphanumeric, contain symbols, and must contain at least one character.
table_names: Names of the tables on which the trigger will be activated and maintained, each in [schema_name.]table_name format, using standard name resolution rules.
x_column_name: Name of a numeric column on which the trigger is activated. Usually 'x' for geospatial data points.
x_vector: The respective coordinate values for the region on which the trigger is activated. This usually translates to the x-coordinates of a geospatial region.
y_column_name: Name of a second numeric column on which the trigger is activated. Usually 'y' for geospatial data points.
y_vector: The respective coordinate values for the region on which the trigger is activated. This usually translates to the y-coordinates of a geospatial region. Must be the same length as x_vector.
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 11040 of file KineticaFunctions.cs.
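
A minimal sketch of an area trigger over a rectangular region; the trigger ID, table, and coordinates are illustrative:

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");
    CreateTriggerByAreaResponse areaResp = db.createTriggerByArea(
        "dc_area_trigger",
        new List<string> { "analytics.point_data" },
        "x", new List<double> { -77.3, -76.9, -76.9, -77.3 },  // region x-coordinates
        "y", new List<double> {  38.7,  38.7,  39.0,  39.0 },  // region y-coordinates
        null);
    // Records inserted with (x, y) inside this region are queued on the trigger port.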

CreateTriggerByRangeResponse kinetica.Kinetica.createTriggerByRange ( CreateTriggerByRangeRequest  request_)
inline

Sets up a simple range trigger for a column for one or more tables.

Once the trigger has been activated, any record added to the listed table(s) via Kinetica.insertRecords{T}(string,IList{T},IDictionary{string, string}) with the chosen column's value falling within the specified range will trip the trigger. All such records will be queued at the trigger port (by default '9001', but discoverable via Kinetica.showSystemStatus(IDictionary{string, string})) for any listening client to collect. Active triggers can be cancelled by using the Kinetica.clearTrigger(string,IDictionary{string, string}) endpoint or by clearing all relevant tables.
The response returns the trigger handle and indicates the success or failure of the trigger activation.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 11086 of file KineticaFunctions.cs.

CreateTriggerByRangeResponse kinetica.Kinetica.createTriggerByRange ( string  request_id,
IList< string >  table_names,
string  column_name,
double  min,
double  max,
IDictionary< string, string >  options = null 
)
inline

Sets up a simple range trigger for a column for one or more tables.

Once the trigger has been activated, any record added to the listed table(s) via Kinetica.insertRecords{T}(string,IList{T},IDictionary{string, string}) with the chosen column's value falling within the specified range will trip the trigger. All such records will be queued at the trigger port (by default '9001', but discoverable via Kinetica.showSystemStatus(IDictionary{string, string})) for any listening client to collect. Active triggers can be cancelled by using the Kinetica.clearTrigger(string,IDictionary{string, string}) endpoint or by clearing all relevant tables.
The response returns the trigger handle and indicates the success or failure of the trigger activation.

Parameters
request_id: User-created ID for the trigger. The ID can be alphanumeric, contain symbols, and must contain at least one character.
table_names: Tables on which the trigger will be active, each in [schema_name.]table_name format, using standard name resolution rules.
column_name: Name of a numeric column on which the trigger is activated.
min: The lower bound (inclusive) for the trigger range.
max: The upper bound (inclusive) for the trigger range.
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 11136 of file KineticaFunctions.cs.
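
A minimal sketch of a range trigger; the trigger ID, table, column, and bounds are illustrative:

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");
    CreateTriggerByRangeResponse rangeResp = db.createTriggerByRange(
        "fare_spike_trigger",
        new List<string> { "analytics.taxi_trips" },
        "fare_amount",
        100.0,    // inclusive lower bound
        10000.0,  // inclusive upper bound
        null);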

CreateTypeResponse kinetica.Kinetica.createType ( CreateTypeRequest  request_)
inline

Creates a new type describing the layout of a table.

The type definition is a JSON string describing the fields (i.e. columns) of the type. Each field consists of a name and a data type. Supported data types are: double, float, int, long, string, and bytes. In addition, one or more properties can be specified for each column to customize the memory usage and query availability of that column. Note that some properties are mutually exclusive, i.e. they cannot be specified for any given column simultaneously. One example of mutually exclusive properties is data and store_only.
A single primary key and/or single shard key can be set across one or more columns. If a primary key is specified, then a uniqueness constraint is enforced, in that only a single object can exist with a given primary key column value (or set of values for the key columns, if using a composite primary key). When inserting data into a table with a primary key, depending on the parameters in the request, incoming objects with primary key values that match existing objects will either overwrite (i.e. update) the existing object or will be skipped and not added into the set.
Example of a type definition with some of the parameters:
{"type":"record", "name":"point", "fields":[{"name":"msg_id","type":"string"}, {"name":"x","type":"double"}, {"name":"y","type":"double"}, {"name":"TIMESTAMP","type":"double"}, {"name":"source","type":"string"}, {"name":"group_id","type":"string"}, {"name":"OBJECT_ID","type":"string"}] }
Properties:
{"group_id":["store_only"], "msg_id":["store_only","text_search"] }

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 11201 of file KineticaFunctions.cs.

CreateTypeResponse kinetica.Kinetica.createType ( string  type_definition,
string  label,
IDictionary< string, IList< string >>  properties = null,
IDictionary< string, string >  options = null 
)
inline

Creates a new type describing the layout of a table.

The type definition is a JSON string describing the fields (i.e. columns) of the type. Each field consists of a name and a data type. Supported data types are: double, float, int, long, string, and bytes. In addition, one or more properties can be specified for each column to customize the memory usage and query availability of that column. Note that some properties are mutually exclusive, i.e. they cannot be specified for any given column simultaneously. One example of mutually exclusive properties is data and store_only.
A single primary key and/or single shard key can be set across one or more columns. If a primary key is specified, then a uniqueness constraint is enforced, in that only a single object can exist with a given primary key column value (or set of values for the key columns, if using a composite primary key). When inserting data into a table with a primary key, depending on the parameters in the request, incoming objects with primary key values that match existing objects will either overwrite (i.e. update) the existing object or will be skipped and not added into the set.
Example of a type definition with some of the parameters:
{"type":"record", "name":"point", "fields":[{"name":"msg_id","type":"string"}, {"name":"x","type":"double"}, {"name":"y","type":"double"}, {"name":"TIMESTAMP","type":"double"}, {"name":"source","type":"string"}, {"name":"group_id","type":"string"}, {"name":"OBJECT_ID","type":"string"}] }
Properties:
{"group_id":["store_only"], "msg_id":["store_only","text_search"] }

Parameters
type_definition: A JSON string describing the columns of the type to be registered.
label: A user-defined description string which can be used to differentiate between tables and types with otherwise identical schemas.
properties: Each key-value pair specifies the properties to use for a given column where the key is the column name. All keys used must be relevant column names for the given table. Specifying any property overrides the default properties for that column (which is based on the column's data type). Valid values are:
  • DATA: Default property for all numeric and string type columns; makes the column available for GPU queries.
  • TEXT_SEARCH: Valid only for select 'string' columns. Enables full text search; see Full Text Search for details and applicable string column types. Can be set independently of data and store_only.
  • STORE_ONLY: Persist the column value but do not make it available to queries (e.g. /filter); i.e., it is mutually exclusive with the data property. Any 'bytes' type column must have a store_only property. This property reduces system memory usage.
  • DISK_OPTIMIZED: Works in conjunction with the data property for string columns. This property reduces system disk usage by disabling reverse string lookups. Queries like /filter, /filter/bylist, and /filter/byvalue work as usual but /aggregate/unique and /aggregate/groupby are not allowed on columns with this property.
  • TIMESTAMP: Valid only for 'long' columns. Indicates that this field represents a timestamp and will be provided in milliseconds since the Unix epoch: 00:00:00 Jan 1 1970. Dates represented by a timestamp must fall between the year 1000 and the year 2900.
  • ULONG: Valid only for 'string' columns. It represents an unsigned long integer data type. The string can only be interpreted as an unsigned long data type with minimum value of zero, and maximum value of 18446744073709551615.
  • UUID: Valid only for 'string' columns. It represents an uuid data type. Internally, it is stored as a 128-bit integer.
  • DECIMAL: Valid only for 'string' columns. It represents a SQL type NUMERIC(19, 4) data type. There can be up to 15 digits before the decimal point and up to four digits in the fractional part. The value can be positive or negative (indicated by a minus sign at the beginning). This property is mutually exclusive with the text_search property.
  • DATE: Valid only for 'string' columns. Indicates that this field represents a date and will be provided in the format 'YYYY-MM-DD'. The allowable range is 1000-01-01 through 2900-01-01. This property is mutually exclusive with the text_search property.
  • TIME: Valid only for 'string' columns. Indicates that this field represents a time-of-day and will be provided in the format 'HH:MM:SS.mmm'. The allowable range is 00:00:00.000 through 23:59:59.999. This property is mutually exclusive with the text_search property.
  • DATETIME: Valid only for 'string' columns. Indicates that this field represents a datetime and will be provided in the format 'YYYY-MM-DD HH:MM:SS.mmm'. The allowable range is 1000-01-01 00:00:00.000 through 2900-01-01 23:59:59.999. This property is mutually exclusive with the text_search property.
  • CHAR1: This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 1 character.
  • CHAR2: This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 2 characters.
  • CHAR4: This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 4 characters.
  • CHAR8: This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 8 characters.
  • CHAR16: This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 16 characters.
  • CHAR32: This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 32 characters.
  • CHAR64: This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 64 characters.
  • CHAR128: This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 128 characters.
  • CHAR256: This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 256 characters.
  • BOOLEAN: This property provides optimized memory and query performance for int columns. Ints with this property must be between 0 and 1 (inclusive).
  • INT8: This property provides optimized memory and query performance for int columns. Ints with this property must be between -128 and +127 (inclusive)
  • INT16: This property provides optimized memory and query performance for int columns. Ints with this property must be between -32768 and +32767 (inclusive)
  • IPV4: This property provides optimized memory, disk and query performance for string columns representing IPv4 addresses (i.e. 192.168.1.1). Strings with this property must be of the form: A.B.C.D where A, B, C and D are in the range of 0-255.
  • WKT: Valid only for 'string' and 'bytes' columns. Indicates that this field contains geospatial geometry objects in Well-Known Text (WKT) or Well-Known Binary (WKB) format.
  • PRIMARY_KEY: This property indicates that this column will be part of (or the entire) primary key.
  • SHARD_KEY: This property indicates that this column will be part of (or the entire) shard key.
  • NULLABLE: This property indicates that this column is nullable. However, setting this property is insufficient for making the column nullable. The user must declare the type of the column as a union between its regular type and 'null' in the Avro schema for the record type. For example, if a column is of type integer and is nullable, then the entry for the column in the Avro schema must be: ['int', 'null']. The C++, C#, Java, and Python APIs have built-in conveniences that bypass setting the Avro schema by hand. For those languages, one can use this property as usual and not have to worry about the Avro schema for the record.
  • DICT: This property indicates that this column should be dictionary encoded. It can only be used in conjunction with restricted string (charN), int, long or date columns. Dictionary encoding is best for columns where the cardinality (the number of unique values) is expected to be low. This property can save a large amount of memory.
  • INIT_WITH_NOW: For 'date', 'time', 'datetime', or 'timestamp' column types, replace empty strings and invalid timestamps with 'NOW()' upon insert.
  • INIT_WITH_UUID: For 'uuid' type, replace empty strings and invalid UUID values with randomly-generated UUIDs upon insert.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 11532 of file KineticaFunctions.cs.

CreateUnionResponse kinetica.Kinetica.createUnion ( CreateUnionRequest  request_)
inline

Merges data from one or more tables with comparable data types into a new table.


The following merges are supported:
UNION (DISTINCT/ALL) - For data set union details and examples, see Union. For limitations, see Union Limitations and Cautions.
INTERSECT (DISTINCT/ALL) - For data set intersection details and examples, see Intersect. For limitations, see Intersect Limitations.
EXCEPT (DISTINCT/ALL) - For data set subtraction details and examples, see Except. For limitations, see Except Limitations.
MERGE VIEWS - For a given set of filtered views on a single table, creates a single filtered view containing all of the unique records across all of the given filtered data sets.
Non-charN 'string' and 'bytes' column types cannot be merged, nor can columns marked as store-only.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 11582 of file KineticaFunctions.cs.

CreateUnionResponse kinetica.Kinetica.createUnion ( string  table_name,
IList< string >  table_names,
IList< IList< string >>  input_column_names,
IList< string >  output_column_names,
IDictionary< string, string >  options = null 
)
inline

Merges data from one or more tables with comparable data types into a new table.


The following merges are supported:
UNION (DISTINCT/ALL) - For data set union details and examples, see Union. For limitations, see Union Limitations and Cautions.
INTERSECT (DISTINCT/ALL) - For data set intersection details and examples, see Intersect. For limitations, see Intersect Limitations.
EXCEPT (DISTINCT/ALL) - For data set subtraction details and examples, see Except. For limitations, see Except Limitations.
MERGE VIEWS - For a given set of filtered views on a single table, creates a single filtered view containing all of the unique records across all of the given filtered data sets.
Non-charN 'string' and 'bytes' column types cannot be merged, nor can columns marked as store-only.

Parameters
table_nameName of the table to be created, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria.
table_namesThe list of table names to merge, in [schema_name.]table_name format, using standard name resolution rules. Must contain the names of one or more existing tables.
input_column_namesThe list of columns from each of the corresponding input tables.
output_column_namesThe list of names of the columns to be stored in the output table.
optionsOptional parameters.
  • CREATE_TEMP_TABLE: If true, a unique temporary table name will be generated in the sys_temp schema and used in place of table_name. If persist is false (or unspecified), then this is always allowed even if the caller does not have permission to create tables. The generated name is returned in qualified_table_name. Supported values: The default value is FALSE.
  • COLLECTION_NAME: [DEPRECATED: please specify the containing schema for the projection as part of table_name and use /create/schema to create the schema if non-existent] Name of the schema for the output table. If the schema provided is non-existent, it will be automatically created. The default value is ''.
  • MODE: If merge_views, then this operation will merge the provided views. All table_names must be views from the same underlying base table. Supported values:
    • UNION_ALL: Retains all rows from the specified tables.
    • UNION: Retains all unique rows from the specified tables (synonym for union_distinct).
    • UNION_DISTINCT: Retains all unique rows from the specified tables.
    • EXCEPT: Retains all unique rows from the first table that do not appear in the second table (only works on 2 tables).
    • EXCEPT_ALL: Retains all rows (including duplicates) from the first table that do not appear in the second table (only works on 2 tables).
    • INTERSECT: Retains all unique rows that appear in both of the specified tables (only works on 2 tables).
    • INTERSECT_ALL: Retains all rows (including duplicates) that appear in both of the specified tables (only works on 2 tables).
    • MERGE_VIEWS: Merge two or more views (or views of views) of the same base data set into a new view. If this mode is selected, input_column_names and output_column_names must be empty. The resulting view would match the results of a SQL OR operation; e.g., if filter 1 creates a view using the expression 'x = 20' and filter 2 creates a view using the expression 'x <= 10', then the merge views operation creates a new view using the expression 'x = 20 OR x <= 10'.
    The default value is UNION_ALL.
  • CHUNK_SIZE: Indicates the number of records per chunk to be used for this output table.
  • CREATE_INDEXES: Comma-separated list of columns on which to create indexes on the output table. The columns specified must be present in output_column_names.
  • TTL: Sets the TTL of the output table specified in table_name.
  • PERSIST: If true, then the output table specified in table_name will be persisted and will not expire unless a ttl is specified. If false, then the output table will be an in-memory table and will expire unless a ttl is specified otherwise. Supported values: The default value is FALSE.
  • VIEW_ID: ID of view of which this output table is a member. The default value is ''.
  • FORCE_REPLICATED: If true, then the output table specified in table_name will be replicated even if the source tables are not. Supported values: The default value is FALSE.
  • STRATEGY_DEFINITION: The tier strategy for the table and its columns.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 11828 of file KineticaFunctions.cs.
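
Example (hypothetical): a minimal sketch of a UNION DISTINCT merge using this overload. The connection URL, schema, table, and column names are placeholders, and the option value is the lowercase form of the MODE value listed above.

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");   // placeholder URL

    // Merge two trip tables into one de-duplicated output table.
    CreateUnionResponse resp = db.createUnion(
        "ki_home.trips_merged",                             // output table
        new List<string> { "ki_home.trips_2023", "ki_home.trips_2024" },
        new List<IList<string>> {                           // input columns, per table
            new List<string> { "pickup_time", "fare" },
            new List<string> { "pickup_time", "fare" }
        },
        new List<string> { "pickup_time", "fare" },         // output column names
        new Dictionary<string, string> { { "mode", "union_distinct" } }
    );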

CreateUserExternalResponse kinetica.Kinetica.createUserExternal ( CreateUserExternalRequest  request_)
inline

Creates a new external user (a user whose credentials are managed by an external LDAP).

This method should be used for on-premise deployments only.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 11851 of file KineticaFunctions.cs.

CreateUserExternalResponse kinetica.Kinetica.createUserExternal ( string  name,
IDictionary< string, string >  options = null 
)
inline

Creates a new external user (a user whose credentials are managed by an external LDAP).

This method should be used for on-premise deployments only.

Parameters
nameName of the user to be created. Must exactly match the user's name in the external LDAP, prefixed with a '@'. Must not be the same name as an existing user.
optionsOptional parameters.
  • RESOURCE_GROUP: Name of an existing resource group to associate with this user
  • DEFAULT_SCHEMA: Default schema to associate with this user
  • CREATE_HOME_DIRECTORY: When true, a home directory in KiFS is created for this user. Supported values: The default value is TRUE.
  • DIRECTORY_DATA_LIMIT: The maximum capacity to apply to the created directory if create_home_directory is true. Set to -1 to indicate no upper limit. If empty, the system default limit is applied.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 11914 of file KineticaFunctions.cs.
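
Example (hypothetical): a minimal sketch of creating an LDAP-managed user. The user name, resource group, and option keys (lowercase forms of the options listed above) are assumptions.

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");   // placeholder URL

    // External user whose credentials live in LDAP; no password is set here.
    db.createUserExternal(
        "@jsmith",                                          // assumed LDAP name, '@'-prefixed
        new Dictionary<string, string> {
            { "resource_group", "analysts" },               // assumed existing group
            { "create_home_directory", "true" }
        }
    );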

CreateUserInternalResponse kinetica.Kinetica.createUserInternal ( CreateUserInternalRequest  request_)
inline

Creates a new internal user (a user whose credentials are managed by the database system).

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 11930 of file KineticaFunctions.cs.

CreateUserInternalResponse kinetica.Kinetica.createUserInternal ( string  name,
string  password,
IDictionary< string, string >  options = null 
)
inline

Creates a new internal user (a user whose credentials are managed by the database system).

Parameters
nameName of the user to be created. Must contain only lowercase letters, digits, and underscores, and cannot begin with a digit. Must not be the same name as an existing user or role.
passwordInitial password of the user to be created. May be an empty string for no password.
optionsOptional parameters.
  • RESOURCE_GROUP: Name of an existing resource group to associate with this user
  • DEFAULT_SCHEMA: Default schema to associate with this user
  • CREATE_HOME_DIRECTORY: When true, a home directory in KiFS is created for this user. Supported values: The default value is TRUE.
  • DIRECTORY_DATA_LIMIT: The maximum capacity to apply to the created directory if create_home_directory is true. Set to -1 to indicate no upper limit. If empty, the system default limit is applied.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 11994 of file KineticaFunctions.cs.
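
Example (hypothetical): a minimal sketch of creating a database-managed user; the names and option key are placeholders.

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");   // placeholder URL

    // Internal user with an initial password and a default schema.
    db.createUserInternal(
        "analyst_01",
        "initial-password",
        new Dictionary<string, string> { { "default_schema", "ki_home" } }
    );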

CreateVideoResponse kinetica.Kinetica.createVideo ( CreateVideoRequest  request_)
inline

Creates a job to generate a sequence of raster images that visualize data over a specified time.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 12012 of file KineticaFunctions.cs.

CreateVideoResponse kinetica.Kinetica.createVideo ( string  attribute,
string  begin,
double  duration_seconds,
string  end,
double  frames_per_second,
string  style,
string  path,
string  style_parameters,
IDictionary< string, string >  options = null 
)
inline

Creates a job to generate a sequence of raster images that visualize data over a specified time.

Parameters
attributeThe animated attribute to map to the video's frames. Must be present in the LAYERS specified for the visualization. This is often a time-related field but may be any numeric type.
beginThe start point for the video. Accepts an expression evaluable over the attribute.
duration_secondsSeconds of video to produce.
endThe end point for the video. Accepts an expression evaluable over the attribute.
frames_per_secondThe presentation frame rate of the encoded video in frames per second.
styleThe name of the visualize mode; should correspond to the schema used for the style_parameters field. Supported values:
pathFully-qualified KiFS path. Write access is required. A file must not exist at that path, unless replace_if_exists is true.
style_parametersA string containing the JSON-encoded visualize request. Must correspond to the visualize mode specified in the style field.
optionsOptional parameters.
  • TTL: Sets the TTL of the video.
  • WINDOW: Specified using the data-type corresponding to the attribute. For a window of size W, a video frame rendered for time t will visualize data in the interval [t-W,t]. The minimum window size is the interval between successive frames. The minimum value is the default. If a value less than the minimum value is specified, it is replaced with the minimum window size. Larger values will make changes throughout the video appear more smooth, while smaller values will capture fast variations in the data.
  • NO_ERROR_IF_EXISTS: If true, does not return an error if the video already exists. Ignored if replace_if_exists is true. Supported values: The default value is FALSE.
  • REPLACE_IF_EXISTS: If true, deletes any existing video with the same path before creating a new video. Supported values: The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 12141 of file KineticaFunctions.cs.

void kinetica.Kinetica.DecodeRawBinaryDataUsingRecordType< T > ( KineticaType  record_type,
IList< byte[]>  records_binary,
IList< T >  records 
)
inline

Given a KineticaType object for a certain record type, decode binary data into distinct records (objects).

Template Parameters
TThe type of the records.
Parameters
record_typeThe type for the records.
records_binaryThe binary encoded data to be decoded.
recordsThe decoded objects/records.
Type Constraints
T :new() 

Definition at line 200 of file Kinetica.cs.

void kinetica.Kinetica.DecodeRawBinaryDataUsingSchemaString< T > ( string  schema_string,
IList< byte[]>  records_binary,
IList< T >  records 
)
inline

Given a schema string for a certain record type, decode binary data into distinct records (objects).

Template Parameters
TThe type of the records.
Parameters
schema_stringThe schema for the records.
records_binaryThe binary encoded data to be decoded.
recordsThe decoded objects/records.
Type Constraints
T :new() 

Definition at line 221 of file Kinetica.cs.
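
Example (hypothetical): a minimal sketch of decoding raw binary records with a known schema string. MyRecord, schemaString, and recordsBinary are stand-ins for a POCO matching the table's type and the payload of a raw record-retrieval response.

    using System.Collections.Generic;
    using kinetica;

    // Hypothetical record class; note the parameterless constructor
    // required by the T : new() constraint.
    public class MyRecord
    {
        public int id { get; set; }
        public string name { get; set; }
    }

    // Stand-ins: in practice these come from a raw record-retrieval response.
    string schemaString = "<the record type's Avro schema JSON>";
    IList<byte[]> recordsBinary = new List<byte[]>();

    // Given a Kinetica connection 'db', decode the binary payload into objects.
    IList<MyRecord> records = new List<MyRecord>();
    db.DecodeRawBinaryDataUsingSchemaString(schemaString, recordsBinary, records);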

void kinetica.Kinetica.DecodeRawBinaryDataUsingSchemaString< T > ( IList< string >  schema_strings,
IList< IList< byte[]>>  lists_records_binary,
IList< IList< T >>  record_lists 
)
inline

Given a list of schema strings, decode binary data into distinct records (objects).

Template Parameters
TThe type of the records.
Parameters
schema_stringsThe schemas for the records.
lists_records_binaryThe binary encoded data to be decoded (the data is in a 2D list).
record_listsThe decoded objects/records in a 2d list.
Type Constraints
T :new() 

Definition at line 245 of file Kinetica.cs.

void kinetica.Kinetica.DecodeRawBinaryDataUsingTypeIDs< T > ( IList< string >  type_ids,
IList< byte[]>  records_binary,
IList< T >  records 
)
inline

Given IDs of records types registered with Kinetica, decode binary data into distinct records (objects).

Template Parameters
TThe type of the records.
Parameters
type_idsThe IDs for each of the records' types.
records_binaryThe binary encoded data to be decoded.
recordsThe decoded objects/records.
Type Constraints
T :new() 

Definition at line 285 of file Kinetica.cs.

void kinetica.Kinetica.DecodeRawBinaryDataUsingTypeIDs< T > ( IList< string >  type_ids,
IList< IList< byte[]>>  lists_records_binary,
IList< IList< T >>  record_lists 
)
inline

Given IDs of records types registered with Kinetica, decode binary data into distinct records (objects).

Template Parameters
TThe type of the records.
Parameters
type_idsThe IDs for each of the lists of records.
lists_records_binaryThe binary encoded data to be decoded in a 2d list.
record_listsThe decoded objects/records in a 2d list.
Type Constraints
T :new() 

Definition at line 314 of file Kinetica.cs.

DeleteDirectoryResponse kinetica.Kinetica.deleteDirectory ( DeleteDirectoryRequest  request_)
inline

Deletes a directory from KiFS.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 12167 of file KineticaFunctions.cs.

DeleteDirectoryResponse kinetica.Kinetica.deleteDirectory ( string  directory_name,
IDictionary< string, string >  options = null 
)
inline

Deletes a directory from KiFS.

Parameters
directory_nameName of the directory in KiFS to be deleted. The directory must contain no files, unless recursive is true.
optionsOptional parameters.
  • RECURSIVE: If true, will delete the directory and all files residing in it. If false, the directory must be empty for deletion. Supported values: The default value is FALSE.
  • NO_ERROR_IF_NOT_EXISTS: If true, no error is returned if the specified directory does not exist. Supported values: The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 12228 of file KineticaFunctions.cs.
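
Example (hypothetical): removing a KiFS directory and its contents; the directory name is a placeholder and the option keys are the lowercase forms of those listed above.

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");   // placeholder URL

    db.deleteDirectory(
        "scratch/exports",                                  // assumed KiFS directory
        new Dictionary<string, string> {
            { "recursive", "true" },                        // delete contained files too
            { "no_error_if_not_exists", "true" }
        }
    );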

DeleteFilesResponse kinetica.Kinetica.deleteFiles ( DeleteFilesRequest  request_)
inline

Deletes one or more files from KiFS.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 12244 of file KineticaFunctions.cs.

DeleteFilesResponse kinetica.Kinetica.deleteFiles ( IList< string >  file_names,
IDictionary< string, string >  options = null 
)
inline

Deletes one or more files from KiFS.

Parameters
file_namesAn array of names of files to be deleted. File paths may contain wildcard characters after the KiFS directory delimiter. Accepted wildcard characters are asterisk (*) to represent any string of zero or more characters, and question mark (?) to indicate a single character.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 12288 of file KineticaFunctions.cs.
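
Example (hypothetical): deleting all CSV files under one KiFS directory via the asterisk wildcard described above; the path is a placeholder.

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");   // placeholder URL

    db.deleteFiles(new List<string> { "scratch/exports/*.csv" });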

DeleteGraphResponse kinetica.Kinetica.deleteGraph ( DeleteGraphRequest  request_)
inline

Deletes an existing graph from the graph server and/or persist.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 12304 of file KineticaFunctions.cs.

DeleteGraphResponse kinetica.Kinetica.deleteGraph ( string  graph_name,
IDictionary< string, string >  options = null 
)
inline

Deletes an existing graph from the graph server and/or persist.

Parameters
graph_nameName of the graph to be deleted.
optionsOptional parameters.
  • DELETE_PERSIST: If set to true, the graph is removed from the server and persist. If set to false, the graph is removed from the server but is left in persist. The graph can be reloaded from persist if it is recreated with the same 'graph_name'. Supported values: The default value is TRUE.
  • SERVER_ID: Indicates which graph server(s) to send the request to. The default is to send the request to all graph servers.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 12353 of file KineticaFunctions.cs.
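
Example (hypothetical): removing a graph from the graph server while keeping it in persist, so it can be reloaded later by recreating it under the same name; the graph name is a placeholder.

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");   // placeholder URL

    db.deleteGraph(
        "road_network",                                     // assumed graph name
        new Dictionary<string, string> { { "delete_persist", "false" } }
    );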

DeleteProcResponse kinetica.Kinetica.deleteProc ( DeleteProcRequest  request_)
inline

Deletes a proc.

Any currently running instances of the proc will be killed.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 12369 of file KineticaFunctions.cs.

DeleteProcResponse kinetica.Kinetica.deleteProc ( string  proc_name,
IDictionary< string, string >  options = null 
)
inline

Deletes a proc.

Any currently running instances of the proc will be killed.

Parameters
proc_nameName of the proc to be deleted. Must be the name of a currently existing proc.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 12388 of file KineticaFunctions.cs.

DeleteRecordsResponse kinetica.Kinetica.deleteRecords ( DeleteRecordsRequest  request_)
inline

Deletes record(s) matching the provided criteria from the given table.

The record selection criteria can either be one or more expressions (matching multiple records), a single record identified by record_id options, or all records when using delete_all_records. Note that the three selection criteria are mutually exclusive. This operation cannot be run on a view. The operation is synchronous, meaning that a response will not be available until the request is completely processed and all the matching records are deleted.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 12412 of file KineticaFunctions.cs.

DeleteRecordsResponse kinetica.Kinetica.deleteRecords ( string  table_name,
IList< string >  expressions,
IDictionary< string, string >  options = null 
)
inline

Deletes record(s) matching the provided criteria from the given table.

The record selection criteria can either be one or more expressions (matching multiple records), a single record identified by record_id options, or all records when using delete_all_records. Note that the three selection criteria are mutually exclusive. This operation cannot be run on a view. The operation is synchronous, meaning that a response will not be available until the request is completely processed and all the matching records are deleted.

Parameters
table_nameName of the table from which to delete records, in [schema_name.]table_name format, using standard name resolution rules. Must contain the name of an existing table; not applicable to views.
expressionsA list of the actual predicates, one for each select; format should follow the guidelines provided here. Specifying one or more expressions is mutually exclusive to specifying record_id in the options.
optionsOptional parameters.
  • GLOBAL_EXPRESSION: An optional global expression to reduce the search space of the expressions. The default value is ''.
  • RECORD_ID: A record ID identifying a single record, obtained at the time of /insert/records or by calling /get/records/fromcollection with the return_record_ids option. This option cannot be used to delete records from replicated tables.
  • DELETE_ALL_RECORDS: If set to true, all records in the table will be deleted. If set to false, then the option is effectively ignored. Supported values: The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 12488 of file KineticaFunctions.cs.
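
Example (hypothetical): deleting every record that matches either of two expressions; the table and column names are placeholders.

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");   // placeholder URL

    // Each expression is a separate delete predicate; records matching
    // any of them are removed.
    DeleteRecordsResponse resp = db.deleteRecords(
        "ki_home.trips",
        new List<string> { "fare < 0", "trip_distance > 1000" }
    );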

DeleteResourceGroupResponse kinetica.Kinetica.deleteResourceGroup ( DeleteResourceGroupRequest  request_)
inline

Deletes a resource group.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 12505 of file KineticaFunctions.cs.

DeleteResourceGroupResponse kinetica.Kinetica.deleteResourceGroup ( string  name,
IDictionary< string, string >  options = null 
)
inline

Deletes a resource group.

Parameters
nameName of the resource group to be deleted.
optionsOptional parameters.
  • CASCADE_DELETE: If true, delete any existing entities owned by this group. Otherwise, this request will return an error if any such entities exist. Supported values: The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 12545 of file KineticaFunctions.cs.

DeleteRoleResponse kinetica.Kinetica.deleteRole ( DeleteRoleRequest  request_)
inline

Deletes an existing role.

This method should be used for on-premise deployments only.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 12562 of file KineticaFunctions.cs.

DeleteRoleResponse kinetica.Kinetica.deleteRole ( string  name,
IDictionary< string, string >  options = null 
)
inline

Deletes an existing role.

This method should be used for on-premise deployments only.

Parameters
nameName of the role to be deleted. Must be an existing role.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 12582 of file KineticaFunctions.cs.

DeleteUserResponse kinetica.Kinetica.deleteUser ( DeleteUserRequest  request_)
inline

Deletes an existing user.

This method should be used for on-premise deployments only.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 12599 of file KineticaFunctions.cs.

DeleteUserResponse kinetica.Kinetica.deleteUser ( string  name,
IDictionary< string, string >  options = null 
)
inline

Deletes an existing user.

This method should be used for on-premise deployments only.

Parameters
nameName of the user to be deleted. Must be an existing user.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 12619 of file KineticaFunctions.cs.

DownloadFilesResponse kinetica.Kinetica.downloadFiles ( DownloadFilesRequest  request_)
inline

Downloads one or more files from KiFS.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 12635 of file KineticaFunctions.cs.

DownloadFilesResponse kinetica.Kinetica.downloadFiles ( IList< string >  file_names,
IList< long >  read_offsets,
IList< long >  read_lengths,
IDictionary< string, string >  options = null 
)
inline

Downloads one or more files from KiFS.

Parameters
file_namesAn array of the file names to download from KiFS. File paths may contain wildcard characters after the KiFS directory delimiter. Accepted wildcard characters are asterisk (*) to represent any string of zero or more characters, and question mark (?) to indicate a single character.
read_offsetsAn array of starting byte offsets from which to read each respective file in file_names. Must either be empty or the same length as file_names. If empty, files are downloaded in their entirety. If not empty, read_lengths must also not be empty.
read_lengthsArray of the number of bytes to read from each respective file in file_names. Must either be empty or the same length as file_names. If empty, files are downloaded in their entirety. If not empty, read_offsets must also not be empty.
optionsOptional parameters.
  • FILE_ENCODING: Encoding to be applied to the output file data. When using JSON serialization it is recommended to specify this as base64. Supported values:
    • BASE64: Apply base64 encoding to the output file data.
    • NONE: Do not apply any encoding to the output file data.
    The default value is NONE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 12700 of file KineticaFunctions.cs.
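
Example (hypothetical): downloading files in their entirety by passing empty offset/length lists, per the parameter notes above; the wildcard path is a placeholder.

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");   // placeholder URL

    DownloadFilesResponse resp = db.downloadFiles(
        new List<string> { "exports/trips_*.csv" },         // wildcard after KiFS delimiter
        new List<long>(),                                   // empty: download entire files
        new List<long>()
    );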

DropCredentialResponse kinetica.Kinetica.dropCredential ( DropCredentialRequest  request_)
inline

Drop an existing credential.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 12751 of file KineticaFunctions.cs.

DropCredentialResponse kinetica.Kinetica.dropCredential ( string  credential_name,
IDictionary< string, string >  options = null 
)
inline

Drop an existing credential.

Parameters
credential_nameName of the credential to be dropped. Must be an existing credential.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 12770 of file KineticaFunctions.cs.

DropDatasinkResponse kinetica.Kinetica.dropDatasink ( DropDatasinkRequest  request_)
inline

Drops an existing data sink.


By default, if any table monitors use this sink as a destination, the request will be blocked unless option clear_table_monitors is true.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 12792 of file KineticaFunctions.cs.

DropDatasinkResponse kinetica.Kinetica.dropDatasink ( string  name,
IDictionary< string, string >  options = null 
)
inline

Drops an existing data sink.


By default, if any table monitors use this sink as a destination, the request will be blocked unless option clear_table_monitors is true.

Parameters
nameName of the data sink to be dropped. Must be an existing data sink.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 12839 of file KineticaFunctions.cs.

DropDatasourceResponse kinetica.Kinetica.dropDatasource ( DropDatasourceRequest  request_)
inline

Drops an existing data source.

Any external tables that depend on the data source must be dropped before it can be dropped.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 12858 of file KineticaFunctions.cs.

DropDatasourceResponse kinetica.Kinetica.dropDatasource ( string  name,
IDictionary< string, string >  options = null 
)
inline

Drops an existing data source.

Any external tables that depend on the data source must be dropped before it can be dropped.

Parameters
nameName of the data source to be dropped. Must be an existing data source.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 12880 of file KineticaFunctions.cs.

DropEnvironmentResponse kinetica.Kinetica.dropEnvironment ( DropEnvironmentRequest  request_)
inline

Drop an existing user-defined function (UDF) environment.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 12897 of file KineticaFunctions.cs.

DropEnvironmentResponse kinetica.Kinetica.dropEnvironment ( string  environment_name,
IDictionary< string, string >  options = null 
)
inline

Drop an existing user-defined function (UDF) environment.

Parameters
environment_nameName of the environment to be dropped. Must be an existing environment.
optionsOptional parameters.
  • NO_ERROR_IF_NOT_EXISTS: If true and the environment specified in environment_name does not exist, no error is returned. If false and the environment specified in environment_name does not exist, then an error is returned. Supported values: The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 12943 of file KineticaFunctions.cs.

DropSchemaResponse kinetica.Kinetica.dropSchema ( DropSchemaRequest  request_)
inline

Drops an existing SQL-style schema, specified in schema_name.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 12992 of file KineticaFunctions.cs.

DropSchemaResponse kinetica.Kinetica.dropSchema ( string  schema_name,
IDictionary< string, string >  options = null 
)
inline

Drops an existing SQL-style schema, specified in schema_name.

Parameters
schema_nameName of the schema to be dropped. Must be an existing schema.
optionsOptional parameters.
  • NO_ERROR_IF_NOT_EXISTS: If true and the schema specified in schema_name does not exist, no error is returned. If false and the schema specified in schema_name does not exist, then an error is returned. Supported values: The default value is FALSE.
  • CASCADE: If true, all tables within the schema will be dropped. If false, the schema will be dropped only if empty. Supported values: The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 13056 of file KineticaFunctions.cs.
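
Example (hypothetical): dropping a schema together with all of its tables; the schema name is a placeholder.

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");   // placeholder URL

    db.dropSchema(
        "scratch_schema",                                   // assumed schema
        new Dictionary<string, string> {
            { "cascade", "true" },                          // also drop contained tables
            { "no_error_if_not_exists", "true" }
        }
    );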

ExecuteProcResponse kinetica.Kinetica.executeProc ( ExecuteProcRequest  request_)
inline

Executes a proc.

This endpoint is asynchronous and does not wait for the proc to complete before returning.
If the proc being executed is distributed, input_table_names & input_column_names may be passed to the proc to use for reading data, and output_table_names may be passed to the proc to use for writing data.
If the proc being executed is non-distributed, these table parameters will be ignored.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 13126 of file KineticaFunctions.cs.

ExecuteProcResponse kinetica.Kinetica.executeProc ( string  proc_name,
IDictionary< string, string >  _params = null,
IDictionary< string, byte[]>  bin_params = null,
IList< string >  input_table_names = null,
IDictionary< string, IList< string >>  input_column_names = null,
IList< string >  output_table_names = null,
IDictionary< string, string >  options = null 
)
inline

Executes a proc.

This endpoint is asynchronous and does not wait for the proc to complete before returning.
If the proc being executed is distributed, input_table_names & input_column_names may be passed to the proc to use for reading data, and output_table_names may be passed to the proc to use for writing data.
If the proc being executed is non-distributed, these table parameters will be ignored.

Parameters
proc_nameName of the proc to execute. Must be the name of a currently existing proc.
_paramsA map containing named parameters to pass to the proc. Each key/value pair specifies the name of a parameter and its value. The default value is an empty Dictionary.
bin_paramsA map containing named binary parameters to pass to the proc. Each key/value pair specifies the name of a parameter and its value. The default value is an empty Dictionary.
input_table_namesNames of the tables containing data to be passed to the proc. Each name specified must be the name of a currently existing table, in [schema_name.]table_name format, using standard name resolution rules. If no table names are specified, no data will be passed to the proc. This parameter is ignored if the proc has a non-distributed execution mode. The default value is an empty List.
input_column_namesMap of table names from input_table_names to lists of names of columns from those tables that will be passed to the proc. Each column name specified must be the name of an existing column in the corresponding table. If a table name from input_table_names is not included, all columns from that table will be passed to the proc. This parameter is ignored if the proc has a non-distributed execution mode. The default value is an empty Dictionary.
output_table_namesNames of the tables to which output data from the proc will be written, each in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. If a specified table does not exist, it will automatically be created with the same schema as the corresponding table (by order) from input_table_names, excluding any primary and shard keys. If a specified table is a non-persistent result table, it must not have primary or shard keys. If no table names are specified, no output data can be returned from the proc. This parameter is ignored if the proc has a non-distributed execution mode. The default value is an empty List.
optionsOptional parameters.
  • CACHE_INPUT: A comma-delimited list of table names from input_table_names from which input data will be cached for use in subsequent calls to /execute/proc with the use_cached_input option. Cached input data will be retained until the proc status is cleared with the clear_complete option of /show/proc/status and all proc instances using the cached data have completed. The default value is ''.
  • USE_CACHED_INPUT: A comma-delimited list of run IDs (as returned from prior calls to /execute/proc) of running or completed proc instances from which input data cached using the cache_input option will be used. Cached input data will not be used for any tables specified in input_table_names, but data from all other tables cached for the specified run IDs will be passed to the proc. If the same table was cached for multiple specified run IDs, the cached data from the first run ID specified in the list that includes that table will be used. The default value is ''.
  • RUN_TAG: A string that, if not empty, can be used in subsequent calls to /show/proc/status or /kill/proc to identify the proc instance. The default value is ''.
  • MAX_OUTPUT_LINES: The maximum number of lines of output from stdout and stderr to return via /show/proc/status. If the number of lines output exceeds the maximum, earlier lines are discarded. The default value is '100'.
  • EXECUTE_AT_STARTUP: If true, an instance of the proc will run when the database is started instead of running immediately. The run_id can be retrieved using /show/proc and used in /show/proc/status. Supported values: The default value is FALSE.
  • EXECUTE_AT_STARTUP_AS: Sets the alternate user name to execute this proc instance as when execute_at_startup is true. The default value is ''.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 13280 of file KineticaFunctions.cs.
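
Example (hypothetical): running a distributed proc over one input table and directing its output to another, with a run tag for later status lookups; the proc, table, and parameter names are placeholders.

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");   // placeholder URL

    ExecuteProcResponse resp = db.executeProc(
        "my_udf",                                           // assumed proc name
        new Dictionary<string, string> { { "iterations", "10" } },
        null,                                               // no binary params
        new List<string> { "ki_home.input_table" },
        null,                                               // pass all input columns
        new List<string> { "ki_home.output_table" },
        new Dictionary<string, string> { { "run_tag", "nightly_run" } }
    );
    // The response's run ID can then be used with /show/proc/status for polling.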

ExecuteSqlResponse kinetica.Kinetica.executeSql ( ExecuteSqlRequest  request_)
inline

Execute a SQL statement (query, DML, or DDL).


See SQL Support for the complete set of supported SQL commands.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 13306 of file KineticaFunctions.cs.

ExecuteSqlResponse kinetica.Kinetica.executeSql ( string  statement,
long  offset = 0,
long  limit = -9999,
string  request_schema_str = "",
IList< byte[]>  data = null,
IDictionary< string, string >  options = null 
)
inline

Execute a SQL statement (query, DML, or DDL).


See SQL Support for the complete set of supported SQL commands.

Parameters
statementSQL statement (query, DML, or DDL) to be executed
offsetA positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
limitA positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results allowed by the server should be returned. The number of records returned will never exceed the server's own limit, defined by the max_get_records_size parameter in the server configuration. Use has_more_records to see if more records exist in the result to be fetched, and offset & limit to request subsequent pages of results. The default value is -9999.
request_schema_strAvro schema of data. The default value is ''.
dataAn array of binary-encoded data for the records to be bound to the SQL query. Or use query_parameters to pass the data in JSON format. The default value is an empty List.
optionsOptional parameters.
  • COST_BASED_OPTIMIZATION: If false, disables the cost-based optimization of the given query. Supported values: The default value is FALSE.
  • DISTRIBUTED_JOINS: If true, enables the use of distributed joins in servicing the given query. Any query requiring a distributed join will succeed, though hints can be used in the query to change the distribution of the source data to allow the query to succeed. Supported values: The default value is FALSE.
  • DISTRIBUTED_OPERATIONS: If true, enables the use of distributed operations in servicing the given query. Any query requiring a distributed join will succeed, though hints can be used in the query to change the distribution of the source data to allow the query to succeed. Supported values: The default value is FALSE.
  • IGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for inserting into or updating a table with a primary key, only used when primary key record collisions are rejected (update_on_existing_pk is false). If set to true, any record insert/update that is rejected for resulting in a primary key collision with an existing table record will be ignored with no error generated. If false, the rejection of any insert/update for resulting in a primary key collision will cause an error to be reported. If the specified table does not have a primary key or if update_on_existing_pk is true, then this option has no effect. Supported values:
    • TRUE: Ignore inserts/updates that result in primary key collisions with existing records
    • FALSE: Treat as errors any inserts/updates that result in primary key collisions with existing records
    The default value is FALSE.
  • LATE_MATERIALIZATION: If true, Joins/Filters results will always be materialized (saved to result tables format). Supported values: The default value is FALSE.
  • PAGING_TABLE: When empty or when the specified paging table does not exist, the system will create a paging table and return it when the query output has more records than the user requested. If the paging table exists in the system, the records from the paging table are returned without evaluating the query.
  • PAGING_TABLE_TTL: Sets the TTL of the paging table.
  • PARALLEL_EXECUTION: If false, disables the parallel step execution of the given query. Supported values: The default value is TRUE.
  • PLAN_CACHE: If false, disables plan caching for the given query. Supported values: The default value is TRUE.
  • PREPARE_MODE: If true, compiles a query into an execution plan and saves it in the query cache. Query execution is not performed and an empty response will be returned to the user. Supported values: The default value is FALSE.
  • PRESERVE_DICT_ENCODING: If true, then columns that were dict encoded in the source table will be dict encoded in the projection table. Supported values: The default value is TRUE.
  • QUERY_PARAMETERS: Query parameters in a JSON array or arrays (for inserting multiple rows). This can be used instead of data and request_schema_str.
  • RESULTS_CACHING: If false, disables caching of the results of the given query. Supported values: The default value is TRUE.
  • RULE_BASED_OPTIMIZATION: If false, disables rule-based rewrite optimizations for the given query. Supported values: The default value is TRUE.
  • SSQ_OPTIMIZATION: If false, scalar subqueries will be translated into joins. Supported values: The default value is TRUE.
  • TTL: Sets the TTL of the intermediate result tables used in query execution.
  • UPDATE_ON_EXISTING_PK: Specifies the record collision policy for inserting into or updating a table with a primary key. If set to true, any existing table record with primary key values that match those of a record being inserted or updated will be replaced by that record. If set to false, any such primary key collision will result in the insert/update being rejected and the error handled as determined by ignore_existing_pk. If the specified table does not have a primary key, then this option has no effect. Supported values:
    • TRUE: Replace the collided-into record with the record inserted or updated when a new/modified record causes a primary key collision with an existing record
    • FALSE: Reject the insert or update when it results in a primary key collision with an existing record
    The default value is FALSE.
  • VALIDATE_CHANGE_COLUMN: When changing a column using alter table, validate the change before applying it. If true, then validate all values. A value too large (or too long) for the new type will prevent any change. If false, then when a value is too large or long, it will be truncated. Supported values: The default value is TRUE.
  • CURRENT_SCHEMA: Use the supplied value as the default schema when processing this SQL command.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 13711 of file KineticaFunctions.cs.
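
Example (hypothetical): paging through a SELECT 10,000 rows at a time using offset and limit; the statement and table names are placeholders.

    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");   // placeholder URL

    long offset = 0;
    const long pageSize = 10000;

    ExecuteSqlResponse page = db.executeSql(
        "SELECT pickup_time, fare FROM ki_home.trips WHERE fare > 20",
        offset,
        pageSize
    );
    // Per the limit notes above, check has_more_records on the response and
    // re-issue the call with offset += pageSize to fetch the next page.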

ExportRecordsToFilesResponse kinetica.Kinetica.exportRecordsToFiles ( ExportRecordsToFilesRequest  request_)
inline

Export records from a table to files.

All tables can be exported, in full or in part (see columns_to_export and columns_to_skip). Additional filtering can be applied when using export table with an expression through SQL. The default destination is KiFS, though other storage types (Azure, S3, GCS, and HDFS) are supported through datasink_name; see Kinetica.createDatasink(string,string,IDictionary{string, string}).
The server's local file system is not supported. The default file format is delimited text. See options for different file types and the different options for each file type. The table is saved to a single file if within max file size limits (these may vary depending on datasink type). If not, the table is split into multiple files; these may be smaller than the max size limit.
All filenames created are returned in the response.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 13750 of file KineticaFunctions.cs.

ExportRecordsToFilesResponse kinetica.Kinetica.exportRecordsToFiles ( string  table_name,
string  filepath,
IDictionary< string, string >  options = null 
)
inline

Export records from a table to files.

All tables can be exported, in full or in part (see columns_to_export and columns_to_skip). Additional filtering can be applied when using export table with an expression through SQL. The default destination is KiFS, though other storage types (Azure, S3, GCS, and HDFS) are supported through datasink_name; see Kinetica.createDatasink(string,string,IDictionary{string, string}).
The server's local file system is not supported. The default file format is delimited text. See options for different file types and the different options for each file type. The table is saved to a single file if within max file size limits (these may vary depending on datasink type). If not, the table is split into multiple files; these may be smaller than the max size limit.
All filenames created are returned in the response.

Parameters
table_nameName of the table from which the data will be exported, in [schema_name.]table_name format, using standard name resolution rules.
filepathPath to the data export target. If filepath has a file extension, it is read as the name of a file. If filepath is a directory, then the source table name with a random UUID appended will be used as the name of each exported file, all written to that directory. If filepath is a filename, then all exported files will have a random UUID appended to the given name. In either case, the target directory specified or implied must exist. The names of all exported files are returned in the response.
optionsOptional parameters.
  • BATCH_SIZE: Number of records to be exported as a batch. The default value is '1000000'.
  • COLUMN_FORMATS: For each source column specified, applies the column-property-bound format. Currently supported column properties include date, time, & datetime. The parameter value must be formatted as a JSON string of maps of column names to maps of column properties to their corresponding column formats, e.g., '{ "order_date" : { "date" : "%Y.%m.%d" }, "order_time" : { "time" : "%H:%M:%S" } }'. See default_column_formats for valid format syntax.
  • COLUMNS_TO_EXPORT: Specifies a comma-delimited list of columns from the source table to export, written to the output file in the order they are given. Column names can be provided, in which case the target file will use those names as the column headers as well. Alternatively, column numbers can be specified, either discretely or as a range. For example, a value of '5,7,1..3' will write values from the fifth column in the source table into the first column in the target file, from the seventh column in the source table into the second column in the target file, and from the first through third columns in the source table into the third through fifth columns in the target file. Mutually exclusive with columns_to_skip.
  • COLUMNS_TO_SKIP: Comma-separated list of column names or column numbers to not export. All columns in the source table not specified will be written to the target file in the order they appear in the table definition. Mutually exclusive with columns_to_export.
  • DATASINK_NAME: Datasink name, created using /create/datasink.
  • DEFAULT_COLUMN_FORMATS: Specifies the default format to use to write data. Currently supported column properties include date, time, & datetime. This default column-property-bound format can be overridden by specifying a column property & format for a given source column in column_formats. For each specified annotation, the format will apply to all columns with that annotation unless custom column_formats for that annotation are specified. The parameter value must be formatted as a JSON string that is a map of column properties to their respective column formats, e.g., '{ "date" : "%Y.%m.%d", "time" : "%H:%M:%S" }'. Column formats are specified as a string of control characters and plain text. The supported control characters are 'Y', 'm', 'd', 'H', 'M', and 'S', which follow the Linux 'strptime()' specification, as well as 's', which specifies seconds and fractional seconds (though the fractional component will be truncated past milliseconds). Formats for the 'date' annotation must include the 'Y', 'm', and 'd' control characters. Formats for the 'time' annotation must include the 'H', 'M', and either 'S' or 's' (but not both) control characters. Formats for the 'datetime' annotation must meet both the 'date' and 'time' control character requirements. For example, '{"datetime" : "%m/%d/%Y %H:%M:%S" }' would be used to write text as "05/04/2000 12:12:11"
  • EXPORT_DDL: Save DDL to a separate file. The default value is 'false'.
  • FILE_EXTENSION: Extension to give the export file. The default value is '.csv'.
  • FILE_TYPE: Specifies the file format to use when exporting data. Supported values: The default value is DELIMITED_TEXT.
  • KINETICA_HEADER: Whether to include a Kinetica proprietary header. Will not be written if text_has_header is false. Supported values: The default value is FALSE.
  • KINETICA_HEADER_DELIMITER: If a Kinetica proprietary header is included, then specify a property separator. Different from column delimiter. The default value is '|'.
  • COMPRESSION_TYPE: File compression type. GZip can be applied to text and Parquet files. Snappy can only be applied to Parquet files, and is the default compression for them. Supported values:
  • SINGLE_FILE: Save records to a single file. This option may be ignored if file size exceeds internal file size limits (this limit will differ on different targets). Supported values: The default value is TRUE.
  • SINGLE_FILE_MAX_SIZE: Max file size (in MB) to allow saving to a single file. May be overridden by target limitations. The default value is ''.
  • TEXT_DELIMITER: Specifies the character to write out to delimit field values and field names in the header (if present). For delimited_text file_type only. The default value is ','.
  • TEXT_HAS_HEADER: Indicates whether to write out a header row. For delimited_text file_type only. Supported values: The default value is TRUE.
  • TEXT_NULL_STRING: Specifies the character string that should be written out for the null value in the data. For delimited_text file_type only. The default value is '\N'.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 14046 of file KineticaFunctions.cs.
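
Example (hypothetical): exporting two columns of a table to gzip-compressed delimited text in a KiFS directory. The names are placeholders, and the "gzip" value string for compression_type is an assumption based on the option description above.

    using System.Collections.Generic;
    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");   // placeholder URL

    ExportRecordsToFilesResponse resp = db.exportRecordsToFiles(
        "ki_home.trips",
        "export/trips",                                     // assumed KiFS directory
        new Dictionary<string, string> {
            { "columns_to_export", "pickup_time,fare" },
            { "compression_type", "gzip" },                 // assumed value string
            { "single_file", "true" }
        }
    );
    // The names of all files written are returned in the response.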

ExportRecordsToTableResponse kinetica.Kinetica.exportRecordsToTable ( ExportRecordsToTableRequest  request_)
inline

Exports records from source table to the specified target table in an external database

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 14065 of file KineticaFunctions.cs.

ExportRecordsToTableResponse kinetica.Kinetica.exportRecordsToTable ( string  table_name,
string  remote_query = "",
IDictionary< string, string >  options = null 
)
inline

Exports records from source table to the specified target table in an external database

Parameters
table_nameName of the table from which the data will be exported to remote database, in [schema_name.]table_name format, using standard name resolution rules.
remote_queryParameterized insert query to export gpudb table data into remote database. The default value is ''.
optionsOptional parameters.
  • BATCH_SIZE: Batch size, which determines how many rows to export per round trip. The default value is '200000'.
  • DATASINK_NAME: Name of an existing external data sink to which the table specified in table_name will be exported
  • JDBC_SESSION_INIT_STATEMENT: Executes the statement once per JDBC session before doing the actual load. The default value is ''.
  • JDBC_CONNECTION_INIT_STATEMENT: Executes the statement once before doing the actual load. The default value is ''.
  • REMOTE_TABLE: Name of the target table to which source table is exported. When this option is specified remote_query cannot be specified. The default value is ''.
  • USE_ST_GEOMFROM_CASTS: Wraps parameterized variables with st_geomfromtext or st_geomfromwkb based on source column type. Supported values: The default value is FALSE.
  • USE_INDEXED_PARAMETERS: Uses $n style syntax when generating the insert query for the remote_table option. Supported values: The default value is TRUE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 14164 of file KineticaFunctions.cs.

FilterResponse kinetica.Kinetica.filter ( FilterRequest  request_)
inline

Filters data based on the specified expression.

The results are stored in a result set with the given view_name.
For details see Expressions.
The response message contains the number of points for which the expression evaluated to be true, which is equivalent to the size of the result view.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 14194 of file KineticaFunctions.cs.

FilterResponse kinetica.Kinetica.filter ( string  table_name,
string  view_name,
string  expression,
IDictionary< string, string >  options = null 
)
inline

Filters data based on the specified expression.

The results are stored in a result set with the given view_name.
For details see Expressions.
The response message contains the number of points for which the expression evaluated to be true, which is equivalent to the size of the result view.

Parameters
table_nameName of the table to filter, in [schema_name.]table_name format, using standard name resolution rules. This may be the name of a table or a view (when chaining queries).
view_nameIf provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
expressionThe select expression to filter the specified table. For details see Expressions.
optionsOptional parameters.
  • CREATE_TEMP_TABLE: If true, a unique temporary table name will be generated in the sys_temp schema and used in place of view_name. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in qualified_view_name. Supported values: The default value is FALSE.
  • COLLECTION_NAME: [DEPRECATED: please specify the containing schema for the view as part of view_name and use /create/schema to create the schema if non-existent] Name of a schema for the newly created view. If the schema is non-existent, it will be automatically created.
  • VIEW_ID: ID of the view this filtered view is part of. The default value is ''.
  • TTL: Sets the TTL of the view specified in view_name.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 14286 of file KineticaFunctions.cs.
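
Example (hypothetical): creating a filtered view of one table; the table, view, and column names are placeholders.

    using kinetica;

    Kinetica db = new Kinetica("http://localhost:9191");   // placeholder URL

    FilterResponse resp = db.filter(
        "ki_home.trips",
        "ki_home.trips_expensive",                          // view to create
        "(fare > 100) AND (trip_distance < 2)"              // filter expression
    );
    // The response reports how many records satisfied the expression.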

FilterByAreaResponse kinetica.Kinetica.filterByArea ( FilterByAreaRequest  request_)
inline

Calculates which objects from a table are within a named area of interest (NAI/polygon).

The operation is synchronous, meaning that a response will not be returned until all the matching objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input NAI restriction specification is created with the name view_name passed in as part of the input.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 14314 of file KineticaFunctions.cs.

FilterByAreaResponse kinetica.Kinetica.filterByArea ( string  table_name,
string  view_name,
string  x_column_name,
IList< double >  x_vector,
string  y_column_name,
IList< double >  y_vector,
IDictionary< string, string >  options = null 
)
inline

Calculates which objects from a table are within a named area of interest (NAI/polygon).

The operation is synchronous, meaning that a response will not be returned until all the matching objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input NAI restriction specification is created with the name view_name passed in as part of the input.

Parameters
table_name: Name of the table to filter, in [schema_name.]table_name format, using standard name resolution rules. This may be the name of a table or a view (when chaining queries).
view_name: If provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
x_column_name: Name of the column containing the x values to be filtered.
x_vector: List of x coordinates of the vertices of the polygon representing the area to be filtered.
y_column_name: Name of the column containing the y values to be filtered.
y_vector: List of y coordinates of the vertices of the polygon representing the area to be filtered.
options: Optional parameters.
  • CREATE_TEMP_TABLE: If true, a unique temporary table name will be generated in the sys_temp schema and used in place of view_name. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in qualified_view_name. Supported values: TRUE, FALSE. The default value is FALSE.
  • COLLECTION_NAME: [DEPRECATED–please specify the containing schema for the view as part of view_name and use /create/schema to create the schema if non-existent] Name of a schema for the newly created view. If the schema provided is non-existent, it will be automatically created.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 14397 of file KineticaFunctions.cs.
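
A minimal sketch of a polygon filter, reusing the db connection from the filter sketch above; the triangle vertices and names are hypothetical. The x and y vectors are paired index-by-index to form the polygon's vertices, so they must be the same length.

    using System.Collections.Generic;

    // Vertices of a triangular area of interest (hypothetical coordinates).
    IList<double> xVerts = new List<double> { -74.02, -73.96, -74.00 };
    IList<double> yVerts = new List<double> { 40.70, 40.75, 40.77 };

    FilterByAreaResponse areaResp = db.filterByArea(
        "example.taxi_trips", "example.trips_in_area",
        "pickup_longitude", xVerts,
        "pickup_latitude", yVerts);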

FilterByAreaGeometryResponse kinetica.Kinetica.filterByAreaGeometry ( FilterByAreaGeometryRequest  request_)
inline

Calculates which geospatial geometry objects from a table intersect a named area of interest (NAI/polygon).

The operation is synchronous, meaning that a response will not be returned until all the matching objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input NAI restriction specification is created with the name view_name passed in as part of the input.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 14431 of file KineticaFunctions.cs.

FilterByAreaGeometryResponse kinetica.Kinetica.filterByAreaGeometry ( string  table_name,
string  view_name,
string  column_name,
IList< double >  x_vector,
IList< double >  y_vector,
IDictionary< string, string >  options = null 
)
inline

Calculates which geospatial geometry objects from a table intersect a named area of interest (NAI/polygon).

The operation is synchronous, meaning that a response will not be returned until all the matching objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input NAI restriction specification is created with the name view_name passed in as part of the input.

Parameters
table_name: Name of the table to filter, in [schema_name.]table_name format, using standard name resolution rules. This may be the name of a table or a view (when chaining queries).
view_name: If provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
column_name: Name of the geospatial geometry column to be filtered.
x_vector: List of x coordinates of the vertices of the polygon representing the area to be filtered.
y_vector: List of y coordinates of the vertices of the polygon representing the area to be filtered.
options: Optional parameters.
  • CREATE_TEMP_TABLE: If true, a unique temporary table name will be generated in the sys_temp schema and used in place of view_name. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in qualified_view_name. Supported values: TRUE, FALSE. The default value is FALSE.
  • COLLECTION_NAME: [DEPRECATED–please specify the containing schema for the view as part of view_name and use /create/schema to create the schema if non-existent] The schema for the newly created view. If the schema is non-existent, it will be automatically created.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 14512 of file KineticaFunctions.cs.

FilterByBoxResponse kinetica.Kinetica.filterByBox ( FilterByBoxRequest  request_)
inline

Calculates how many objects within the given table lie in a rectangular box.

The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set which satisfies the input NAI restriction specification is also created when a view_name is passed in as part of the input payload.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 14546 of file KineticaFunctions.cs.

FilterByBoxResponse kinetica.Kinetica.filterByBox ( string  table_name,
string  view_name,
string  x_column_name,
double  min_x,
double  max_x,
string  y_column_name,
double  min_y,
double  max_y,
IDictionary< string, string >  options = null 
)
inline

Calculates how many objects within the given table lie in a rectangular box.

The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set which satisfies the input NAI restriction specification is also created when a view_name is passed in as part of the input payload.

Parameters
table_name: Name of the table on which the bounding box operation will be performed, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table.
view_name: If provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
x_column_name: Name of the column on which to perform the bounding box query. Must be a valid numeric column.
min_x: Lower bound for the column chosen by x_column_name. Must be less than or equal to max_x.
max_x: Upper bound for x_column_name. Must be greater than or equal to min_x.
y_column_name: Name of a column on which to perform the bounding box query. Must be a valid numeric column.
min_y: Lower bound for y_column_name. Must be less than or equal to max_y.
max_y: Upper bound for y_column_name. Must be greater than or equal to min_y.
options: Optional parameters.
  • CREATE_TEMP_TABLE: If true, a unique temporary table name will be generated in the sys_temp schema and used in place of view_name. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in qualified_view_name. Supported values: TRUE, FALSE. The default value is FALSE.
  • COLLECTION_NAME: [DEPRECATED–please specify the containing schema for the view as part of view_name and use /create/schema to create the schema if non-existent] Name of a schema for the newly created view. If the schema is non-existent, it will be automatically created.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 14637 of file KineticaFunctions.cs.
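
A minimal sketch of a bounding-box filter, reusing the db connection from the filter sketch above; bounds and names are hypothetical. Note the argument order: the x bounds follow the x column and the y bounds follow the y column.

    FilterByBoxResponse boxResp = db.filterByBox(
        "example.taxi_trips", "example.trips_in_box",
        "pickup_longitude", -74.02, -73.96,  // x column, min_x, max_x
        "pickup_latitude", 40.70, 40.77);    // y column, min_y, max_y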

FilterByBoxGeometryResponse kinetica.Kinetica.filterByBoxGeometry ( FilterByBoxGeometryRequest  request_)
inline

Calculates which geospatial geometry objects from a table intersect a rectangular box.

The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set which satisfies the input NAI restriction specification is also created when a view_name is passed in as part of the input payload.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 14672 of file KineticaFunctions.cs.

FilterByBoxGeometryResponse kinetica.Kinetica.filterByBoxGeometry ( string  table_name,
string  view_name,
string  column_name,
double  min_x,
double  max_x,
double  min_y,
double  max_y,
IDictionary< string, string >  options = null 
)
inline

Calculates which geospatial geometry objects from a table intersect a rectangular box.

The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set which satisfies the input NAI restriction specification is also created when a view_name is passed in as part of the input payload.

Parameters
table_name: Name of the table on which the bounding box operation will be performed, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table.
view_name: If provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
column_name: Name of the geospatial geometry column to be filtered.
min_x: Lower bound for the x-coordinate of the rectangular box. Must be less than or equal to max_x.
max_x: Upper bound for the x-coordinate of the rectangular box. Must be greater than or equal to min_x.
min_y: Lower bound for the y-coordinate of the rectangular box. Must be less than or equal to max_y.
max_y: Upper bound for the y-coordinate of the rectangular box. Must be greater than or equal to min_y.
options: Optional parameters.
  • CREATE_TEMP_TABLE: If true, a unique temporary table name will be generated in the sys_temp schema and used in place of view_name. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in qualified_view_name. Supported values: TRUE, FALSE. The default value is FALSE.
  • COLLECTION_NAME: [DEPRECATED–please specify the containing schema for the view as part of view_name and use /create/schema to create the schema if non-existent] Name of a schema for the newly created view. If the schema provided is non-existent, it will be automatically created.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 14761 of file KineticaFunctions.cs.

FilterByGeometryResponse kinetica.Kinetica.filterByGeometry ( FilterByGeometryRequest  request_)
inline

Applies a geometry filter against a geospatial geometry column in a given table or view.

The filtering geometry is provided by input_wkt.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 14790 of file KineticaFunctions.cs.

FilterByGeometryResponse kinetica.Kinetica.filterByGeometry ( string  table_name,
string  view_name,
string  column_name,
string  input_wkt,
string  operation,
IDictionary< string, string >  options = null 
)
inline

Applies a geometry filter against a geospatial geometry column in a given table or view.

The filtering geometry is provided by input_wkt.

Parameters
table_name: Name of the table on which the filter by geometry will be performed, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table or view containing a geospatial geometry column.
view_name: If provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
column_name: Name of the column to be used in the filter. Must be a geospatial geometry column.
input_wkt: A geometry in WKT format that will be used to filter the objects in table_name. The default value is ''.
operation: The geometric filtering operation to perform. Supported values:
  • CONTAINS: Matches records that contain the given WKT in input_wkt, i.e. the given WKT is within the bounds of a record's geometry.
  • CROSSES: Matches records that cross the given WKT.
  • DISJOINT: Matches records that are disjoint from the given WKT.
  • EQUALS: Matches records that are the same as the given WKT.
  • INTERSECTS: Matches records that intersect the given WKT.
  • OVERLAPS: Matches records that overlap the given WKT.
  • TOUCHES: Matches records that touch the given WKT.
  • WITHIN: Matches records that are within the given WKT.
options: Optional parameters.
  • CREATE_TEMP_TABLE: If true, a unique temporary table name will be generated in the sys_temp schema and used in place of view_name. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in qualified_view_name. Supported values: TRUE, FALSE. The default value is FALSE.
  • COLLECTION_NAME: [DEPRECATED–please specify the containing schema for the view as part of view_name and use /create/schema to create the schema if non-existent] Name of a schema for the newly created view. If the schema provided is non-existent, it will be automatically created.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 14919 of file KineticaFunctions.cs.
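
A minimal sketch of filtering a geometry column against a WKT polygon, reusing the db connection from the filter sketch above; the table, view, column, and WKT literal are hypothetical, and the operation string corresponds to the INTERSECTS value listed above.

    string wkt =
        "POLYGON((-74.02 40.70, -73.96 40.75, -74.00 40.77, -74.02 40.70))";

    FilterByGeometryResponse geomResp = db.filterByGeometry(
        "example.shapes", "example.shapes_hit",
        "geom",         // geospatial geometry column
        wkt,            // filtering geometry in WKT
        "intersects");  // geometric filtering operation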

FilterByListResponse kinetica.Kinetica.filterByList ( FilterByListRequest  request_)
inline

Calculates which records from a table have values in the given list for the corresponding column.

The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input filter specification is also created if a view_name is passed in as part of the request.
For example, if a type definition has the columns 'x' and 'y', then a filter by list query with the column map {"x":["10.1", "2.3"], "y":["0.0", "-31.5", "42.0"]} will return the count of all data points whose x and y values match both in the respective x- and y-lists, e.g., "x = 10.1 and y = 0.0", "x = 2.3 and y = -31.5", etc. However, a record with "x = 10.1 and y = -31.5" or "x = 2.3 and y = 0.0" would not be returned because the values in the given lists do not correspond.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 14964 of file KineticaFunctions.cs.

FilterByListResponse kinetica.Kinetica.filterByList ( string  table_name,
string  view_name,
IDictionary< string, IList< string >>  column_values_map,
IDictionary< string, string >  options = null 
)
inline

Calculates which records from a table have values in the given list for the corresponding column.

The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input filter specification is also created if a view_name is passed in as part of the request.
For example, if a type definition has the columns 'x' and 'y', then a filter by list query with the column map {"x":["10.1", "2.3"], "y":["0.0", "-31.5", "42.0"]} will return the count of all data points whose x and y values match both in the respective x- and y-lists, e.g., "x = 10.1 and y = 0.0", "x = 2.3 and y = -31.5", etc. However, a record with "x = 10.1 and y = -31.5" or "x = 2.3 and y = 0.0" would not be returned because the values in the given lists do not correspond.

Parameters
table_name: Name of the table to filter, in [schema_name.]table_name format, using standard name resolution rules. This may be the name of a table or a view (when chaining queries).
view_name: If provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
column_values_map: List of values for the corresponding column in the table.
options: Optional parameters.
  • CREATE_TEMP_TABLE: If true, a unique temporary table name will be generated in the sys_temp schema and used in place of view_name. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in qualified_view_name. Supported values: TRUE, FALSE. The default value is FALSE.
  • COLLECTION_NAME: [DEPRECATED–please specify the containing schema for the view as part of view_name and use /create/schema to create the schema if non-existent] Name of a schema for the newly created view. If the schema provided is non-existent, it will be automatically created.
  • FILTER_MODE: String indicating the filter mode, either 'in_list' or 'not_in_list'. Supported values:
    • IN_LIST: The filter will match all items that are in the provided list(s).
    • NOT_IN_LIST: The filter will match all items that are not in the provided list(s).
    The default value is IN_LIST.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 15076 of file KineticaFunctions.cs.
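
A minimal sketch of building the column map from the x/y example above, reusing the db connection from the filter sketch above; the table and view names are hypothetical.

    using System.Collections.Generic;

    IDictionary<string, IList<string>> columnValues =
        new Dictionary<string, IList<string>>
        {
            ["x"] = new List<string> { "10.1", "2.3" },
            ["y"] = new List<string> { "0.0", "-31.5", "42.0" }
        };

    FilterByListResponse listResp = db.filterByList(
        "example.points", "example.points_matched", columnValues);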

FilterByRadiusResponse kinetica.Kinetica.filterByRadius ( FilterByRadiusRequest  request_)
inline

Calculates which objects from a table lie within a circle with the given radius and center point (i.e. circular NAI).

The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input circular NAI restriction specification is also created if a view_name is passed in as part of the request.
For track data, all track points that lie within the circle plus one point on either side of the circle (if the track goes beyond the circle) will be included in the result.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 15112 of file KineticaFunctions.cs.

FilterByRadiusResponse kinetica.Kinetica.filterByRadius ( string  table_name,
string  view_name,
string  x_column_name,
double  x_center,
string  y_column_name,
double  y_center,
double  radius,
IDictionary< string, string >  options = null 
)
inline

Calculates which objects from a table lie within a circle with the given radius and center point (i.e. circular NAI).

The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input circular NAI restriction specification is also created if a view_name is passed in as part of the request.
For track data, all track points that lie within the circle plus one point on either side of the circle (if the track goes beyond the circle) will be included in the result.

Parameters
table_name: Name of the table on which the filter by radius operation will be performed, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table.
view_name: If provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
x_column_name: Name of the column to be used for the x-coordinate (the longitude) of the center.
x_center: Value of the longitude of the center. Must be within [-180.0, 180.0]. The minimum allowed value is -180. The maximum allowed value is 180.
y_column_name: Name of the column to be used for the y-coordinate (the latitude) of the center.
y_center: Value of the latitude of the center. Must be within [-90.0, 90.0]. The minimum allowed value is -90. The maximum allowed value is 90.
radius: The radius of the circle within which the search will be performed. Must be a non-zero positive value. It is in meters; so, for example, a value of '42000' means 42 km. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
options: Optional parameters.
  • CREATE_TEMP_TABLE: If true, a unique temporary table name will be generated in the sys_temp schema and used in place of view_name. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in qualified_view_name. Supported values: TRUE, FALSE. The default value is FALSE.
  • COLLECTION_NAME: [DEPRECATED–please specify the containing schema for the view as part of view_name and use /create/schema to create the schema if non-existent] Name of a schema which is to contain the newly created view. If the schema is non-existent, it will be automatically created.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 15210 of file KineticaFunctions.cs.
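
A minimal sketch of a circular (radius) filter, reusing the db connection from the filter sketch above; names and coordinates are hypothetical. The radius is expressed in meters, as noted above.

    FilterByRadiusResponse radiusResp = db.filterByRadius(
        "example.taxi_trips", "example.trips_near_point",
        "pickup_longitude", -73.98,  // x column and center longitude
        "pickup_latitude", 40.75,    // y column and center latitude
        1000.0);                     // radius in meters (1 km)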

FilterByRadiusGeometryResponse kinetica.Kinetica.filterByRadiusGeometry ( FilterByRadiusGeometryRequest  request_)
inline

Calculates which geospatial geometry objects from a table intersect a circle with the given radius and center point (i.e. circular NAI).

The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input circular NAI restriction specification is also created if a view_name is passed in as part of the request.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 15246 of file KineticaFunctions.cs.

FilterByRadiusGeometryResponse kinetica.Kinetica.filterByRadiusGeometry ( string  table_name,
string  view_name,
string  column_name,
double  x_center,
double  y_center,
double  radius,
IDictionary< string, string >  options = null 
)
inline

Calculates which geospatial geometry objects from a table intersect a circle with the given radius and center point (i.e. circular NAI).

The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input circular NAI restriction specification is also created if a view_name is passed in as part of the request.

Parameters
table_name: Name of the table on which the filter by radius operation will be performed, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table.
view_name: If provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
column_name: Name of the geospatial geometry column to be filtered.
x_center: Value of the longitude of the center. Must be within [-180.0, 180.0]. The minimum allowed value is -180. The maximum allowed value is 180.
y_center: Value of the latitude of the center. Must be within [-90.0, 90.0]. The minimum allowed value is -90. The maximum allowed value is 90.
radius: The radius of the circle within which the search will be performed. Must be a non-zero positive value. It is in meters; so, for example, a value of '42000' means 42 km. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
options: Optional parameters.
  • CREATE_TEMP_TABLE: If true, a unique temporary table name will be generated in the sys_temp schema and used in place of view_name. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in qualified_view_name. Supported values: TRUE, FALSE. The default value is FALSE.
  • COLLECTION_NAME: [DEPRECATED–please specify the containing schema for the view as part of view_name and use /create/schema to create the schema if non-existent] Name of a schema for the newly created view. If the schema provided is non-existent, it will be automatically created.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 15336 of file KineticaFunctions.cs.

FilterByRangeResponse kinetica.Kinetica.filterByRange ( FilterByRangeRequest  request_)
inline

Calculates which objects from a table have a column that is within the given bounds.

An object from the table identified by table_name is added to the view view_name if its column is within [lower_bound, upper_bound] (inclusive). The operation is synchronous. The response provides a count of the number of objects which passed the bound filter. Although this functionality can also be accomplished with the standard filter function, it is more efficient.
For track objects, the count reflects how many points fall within the given bounds (which may not include all the track points of any given track).

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 15380 of file KineticaFunctions.cs.

FilterByRangeResponse kinetica.Kinetica.filterByRange ( string  table_name,
string  view_name,
string  column_name,
double  lower_bound,
double  upper_bound,
IDictionary< string, string >  options = null 
)
inline

Calculates which objects from a table have a column that is within the given bounds.

An object from the table identified by table_name is added to the view view_name if its column is within [lower_bound, upper_bound] (inclusive). The operation is synchronous. The response provides a count of the number of objects which passed the bound filter. Although this functionality can also be accomplished with the standard filter function, it is more efficient.
For track objects, the count reflects how many points fall within the given bounds (which may not include all the track points of any given track).

Parameters
table_nameName of the table on which the filter by range operation will be performed, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table.
view_nameIf provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
column_nameName of a column on which the operation would be applied.
lower_boundValue of the lower bound (inclusive).
upper_boundValue of the upper bound (inclusive).
optionsOptional parameters.
  • CREATE_TEMP_TABLE: If true, a unique temporary table name will be generated in the sys_temp schema and used in place of . This is always allowed even if the caller does not have permission to create tables. The generated name is returned in qualified_view_name. Supported values: The default value is FALSE.
  • COLLECTION_NAME: [DEPRECATED–please specify the containing schema for the view as part of and use /create/schema to create the schema if non-existent] Name of a schema for the newly created view. If the schema is non-existent, it will be automatically created.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 15468 of file KineticaFunctions.cs.
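
A minimal sketch of a range filter, reusing the db connection from the filter sketch above; names and bounds are hypothetical. Both bounds are inclusive.

    FilterByRangeResponse rangeResp = db.filterByRange(
        "example.taxi_trips", "example.trips_mid_fare",
        "fare_amount",  // column to bound
        10.0,           // lower_bound (inclusive)
        20.0);          // upper_bound (inclusive)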

FilterBySeriesResponse kinetica.Kinetica.filterBySeries ( FilterBySeriesRequest  request_)
inline

Filters objects matching all points of the given track (works only on track type data).

It allows users to specify a particular track to find all other points in the table that fall within specified ranges (spatial and temporal) of all points of the given track. Additionally, the user can specify another track to see if the two intersect (or go close to each other within the specified ranges). The user also has the flexibility of using different metrics for the spatial distance calculation: Euclidean (flat geometry) or Great Circle (spherical geometry to approximate the Earth's surface distances). The filtered points are stored in a newly created result set. The return value of the function is the number of points in the resultant set (view).
This operation is synchronous, meaning that a response will not be returned until all the objects are fully available.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 15511 of file KineticaFunctions.cs.

FilterBySeriesResponse kinetica.Kinetica.filterBySeries ( string  table_name,
string  view_name,
string  track_id,
IList< string >  target_track_ids,
IDictionary< string, string >  options = null 
)
inline

Filters objects matching all points of the given track (works only on track type data).

It allows users to specify a particular track to find all other points in the table that fall within specified ranges (spatial and temporal) of all points of the given track. Additionally, the user can specify another track to see if the two intersect (or go close to each other within the specified ranges). The user also has the flexibility of using different metrics for the spatial distance calculation: Euclidean (flat geometry) or Great Circle (spherical geometry to approximate the Earth's surface distances). The filtered points are stored in a newly created result set. The return value of the function is the number of points in the resultant set (view).
This operation is synchronous, meaning that a response will not be returned until all the objects are fully available.

Parameters
table_nameName of the table on which the filter by track operation will be performed, in [schema_name.]table_name format, using standard name resolution rules. Must be a currently existing table with a track present.
view_nameIf provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
track_idThe ID of the track which will act as the filtering points. Must be an existing track within the given table.
target_track_idsUp to one track ID to intersect with the "filter" track. If any provided, it must be an valid track ID within the given set.
optionsOptional parameters.
  • CREATE_TEMP_TABLE: If true, a unique temporary table name will be generated in the sys_temp schema and used in place of . This is always allowed even if the caller does not have permission to create tables. The generated name is returned in qualified_view_name. Supported values: The default value is FALSE.
  • COLLECTION_NAME: [DEPRECATED–please specify the containing schema for the view as part of and use /create/schema to create the schema if non-existent] Name of a schema for the newly created view. If the schema is non-existent, it will be automatically created.
  • SPATIAL_RADIUS: A positive number passed as a string representing the radius of the search area centered around each track point's geospatial coordinates. The value is interpreted in meters. Required parameter.
  • TIME_RADIUS: A positive number passed as a string representing the maximum allowable time difference between the timestamps of a filtered object and the given track's points. The value is interpreted in seconds. Required parameter.
  • SPATIAL_DISTANCE_METRIC: A string representing the coordinate system to use for the spatial search criteria. Acceptable values are 'euclidean' and 'great_circle'. Optional parameter; default is 'euclidean'. Supported values:
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 15640 of file KineticaFunctions.cs.
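
A minimal sketch of a track-proximity filter, reusing the db connection from the filter sketch above; the table, view, and track ID are hypothetical. The spatial_radius and time_radius options are required, and all option values are passed as strings.

    using System.Collections.Generic;

    IDictionary<string, string> seriesOpts = new Dictionary<string, string>
    {
        ["spatial_radius"] = "100",  // meters around each track point (required)
        ["time_radius"] = "60",      // seconds around each point's timestamp (required)
        ["spatial_distance_metric"] = "great_circle"
    };

    FilterBySeriesResponse seriesResp = db.filterBySeries(
        "example.vehicle_tracks", "example.near_track_42",
        "track_42",          // ID of the filtering track
        new List<string>(),  // no target track to intersect
        seriesOpts);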

FilterByStringResponse kinetica.Kinetica.filterByString ( FilterByStringRequest  request_)
inline

Calculates which objects from a table or view match a string expression for the given string columns.

Setting case_sensitive can modify case sensitivity in matching for all modes except search. For search mode details and limitations, see Full Text Search.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 15667 of file KineticaFunctions.cs.

FilterByStringResponse kinetica.Kinetica.filterByString ( string  table_name,
string  view_name,
string  expression,
string  mode,
IList< string >  column_names,
IDictionary< string, string >  options = null 
)
inline

Calculates which objects from a table or view match a string expression for the given string columns.

Setting case_sensitive can modify case sensitivity in matching for all modes except search. For search mode details and limitations, see Full Text Search.

Parameters
table_nameName of the table on which the filter operation will be performed, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table or view.
view_nameIf provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
expressionThe expression with which to filter the table.
modeThe string filtering mode to apply. See below for details. Supported values:
  • SEARCH: Full text search query with wildcards and boolean operators. Note that for this mode, no column can be specified in ; all string columns of the table that have text search enabled will be searched.
  • EQUALS: Exact whole-string match (accelerated).
  • CONTAINS: Partial substring match (not accelerated). If the column is a string type (non-charN) and the number of records is too large, it will return 0.
  • STARTS_WITH: Strings that start with the given expression (not accelerated). If the column is a string type (non-charN) and the number of records is too large, it will return 0.
  • REGEX: Full regular expression search (not accelerated). If the column is a string type (non-charN) and the number of records is too large, it will return 0.
column_namesList of columns on which to apply the filter. Ignored for search mode.
optionsOptional parameters.
  • CREATE_TEMP_TABLE: If true, a unique temporary table name will be generated in the sys_temp schema and used in place of . This is always allowed even if the caller does not have permission to create tables. The generated name is returned in qualified_view_name. Supported values: The default value is FALSE.
  • COLLECTION_NAME: [DEPRECATED–please specify the containing schema for the view as part of and use /create/schema to create the schema if non-existent] Name of a schema for the newly created view. If the schema is non-existent, it will be automatically created.
  • CASE_SENSITIVE: If false then string filtering will ignore case. Does not apply to search mode. Supported values: The default value is TRUE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 15803 of file KineticaFunctions.cs.
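
A minimal sketch of a contains-mode string filter, reusing the db connection from the filter sketch above; names are hypothetical. A case-insensitive match is requested through the case_sensitive option.

    using System.Collections.Generic;

    FilterByStringResponse strResp = db.filterByString(
        "example.taxi_trips", "example.trips_jfk",
        "JFK",                                // expression to match
        "contains",                           // string filtering mode
        new List<string> { "dropoff_zone" },  // columns to search
        new Dictionary<string, string> { ["case_sensitive"] = "false" });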

FilterByTableResponse kinetica.Kinetica.filterByTable ( FilterByTableRequest  request_)
inline

Filters objects in one table based on objects in another table.

The user must specify matching column types from the two tables (i.e. the target table from which objects will be filtered and the source table based on which the filter will be created); the column names need not be the same. If a view_name is specified, then the filtered objects will then be put in a newly created view. The operation is synchronous, meaning that a response will not be returned until all objects are fully available in the result view. The return value contains the count (i.e. the size) of the resulting view.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 15839 of file KineticaFunctions.cs.

FilterByTableResponse kinetica.Kinetica.filterByTable ( string  table_name,
string  view_name,
string  column_name,
string  source_table_name,
string  source_table_column_name,
IDictionary< string, string >  options = null 
)
inline

Filters objects in one table based on objects in another table.

The user must specify matching column types from the two tables (i.e. the target table from which objects will be filtered and the source table based on which the filter will be created); the column names need not be the same. If a view_name is specified, then the filtered objects will then be put in a newly created view. The operation is synchronous, meaning that a response will not be returned until all objects are fully available in the result view. The return value contains the count (i.e. the size) of the resulting view.

Parameters
table_nameName of the table whose data will be filtered, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table.
view_nameIf provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
column_nameName of the column by whose value the data will be filtered from the table designated by .
source_table_nameName of the table whose data will be compared against in the table called , in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table.
source_table_column_nameName of the column in the whose values will be used as the filter for table . Must be a geospatial geometry column if in 'spatial' mode; otherwise, Must match the type of the .
optionsOptional parameters.
  • CREATE_TEMP_TABLE: If true, a unique temporary table name will be generated in the sys_temp schema and used in place of . This is always allowed even if the caller does not have permission to create tables. The generated name is returned in qualified_view_name. Supported values: The default value is FALSE.
  • COLLECTION_NAME: [DEPRECATED–please specify the containing schema for the view as part of and use /create/schema to create the schema if non-existent] Name of a schema for the newly created view. If the schema is non-existent, it will be automatically created.
  • FILTER_MODE: String indicating the filter mode, either in_table or not_in_table. Supported values: The default value is IN_TABLE.
  • MODE: Mode - should be either spatial or normal. Supported values: The default value is NORMAL.
  • BUFFER: Buffer size, in meters. Only relevant for spatial mode. The default value is '0'.
  • BUFFER_METHOD: Method used to buffer polygons. Only relevant for spatial mode. Supported values:
    • NORMAL
    • GEOS: Use geos 1 edge per corner algorithm
    The default value is NORMAL.
  • MAX_PARTITION_SIZE: Maximum number of points in a partition. Only relevant for spatial mode. The default value is '0'.
  • MAX_PARTITION_SCORE: Maximum number of points * edges in a partition. Only relevant for spatial mode. The default value is '8000000'.
  • X_COLUMN_NAME: Name of column containing x value of point being filtered in spatial mode. The default value is 'x'.
  • Y_COLUMN_NAME: Name of column containing y value of point being filtered in spatial mode. The default value is 'y'.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 16028 of file KineticaFunctions.cs.
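
A minimal sketch of filtering one table by another's values, reusing the db connection from the filter sketch above; all names are hypothetical. Records of the target table whose column value appears in the source column are kept (the default in_table mode).

    FilterByTableResponse tblResp = db.filterByTable(
        "example.taxi_trips", "example.trips_known_vendors",
        "vendor_id",        // column in the target table
        "example.vendors",  // source table providing the filter values
        "id");              // column in the source table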

FilterByValueResponse kinetica.Kinetica.filterByValue ( FilterByValueRequest  request_)
inline

Calculates which objects from a table have a particular value for a particular column.

The input parameters provide a way to specify either a String or a Double valued column and a desired value for the column on which the filter is performed. The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new result view which satisfies the input filter restriction specification is also created with a view name passed in as part of the input payload. Although this functionality can also be accomplished with the standard filter function, it is more efficient.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 16068 of file KineticaFunctions.cs.

FilterByValueResponse kinetica.Kinetica.filterByValue ( string  table_name,
string  view_name,
bool  is_string,
double  _value,
string  value_str,
string  column_name,
IDictionary< string, string >  options = null 
)
inline

Calculates which objects from a table have a particular value for a particular column.

The input parameters provide a way to specify either a String or a Double valued column and a desired value for the column on which the filter is performed. The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new result view which satisfies the input filter restriction specification is also created with a view name passed in as part of the input payload. Although this functionality can also be accomplished with the standard filter function, it is more efficient.

Parameters
table_nameName of an existing table on which to perform the calculation, in [schema_name.]table_name format, using standard name resolution rules.
view_nameIf provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
is_stringIndicates whether the value being searched for is string or numeric.
_valueThe value to search for. The default value is 0.
value_strThe string value to search for. The default value is ''.
column_nameName of a column on which the filter by value would be applied.
optionsOptional parameters.
  • CREATE_TEMP_TABLE: If true, a unique temporary table name will be generated in the sys_temp schema and used in place of . This is always allowed even if the caller does not have permission to create tables. The generated name is returned in qualified_view_name. Supported values: The default value is FALSE.
  • COLLECTION_NAME: [DEPRECATED–please specify the containing schema for the view as part of and use /create/schema to create the schema if non-existent] Name of a schema for the newly created view. If the schema is non-existent, it will be automatically created.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 16156 of file KineticaFunctions.cs.
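
A minimal sketch of both the numeric and string forms, reusing the db connection from the filter sketch above; names and values are hypothetical. When is_string is true, value_str is used and _value is ignored, and vice versa.

    // Numeric match: fare_amount == 42.0
    FilterByValueResponse numResp = db.filterByValue(
        "example.taxi_trips", "example.trips_fare_42",
        false, 42.0, "", "fare_amount");

    // String match: vendor_id == 'CMT'
    FilterByValueResponse strValResp = db.filterByValue(
        "example.taxi_trips", "example.trips_cmt",
        true, 0, "CMT", "vendor_id");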

static string kinetica.Kinetica.GetApiVersion ( )
inlinestatic

API Version

Returns
Version String for API

Definition at line 77 of file Kinetica.cs.

GetJobResponse kinetica.Kinetica.getJob ( GetJobRequest  request_)
inline

Get the status and result of an asynchronously running job.

See Kinetica.createJob(string,string,byte[],string,IDictionary{string, string}) for starting an asynchronous job. Some fields of the response are filled only after the submitted job has finished execution.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 16183 of file KineticaFunctions.cs.

GetJobResponse kinetica.Kinetica.getJob ( long  job_id,
IDictionary< string, string >  options = null 
)
inline

Get the status and result of an asynchronously running job.

See Kinetica.createJob(string,string,byte[],string,IDictionary{string, string}) for starting an asynchronous job. Some fields of the response are filled only after the submitted job has finished execution.

Parameters
job_id: A unique identifier for the job whose status and result is to be fetched.
options: Optional parameters.
  • JOB_TAG: Job tag returned in the call to create the job.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 16214 of file KineticaFunctions.cs.
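
A minimal sketch of polling a job, reusing the db connection from the filter sketch above; the job ID is hypothetical and would normally come from an earlier createJob call.

    long jobId = 12345;  // hypothetical ID returned by createJob
    GetJobResponse jobResp = db.getJob(jobId);
    // Inspect jobResp for the job's status; per the note above, some
    // result fields are filled only after the job has finished execution.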

GetRecordsResponse<T> kinetica.Kinetica.getRecords< T > ( GetRecordsRequest  request_)
inline

Retrieves records from a given table, optionally filtered by an expression and/or sorted by a column.

This operation can be performed on tables and views. Records can be returned encoded as binary, json, or geojson.
This operation supports paging through the data via the offset and limit parameters. Note that when paging through a table, if the table (or the underlying table in case of a view) is updated (records are inserted, deleted or modified) the records retrieved may differ between calls based on the updates applied.

Template Parameters
T: The type of object being retrieved.
Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.
Type Constraints
T :new() 

Definition at line 16246 of file KineticaFunctions.cs.

GetRecordsResponse<T> kinetica.Kinetica.getRecords< T > ( string  table_name,
long  offset = 0,
long  limit = -9999,
IDictionary< string, string >  options = null 
)
inline

Retrieves records from a given table, optionally filtered by an expression and/or sorted by a column.

This operation can be performed on tables and views. Records can be returned encoded as binary, json, or geojson.
This operation supports paging through the data via the offset and limit parameters. Note that when paging through a table, if the table (or the underlying table in case of a view) is updated (records are inserted, deleted or modified) the records retrieved may differ between calls based on the updates applied.

Template Parameters
T: The type of object being retrieved.
Parameters
table_name: Name of the table or view from which the records will be fetched, in [schema_name.]table_name format, using standard name resolution rules.
offset: A positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
limit: A positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results allowed by the server should be returned. The number of records returned will never exceed the server's own limit, defined by the max_get_records_size parameter in the server configuration. Use has_more_records to see if more records exist in the result to be fetched, and offset & limit to request subsequent pages of results. The default value is -9999.
options: Optional parameters.
  • EXPRESSION: Optional filter expression to apply to the table.
  • FAST_INDEX_LOOKUP: Indicates if indexes should be used to perform the lookup for a given expression if possible. Only applicable if there is no sorting, the expression contains only equivalence comparisons based on existing table indexes and the range of requested values is from [0 to END_OF_SET]. Supported values: TRUE, FALSE. The default value is TRUE.
  • SORT_BY: Optional column that the data should be sorted by. Empty by default (i.e. no sorting is applied).
  • SORT_ORDER: String indicating how the returned values should be sorted - ascending or descending. If sort_order is provided, sort_by has to be provided. Supported values: ASCENDING, DESCENDING. The default value is ASCENDING.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.
Type Constraints
T :new() 

Definition at line 16366 of file KineticaFunctions.cs.
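
A minimal sketch of typed record retrieval, reusing the db connection from the filter sketch above; the record class, table, and columns are hypothetical, and registering the type via AddTableType before fetching is an assumption carried over from the insertion workflow. The response's data list is assumed to hold the decoded records.

    using System.Collections.Generic;

    // Hypothetical record type mirroring the table's columns.
    public class TaxiTrip
    {
        public string vendor_id { get; set; }
        public double fare_amount { get; set; }
    }

    // After connecting (see the filter sketch):
    db.AddTableType("example.taxi_trips", typeof(TaxiTrip));
    GetRecordsResponse<TaxiTrip> recResp = db.getRecords<TaxiTrip>(
        "example.taxi_trips",
        0,    // offset: start of the result set
        100,  // limit: first page of 100 records
        new Dictionary<string, string> { ["sort_by"] = "fare_amount" });
    IList<TaxiTrip> trips = recResp.data;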

GetRecordsByColumnResponse kinetica.Kinetica.getRecordsByColumn ( GetRecordsByColumnRequest  request_)
inline

For a given table, retrieves the values from the requested column(s).

Maps of column name to the array of values as well as the column data type are returned. This endpoint supports pagination with the offset and limit parameters.
Window functions, which can perform operations like moving averages, are available through this endpoint as well as Kinetica.createProjection(string,string,IList{string},IDictionary{string, string}).
When using pagination, if the table (or the underlying table in the case of a view) is modified (records are inserted, updated, or deleted) during a call to the endpoint, the records or values retrieved may differ between calls based on the type of the update, e.g., the contiguity across pages cannot be relied upon.
If table_name is empty, selection is performed against a single-row virtual table. This can be useful in executing temporal (NOW()), identity (USER()), or constant-based functions (GEODIST(-77.11, 38.88, -71.06, 42.36)).
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 16421 of file KineticaFunctions.cs.

GetRecordsByColumnResponse kinetica.Kinetica.getRecordsByColumn ( string  table_name,
IList< string >  column_names,
long  offset = 0,
long  limit = -9999,
IDictionary< string, string >  options = null 
)
inline

For a given table, retrieves the values from the requested column(s).

Maps of column name to the array of values as well as the column data type are returned. This endpoint supports pagination with the offset and limit parameters.
Window functions, which can perform operations like moving averages, are available through this endpoint as well as Kinetica.createProjection(string,string,IList{string},IDictionary{string, string}).
When using pagination, if the table (or the underlying table in the case of a view) is modified (records are inserted, updated, or deleted) during a call to the endpoint, the records or values retrieved may differ between calls based on the type of the update, e.g., the contiguity across pages cannot be relied upon.
If table_name is empty, selection is performed against a single-row virtual table. This can be useful in executing temporal (NOW()), identity (USER()), or constant-based functions (GEODIST(-77.11, 38.88, -71.06, 42.36)).
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.

Parameters
table_nameName of the table or view on which this operation will be performed, in [schema_name.]table_name format, using standard name resolution rules. An empty table name retrieves one record from a single-row virtual table, where columns specified should be constants or constant expressions.
column_namesThe list of column values to retrieve.
offsetA positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0.The minimum allowed value is 0. The maximum allowed value is MAX_INT.
limitA positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results allowed by the server should be returned. The number of records returned will never exceed the server's own limit, defined by the max_get_records_size parameter in the server configuration. Use <member name="has_more_records"> to see if more records exist in the result to be fetched, and & to request subsequent pages of results. The default value is -9999.
options
  • EXPRESSION: Optional filter expression to apply to the table.
  • SORT_BY: Optional column that the data should be sorted by. Used in conjunction with sort_order. The order_by option can be used in lieu of sort_by / sort_order. The default value is ''.
  • SORT_ORDER: String indicating how the returned values should be sorted - ascending or descending. If sort_order is provided, sort_by has to be provided. Supported values: The default value is ASCENDING.
  • ORDER_BY: Comma-separated list of the columns to be sorted by as well as the sort direction, e.g., 'timestamp asc, x desc'. The default value is ''.
  • CONVERT_WKTS_TO_WKBS: If true, then WKT string columns will be returned as WKB bytes. Supported values: The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 16569 of file KineticaFunctions.cs.
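
A minimal sketch of a column-oriented fetch, reusing the db connection from the filter sketch above; names are hypothetical. The second call shows the empty-table-name form described above, which evaluates constant expressions against a single-row virtual table.

    using System.Collections.Generic;

    // Fetch two columns, sorted, from a hypothetical table.
    GetRecordsByColumnResponse colResp = db.getRecordsByColumn(
        "example.taxi_trips",
        new List<string> { "vendor_id", "fare_amount" },
        0, 1000,
        new Dictionary<string, string> { ["order_by"] = "fare_amount desc" });

    // Single-row virtual table: evaluate a constant expression.
    GetRecordsByColumnResponse distResp = db.getRecordsByColumn(
        "", new List<string> { "GEODIST(-77.11, 38.88, -71.06, 42.36)" });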

GetRecordsBySeriesResponse<T> kinetica.Kinetica.getRecordsBySeries< T > ( GetRecordsBySeriesRequest  request_)
inline

Retrieves the complete series/track records from the given world_table_name based on the partial track information contained in the table_name.


This operation supports paging through the data via the offset and limit parameters.
In contrast to Kinetica.getRecords{T}(string,long,long,IDictionary{string, string}) this returns records grouped by series/track. So if offset is 0 and limit is 5 this operation would return the first 5 series/tracks in table_name. Each series/track will be returned sorted by their TIMESTAMP column.

Template Parameters
T: The type of object being retrieved.
Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.
Type Constraints
T :new() 

Definition at line 16609 of file KineticaFunctions.cs.

GetRecordsBySeriesResponse<T> kinetica.Kinetica.getRecordsBySeries< T > ( string  table_name,
string  world_table_name,
int  offset = 0,
int  limit = 250,
IDictionary< string, string >  options = null 
)
inline

Retrieves the complete series/track records from the given world_table_name based on the partial track information contained in the table_name .


This operation supports paging through the data via the offset and limit parameters.
In contrast to Kinetica.getRecords{T}(string,long,long,IDictionary{string, string}) this returns records grouped by series/track. So if offset is 0 and limit is 5 this operation would return the first 5 series/tracks in table_name . Each series/track will be returned sorted by their TIMESTAMP column.

Template Parameters
T: The type of object being retrieved.
Parameters
table_nameName of the table or view for which series/tracks will be fetched, in [schema_name.]table_name format, using standard name resolution rules.
world_table_nameName of the table containing the complete series/track information to be returned for the tracks present in the , in [schema_name.]table_name format, using standard name resolution rules. Typically this is used when retrieving series/tracks from a view (which contains partial series/tracks) but the user wants to retrieve the entire original series/tracks. Can be blank.
offsetA positive integer indicating the number of initial series/tracks to skip (useful for paging through the results). The default value is 0.The minimum allowed value is 0. The maximum allowed value is MAX_INT.
limitA positive integer indicating the maximum number of series/tracks to be returned. Or END_OF_SET (-9999) to indicate that the max number of results should be returned. The default value is 250.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.
Type Constraints
T :new() 

Definition at line 16673 of file KineticaFunctions.cs.

GetRecordsFromCollectionResponse<T> kinetica.Kinetica.getRecordsFromCollection< T > ( GetRecordsFromCollectionRequest  request_)
inline

Retrieves records from a collection.

The operation can optionally return the record IDs which can be used in certain queries such as Kinetica.deleteRecords(string,IList{string},IDictionary{string, string}).
This operation supports paging through the data via the offset and limit parameters.
Note that when using the Java API, it is not possible to retrieve records from join views using this operation. (DEPRECATED)

Template Parameters
T: The type of object being retrieved.
Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.
Type Constraints
T :new() 

Definition at line 16711 of file KineticaFunctions.cs.

GetRecordsFromCollectionResponse<T> kinetica.Kinetica.getRecordsFromCollection< T > ( string  table_name,
long  offset = 0,
long  limit = -9999,
IDictionary< string, string >  options = null 
)
inline

Retrieves records from a collection.

The operation can optionally return the record IDs which can be used in certain queries such as Kinetica.deleteRecords(string,IList{string},IDictionary{string, string}).
This operation supports paging through the data via the offset and limit parameters.
Note that when using the Java API, it is not possible to retrieve records from join views using this operation. (DEPRECATED)

Template Parameters
T: The type of object being retrieved.
Parameters
table_name: Name of the collection or table from which records are to be retrieved, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing collection or table.
offset: A positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
limit: A positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results allowed by the server should be returned. The number of records returned will never exceed the server's own limit, defined by the max_get_records_size parameter in the server configuration. Use offset & limit to request subsequent pages of results. The default value is -9999.
options: Optional parameters.
  • RETURN_RECORD_IDS: If true then return the internal record ID along with each returned record. Supported values: TRUE, FALSE. The default value is FALSE.
  • EXPRESSION: Optional filter expression to apply to the table. The default value is ''.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.
Type Constraints
T :new() 

Definition at line 16800 of file KineticaFunctions.cs.
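
A minimal sketch of paging a collection (names hypothetical; the option key is the lowercase form of RETURN_RECORD_IDS; reuses the db connection and Track class from the sketch above):

    var options = new Dictionary<string, string>
    {
        { "return_record_ids", "true" }   // also return internal record IDs
    };

    // END_OF_SET (-9999) requests up to the server's max_get_records_size limit
    GetRecordsFromCollectionResponse<Track> response =
        db.getRecordsFromCollection<Track>("demo.tracks", 0, -9999, options);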

GrantPermissionResponse kinetica.Kinetica.grantPermission ( GrantPermissionRequest  request_)
inline

Grant user or role the specified permission on the specified object.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 16865 of file KineticaFunctions.cs.

GrantPermissionResponse kinetica.Kinetica.grantPermission ( string  principal,
string  _object,
string  object_type,
string  permission,
IDictionary< string, string >  options = null 
)
inline

Grant user or role the specified permission on the specified object.

Parameters
principal: Name of the user or role for which the permission is being granted. Must be an existing user or role. The default value is ''.
_object: Name of the object on which the permission is being granted. It is recommended to use a fully-qualified name when possible.
object_type: The type of object on which the permission is being granted. Supported values:
permission: Permission being granted. Supported values:
  • ADMIN: Full read/write and administrative access on the object.
  • CONNECT: Connect access on the given data source or data sink.
  • DELETE: Delete rows from tables.
  • EXECUTE: Ability to Execute the Procedure object.
  • INSERT: Insert access to tables.
  • READ: Ability to read, list and use the object.
  • UPDATE: Update access to the table.
  • USER_ADMIN: Access to administer users and roles that do not have system_admin permission.
  • WRITE: Access to write, change and delete objects.
options: Optional parameters.
  • COLUMNS: Apply table security to these columns, comma-separated. The default value is ''.
  • FILTER_EXPRESSION: Optional filter expression to apply to this grant. Only rows that match the filter will be affected. The default value is ''.
  • WITH_GRANT_OPTION: Allow the recipient to grant the same permission (or subset) to others. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 17041 of file KineticaFunctions.cs.
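
For instance, a hedged sketch of a table-level grant through this generic endpoint (user and table names hypothetical; "table" is assumed to be among the supported object_type values, which are elided above):

    GrantPermissionResponse response = db.grantPermission(
        "analyst_user",    // principal: an existing user or role
        "demo.sales",      // _object: fully-qualified object name
        "table",           // object_type (assumed value)
        "insert");         // permission, from the list above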

GrantPermissionCredentialResponse kinetica.Kinetica.grantPermissionCredential ( GrantPermissionCredentialRequest  request_)
inline

Grants a credential-level permission to a user or role.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 17064 of file KineticaFunctions.cs.

GrantPermissionCredentialResponse kinetica.Kinetica.grantPermissionCredential ( string  name,
string  permission,
string  credential_name,
IDictionary< string, string >  options = null 
)
inline

Grants a credential-level permission to a user or role.

Parameters
name: Name of the user or role to which the permission will be granted. Must be an existing user or role.
permission: Permission to grant to the user or role. Supported values:
credential_name: Name of the credential on which the permission will be granted. Must be an existing credential, or an empty string to grant access on all credentials.
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 17104 of file KineticaFunctions.cs.

GrantPermissionDatasourceResponse kinetica.Kinetica.grantPermissionDatasource ( GrantPermissionDatasourceRequest  request_)
inline

Grants a data source permission to a user or role.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 17126 of file KineticaFunctions.cs.

GrantPermissionDatasourceResponse kinetica.Kinetica.grantPermissionDatasource ( string  name,
string  permission,
string  datasource_name,
IDictionary< string, string >  options = null 
)
inline

Grants a data source permission to a user or role.

Parameters
name: Name of the user or role to which the permission will be granted. Must be an existing user or role.
permission: Permission to grant to the user or role. Supported values:
  • ADMIN: Admin access on the given data source.
  • CONNECT: Connect access on the given data source.
datasource_name: Name of the data source on which the permission will be granted. Must be an existing data source, or an empty string to grant permission on all data sources.
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 17165 of file KineticaFunctions.cs.

GrantPermissionDirectoryResponse kinetica.Kinetica.grantPermissionDirectory ( GrantPermissionDirectoryRequest  request_)
inline

Grants a KiFS directory-level permission to a user or role.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 17187 of file KineticaFunctions.cs.

GrantPermissionDirectoryResponse kinetica.Kinetica.grantPermissionDirectory ( string  name,
string  permission,
string  directory_name,
IDictionary< string, string >  options = null 
)
inline

Grants a KiFS directory-level permission to a user or role.

Parameters
name: Name of the user or role to which the permission will be granted. Must be an existing user or role.
permission: Permission to grant to the user or role. Supported values:
  • DIRECTORY_READ: For files in the directory, access to list files, download files, or use files in server-side functions.
  • DIRECTORY_WRITE: Access to upload files to, or delete files from, the directory. A user or role with write access automatically has read access.
directory_name: Name of the KiFS directory to which the permission grants access. An empty directory name grants access to all KiFS directories.
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 17228 of file KineticaFunctions.cs.

GrantPermissionProcResponse kinetica.Kinetica.grantPermissionProc ( GrantPermissionProcRequest  request_)
inline

Grants a proc-level permission to a user or role.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 17249 of file KineticaFunctions.cs.

GrantPermissionProcResponse kinetica.Kinetica.grantPermissionProc ( string  name,
string  permission,
string  proc_name,
IDictionary< string, string >  options = null 
)
inline

Grants a proc-level permission to a user or role.

Parameters
name: Name of the user or role to which the permission will be granted. Must be an existing user or role.
permission: Permission to grant to the user or role. Supported values:
proc_name: Name of the proc to which the permission grants access. Must be an existing proc, or an empty string to grant access to all procs.
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 17285 of file KineticaFunctions.cs.

GrantPermissionSystemResponse kinetica.Kinetica.grantPermissionSystem ( GrantPermissionSystemRequest  request_)
inline

Grants a system-level permission to a user or role.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 17304 of file KineticaFunctions.cs.

GrantPermissionSystemResponse kinetica.Kinetica.grantPermissionSystem ( string  name,
string  permission,
IDictionary< string, string >  options = null 
)
inline

Grants a system-level permission to a user or role.

Parameters
name: Name of the user or role to which the permission will be granted. Must be an existing user or role.
permission: Permission to grant to the user or role. Supported values:
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 17350 of file KineticaFunctions.cs.

GrantPermissionTableResponse kinetica.Kinetica.grantPermissionTable ( GrantPermissionTableRequest  request_)
inline

Grants a table-level permission to a user or role.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 17369 of file KineticaFunctions.cs.

GrantPermissionTableResponse kinetica.Kinetica.grantPermissionTable ( string  name,
string  permission,
string  table_name,
string  filter_expression = "",
IDictionary< string, string >  options = null 
)
inline

Grants a table-level permission to a user or role.

Parameters
name: Name of the user or role to which the permission will be granted. Must be an existing user or role.
permission: Permission to grant to the user or role. Supported values:
table_name: Name of the table to which the permission grants access, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table, view, or schema. If a schema, the permission also applies to tables and views in the schema.
filter_expression: Optional filter expression to apply to this grant. Only rows that match the filter will be affected. The default value is ''.
options: Optional parameters.
  • COLUMNS: Apply security to these columns, comma-separated. The default value is ''.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 17435 of file KineticaFunctions.cs.
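
A sketch of a row- and column-limited table grant (names and filter hypothetical; "read" is assumed to be a supported permission value, and the option key is the lowercase form of COLUMNS):

    GrantPermissionTableResponse response = db.grantPermissionTable(
        "analyst_user",
        "read",                          // assumed permission value
        "demo.sales",
        "region = 'EMEA'",               // only rows matching this filter
        new Dictionary<string, string> { { "columns", "order_id,amount" } });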

GrantRoleResponse kinetica.Kinetica.grantRole ( GrantRoleRequest  request_)
inline

Grants membership in a role to a user or role.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 17457 of file KineticaFunctions.cs.

GrantRoleResponse kinetica.Kinetica.grantRole ( string  role,
string  member,
IDictionary< string, string >  options = null 
)
inline

Grants membership in a role to a user or role.

Parameters
role: Name of the role in which membership will be granted. Must be an existing role.
member: Name of the user or role that will be granted membership in role. Must be an existing user or role.
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 17478 of file KineticaFunctions.cs.
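
For example (role and member names hypothetical):

    GrantRoleResponse response = db.grantRole("analyst_role", "analyst_user");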

HasPermissionResponse kinetica.Kinetica.hasPermission ( HasPermissionRequest  request_)
inline

Checks if the specified user has the specified permission on the specified object.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 17495 of file KineticaFunctions.cs.

HasPermissionResponse kinetica.Kinetica.hasPermission ( string  principal,
string  _object,
string  object_type,
string  permission,
IDictionary< string, string >  options = null 
)
inline

Checks if the specified user has the specified permission on the specified object.

Parameters
principal: Name of the user for which the permission is being checked. Must be an existing user. If blank, will use the current user. The default value is ''.
_object: Name of the object to check for the requested permission. It is recommended to use a fully-qualified name when possible.
object_type: The type of object being checked. Supported values:
permission: Permission to check for. Supported values:
  • ADMIN: Full read/write and administrative access on the object.
  • CONNECT: Connect access on the given data source or data sink.
  • DELETE: Delete rows from tables.
  • EXECUTE: Ability to Execute the Procedure object.
  • INSERT: Insert access to tables.
  • READ: Ability to read, list and use the object.
  • UPDATE: Update access to the table.
  • USER_ADMIN: Access to administer users and roles that do not have system_admin permission.
  • WRITE: Access to write, change and delete objects.
options: Optional parameters.
  • NO_ERROR_IF_NOT_EXISTS: If false, will return an error if the provided _object does not exist or is blank. If true, will return false for has_permission in the response. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 17660 of file KineticaFunctions.cs.

HasProcResponse kinetica.Kinetica.hasProc ( HasProcRequest  request_)
inline

Checks the existence of a proc with the given name.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 17681 of file KineticaFunctions.cs.

HasProcResponse kinetica.Kinetica.hasProc ( string  proc_name,
IDictionary< string, string >  options = null 
)
inline

Checks the existence of a proc with the given name.

Parameters
proc_name: Name of the proc to check for existence.
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 17700 of file KineticaFunctions.cs.

HasRoleResponse kinetica.Kinetica.hasRole ( HasRoleRequest  request_)
inline

Checks if the specified user has the specified role.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 17716 of file KineticaFunctions.cs.

HasRoleResponse kinetica.Kinetica.hasRole ( string  principal,
string  role,
IDictionary< string, string >  options = null 
)
inline

Checks if the specified user has the specified role.

Parameters
principal: Name of the user for which role membership is being checked. Must be an existing user. If blank, will use the current user. The default value is ''.
role: Name of the role to check for membership.
options: Optional parameters.
  • NO_ERROR_IF_NOT_EXISTS: If false, will return an error if the provided role does not exist or is blank. If true, will return false for has_role in the response. Supported values: TRUE, FALSE. The default value is FALSE.
  • ONLY_DIRECT: If false, will search recursively to determine whether the principal is a member of the role. If true, the principal must be a direct member of the role. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 17782 of file KineticaFunctions.cs.
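
A sketch checking direct membership without erroring on a missing role (names hypothetical; option keys are the lowercase forms of the constants above, and the result is assumed to be exposed via the response's has_role member):

    var options = new Dictionary<string, string>
    {
        { "no_error_if_not_exists", "true" },
        { "only_direct", "true" }          // ignore inherited memberships
    };
    HasRoleResponse response = db.hasRole("analyst_user", "analyst_role", options);
    Console.WriteLine(response.has_role);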

HasSchemaResponse kinetica.Kinetica.hasSchema ( HasSchemaRequest  request_)
inline

Checks for the existence of a schema with the given name.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 17799 of file KineticaFunctions.cs.

HasSchemaResponse kinetica.Kinetica.hasSchema ( string  schema_name,
IDictionary< string, string >  options = null 
)
inline

Checks for the existence of a schema with the given name.

Parameters
schema_name: Name of the schema to check for existence, in root, using standard name resolution rules.
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 17820 of file KineticaFunctions.cs.

HasTableResponse kinetica.Kinetica.hasTable ( HasTableRequest  request_)
inline

Checks for the existence of a table with the given name.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 17836 of file KineticaFunctions.cs.

HasTableResponse kinetica.Kinetica.hasTable ( string  table_name,
IDictionary< string, string >  options = null 
)
inline

Checks for the existence of a table with the given name.

Parameters
table_name: Name of the table to check for existence, in [schema_name.]table_name format, using standard name resolution rules.
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 17857 of file KineticaFunctions.cs.
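
For example (table name hypothetical; the response is assumed to expose a table_exists member):

    HasTableResponse response = db.hasTable("demo.sales");
    if (response.table_exists)
        Console.WriteLine("demo.sales exists");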

HasTypeResponse kinetica.Kinetica.hasType ( HasTypeRequest  request_)
inline

Check for the existence of a type.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 17872 of file KineticaFunctions.cs.

HasTypeResponse kinetica.Kinetica.hasType ( string  type_id,
IDictionary< string, string >  options = null 
)
inline

Check for the existence of a type.

Parameters
type_id: ID of the type returned in response to a /create/type request.
options: Optional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 17890 of file KineticaFunctions.cs.

InsertRecordsResponse kinetica.Kinetica.insertRecords< T > ( InsertRecordsRequest< T >  request_)
inline

Adds multiple records to the specified table.

The operation is synchronous, meaning that a response will not be returned until all the records are fully inserted and available. The response payload provides the counts of the number of records actually inserted and/or updated, and can provide the unique identifier of each added record.
The options parameter can be used to customize this function's behavior.
The update_on_existing_pk option specifies the record collision policy for inserting into a table with a primary key, but is ignored if no primary key exists.
The return_record_ids option indicates that the database should return the unique identifiers of inserted records.

Template Parameters
T: The type of object being added.
Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 18016 of file KineticaFunctions.cs.

InsertRecordsResponse kinetica.Kinetica.insertRecords< T > ( string  table_name,
IList< T >  data,
IDictionary< string, string >  options = null 
)
inline

Adds multiple records to the specified table.

The operation is synchronous, meaning that a response will not be returned until all the records are fully inserted and available. The response payload provides the counts of the number of records actually inserted and/or updated, and can provide the unique identifier of each added record.
The options parameter can be used to customize this function's behavior.
The update_on_existing_pk option specifies the record collision policy for inserting into a table with a primary key, but is ignored if no primary key exists.
The return_record_ids option indicates that the database should return the unique identifiers of inserted records.

Template Parameters
T: The type of object being added.
Parameters
table_name: Name of the table to which the records are to be added, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table.
data: An array of binary-encoded data for the records to be added. All records must be of the same type as that of the table. Empty array if the records are being sent as JSON.
options: Optional parameters.
  • UPDATE_ON_EXISTING_PK: Specifies the record collision policy for inserting into a table with a primary key. If set to true, any existing table record with primary key values that match those of a record being inserted will be replaced by that new record (the new data will be "upserted"). If set to false, any existing table record with primary key values that match those of a record being inserted will remain unchanged, while the new record will be rejected and the error handled as determined by ignore_existing_pk, allow_partial_batch, & return_individual_errors. If the specified table does not have a primary key, then this option has no effect. Supported values:
    • TRUE: Upsert new records when primary keys match existing records
    • FALSE: Reject new records when primary keys match existing records
    The default value is FALSE.
  • IGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for inserting into a table with a primary key, only used when not in upsert mode (upsert mode is disabled when update_on_existing_pk is false). If set to true, any record being inserted that is rejected for having primary key values that match those of an existing table record will be ignored with no error generated. If false, the rejection of any record for having primary key values matching an existing record will result in an error being reported, as determined by allow_partial_batch & return_individual_errors. If the specified table does not have a primary key or if upsert mode is in effect (update_on_existing_pk is true), then this option has no effect. Supported values:
    • TRUE: Ignore new records whose primary key values collide with those of existing records
    • FALSE: Treat as errors any new records whose primary key values collide with those of existing records
    The default value is FALSE.
  • RETURN_RECORD_IDS: If true then return the internal record ID along with each inserted record. Supported values: TRUE, FALSE. The default value is FALSE.
  • TRUNCATE_STRINGS: If set to true, any strings which are too long for their target charN string columns will be truncated to fit. Supported values: TRUE, FALSE. The default value is FALSE.
  • RETURN_INDIVIDUAL_ERRORS: If set to true, success will always be returned, and any errors found will be included in the info map. The "bad_record_indices" entry is a comma-separated list of bad records (0-based). And if so, there will also be an "error_N" entry for each record with an error, where N is the index (0-based). Supported values: TRUE, FALSE. The default value is FALSE.
  • ALLOW_PARTIAL_BATCH: If set to true, all correct records will be inserted and incorrect records will be rejected and reported. Otherwise, the entire batch will be rejected if any records are incorrect. Supported values: TRUE, FALSE. The default value is FALSE.
  • DRY_RUN: If set to true, no data will be saved and any errors will be returned. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 18251 of file KineticaFunctions.cs.
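
A minimal upsert sketch (table name, record class, and column layout hypothetical; option keys are the lowercase forms of the constants above, and insert/update counts are assumed to be reported via count_inserted and count_updated):

    db.AddTableType("demo.sales", typeof(Sale));   // register type for encoding

    var records = new List<Sale>
    {
        new Sale { order_id = 1, amount = 9.99 },
        new Sale { order_id = 2, amount = 19.99 }
    };
    var options = new Dictionary<string, string>
    {
        { "update_on_existing_pk", "true" },   // upsert on primary key collision
        { "return_record_ids", "true" }
    };
    InsertRecordsResponse response = db.insertRecords("demo.sales", records, options);
    Console.WriteLine($"{response.count_inserted} inserted, {response.count_updated} updated");

    // Hypothetical record class; order_id is assumed to be the primary key
    public class Sale
    {
        public long order_id { get; set; }
        public double amount { get; set; }
    }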

InsertRecordsFromFilesResponse kinetica.Kinetica.insertRecordsFromFiles ( InsertRecordsFromFilesRequest  request_)
inline

Reads from one or more files and inserts the data into a new or existing table.

The source data can be located either in KiFS; on the cluster, accessible to the database; or remotely, accessible via a pre-defined external data source.
For delimited text files, there are two loading schemes: positional and name-based. The name-based loading scheme is enabled when the file has a header present and text_has_header is set to true. In this scheme, the source file(s) field names must match the target table's column names exactly; however, the source file can have more fields than the target table has columns. If error_handling is set to permissive, the source file can have fewer fields than the target table has columns. If the name-based loading scheme is being used, names matching the file header's names may be provided to columns_to_load instead of numbers, but ranges are not supported.
Note: Due to data being loaded in parallel, there is no insertion order guaranteed. For tables with primary keys, in the case of a primary key collision, this means it is indeterminate which record will be inserted first and remain, while the rest of the colliding key records are discarded.
Returns once all files are processed.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 18300 of file KineticaFunctions.cs.

InsertRecordsFromFilesResponse kinetica.Kinetica.insertRecordsFromFiles ( string  table_name,
IList< string >  filepaths,
IDictionary< string, IDictionary< string, string >>  modify_columns = null,
IDictionary< string, string >  create_table_options = null,
IDictionary< string, string >  options = null 
)
inline

Reads from one or more files and inserts the data into a new or existing table.

The source data can be located either in KiFS; on the cluster, accessible to the database; or remotely, accessible via a pre-defined external data source.
For delimited text files, there are two loading schemes: positional and name-based. The name-based loading scheme is enabled when the file has a header present and text_has_header is set to true. In this scheme, the source file(s) field names must match the target table's column names exactly; however, the source file can have more fields than the target table has columns. If error_handling is set to permissive, the source file can have fewer fields than the target table has columns. If the name-based loading scheme is being used, names matching the file header's names may be provided to columns_to_load instead of numbers, but ranges are not supported.
Note: Due to data being loaded in parallel, there is no insertion order guaranteed. For tables with primary keys, in the case of a primary key collision, this means it is indeterminate which record will be inserted first and remain, while the rest of the colliding key records are discarded.
Returns once all files are processed.

Parameters
table_name: Name of the table into which the data will be inserted, in [schema_name.]table_name format, using standard name resolution rules. If the table does not exist, the table will be created using either an existing type_id or the type inferred from the file, and the new table name will have to meet standard table naming criteria.
filepaths: A list of file paths from which data will be sourced. For paths in KiFS, use the URI prefix kifs:// followed by the path to a file or directory. File matching by prefix is supported; e.g., kifs://dir/file would match dir/file_1 and dir/file_2. When prefix matching is used, the path must start with a full, valid KiFS directory name. If an external data source is specified in datasource_name, these file paths must resolve to accessible files at that data source location. Prefix matching is supported. If the data source is hdfs, prefixes must be aligned with directories, i.e. partial file names will not match. If no data source is specified, the files are assumed to be local to the database and must all be accessible to the gpudb user, residing on the path (or relative to the path) specified by the external files directory in the Kinetica configuration file. Wildcards (*) can be used to specify a group of files. Prefix matching is supported; the prefixes must be aligned with directories. If the first path ends in .tsv, the text delimiter will be defaulted to a tab character. If the first path ends in .psv, the text delimiter will be defaulted to a pipe character (|).
modify_columns: Not implemented yet. The default value is an empty Dictionary.
create_table_options: Options from /create/table, allowing the structure of the table to be defined independently of the data source, when creating the target table. The default value is an empty Dictionary.
options: Optional parameters.
  • BAD_RECORD_TABLE_NAME: Name of a table to which records that were rejected are written. The bad-record-table has the following columns: line_number (long), line_rejected (string), error_message (string). When error_handling is abort, the bad-record-table is not populated.
  • BAD_RECORD_TABLE_LIMIT: A positive integer indicating the maximum number of records that can be written to the bad-record-table. The default value is '10000'.
  • BAD_RECORD_TABLE_LIMIT_PER_INPUT: For subscriptions, a positive integer indicating the maximum number of records that can be written to the bad-record-table per file/payload. Default value will be bad_record_table_limit and total size of the table per rank is limited to bad_record_table_limit.
  • BATCH_SIZE: Number of records to insert per batch when inserting data. The default value is '50000'.
  • COLUMN_FORMATS: For each target column specified, applies the column-property-bound format to the source data loaded into that column. Each column format will contain a mapping of one or more of its column properties to an appropriate format for each property. Currently supported column properties include date, time, & datetime. The parameter value must be formatted as a JSON string of maps of column names to maps of column properties to their corresponding column formats, e.g., '{ "order_date" : { "date" : "%Y.%m.%d" }, "order_time" : { "time" : "%H:%M:%S" } }'. See default_column_formats for valid format syntax.
  • COLUMNS_TO_LOAD: Specifies a comma-delimited list of columns from the source data to load. If more than one file is being loaded, this list applies to all files. Column numbers can be specified discretely or as a range. For example, a value of '5,7,1..3' will insert values from the fifth column in the source data into the first column in the target table, from the seventh column in the source data into the second column in the target table, and from the first through third columns in the source data into the third through fifth columns in the target table. If the source data contains a header, column names matching the file header names may be provided instead of column numbers. If the target table doesn't exist, the table will be created with the columns in this order. If the target table does exist with columns in a different order than the source data, this list can be used to match the order of the target table. For example, a value of 'C, B, A' will create a three column table with column C, followed by column B, followed by column A; or will insert those fields in that order into a table created with columns in that order. If the target table exists, the column names must match the source data field names for a name-mapping to be successful. Mutually exclusive with columns_to_skip.
  • COLUMNS_TO_SKIP: Specifies a comma-delimited list of columns from the source data to skip. Mutually exclusive with columns_to_load.
  • COMPRESSION_TYPE: Source data compression type Supported values:
    • NONE: No compression.
    • AUTO: Auto detect compression type
    • GZIP: gzip file compression.
    • BZIP2: bzip2 file compression.
    The default value is AUTO.
  • DATASOURCE_NAME: Name of an existing external data source from which data file(s) specified in filepaths will be loaded
  • DEFAULT_COLUMN_FORMATS: Specifies the default format to be applied to source data loaded into columns with the corresponding column property. Currently supported column properties include date, time, & datetime. This default column-property-bound format can be overridden by specifying a column property & format for a given target column in column_formats. For each specified annotation, the format will apply to all columns with that annotation unless a custom column_formats for that annotation is specified. The parameter value must be formatted as a JSON string that is a map of column properties to their respective column formats, e.g., '{ "date" : "%Y.%m.%d", "time" : "%H:%M:%S" }'. Column formats are specified as a string of control characters and plain text. The supported control characters are 'Y', 'm', 'd', 'H', 'M', 'S', and 's', which follow the Linux 'strptime()' specification, as well as 's', which specifies seconds and fractional seconds (though the fractional component will be truncated past milliseconds). Formats for the 'date' annotation must include the 'Y', 'm', and 'd' control characters. Formats for the 'time' annotation must include the 'H', 'M', and either 'S' or 's' (but not both) control characters. Formats for the 'datetime' annotation meet both the 'date' and 'time' control character requirements. For example, '{"datetime" : "%m/%d/%Y %H:%M:%S" }' would be used to interpret text as "05/04/2000 12:12:11"
  • ERROR_HANDLING: Specifies how errors should be handled upon insertion. Supported values:
    • PERMISSIVE: Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped.
    • IGNORE_BAD_RECORDS: Malformed records are skipped.
    • ABORT: Stops current insertion and aborts entire operation when an error is encountered. Primary key collisions are considered abortable errors in this mode.
    The default value is ABORT.
  • FILE_TYPE: Specifies the type of the file(s) whose records will be inserted. Supported values:
    • AVRO: Avro file format
    • DELIMITED_TEXT: Delimited text file format; e.g., CSV, TSV, PSV, etc.
    • GDB: Esri/GDB file format
    • JSON: Json file format
    • PARQUET: Apache Parquet file format
    • SHAPEFILE: ShapeFile file format
    The default value is DELIMITED_TEXT.
  • GDAL_CONFIGURATION_OPTIONS: Comma-separated list of GDAL configuration options, for the specific request: key=value
  • IGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for inserting into a table with a primary key, only used when not in upsert mode (upsert mode is disabled when update_on_existing_pk is false). If set to true, any record being inserted that is rejected for having primary key values that match those of an existing table record will be ignored with no error generated. If false, the rejection of any record for having primary key values matching an existing record will result in an error being reported, as determined by error_handling. If the specified table does not have a primary key or if upsert mode is in effect (update_on_existing_pk is true), then this option has no effect. Supported values:
    • TRUE: Ignore new records whose primary key values collide with those of existing records
    • FALSE: Treat as errors any new records whose primary key values collide with those of existing records
    The default value is FALSE.
  • INGESTION_MODE: Whether to do a full load, dry run, or perform a type inference on the source data. Supported values:
    • FULL: Run a type inference on the source data (if needed) and ingest
    • DRY_RUN: Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of error_handling.
    • TYPE_INFERENCE_ONLY: Infer the type of the source data and return, without ingesting any data. The inferred type is returned in the response.
    The default value is FULL.
  • KAFKA_CONSUMERS_PER_RANK: Number of Kafka consumer threads per rank (valid range 1-6). The default value is '1'.
  • KAFKA_GROUP_ID: The group id to be used when consuming data from a Kafka topic (valid only for Kafka datasource subscriptions).
  • KAFKA_OFFSET_RESET_POLICY: Policy to determine whether the Kafka data consumption starts either at earliest offset or latest offset. Supported values: EARLIEST, LATEST. The default value is EARLIEST.
  • KAFKA_OPTIMISTIC_INGEST: Enable optimistic ingestion where Kafka topic offsets and table data are committed independently to achieve parallelism. Supported values: TRUE, FALSE. The default value is FALSE.
  • KAFKA_SUBSCRIPTION_CANCEL_AFTER: Sets the Kafka subscription lifespan (in minutes). Expired subscription will be cancelled automatically.
  • KAFKA_TYPE_INFERENCE_FETCH_TIMEOUT: Maximum time to collect Kafka messages before type inferencing on the set of them.
  • LAYER: Geo files layer(s) name(s): comma separated.
  • LOADING_MODE: Scheme for distributing the extraction and loading of data from the source data file(s). This option applies only when loading files that are local to the database Supported values:
    • HEAD: The head node loads all data. All files must be available to the head node.
    • DISTRIBUTED_SHARED: The head node coordinates loading data by worker processes across all nodes from shared files available to all workers. NOTE: Instead of existing on a shared source, the files can be duplicated on a source local to each host to improve performance, though the files must appear as the same data set from the perspective of all hosts performing the load.
    • DISTRIBUTED_LOCAL: A single worker process on each node loads all files that are available to it. This option works best when each worker loads files from its own file system, to maximize performance. In order to avoid data duplication, either each worker performing the load needs to have visibility to a set of files unique to it (no file is visible to more than one node) or the target table needs to have a primary key (which will allow the worker to automatically deduplicate data). NOTE: If the target table doesn't exist, the table structure will be determined by the head node. If the head node has no files local to it, it will be unable to determine the structure and the request will fail. If the head node is configured to have no worker processes, no data strictly accessible to the head node will be loaded.
    The default value is HEAD.
  • LOCAL_TIME_OFFSET: Apply an offset to Avro local timestamp columns.
  • MAX_RECORDS_TO_LOAD: Limit the number of records to load in this request: if this number is larger than batch_size, then the number of records loaded will be limited to the next whole number of batch_size (per working thread).
  • NUM_TASKS_PER_RANK: Number of tasks for reading file per rank. Default will be system configuration parameter, external_file_reader_num_tasks.
  • POLL_INTERVAL: If subscribe is true, the number of seconds between attempts to load external files into the table. If zero, polling will be continuous as long as data is found. If no data is found, the interval will steadily increase to a maximum of 60 seconds. The default value is '0'.
  • PRIMARY_KEYS: Comma separated list of column names to set as primary keys, when not specified in the type.
  • SCHEMA_REGISTRY_SCHEMA_NAME: Name of the Avro schema in the schema registry to use when reading Avro records.
  • SHARD_KEYS: Comma separated list of column names to set as shard keys, when not specified in the type.
  • SKIP_LINES: Skip this number of lines from the beginning of the file.
  • SUBSCRIBE: Continuously poll the data source to check for new data and load it into the table. Supported values: TRUE, FALSE. The default value is FALSE.
  • TABLE_INSERT_MODE: Insertion scheme to use when inserting records from multiple shapefiles. Supported values:
    • SINGLE: Insert all records into a single table.
    • TABLE_PER_FILE: Insert records from each file into a new table corresponding to that file.
    The default value is SINGLE.
  • TEXT_COMMENT_STRING: Specifies the character string that should be interpreted as a comment line prefix in the source data. All lines in the data starting with the provided string are ignored. For delimited_text file_type only. The default value is '#'.
  • TEXT_DELIMITER: Specifies the character delimiting field values in the source data and field names in the header (if present). For delimited_text file_type only. The default value is ','.
  • TEXT_ESCAPE_CHARACTER: Specifies the character that is used to escape other characters in the source data. An 'a', 'b', 'f', 'n', 'r', 't', or 'v' preceded by an escape character will be interpreted as the ASCII bell, backspace, form feed, line feed, carriage return, horizontal tab, & vertical tab, respectively. For example, the escape character followed by an 'n' will be interpreted as a newline within a field value. The escape character can also be used to escape the quoting character, and will be treated as an escape character whether it is within a quoted field value or not. For delimited_text file_type only.
  • TEXT_HAS_HEADER: Indicates whether the source data contains a header row. For delimited_text file_type only. Supported values: TRUE, FALSE. The default value is TRUE.
  • TEXT_HEADER_PROPERTY_DELIMITER: Specifies the delimiter for column properties in the header row (if present). Cannot be set to same value as text_delimiter. For delimited_text file_type only. The default value is '|'.
  • TEXT_NULL_STRING: Specifies the character string that should be interpreted as a null value in the source data. For delimited_text file_type only. The default value is '\N'.
  • TEXT_QUOTE_CHARACTER: Specifies the character that should be interpreted as a field value quoting character in the source data. The character must appear at beginning and end of field value to take effect. Delimiters within quoted fields are treated as literals and not delimiters. Within a quoted field, two consecutive quote characters will be interpreted as a single literal quote character, effectively escaping it. To not have a quote character, specify an empty string. For delimited_text file_type only. The default value is '"'.
  • TEXT_SEARCH_COLUMNS: Add the 'text_search' property to internally inferred string columns. Comma-separated list of column names or '*' for all columns. To add the 'text_search' property only to string columns greater than or equal to a minimum size, also set text_search_min_column_length.
  • TEXT_SEARCH_MIN_COLUMN_LENGTH: Set the minimum column size for strings to apply the 'text_search' property to. Used only when text_search_columns has a value.
  • TRUNCATE_STRINGS: If set to true, truncate string values that are longer than the column's type size. Supported values: TRUE, FALSE. The default value is FALSE.
  • TRUNCATE_TABLE: If set to true, truncates the table specified by table_name prior to loading the file(s). Supported values: TRUE, FALSE. The default value is FALSE.
  • TYPE_INFERENCE_MODE: Optimize type inferencing for either speed or accuracy. Supported values:
    • ACCURACY: Scans data to get exactly-typed & sized columns for all data scanned.
    • SPEED: Scans data and picks the widest possible column types so that 'all' values will fit with minimum data scanned
    The default value is SPEED.
  • UPDATE_ON_EXISTING_PK: Specifies the record collision policy for inserting into a table with a primary key. If set to true, any existing table record with primary key values that match those of a record being inserted will be replaced by that new record (the new data will be 'upserted'). If set to false, any existing table record with primary key values that match those of a record being inserted will remain unchanged, while the new record will be rejected and the error handled as determined by ignore_existing_pk & error_handling. If the specified table does not have a primary key, then this option has no effect. Supported values:
    • TRUE: Upsert new records when primary keys match existing records
    • FALSE: Reject new records when primary keys match existing records
    The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 19376 of file KineticaFunctions.cs.
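
A hedged sketch loading a headered CSV from KiFS (path and table names hypothetical; option keys are the lowercase forms of the constants above):

    var options = new Dictionary<string, string>
    {
        { "file_type", "delimited_text" },
        { "text_has_header", "true" },
        { "error_handling", "ignore_bad_records" },    // skip malformed rows
        { "bad_record_table_name", "demo.sales_bad" }  // where rejects are written
    };
    InsertRecordsFromFilesResponse response = db.insertRecordsFromFiles(
        "demo.sales",
        new List<string> { "kifs://data/sales_2023.csv" },
        null,     // modify_columns (not implemented yet)
        null,     // create_table_options
        options);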

InsertRecordsFromPayloadResponse kinetica.Kinetica.insertRecordsFromPayload ( InsertRecordsFromPayloadRequest  request_)
inline

Reads from the given text-based or binary payload and inserts the data into a new or existing table.

The table will be created if it doesn't already exist.
Returns once all records are processed.

Parameters
request_: Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 19404 of file KineticaFunctions.cs.

InsertRecordsFromPayloadResponse kinetica.Kinetica.insertRecordsFromPayload ( string  table_name,
string  data_text,
byte[]  data_bytes,
IDictionary< string, IDictionary< string, string >>  modify_columns = null,
IDictionary< string, string >  create_table_options = null,
IDictionary< string, string >  options = null 
)
inline

Reads from the given text-based or binary payload and inserts the data into a new or existing table.

The table will be created if it doesn't already exist.
Returns once all records are processed.

Parameters
table_name: Name of the table into which the data will be inserted, in [schema_name.]table_name format, using standard name resolution rules. If the table does not exist, the table will be created using either an existing type_id or the type inferred from the payload, and the new table name will have to meet standard table naming criteria.
data_text: Records formatted as delimited text.
data_bytes: Records formatted as binary data.
modify_columns: Not implemented yet. The default value is an empty Dictionary.
create_table_options: Options used when creating the target table. Includes the type to use. The other options match those in /create/table. The default value is an empty Dictionary.
options: Optional parameters.
  • AVRO_HEADER_BYTES: Optional number of bytes to skip when reading an Avro record.
  • AVRO_NUM_RECORDS: Optional number of Avro records, if the data includes only records.
  • AVRO_SCHEMA: Optional string containing the Avro schema, for inserting records in Avro format that do not include their schema.
  • AVRO_SCHEMALESS: When the user provides 'avro_schema', the Avro data is assumed to be schemaless, unless specified otherwise. Default is 'true' when avro_schema is given. Ignored when avro_schema is not given. Supported values: TRUE, FALSE.
  • BAD_RECORD_TABLE_NAME: Optional name of a table to which records that were rejected are written. The bad-record-table has the following columns: line_number (long), line_rejected (string), error_message (string).
  • BAD_RECORD_TABLE_LIMIT: A positive integer indicating the maximum number of records that can be written to the bad-record-table. Default value is 10000
  • BAD_RECORD_TABLE_LIMIT_PER_INPUT: For subscriptions: A positive integer indicating the maximum number of records that can be written to the bad-record-table per file/payload. Default value will be 'bad_record_table_limit' and total size of the table per rank is limited to 'bad_record_table_limit'
  • BATCH_SIZE: Internal tuning parameter–number of records per batch when inserting data.
  • COLUMN_FORMATS: For each target column specified, applies the column-property-bound format to the source data loaded into that column. Each column format will contain a mapping of one or more of its column properties to an appropriate format for each property. Currently supported column properties include date, time, & datetime. The parameter value must be formatted as a JSON string of maps of column names to maps of column properties to their corresponding column formats, e.g., '{ "order_date" : { "date" : "%Y.%m.%d" }, "order_time" : { "time" : "%H:%M:%S" } }'. See default_column_formats for valid format syntax.
  • COLUMNS_TO_LOAD: Specifies a comma-delimited list of columns from the source data to load. If more than one file is being loaded, this list applies to all files. Column numbers can be specified discretely or as a range. For example, a value of '5,7,1..3' will insert values from the fifth column in the source data into the first column in the target table, from the seventh column in the source data into the second column in the target table, and from the first through third columns in the source data into the third through fifth columns in the target table. If the source data contains a header, column names matching the file header names may be provided instead of column numbers. If the target table doesn't exist, the table will be created with the columns in this order. If the target table does exist with columns in a different order than the source data, this list can be used to match the order of the target table. For example, a value of 'C, B, A' will create a three column table with column C, followed by column B, followed by column A; or will insert those fields in that order into a table created with columns in that order. If the target table exists, the column names must match the source data field names for a name-mapping to be successful. Mutually exclusive with columns_to_skip.
  • COLUMNS_TO_SKIP: Specifies a comma-delimited list of columns from the source data to skip. Mutually exclusive with columns_to_load.
  • COMPRESSION_TYPE: Optional: payload compression type Supported values:
    • NONE: Uncompressed
    • AUTO: Default. Auto detect compression type
    • GZIP: gzip file compression.
    • BZIP2: bzip2 file compression.
    The default value is AUTO.
  • DEFAULT_COLUMN_FORMATS: Specifies the default format to be applied to source data loaded into columns with the corresponding column property. Currently supported column properties include date, time, & datetime. This default column-property-bound format can be overridden by specifying a column property & format for a given target column in column_formats. For each specified annotation, the format will apply to all columns with that annotation unless a custom column_formats for that annotation is specified. The parameter value must be formatted as a JSON string that is a map of column properties to their respective column formats, e.g., '{ "date" : "%Y.%m.%d", "time" : "%H:%M:%S" }'. Column formats are specified as a string of control characters and plain text. The supported control characters are 'Y', 'm', 'd', 'H', 'M', 'S', and 's', which follow the Linux 'strptime()' specification, as well as 's', which specifies seconds and fractional seconds (though the fractional component will be truncated past milliseconds). Formats for the 'date' annotation must include the 'Y', 'm', and 'd' control characters. Formats for the 'time' annotation must include the 'H', 'M', and either 'S' or 's' (but not both) control characters. Formats for the 'datetime' annotation meet both the 'date' and 'time' control character requirements. For example, '{"datetime" : "%m/%d/%Y %H:%M:%S" }' would be used to interpret text as "05/04/2000 12:12:11"
  • ERROR_HANDLING: Specifies how errors should be handled upon insertion. Supported values:
    • PERMISSIVE: Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped.
    • IGNORE_BAD_RECORDS: Malformed records are skipped.
    • ABORT: Stops current insertion and aborts entire operation when an error is encountered. Primary key collisions are considered abortable errors in this mode.
    The default value is ABORT.
  • FILE_TYPE: Specifies the type of the file(s) whose records will be inserted. Supported values:
    • AVRO: Avro file format
    • DELIMITED_TEXT: Delimited text file format; e.g., CSV, TSV, PSV, etc.
    • GDB: Esri/GDB file format
    • JSON: Json file format
    • PARQUET: Apache Parquet file format
    • SHAPEFILE: ShapeFile file format
    The default value is DELIMITED_TEXT.
  • GDAL_CONFIGURATION_OPTIONS: Comma-separated list of GDAL configuration options, for the specific request: key=value. The default value is ''.
  • IGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for inserting into a table with a primary key, only used when not in upsert mode (upsert mode is disabled when update_on_existing_pk is false). If set to true, any record being inserted that is rejected for having primary key values that match those of an existing table record will be ignored with no error generated. If false, the rejection of any record for having primary key values matching an existing record will result in an error being reported, as determined by error_handling. If the specified table does not have a primary key or if upsert mode is in effect (update_on_existing_pk is true), then this option has no effect. Supported values:
    • TRUE: Ignore new records whose primary key values collide with those of existing records
    • FALSE: Treat as errors any new records whose primary key values collide with those of existing records
    The default value is FALSE.
  • INGESTION_MODE: Whether to do a full load, dry run, or perform a type inference on the source data. Supported values:
    • FULL: Run a type inference on the source data (if needed) and ingest
    • DRY_RUN: Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of error_handling.
    • TYPE_INFERENCE_ONLY: Infer the type of the source data and return, without ingesting any data. The inferred type is returned in the response.
    The default value is FULL.
  • LAYER: Optional: geo files layer(s) name(s): comma separated. The default value is ''.
  • LOADING_MODE: Scheme for distributing the extraction and loading of data from the source data file(s). This option applies only when loading files that are local to the database Supported values:
    • HEAD: The head node loads all data. All files must be available to the head node.
    • DISTRIBUTED_SHARED: The head node coordinates loading data by worker processes across all nodes from shared files available to all workers. NOTE: Instead of existing on a shared source, the files can be duplicated on a source local to each host to improve performance, though the files must appear as the same data set from the perspective of all hosts performing the load.
    • DISTRIBUTED_LOCAL: A single worker process on each node loads all files that are available to it. This option works best when each worker loads files from its own file system, to maximize performance. In order to avoid data duplication, either each worker performing the load needs to have visibility to a set of files unique to it (no file is visible to more than one node) or the target table needs to have a primary key (which will allow the worker to automatically deduplicate data). NOTE: If the target table doesn't exist, the table structure will be determined by the head node. If the head node has no files local to it, it will be unable to determine the structure and the request will fail. If the head node is configured to have no worker processes, no data strictly accessible to the head node will be loaded.
    The default value is HEAD.
  • LOCAL_TIME_OFFSET: For Avro local timestamp columns
  • MAX_RECORDS_TO_LOAD: Limit the number of records to load in this request: If this number is larger than a batch_size, then the number of records loaded will be limited to the next whole number of batch_size (per working thread). The default value is ''.
  • NUM_TASKS_PER_RANK: Optional: number of tasks for reading file per rank. Default will be external_file_reader_num_tasks
  • POLL_INTERVAL: If subscribe is true, the number of seconds between attempts to load external files into the table. If zero, polling will be continuous as long as data is found. If no data is found, the interval will steadily increase to a maximum of 60 seconds.
  • PRIMARY_KEYS: Optional: comma separated list of column names, to set as primary keys, when not specified in the type. The default value is ''.
  • SCHEMA_REGISTRY_SCHEMA_ID:
  • SCHEMA_REGISTRY_SCHEMA_NAME:
  • SCHEMA_REGISTRY_SCHEMA_VERSION:
  • SHARD_KEYS: Optional: comma separated list of column names, to set as shard keys, when not specified in the type. The default value is ''.
  • SKIP_LINES: Skip this number of lines from the beginning of the file.
  • SUBSCRIBE: Continuously poll the data source to check for new data and load it into the table. Supported values: TRUE, FALSE. The default value is FALSE.
  • TABLE_INSERT_MODE: Insertion scheme to use when inserting records from multiple files: if set to TABLE_PER_FILE, the records from each file are inserted into a new table; otherwise, all records are inserted into a single table. Currently supported only for shapefiles. Supported values: SINGLE, TABLE_PER_FILE. The default value is SINGLE.
  • TEXT_COMMENT_STRING: Specifies the character string that should be interpreted as a comment line prefix in the source data. All lines in the data starting with the provided string are ignored. For delimited_text file_type only. The default value is '#'.
  • TEXT_DELIMITER: Specifies the character delimiting field values in the source data and field names in the header (if present). For delimited_text file_type only. The default value is ','.
  • TEXT_ESCAPE_CHARACTER: Specifies the character that is used to escape other characters in the source data. An 'a', 'b', 'f', 'n', 'r', 't', or 'v' preceded by an escape character will be interpreted as the ASCII bell, backspace, form feed, line feed, carriage return, horizontal tab, & vertical tab, respectively. For example, the escape character followed by an 'n' will be interpreted as a newline within a field value. The escape character can also be used to escape the quoting character, and will be treated as an escape character whether it is within a quoted field value or not. For delimited_text file_type only.
  • TEXT_HAS_HEADER: Indicates whether the source data contains a header row. For delimited_text file_type only. Supported values: TRUE, FALSE. The default value is TRUE.
  • TEXT_HEADER_PROPERTY_DELIMITER: Specifies the delimiter for column properties in the header row (if present). Cannot be set to same value as text_delimiter. For delimited_text file_type only. The default value is '|'.
  • TEXT_NULL_STRING: Specifies the character string that should be interpreted as a null value in the source data. For delimited_text file_type only. The default value is '\N'.
  • TEXT_QUOTE_CHARACTER: Specifies the character that should be interpreted as a field value quoting character in the source data. The character must appear at beginning and end of field value to take effect. Delimiters within quoted fields are treated as literals and not delimiters. Within a quoted field, two consecutive quote characters will be interpreted as a single literal quote character, effectively escaping it. To not have a quote character, specify an empty string. For delimited_text file_type only. The default value is '"'.
  • TEXT_SEARCH_COLUMNS: Add the 'text_search' property to internally inferred string columns. Comma-separated list of column names or '*' for all columns. To add the text_search property only to string columns of a minimum size, also set the option 'text_search_min_column_length'
  • TEXT_SEARCH_MIN_COLUMN_LENGTH: Set the minimum column size for applying the 'text_search' property. Used only when 'text_search_columns' has a value.
  • TRUNCATE_STRINGS: If set to true, truncate string values that are longer than the column's type size. Supported values: TRUE, FALSE. The default value is FALSE.
  • TRUNCATE_TABLE: If set to true, truncates the target table prior to loading the file(s). Supported values: TRUE, FALSE. The default value is FALSE.
  • TYPE_INFERENCE_MODE: Optimize type inference for either accuracy or speed. Supported values:
    • ACCURACY: Scans data to get exactly-typed & sized columns for all data scanned.
    • SPEED: Scans data and picks the widest possible column types so that 'all' values will fit with minimum data scanned
    The default value is SPEED.
  • UPDATE_ON_EXISTING_PK: Specifies the record collision policy for inserting into a table with a primary key. If set to true, any existing table record with primary key values that match those of a record being inserted will be replaced by that new record (the new data will be "upserted"). If set to false, any existing table record with primary key values that match those of a record being inserted will remain unchanged, while the new record will be rejected and the error handled as determined by ignore_existing_pk & error_handling. If the specified table does not have a primary key, then this option has no effect. Supported values:
    • TRUE: Upsert new records when primary keys match existing records
    • FALSE: Reject new records when primary keys match existing records
    The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 20378 of file KineticaFunctions.cs.
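
Example: a minimal sketch exercising the ingestion_mode, error_handling, and loading_mode options above against the file-based ingest endpoint. The insertRecordsFromFiles overload shown, the connection URL, and the table/file names are illustrative assumptions; option keys are passed as their lowercase string values (the corresponding Request.Options constants may be used instead):

using System.Collections.Generic;
using kinetica;

// Assumed connection URL and table/file names; adjust for your environment.
Kinetica db = new Kinetica("http://localhost:9191");

var options = new Dictionary<string, string>
{
    { "ingestion_mode", "dry_run" },      // count valid records without loading any data
    { "error_handling", "permissive" },   // null-fill or skip malformed records
    { "loading_mode", "head" }            // head node loads all data
};

// Assumed overload: (table_name, filepaths, modify_columns, create_table_options, options)
var response = db.insertRecordsFromFiles(
    "example_schema.example_table",
    new List<string> { "data/products.csv" },
    null,   // modify_columns
    null,   // create_table_options
    options );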

InsertRecordsFromQueryResponse kinetica.Kinetica.insertRecordsFromQuery ( InsertRecordsFromQueryRequest  request_)
inline

Computes remote query result and inserts the result data into a new or existing table

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 20404 of file KineticaFunctions.cs.

InsertRecordsFromQueryResponse kinetica.Kinetica.insertRecordsFromQuery ( string  table_name,
string  remote_query,
IDictionary< string, IDictionary< string, string >>  modify_columns = null,
IDictionary< string, string >  create_table_options = null,
IDictionary< string, string >  options = null 
)
inline

Computes remote query result and inserts the result data into a new or existing table

Parameters
table_nameName of the table into which the data will be inserted, in [schema_name.]table_name format, using standard name resolution rules. If the table does not exist, the table will be created using either an existing type_id or the type inferred from the remote query, and the new table name will have to meet standard table naming criteria.
remote_queryQuery for which result data needs to be imported
modify_columnsNot implemented yet. The default value is an empty Dictionary.
create_table_optionsOptions used when creating the target table. The default value is an empty Dictionary.
optionsOptional parameters.
  • BAD_RECORD_TABLE_NAME: Optional name of a table to which records that were rejected are written. The bad-record table has the following columns: line_number (long), line_rejected (string), error_message (string). When error_handling is ABORT, the bad-record table is not populated.
  • BAD_RECORD_TABLE_LIMIT: A positive integer indicating the maximum number of records that can be written to the bad-record table. The default value is 10000.
  • BATCH_SIZE: Number of records per batch when inserting data.
  • DATASOURCE_NAME: Name of an existing external data source from which the table will be loaded
  • ERROR_HANDLING: Specifies how errors should be handled upon insertion. Supported values:
    • PERMISSIVE: Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped.
    • IGNORE_BAD_RECORDS: Malformed records are skipped.
    • ABORT: Stops current insertion and aborts entire operation when an error is encountered. Primary key collisions are considered abortable errors in this mode.
    The default value is ABORT.
  • IGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for inserting into a table with a primary key, only used when not in upsert mode (upsert mode is disabled when update_on_existing_pk is false). If set to true, any record being inserted that is rejected for having primary key values that match those of an existing table record will be ignored with no error generated. If false, the rejection of any record for having primary key values matching an existing record will result in an error being reported, as determined by error_handling. If the specified table does not have a primary key or if upsert mode is in effect (update_on_existing_pk is true), then this option has no effect. Supported values:
    • TRUE: Ignore new records whose primary key values collide with those of existing records
    • FALSE: Treat as errors any new records whose primary key values collide with those of existing records
    The default value is FALSE.
  • INGESTION_MODE: Whether to do a full load, dry run, or perform a type inference on the source data. Supported values:
    • FULL: Run a type inference on the source data (if needed) and ingest
    • DRY_RUN: Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of error_handling.
    • TYPE_INFERENCE_ONLY: Infer the type of the source data and return, without ingesting any data. The inferred type is returned in the response.
    The default value is FULL.
  • JDBC_FETCH_SIZE: The JDBC fetch size, which determines how many rows to fetch per round trip.
  • JDBC_SESSION_INIT_STATEMENT: Executes the given statement in each JDBC session before performing the actual load. The default value is ''.
  • NUM_SPLITS_PER_RANK: Optional: number of splits for reading data per rank. Default will be external_file_reader_num_tasks. The default value is ''.
  • NUM_TASKS_PER_RANK: Optional: number of tasks for reading data per rank. Default will be external_file_reader_num_tasks
  • PRIMARY_KEYS: Optional: comma separated list of column names, to set as primary keys, when not specified in the type. The default value is ''.
  • SHARD_KEYS: Optional: comma separated list of column names, to set as shard keys, when not specified in the type. The default value is ''.
  • SUBSCRIBE: Continuously poll the data source to check for new data and load it into the table. Supported values: TRUE, FALSE. The default value is FALSE.
  • TRUNCATE_TABLE: If set to true, truncates the target table prior to loading the data. Supported values: TRUE, FALSE. The default value is FALSE.
  • REMOTE_QUERY: Remote SQL query from which data will be sourced
  • REMOTE_QUERY_ORDER_BY: Name of column to be used for splitting the query into multiple sub-queries using ordering of given column. The default value is ''.
  • REMOTE_QUERY_FILTER_COLUMN: Name of column to be used for splitting the query into multiple sub-queries using the data distribution of given column. The default value is ''.
  • REMOTE_QUERY_INCREASING_COLUMN: Column on subscribed remote query result that will increase for new records (e.g., TIMESTAMP). The default value is ''.
  • REMOTE_QUERY_PARTITION_COLUMN: Alias name for remote_query_filter_column. The default value is ''.
  • TRUNCATE_STRINGS: If set to true, truncate string values that are longer than the column's type size. Supported values: TRUE, FALSE. The default value is FALSE.
  • UPDATE_ON_EXISTING_PK: Specifies the record collision policy for inserting into a table with a primary key. If set to true, any existing table record with primary key values that match those of a record being inserted will be replaced by that new record (the new data will be "upserted"). If set to false, any existing table record with primary key values that match those of a record being inserted will remain unchanged, while the new record will be rejected and the error handled as determined by ignore_existing_pk & error_handling. If the specified table does not have a primary key, then this option has no effect. Supported values:
    • TRUE: Upsert new records when primary keys match existing records
    • FALSE: Reject new records when primary keys match existing records
    The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 20953 of file KineticaFunctions.cs.
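
Example: a minimal sketch of the string overload above (connection URL, data source, table names, and the query are hypothetical; option keys are passed as their lowercase string values):

using System.Collections.Generic;
using kinetica;

Kinetica db = new Kinetica("http://localhost:9191");

var options = new Dictionary<string, string>
{
    { "datasource_name", "jdbc_source" },        // hypothetical external data source
    { "remote_query_order_by", "order_id" },     // split the query into ordered sub-queries
    { "error_handling", "ignore_bad_records" }   // skip malformed records
};

InsertRecordsFromQueryResponse response = db.insertRecordsFromQuery(
    "example_schema.orders_copy",
    "SELECT * FROM orders WHERE order_date >= '2023-01-01'",
    null,   // modify_columns (not implemented yet)
    null,   // create_table_options
    options );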

InsertRecordsRandomResponse kinetica.Kinetica.insertRecordsRandom ( InsertRecordsRandomRequest  request_)
inline

Generates a specified number of random records and adds them to the given table.

There is an optional parameter that allows the user to customize the ranges of the column values. It also allows the user to specify linear profiles for some or all columns in which case linear values are generated rather than random ones. Only individual tables are supported for this operation.
This operation is synchronous, meaning that a response will not be returned until all random records are fully available.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 20987 of file KineticaFunctions.cs.

InsertRecordsRandomResponse kinetica.Kinetica.insertRecordsRandom ( string  table_name,
long  count,
IDictionary< string, IDictionary< string, double >>  options = null 
)
inline

Generates a specified number of random records and adds them to the given table.

There is an optional parameter that allows the user to customize the ranges of the column values. It also allows the user to specify linear profiles for some or all columns in which case linear values are generated rather than random ones. Only individual tables are supported for this operation.
This operation is synchronous, meaning that a response will not be returned until all random records are fully available.

Parameters
table_nameTable to which random records will be added, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table, not a view.
countNumber of records to generate.
optionsOptional parameter to pass in specifications for the randomness of the values. This map is different from the options parameter of most other endpoints in that it is a map of string to map of string to doubles, while most others are maps of string to string. In this map, the top-level keys represent which column's parameters are being specified, while the internal keys represent which parameter is being specified. These parameters take on different meanings depending on the type of the column. Below follows a more detailed description of the map:
  • SEED: If provided, the internal random number generator will be initialized with the given value. The minimum is 0. This allows for the same set of random numbers to be generated across invocations of this endpoint in case the user wants to repeat the test. Since options is a map of maps, an internal map is needed to provide the seed value. For example, to pass 100 as the seed value through this parameter, use: 'options' = {'seed': { 'value': 100 } }
    • VALUE: The seed value to use
  • ALL: This key indicates that the specifications relayed in the internal map are to be applied to all columns of the records.
    • MIN: For numerical columns, the minimum of the generated values is set to this value. Default is -99999. For point, shape, and track columns, the min for numeric 'x' and 'y' columns needs to be within [-180, 180] and [-90, 90], respectively. The default minimum possible values for these columns in such cases are -180.0 and -90.0. For the 'TIMESTAMP' column, the default minimum corresponds to Jan 1, 2010. For string columns, the minimum length of the randomly generated strings is set to this value (default is 0). If both minimum and maximum are provided, minimum must be less than or equal to max. Value needs to be within [0, 200]. If the min is outside the accepted ranges for string columns and the 'x' and 'y' columns for point/shape/track, then those parameters will not be set; however, an error will not be thrown in such a case. It is the responsibility of the user to use the all parameter judiciously.
    • MAX: For numerical columns, the maximum of the generated values is set to this value. Default is 99999. For point, shape, and track columns, the max for numeric 'x' and 'y' columns needs to be within [-180, 180] and [-90, 90], respectively. The default maximum possible values for these columns in such cases are 180.0 and 90.0. For string columns, the maximum length of the randomly generated strings is set to this value (default is 200). If both minimum and maximum are provided, max must be greater than or equal to min. Value needs to be within [0, 200]. If the max is outside the accepted ranges for string columns and the 'x' and 'y' columns for point/shape/track, then those parameters will not be set; however, an error will not be thrown in such a case. It is the responsibility of the user to use the all parameter judiciously.
    • INTERVAL: If specified, generate values for all columns evenly spaced with the given interval value. If a max value is specified for a given column, the data is randomly generated between min and max and decimated down to the interval. If no max is provided, the data is linearly generated starting at the minimum value (instead of generating random data). For non-decimated string-type columns, the interval value is ignored; instead, the values are generated following the pattern: 'attrname_creationIndex#', i.e. the column name suffixed with an underscore and a running counter (starting at 0). For string types with limited size (e.g. char4), the prefix is dropped. No nulls will be generated for nullable columns.
    • NULL_PERCENTAGE: If specified, then generate the given percentage of the count as nulls for all nullable columns. This option will be ignored for non-nullable columns. The value must be within the range [0, 1.0]. The default value is 5% (0.05).
    • CARDINALITY: If specified, limit the randomly generated values to a fixed set. Not allowed on a column with interval specified, and is not applicable to WKT or Track-specific columns. The value must be greater than 0. This option is disabled by default.
  • ATTR_NAME: Use the desired column name in place of attr_name, and set the following parameters for the column specified. This overrides any parameter set by all.
    • MIN: For numerical columns, the minimum of the generated values is set to this value. Default is -99999. For point, shape, and track columns, the min for numeric 'x' and 'y' columns needs to be within [-180, 180] and [-90, 90], respectively. The default minimum possible values for these columns in such cases are -180.0 and -90.0. For the 'TIMESTAMP' column, the default minimum corresponds to Jan 1, 2010. For string columns, the minimum length of the randomly generated strings is set to this value (default is 0). If both minimum and maximum are provided, minimum must be less than or equal to max. Value needs to be within [0, 200]. If the min is outside the accepted ranges for string columns and the 'x' and 'y' columns for point/shape/track, then those parameters will not be set; however, an error will not be thrown in such a case. It is the responsibility of the user to use the all parameter judiciously.
    • MAX: For numerical columns, the maximum of the generated values is set to this value. Default is 99999. For point, shape, and track columns, the max for numeric 'x' and 'y' columns needs to be within [-180, 180] and [-90, 90], respectively. The default maximum possible values for these columns in such cases are 180.0 and 90.0. For string columns, the maximum length of the randomly generated strings is set to this value (default is 200). If both minimum and maximum are provided, max must be greater than or equal to min. Value needs to be within [0, 200]. If the max is outside the accepted ranges for string columns and the 'x' and 'y' columns for point/shape/track, then those parameters will not be set; however, an error will not be thrown in such a case. It is the responsibility of the user to use the all parameter judiciously.
    • INTERVAL: If specified, generate values for all columns evenly spaced with the given interval value. If a max value is specified for a given column, the data is randomly generated between min and max and decimated down to the interval. If no max is provided, the data is linearly generated starting at the minimum value (instead of generating random data). For non-decimated string-type columns, the interval value is ignored; instead, the values are generated following the pattern: 'attrname_creationIndex#', i.e. the column name suffixed with an underscore and a running counter (starting at 0). For string types with limited size (e.g. char4), the prefix is dropped. No nulls will be generated for nullable columns.
    • NULL_PERCENTAGE: If specified and if this column is nullable, then generate the given percentage of the count as nulls. This option will result in an error if the column is not nullable. The value must be within the range [0, 1.0]. The default value is 5% (0.05).
    • CARDINALITY: If specified, limit the randomly generated values to a fixed set. Not allowed on a column with interval specified, and is not applicable to WKT or Track-specific columns. The value must be greater than 0. This option is disabled by default.
  • TRACK_LENGTH: This key-map pair is only valid for track data sets (an error is thrown otherwise). No nulls will be generated for nullable columns.
    • MIN: Minimum possible length for generated series; default is 100 records per series. Must be an integral value within the range [1, 500]. If both min and max are specified, min must be less than or equal to max.
    • MAX: Maximum possible length for generated series; default is 500 records per series. Must be an integral value within the range [1, 500]. If both min and max are specified, max must be greater than or equal to min.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 21244 of file KineticaFunctions.cs.
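
Example: a minimal sketch of the nested options map described above (connection URL, table name, and the 'price' column are hypothetical):

using System.Collections.Generic;
using kinetica;

Kinetica db = new Kinetica("http://localhost:9191");

// Top-level keys select a column (or 'seed'/'all'/'track_length');
// inner keys select the parameter being set for it.
var options = new Dictionary<string, IDictionary<string, double>>
{
    { "seed", new Dictionary<string, double> { { "value", 100 } } },
    { "all", new Dictionary<string, double>
        {
            { "min", 0 },
            { "max", 1000 },
            { "null_percentage", 0.1 }
        } },
    // Per-column settings override the 'all' settings
    { "price", new Dictionary<string, double> { { "min", 1 }, { "max", 99.99 } } }
};

InsertRecordsRandomResponse response =
    db.insertRecordsRandom("example_schema.test_data", 10000, options);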

InsertRecordsResponse kinetica.Kinetica.insertRecordsRaw ( RawInsertRecordsRequest  request_)
inline

Adds multiple records to the specified table.

The operation is synchronous, meaning that a response will not be returned until all the records are fully inserted and available. The response payload provides the counts of the number of records actually inserted and/or updated, and can provide the unique identifier of each added record.
The options parameter can be used to customize this function's behavior.
The update_on_existing_pk option specifies the record collision policy for inserting into a table with a primary key, but is ignored if no primary key exists.
The return_record_ids option indicates that the database should return the unique identifiers of inserted records.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 17976 of file KineticaFunctions.cs.

InsertSymbolResponse kinetica.Kinetica.insertSymbol ( InsertSymbolRequest  request_)
inline

Adds a symbol or icon (i.e. an image) to represent data points when data is rendered visually.

Users must provide the symbol identifier (string), a format (currently supported: 'svg' and 'svg_path'), the data for the symbol, and any additional optional parameters (e.g. color). To have a symbol used for rendering, create a table with a string column named 'SYMBOLCODE' (along with 'x' or 'y', for example). Then, when the table is rendered (via WMS), if the 'dosymbology' parameter is 'true', the value of the 'SYMBOLCODE' column is used to pick the symbol displayed for each point.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 21271 of file KineticaFunctions.cs.

InsertSymbolResponse kinetica.Kinetica.insertSymbol ( string  symbol_id,
string  symbol_format,
byte[]  symbol_data,
IDictionary< string, string >  options = null 
)
inline

Adds a symbol or icon (i.e. an image) to represent data points when data is rendered visually.

Users must provide the symbol identifier (string), a format (currently supported: 'svg' and 'svg_path'), the data for the symbol, and any additional optional parameters (e.g. color). To have a symbol used for rendering, create a table with a string column named 'SYMBOLCODE' (along with 'x' or 'y', for example). Then, when the table is rendered (via WMS), if the 'dosymbology' parameter is 'true', the value of the 'SYMBOLCODE' column is used to pick the symbol displayed for each point.

Parameters
symbol_idThe id of the symbol being added. This is the same id that should be in the 'SYMBOLCODE' column for objects using this symbol
symbol_formatSpecifies the symbol format. Must be either 'svg' or 'svg_path'. Supported values: SVG, SVG_PATH.
symbol_dataThe actual symbol data. If symbol_format is 'svg', this should be the raw bytes representing an svg file. If symbol_format is 'svg_path', this should be an svg path string, for example: 'M25.979,12.896,5.979,12.896,5.979,19.562,25.979,19.562z'
optionsOptional parameters.
  • COLOR: If symbol_format is 'svg', this is ignored. If symbol_format is 'svg_path', this option specifies the color (in RRGGBB hex format) of the path. For example, to have the path rendered in red, use 'FF0000'. If 'color' is not provided, then '00FF00' (i.e. green) is used by default.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 21332 of file KineticaFunctions.cs.
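
Example: a minimal sketch of adding an 'svg_path' symbol (connection URL and symbol ID are hypothetical; the path string is the one shown above):

using System.Collections.Generic;
using System.Text;
using kinetica;

Kinetica db = new Kinetica("http://localhost:9191");

// For the 'svg_path' format, the symbol data is an SVG path string encoded as bytes
string svgPath = "M25.979,12.896,5.979,12.896,5.979,19.562,25.979,19.562z";

InsertSymbolResponse response = db.insertSymbol(
    "truck_icon",                       // matches values in the table's SYMBOLCODE column
    "svg_path",
    Encoding.UTF8.GetBytes(svgPath),
    new Dictionary<string, string> { { "color", "FF0000" } } );   // render the path in red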

KillProcResponse kinetica.Kinetica.killProc ( KillProcRequest  request_)
inline

Kills a running proc instance.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 21350 of file KineticaFunctions.cs.

KillProcResponse kinetica.Kinetica.killProc ( string  run_id = "",
IDictionary< string, string >  options = null 
)
inline

Kills a running proc instance.

Parameters
run_idThe run ID of a running proc instance. If a proc with a matching run ID is not found or the proc instance has already completed, no procs will be killed. If not specified, all running proc instances will be killed. The default value is ''.
optionsOptional parameters.
  • RUN_TAG: If run_id is specified, kill the proc instance that has a matching run ID and a matching run tag that was provided to /execute/proc. If run_id is not specified, kill the proc instance(s) where a matching run tag was provided to /execute/proc. The default value is ''.
  • CLEAR_EXECUTE_AT_STARTUP: If true, kill and remove the instance of the proc matching the auto-start run ID that was created to run when the database is started. The auto-start run ID was returned from /execute/proc and can be retrieved using /show/proc. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 21404 of file KineticaFunctions.cs.
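
Example: a minimal sketch of the two kill modes described above (connection URL, run ID, and run tag are hypothetical):

using System.Collections.Generic;
using kinetica;

Kinetica db = new Kinetica("http://localhost:9191");

// Kill one specific proc instance by its run ID (as returned by /execute/proc)
KillProcResponse byId = db.killProc("12345");

// Or leave run_id empty and kill all instances started with a matching run tag
KillProcResponse byTag = db.killProc(
    "",
    new Dictionary<string, string> { { "run_tag", "nightly_etl" } } );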

LockTableResponse kinetica.Kinetica.lockTable ( LockTableRequest  request_)
inline

Manages global access to a table's data.

By default a table has a lock_type of read_write, indicating all operations are permitted. A user may request a read_only or a write_only lock, after which only read or write operations, respectively, are permitted on the table until the lock is removed. When lock_type is no_access then no operations are permitted on the table. The lock status can be queried by setting lock_type to status.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 21459 of file KineticaFunctions.cs.

LockTableResponse kinetica.Kinetica.lockTable ( string  table_name,
string  lock_type = LockTableRequest.LockType.STATUS,
IDictionary< string, string >  options = null 
)
inline

Manages global access to a table's data.

By default a table has a lock_type of read_write, indicating all operations are permitted. A user may request a read_only or a write_only lock, after which only read or write operations, respectively, are permitted on the table until the lock is removed. When lock_type is no_access then no operations are permitted on the table. The lock status can be queried by setting lock_type to status.

Parameters
table_nameName of the table to be locked, in [schema_name.]table_name format, using standard name resolution rules. It must be a currently existing table or view.
lock_typeThe type of lock being applied to the table. Setting it to status will return the current lock status of the table without changing it. Supported values: READ_WRITE, WRITE_ONLY, READ_ONLY, NO_ACCESS, STATUS. The default value is STATUS.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 21521 of file KineticaFunctions.cs.
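
Example: a minimal sketch of querying and then changing a table's lock (connection URL and table name are hypothetical; READ_ONLY is assumed to be a LockTableRequest.LockType constant alongside the STATUS constant shown in the signature):

using kinetica;

Kinetica db = new Kinetica("http://localhost:9191");

// The default lock_type, STATUS, reports the current lock without changing it
LockTableResponse status = db.lockTable("example_schema.orders");

// Request a read-only lock; only read operations are permitted until it is removed
LockTableResponse locked = db.lockTable(
    "example_schema.orders",
    LockTableRequest.LockType.READ_ONLY );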

MatchGraphResponse kinetica.Kinetica.matchGraph ( MatchGraphRequest  request_)
inline

Matches a directed route implied by a given set of latitude/longitude points to an existing underlying road network graph using a given solution type.


IMPORTANT: It's highly recommended that you review the Network Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some /match/graph examples before using this endpoint.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 21551 of file KineticaFunctions.cs.

MatchGraphResponse kinetica.Kinetica.matchGraph ( string  graph_name,
IList< string >  sample_points,
string  solve_method = MatchGraphRequest.SolveMethod.MARKOV_CHAIN,
string  solution_table = "",
IDictionary< string, string >  options = null 
)
inline

Matches a directed route implied by a given set of latitude/longitude points to an existing underlying road network graph using a given solution type.


IMPORTANT: It's highly recommended that you review the Network Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some /match/graph examples before using this endpoint.

Parameters
graph_nameName of the underlying geospatial graph resource to match to using sample_points.
sample_pointsSample points used to match to an underlying geospatial graph. Sample points must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with: existing column names, e.g., 'table.column AS SAMPLE_X'; expressions, e.g., 'ST_MAKEPOINT(table.x, table.y) AS SAMPLE_WKTPOINT'; or constant values, e.g., '{1, 2, 10} AS SAMPLE_TRIPID'.
solve_methodThe type of solver to use for graph matching. Supported values:
  • MARKOV_CHAIN: Matches to the graph using the Hidden Markov Model (HMM)-based method, which conducts a range-tree closest-edge search to find the best combinations of possible road segments (num_segments) for each sample point to create the best route. The route is secured one point at a time while looking ahead chain_width number of points, so the prediction is corrected after each point. This solution type is the most accurate but also the most computationally intensive. Related options: num_segments and chain_width.
  • MATCH_OD_PAIRS: Matches to find the most probable path between origin and destination pairs with cost constraints.
  • MATCH_SUPPLY_DEMAND: Matches to optimize scheduling multiple supplies (trucks) with varying sizes to varying demand sites with varying capacities per depot. Related options: partial_loading and max_combinations.
  • MATCH_BATCH_SOLVES: Matches source and destination pairs for the shortest path solves in batch mode.
  • MATCH_LOOPS: Matches closed loops (Eulerian paths) originating and ending at each graph node within min and max hops (levels).
  • MATCH_CHARGING_STATIONS: Matches an optimal path across a number of ev-charging stations between source and target locations.
  • MATCH_SIMILARITY: Matches the intersection set(s) by computing the Jaccard similarity score between node pairs.
  • MATCH_PICKUP_DROPOFF: Matches the pickups and dropoffs by optimizing the total trip costs
  • MATCH_CLUSTERS: Matches the graph nodes with a cluster index using Louvain clustering algorithm
  • MATCH_PATTERN: Matches a pattern in the graph
The default value is MARKOV_CHAIN.
solution_tableThe name of the table used to store the results, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. This table contains a track of geospatial points for the matched portion of the graph, a track ID, and a score value. Also outputs a details table containing a trip ID (that matches the track ID), the latitude/longitude pair, the timestamp the point was recorded at, and an edge ID corresponding to the matched road segment. Must not be an existing table of the same name. The default value is ''.
optionsAdditional parameters
  • GPS_NOISE: GPS noise value (in meters) to remove redundant sample points. Use -1 to disable noise reduction. The default value accounts for 95% of point variation (+ or -5 meters). The default value is '5.0'.
  • NUM_SEGMENTS: Maximum number of potentially matching road segments for each sample point. The default value is '3'.
  • SEARCH_RADIUS: Maximum search radius used when snapping sample points onto potentially matching surrounding segments. The default value corresponds to approximately 100 meters. The default value is '0.001'.
  • CHAIN_WIDTH: For the markov_chain solver only. Length of the sample points lookahead window within the Markov kernel; the larger the number, the more accurate the solution. The default value is '9'.
  • SOURCE: Optional WKT starting point from sample_points for the solver. The default behavior for the endpoint is to use time to determine the starting point. The default value is 'POINT NULL'.
  • DESTINATION: Optional WKT ending point from sample_points for the solver. The default behavior for the endpoint is to use time to determine the destination point. The default value is 'POINT NULL'.
  • PARTIAL_LOADING: For the match_supply_demand solver only. When false (non-default), trucks do not off-load at the demand (store) side if the remainder is less than the store's need. Supported values:
    • TRUE: Partial off-loading at multiple store (demand) locations
    • FALSE: No partial off-loading allowed if supply is less than the store's demand.
    The default value is TRUE.
  • MAX_COMBINATIONS: For the match_supply_demand solver only. This is the cutoff for the number of generated combinations for sequencing the demand locations - can increase this up to 2M. The default value is '10000'.
  • MAX_SUPPLY_COMBINATIONS: For the match_supply_demand solver only. This is the cutoff for the number of generated combinations for sequencing the supply locations if/when 'permute_supplies' is true. The default value is '10000'.
  • LEFT_TURN_PENALTY: This will add an additional weight over the edges labelled as 'left turn' if the 'add_turn' option parameter of the /create/graph was invoked at graph creation. The default value is '0.0'.
  • RIGHT_TURN_PENALTY: This will add an additional weight over the edges labelled as 'right turn' if the 'add_turn' option parameter of the /create/graph was invoked at graph creation. The default value is '0.0'.
  • INTERSECTION_PENALTY: This will add an additional weight over the edges labelled as 'intersection' if the 'add_turn' option parameter of the /create/graph was invoked at graph creation. The default value is '0.0'.
  • SHARP_TURN_PENALTY: This will add an additional weight over the edges labelled as 'sharp turn' or 'u-turn' if the 'add_turn' option parameter of the /create/graph was invoked at graph creation. The default value is '0.0'.
  • AGGREGATED_OUTPUT: For the match_supply_demand solver only. When it is true (default), each record in the output table shows a particular truck's scheduled cumulative round trip path (MULTILINESTRING) and the corresponding aggregated cost. Otherwise, each record shows a single scheduled truck route (LINESTRING) towards a particular demand location (store id) with its corresponding cost. The default value is 'true'.
  • OUTPUT_TRACKS: For the match_supply_demand solver only. When it is true (non-default), the output will be in tracks format for all the round trips of each truck in which the timestamps are populated directly from the edge weights starting from their originating depots. The default value is 'false'.
  • MAX_TRIP_COST: For the match_supply_demand and match_pickup_dropoff solvers only. If this constraint is greater than zero (default) then the trucks/rides will skip travelling from one demand/pick location to another if the cost between them is greater than this number (distance or time). Zero (default) value means no check is performed. The default value is '0.0'.
  • FILTER_FOLDING_PATHS: For the markov_chain solver only. When true (non-default), the paths per sequence combination are checked for folding-over patterns; this check can significantly increase the execution time depending on the chain width and the number of gps samples. Supported values:
    • TRUE: Filter out the folded paths.
    • FALSE: Do not filter out the folded paths
    The default value is FALSE.
  • UNIT_UNLOADING_COST: For the match_supply_demand solver only. The unit cost per load amount to be delivered. If this value is greater than zero (default) then the additional cost of this unit load multiplied by the total dropped load will be added over to the trip cost to the demand location. The default value is '0.0'.
  • MAX_NUM_THREADS: For the markov_chain solver only. If specified (greater than zero), the maximum number of threads will not be greater than the specified value. It can be lower due to the memory and the number cores available. Default value of zero allows the algorithm to set the maximal number of threads within these constraints. The default value is '0'.
  • SERVICE_LIMIT: For the match_supply_demand solver only. If specified (greater than zero), any supply actor's total service cost (distance or time) will be limited by the specified value including multiple rounds (if set). The default value is '0.0'.
  • ENABLE_REUSE: For the match_supply_demand solver only. If specified (true), all supply actors can be scheduled for second rounds from their originating depots. Supported values:
    • TRUE: Allows reusing supply actors (trucks, e.g.) for scheduling again.
    • FALSE: Supply actors are scheduled only once from their depots.
    The default value is FALSE.
  • MAX_STOPS: For the match_supply_demand solver only. If specified (greater than zero), a supply actor (truck) can at most have this many stops (demand locations) in one round trip. Otherwise, it is unlimited. If 'enable_reuse' is on, this condition will be applied separately at each round trip use of the same truck. The default value is '0'.
  • SERVICE_RADIUS: For the match_supply_demand and match_pickup_dropoff solvers only. If specified (greater than zero), it filters the demands/picks outside this radius centered around the supply actor/ride's originating location (distance or time). The default value is '0.0'.
  • PERMUTE_SUPPLIES: For the match_supply_demand solver only. If specified (true), supply side actors are permuted for the demand combinations during msdo optimization - note that this option increases optimization time significantly - use of 'max_combinations' option is recommended to prevent prohibitively long runs Supported values:
    • TRUE: Generates sequences over supply side permutations if total supply is less than twice the total demand
    • FALSE: Permutations are not performed, rather a specific order of supplies based on capacity is computed
    The default value is TRUE.
  • BATCH_TSM_MODE: For the match_supply_demand solver only. When enabled, the number of visits on each demand location by a single salesman in each trip is limited to one (1); otherwise there is no bound. Supported values:
    • TRUE: Sets only one visit per demand location by a salesman (tsm mode)
    • FALSE: No preset limit (usual msdo mode)
    The default value is FALSE.
  • ROUND_TRIP: For the match_supply_demand solver only. When enabled, the supply will have to return back to the origination location. Supported values:
    • TRUE: The optimization is done for trips in round trip manner always returning to originating locations
    • FALSE: Supplies do not have to come back to their originating locations in their routes. The routes are considered finished at the final dropoff.
    The default value is TRUE.
  • NUM_CYCLES: For the match_clusters solver only. Terminates the cluster exchange iterations across 2-step-cycles (outer loop) when quality does not improve during iterations. The default value is '10'.
  • NUM_LOOPS_PER_CYCLE: For the match_clusters solver only. Terminates the cluster exchanges within the first step iterations of a cycle (inner loop) unless convergence is reached. The default value is '10'.
  • NUM_OUTPUT_CLUSTERS: For the match_clusters solver only. Limits the output to the top 'num_output_clusters' clusters based on density. Default value of zero outputs all clusters. The default value is '0'.
  • MAX_NUM_CLUSTERS: For the match_clusters solver only. If set (value greater than zero), it terminates when the number of clusters goes below this number. The default value is '0'.
  • CLUSTER_QUALITY_METRIC: For the match_clusters solver only. The quality metric for Louvain modularity optimization solver. Supported values:
    • GIRVAN: Uses the Newman Girvan quality metric for cluster solver
    • SPECTRAL: Applies recursive spectral bisection (RSB) partitioning solver
    The default value is GIRVAN.
  • RESTRICTED_TYPE: For the match_supply_demand solver only. Optimization is performed by restricting routes labeled by 'MSDO_ODDEVEN_RESTRICTED' only for this supply actor (truck) type. Supported values:
    • ODD: Applies odd/even rule restrictions to odd tagged vehicles.
    • EVEN: Applies odd/even rule restrictions to even tagged vehicles.
    • NONE: Does not apply odd/even rule restrictions to any vehicles.
    The default value is NONE.
  • SERVER_ID: Indicates which graph server(s) to send the request to. Default is to send to the server, amongst those containing the corresponding graph, that has the most computational bandwidth. The default value is ''.
  • INVERSE_SOLVE: For the match_batch_solves solver only. Solves source-destination pairs using inverse shortest path solver. Supported values:
    • TRUE: Solves using inverse shortest path solver.
    • FALSE: Solves using direct shortest path solver.
    The default value is FALSE.
  • MIN_LOOP_LEVEL: For the match_loops solver only. Finds closed loops around each node deducible not less than this minimal hop (level) deep. The default value is '0'.
  • MAX_LOOP_LEVEL: For the match_loops solver only. Finds closed loops around each node deducible not more than this maximal hop (level) deep. The default value is '5'.
  • SEARCH_LIMIT: For the match_loops solver only. Searches within this limit of nodes per vertex to detect loops. The value zero means there is no limit. The default value is '10000'.
  • OUTPUT_BATCH_SIZE: For the match_loops solver only. Uses this value as the batch size of the number of loops in flushing(inserting) to the output table. The default value is '1000'.
  • CHARGING_CAPACITY: For the match_charging_stations solver only. This is the maximum ev-charging capacity of a vehicle (distance in meters or time in seconds depending on the unit of the graph weights). The default value is '300000.0'.
  • CHARGING_CANDIDATES: For the match_charging_stations solver only. Solver searches for this many number of stations closest around each base charging location found by capacity. The default value is '10'.
  • CHARGING_PENALTY: For the match_charging_stations solver only. This is the penalty for full charging. The default value is '30000.0'.
  • MAX_HOPS: For the match_similarity solver only. Searches within this maximum hops for source and target node pairs to compute the Jaccard scores. The default value is '3'.
  • TRAVERSAL_NODE_LIMIT: For the match_similarity solver only. Limits the traversal depth if it reaches this many number of nodes. The default value is '1000'.
  • PAIRED_SIMILARITY: For the match_similarity solver only. If true, computes the Jaccard score between each node pair; otherwise, computes the Jaccard score from the intersection set between the source and target nodes. Supported values: TRUE, FALSE. The default value is TRUE.
  • FORCE_UNDIRECTED: For the match_pattern solver only. If set to true, pattern matching will treat both the pattern and the graph as undirected. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 22251 of file KineticaFunctions.cs.
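
Example: a minimal sketch of a markov_chain match (connection URL, graph name, and the gps_pings table and its columns are hypothetical; SAMPLE_X/SAMPLE_Y/SAMPLE_TIME follow the identifier convention described above):

using System.Collections.Generic;
using kinetica;

Kinetica db = new Kinetica("http://localhost:9191");

// Sample points drawn from a hypothetical gps_pings table
var samplePoints = new List<string>
{
    "gps_pings.x AS SAMPLE_X",
    "gps_pings.y AS SAMPLE_Y",
    "gps_pings.ts AS SAMPLE_TIME"
};

var options = new Dictionary<string, string>
{
    { "gps_noise", "5.0" },      // meters of GPS noise to remove
    { "num_segments", "3" },     // candidate road segments per sample point
    { "chain_width", "9" }       // lookahead window for the Markov kernel
};

MatchGraphResponse response = db.matchGraph(
    "road_network_graph",
    samplePoints,
    MatchGraphRequest.SolveMethod.MARKOV_CHAIN,
    "example_schema.matched_route",   // solution table; must not already exist
    options );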

MergeRecordsResponse kinetica.Kinetica.mergeRecords ( MergeRecordsRequest  request_)
inline

Create a new empty result table (specified by table_name), and insert all records from source tables (specified by source_table_names) based on the field mapping information (specified by field_maps).


For merge records details and examples, see Merge Records. For limitations, see Merge Records Limitations and Cautions.
The field map (specified by field_maps) holds the user-specified maps of target table column names to source table columns. The array of field_maps must match one-to-one with the source_table_names, e.g., there's a map present in field_maps for each table listed in source_table_names.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 22296 of file KineticaFunctions.cs.

MergeRecordsResponse kinetica.Kinetica.mergeRecords ( string  table_name,
IList< string >  source_table_names,
IList< IDictionary< string, string >>  field_maps,
IDictionary< string, string >  options = null 
)
inline

Create a new empty result table (specified by table_name ), and insert all records from source tables (specified by source_table_names ) based on the field mapping information (specified by field_maps ).


For merge records details and examples, see Merge Records. For limitations, see Merge Records Limitations and Cautions.
The field map (specified by field_maps ) holds the user-specified maps of target table column names to source table columns. The array of field_maps must match one-to-one with the source_table_names , e.g., there's a map present in field_maps for each table listed in source_table_names .

Parameters
table_nameThe name of the new result table for the records to be merged into, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. Must NOT be an existing table.
source_table_namesThe list of names of source tables to get the records from, each in [schema_name.]table_name format, using standard name resolution rules. Must be existing table names.
field_mapsContains a list of source/target column mappings, one mapping for each source table listed in source_table_names being merged into the target table specified by table_name. Each mapping contains the target column names (as keys) that the data in the mapped source columns or column expressions (as values) will be merged into. All of the source columns being merged into a given target column must match in type, as that type will determine the type of the new target column.
optionsOptional parameters.
  • CREATE_TEMP_TABLE: If true, a unique temporary table name will be generated in the sys_temp schema and used in place of table_name. If persist is false, then this is always allowed even if the caller does not have permission to create tables. The generated name is returned in qualified_table_name. Supported values: TRUE, FALSE. The default value is FALSE.
  • COLLECTION_NAME: [DEPRECATED–please specify the containing schema for the merged table as part of table_name and use /create/schema to create the schema if non-existent] Name of a schema for the newly created merged table specified by table_name.
  • IS_REPLICATED: Indicates the distribution scheme for the data of the merged table specified in table_name. If true, the table will be replicated. If false, the table will be randomly sharded. Supported values: TRUE, FALSE. The default value is FALSE.
  • TTL: Sets the TTL of the merged table specified in table_name.
  • PERSIST: If true, then the table specified in table_name will be persisted and will not expire unless a ttl is specified. If false, then the table will be an in-memory table and will expire unless a ttl is specified otherwise. Supported values: TRUE, FALSE. The default value is TRUE.
  • CHUNK_SIZE: Indicates the number of records per chunk to be used for the merged table specified in table_name.
  • VIEW_ID: The view this result table is part of. The default value is ''.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 22460 of file KineticaFunctions.cs.
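
Example: a minimal sketch of merging two source tables into a new result table (connection URL, table, and column names are hypothetical); note one field map per source table, keyed by target column name:

using System.Collections.Generic;
using kinetica;

Kinetica db = new Kinetica("http://localhost:9191");

var fieldMaps = new List<IDictionary<string, string>>
{
    // Mapping for example_schema.orders_2022
    new Dictionary<string, string> { { "id", "order_id" }, { "amount", "total" } },
    // Mapping for example_schema.orders_2023; values may be column expressions
    new Dictionary<string, string> { { "id", "order_id" }, { "amount", "subtotal + tax" } }
};

MergeRecordsResponse response = db.mergeRecords(
    "example_schema.orders_merged",   // must NOT be an existing table
    new List<string> { "example_schema.orders_2022", "example_schema.orders_2023" },
    fieldMaps,
    new Dictionary<string, string> { { "persist", "true" } } );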

ModifyGraphResponse kinetica.Kinetica.modifyGraph ( ModifyGraphRequest  request_)
inline

Update an existing graph network using given nodes, edges, weights, restrictions, and options.


IMPORTANT: It's highly recommended that you review the Network Graphs & Solvers concepts documentation, and Graph REST Tutorial before using this endpoint.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 22488 of file KineticaFunctions.cs.

ModifyGraphResponse kinetica.Kinetica.modifyGraph ( string  graph_name,
IList< string >  nodes,
IList< string >  edges,
IList< string >  weights,
IList< string >  restrictions,
IDictionary< string, string >  options = null 
)
inline

Update an existing graph network using given nodes, edges, weights, restrictions, and options.


IMPORTANT: It's highly recommended that you review the Network Graphs & Solvers concepts documentation, and Graph REST Tutorial before using this endpoint.

Parameters
graph_nameName of the graph resource to modify.
nodesNodes with which to update the existing nodes in the graph specified by graph_name. Review Nodes for more information. Nodes must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS NODE_ID', expressions, e.g., 'ST_MAKEPOINT(column1, column2) AS NODE_WKTPOINT', or raw values, e.g., '{9, 10, 11} AS NODE_ID'. If using raw values in an identifier combination, the number of values specified must match across the combination. Identifier combination(s) do not have to match the method used to create the graph, e.g., if column names were specified to create the graph, expressions or raw values could also be used to modify the graph.
edgesEdges with which to update the existing edges in the graph specified by graph_name. Review Edges for more information. Edges must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS EDGE_ID', expressions, e.g., 'SUBSTR(column, 1, 6) AS EDGE_NODE1_NAME', or raw values, e.g., "{'family', 'coworker'} AS EDGE_LABEL". If using raw values in an identifier combination, the number of values specified must match across the combination. Identifier combination(s) do not have to match the method used to create the graph, e.g., if column names were specified to create the graph, expressions or raw values could also be used to modify the graph.
weightsWeights with which to update the existing weights in the graph specified by graph_name. Review Weights for more information. Weights must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS WEIGHTS_EDGE_ID', expressions, e.g., 'ST_LENGTH(wkt) AS WEIGHTS_VALUESPECIFIED', or raw values, e.g., '{4, 15} AS WEIGHTS_VALUESPECIFIED'. If using raw values in an identifier combination, the number of values specified must match across the combination. Identifier combination(s) do not have to match the method used to create the graph, e.g., if column names were specified to create the graph, expressions or raw values could also be used to modify the graph.
restrictionsRestrictions with which to update the existing restrictions in the graph specified by graph_name. Review Restrictions for more information. Restrictions must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS RESTRICTIONS_EDGE_ID', expressions, e.g., 'column/2 AS RESTRICTIONS_VALUECOMPARED', or raw values, e.g., '{0, 0, 0, 1} AS RESTRICTIONS_ONOFFCOMPARED'. If using raw values in an identifier combination, the number of values specified must match across the combination. Identifier combination(s) do not have to match the method used to create the graph, e.g., if column names were specified to create the graph, expressions or raw values could also be used to modify the graph.
optionsOptional parameters.
  • RESTRICTION_THRESHOLD_VALUE: Value-based restriction comparison. Any node or edge with a RESTRICTIONS_VALUECOMPARED value greater than the restriction_threshold_value will not be included in the graph.
  • EXPORT_CREATE_RESULTS: If set to true, returns the graph topology in the response as arrays. Supported values: TRUE, FALSE. The default value is FALSE.
  • ENABLE_GRAPH_DRAW: If set to true, adds an 'EDGE_WKTLINE' column identifier to the specified graph_table so the graph can be viewed via WMS; for social and non-geospatial graphs, the 'EDGE_WKTLINE' column identifier will be populated with spatial coordinates derived from a flattening layout algorithm so the graph can still be viewed. Supported values: TRUE, FALSE. The default value is FALSE.
  • SAVE_PERSIST: If set to true, the graph will be saved in the persist directory (see the config reference for more information). If set to false, the graph will be removed when the graph server is shut down. Supported values: TRUE, FALSE. The default value is FALSE.
  • ADD_TABLE_MONITOR: Adds a table monitor to every table used in the creation of the graph; this table monitor will trigger the graph to update dynamically upon inserts to the source table(s). Note that upon database restart, if save_persist is also set to true, the graph will be fully reconstructed and the table monitors will be reattached. For more details on table monitors, see /create/tablemonitor. Supported values: TRUE, FALSE. The default value is FALSE.
  • GRAPH_TABLE: If specified, the created graph is also created as a table with the given name, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. This table will have the following identifier columns: 'EDGE_ID', 'EDGE_NODE1_ID', 'EDGE_NODE2_ID'. If left blank, no table is created. The default value is ''.
  • REMOVE_LABEL_ONLY: When restrictions on labeled entities are requested, if set to true this will NOT delete the entity but only the label associated with the entity. Otherwise (default), it will delete both the label AND the entity. Supported values: TRUE, FALSE. The default value is FALSE.
  • ADD_TURNS: Adds dummy 'pillowed' edges around intersection nodes where there are more than three edges so that additional weight penalties can be imposed by the solve endpoints (increases the total number of edges). Supported values: TRUE, FALSE. The default value is FALSE.
  • TURN_ANGLE: Value in degrees modifies the thresholds for attributing right, left, sharp turns, and intersections. It is the vertical deviation angle from the incoming edge to the intersection node. The larger the value, the larger the threshold for sharp turns and intersections; the smaller the value, the larger the threshold for right and left turns; 0 < turn_angle < 90. The default value is '60'.
  • USE_RTREE: Use a range tree structure to accelerate and improve the accuracy of snapping, especially to edges. Supported values: TRUE, FALSE. The default value is TRUE.
  • LABEL_DELIMITER: If provided, the label string will be split according to this delimiter and each sub-string will be applied as a separate label onto the specified edge. The default value is ''.
  • ALLOW_MULTIPLE_EDGES: Multigraph choice; allows multiple edges with the same node pairs if set to true; otherwise, new edges with the same node pairs as existing edges will not be inserted. Supported values: TRUE, FALSE. The default value is TRUE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 22804 of file KineticaFunctions.cs.
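
Example: a minimal sketch of adding edges and their weights to an existing graph (connection URL, graph name, and the road_updates table are hypothetical; the identifier aliases are the ones described above):

using System.Collections.Generic;
using kinetica;

Kinetica db = new Kinetica("http://localhost:9191");

ModifyGraphResponse response = db.modifyGraph(
    "road_network_graph",
    new List<string>(),                                    // nodes: no changes
    new List<string> { "road_updates.id AS EDGE_ID" },     // edges to add/update
    new List<string>
    {
        "road_updates.id AS WEIGHTS_EDGE_ID",
        "road_updates.travel_time AS WEIGHTS_VALUESPECIFIED"
    },
    new List<string>(),                                    // restrictions: no changes
    new Dictionary<string, string> { { "save_persist", "true" } } );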

QueryGraphResponse kinetica.Kinetica.queryGraph ( QueryGraphRequest  request_)
inline

Employs a topological query on a network graph generated a-priori by Kinetica.createGraph(string,bool,IList{string},IList{string},IList{string},IList{string},IDictionary{string, string}) and returns a list of adjacent edge(s) or node(s), also known as an adjacency list, depending on what's been provided to the endpoint; providing edges will return nodes and providing nodes will return edges.


To determine the node(s) or edge(s) adjacent to a value from a given column, provide a list of values to queries. This field can be populated with column values from any table as long as the type is supported by the given identifier. See Query Identifiers for more information.
To return the adjacency list in the response, leave adjacency_table empty.
IMPORTANT: It's highly recommended that you review the Network Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some /match/graph examples before using this endpoint.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 22861 of file KineticaFunctions.cs.

QueryGraphResponse kinetica.Kinetica.queryGraph ( string  graph_name,
IList< string >  queries,
IList< string >  restrictions = null,
string  adjacency_table = "",
int  rings = 1,
IDictionary< string, string >  options = null 
)
inline

Employs a topological query on a network graph generated a-priori by Kinetica.createGraph(string,bool,IList{string},IList{string},IList{string},IList{string},IDictionary{string, string}) and returns a list of adjacent edge(s) or node(s), also known as an adjacency list, depending on what's been provided to the endpoint; providing edges will return nodes and providing nodes will return edges.


To determine the node(s) or edge(s) adjacent to a value from a given column, provide a list of values to queries . This field can be populated with column values from any table as long as the type is supported by the given identifier. See Query Identifiers for more information.
To return the adjacency list in the response, leave adjacency_table empty.
IMPORTANT: It's highly recommended that you review the Network Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some /match/graph examples before using this endpoint.

Parameters
graph_nameName of the graph resource to query.
queriesNodes or edges to be queried specified using query identifiers. Identifiers can be used with existing column names, e.g., 'table.column AS QUERY_NODE_ID', raw values, e.g., '{0, 2} AS QUERY_NODE_ID', or expressions, e.g., 'ST_MAKEPOINT(table.x, table.y) AS QUERY_NODE_WKTPOINT'. Multiple values can be provided as long as the same identifier is used for all values. If using raw values in an identifier combination, the number of values specified must match across the combination.
restrictionsAdditional restrictions to apply to the nodes/edges of an existing graph. Restrictions must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS RESTRICTIONS_EDGE_ID', expressions, e.g., 'column/2 AS RESTRICTIONS_VALUECOMPARED', or raw values, e.g., '{0, 0, 0, 1} AS RESTRICTIONS_ONOFFCOMPARED'. If using raw values in an identifier combination, the number of values specified must match across the combination. The default value is an empty List.
adjacency_tableName of the table to store the resulting adjacencies, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. If left blank, the query results are instead returned in the response. If the 'QUERY_TARGET_NODE_LABEL' query identifier is used in queries, then two additional columns will be available: 'PATH_ID' and 'RING_ID'. See Using Labels for more information. The default value is ''.
ringsSets the number of rings around the node to query for adjacency, with '1' being the edges directly attached to the queried node. Also known as number of hops. For example, if it is set to '2', the edge(s) directly attached to the queried node(s) will be returned; in addition, the edge(s) attached to the node(s) attached to the initial ring of edge(s) surrounding the queried node(s) will be returned. If the value is set to '0', any nodes that meet the criteria in queries and restrictions will be returned. This parameter is only applicable when querying nodes. The default value is 1.
optionsAdditional parameters.
  • FORCE_UNDIRECTED: If set to true, all inbound edges and outbound edges relative to the node will be returned. If set to false, only outbound edges relative to the node will be returned. This parameter is only applicable if the queried graph is directed and when querying nodes. Consult Directed Graphs for more details. Supported values: TRUE, FALSE. The default value is FALSE.
  • LIMIT: When specified (>0), limits the number of query results. The size of the nodes table will be limited by the limit value. The default value is '0'.
  • OUTPUT_WKT_PATH: If true, concatenated WKT line segments will be added as the WKT column of the adjacency table. Supported values: TRUE, FALSE. The default value is FALSE.
  • AND_LABELS: If set to true, the result of the query has entities that satisfy all of the target labels, instead of any. Supported values: TRUE, FALSE. The default value is FALSE.
  • SERVER_ID: Indicates which graph server(s) to send the request to. Default is to send to the server, amongst those containing the corresponding graph, that has the most computational bandwidth.
  • OUTPUT_CHARN_LENGTH: When specified (>0 and <=256), limits the character length on the output tables for string-based nodes. The default length is 64. The default value is '64'.
  • FIND_COMMON_LABELS: If set to true, for many-to-many queries or multi-level traversals, it lists the common labels between the source and target nodes and edge labels in each path. Otherwise (zero rings), it lists all labels of the node(s) queried. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 23074 of file KineticaFunctions.cs.
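
As a usage sketch: the graph name, query value, and connection URL below are all assumptions, not values from this reference:

    using System.Collections.Generic;

    // Query the edges directly attached to node 100, returning the
    // adjacency list in the response (empty adjacency_table).
    Kinetica ksvr = new Kinetica("http://localhost:9191");  // assumed URL
    var queries = new List<string> { "{100} AS QUERY_NODE_ID" };
    QueryGraphResponse response = ksvr.queryGraph(
        "road_network",       // hypothetical graph name
        queries,
        restrictions: null,
        adjacency_table: "",  // keep results in the response
        rings: 1);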

RepartitionGraphResponse kinetica.Kinetica.repartitionGraph ( RepartitionGraphRequest  request_)
inline

Rebalances an existing partitioned graph.


IMPORTANT: It's highly recommended that you review the Network Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some graph examples before using this endpoint.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 23104 of file KineticaFunctions.cs.

RepartitionGraphResponse kinetica.Kinetica.repartitionGraph ( string  graph_name,
IDictionary< string, string >  options = null 
)
inline

Rebalances an existing partitioned graph.


IMPORTANT: It's highly recommended that you review the Network Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some graph examples before using this endpoint.

Parameters
graph_nameName of the graph resource to rebalance.
optionsOptional parameters.
  • NEW_GRAPH_NAME: If a non-empty value is specified, the original graph will be kept (non-default behaviour) and a new balanced graph will be created under this given name. When the value is empty (default), the generated 'balanced' graph will replace the original 'unbalanced' graph under the same graph name. The default value is ''.
  • SOURCE_NODE: The distributed shortest path solve is run from this source node to all the nodes in the graph to create balanced partitions using the iso-distance levels of the solution. The source node is selected by the rebalance algorithm automatically (default case when the value is an empty string). Otherwise, the user specified node is used as the source. The default value is ''.
  • SQL_REQUEST_AVRO_JSON: The default value is ''.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 23163 of file KineticaFunctions.cs.
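
A usage sketch, assuming ksvr is a connected Kinetica instance (as in the earlier sketch) and the graph name is hypothetical:

    // Rebalance 'road_network' in place; with an empty new_graph_name
    // (the default), the balanced graph replaces the original.
    RepartitionGraphResponse response = ksvr.repartitionGraph("road_network");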

RevokePermissionResponse kinetica.Kinetica.revokePermission ( RevokePermissionRequest  request_)
inline

Revokes the specified permission on the specified object from a user or role.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 23250 of file KineticaFunctions.cs.

RevokePermissionResponse kinetica.Kinetica.revokePermission ( string  principal,
string  _object,
string  object_type,
string  permission,
IDictionary< string, string >  options = null 
)
inline

Revokes the specified permission on the specified object from a user or role.

Parameters
principalName of the user or role for which the permission is being revoked. Must be an existing user or role. The default value is ''.
_objectName of the object on which the permission is being revoked. It is recommended to use a fully-qualified name when possible.
object_typeThe type of the object on which the permission is being revoked. Supported values:
permissionPermission being revoked. Supported values:
  • ADMIN: Full read/write and administrative access on the object.
  • CONNECT: Connect access on the given data source or data sink.
  • DELETE: Delete rows from tables.
  • EXECUTE: Ability to Execute the Procedure object.
  • INSERT: Insert access to tables.
  • READ: Ability to read, list and use the object.
  • UPDATE: Update access to the table.
  • USER_ADMIN: Access to administer users and roles that do not have system_admin permission.
  • WRITE: Access to write, change and delete objects.
optionsOptional parameters.
  • COLUMNS: Revoke table security from these columns, comma-separated. The default value is ''.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 23400 of file KineticaFunctions.cs.
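
A hedged sketch, assuming a connected ksvr; the principal, object name, and the lowercase string forms of object_type and permission are assumptions:

    // Revoke INSERT on a table from a role.
    RevokePermissionResponse response = ksvr.revokePermission(
        "analyst_role",   // principal (hypothetical role)
        "sales.orders",   // fully-qualified object name (hypothetical)
        "table",          // object_type, assumed lowercase string form
        "insert");        // permission, assumed lowercase string form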

RevokePermissionCredentialResponse kinetica.Kinetica.revokePermissionCredential ( RevokePermissionCredentialRequest  request_)
inline

Revokes a credential-level permission from a user or role.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 23423 of file KineticaFunctions.cs.

RevokePermissionCredentialResponse kinetica.Kinetica.revokePermissionCredential ( string  name,
string  permission,
string  credential_name,
IDictionary< string, string >  options = null 
)
inline

Revokes a credential-level permission from a user or role.

Parameters
nameName of the user or role from which the permission will be revoked. Must be an existing user or role.
permissionPermission to revoke from the user or role. Supported values:
credential_nameName of the credential on which the permission will be revoked. Must be an existing credential, or an empty string to revoke access on all credentials.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 23465 of file KineticaFunctions.cs.

RevokePermissionDatasourceResponse kinetica.Kinetica.revokePermissionDatasource ( RevokePermissionDatasourceRequest  request_)
inline

Revokes a data source permission from a user or role.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 23488 of file KineticaFunctions.cs.

RevokePermissionDatasourceResponse kinetica.Kinetica.revokePermissionDatasource ( string  name,
string  permission,
string  datasource_name,
IDictionary< string, string >  options = null 
)
inline

Revokes a data source permission from a user or role.

Parameters
nameName of the user or role from which the permission will be revoked. Must be an existing user or role.
permissionPermission to revoke from the user or role Supported values:
  • ADMIN: Admin access on the given data source
  • CONNECT: Connect access on the given data source
datasource_nameName of the data source on which the permission will be revoked. Must be an existing data source, or an empty string to revoke permission from all data sources.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 23528 of file KineticaFunctions.cs.

RevokePermissionDirectoryResponse kinetica.Kinetica.revokePermissionDirectory ( RevokePermissionDirectoryRequest  request_)
inline

Revokes a KiFS directory-level permission from a user or role.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 23551 of file KineticaFunctions.cs.

RevokePermissionDirectoryResponse kinetica.Kinetica.revokePermissionDirectory ( string  name,
string  permission,
string  directory_name,
IDictionary< string, string >  options = null 
)
inline

Revokes a KiFS directory-level permission from a user or role.

Parameters
nameName of the user or role from which the permission will be revoked. Must be an existing user or role.
permissionPermission to revoke from the user or role. Supported values:
  • DIRECTORY_READ: For files in the directory, access to list files, download files, or use files in server side functions
  • DIRECTORY_WRITE: Access to upload files to, or delete files from, the directory. A user or role with write access automatically has read access
directory_nameName of the KiFS directory on which the permission will be revoked
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 23593 of file KineticaFunctions.cs.
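
A sketch, assuming a connected ksvr; the user, directory, and lowercase permission string are hypothetical:

    // Revoke write access on a KiFS directory from a user.
    RevokePermissionDirectoryResponse response = ksvr.revokePermissionDirectory(
        "etl_user",          // hypothetical user
        "directory_write",   // assumed lowercase form of DIRECTORY_WRITE
        "staging");          // hypothetical KiFS directory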

RevokePermissionProcResponse kinetica.Kinetica.revokePermissionProc ( RevokePermissionProcRequest  request_)
inline

Revokes a proc-level permission from a user or role.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 23614 of file KineticaFunctions.cs.

RevokePermissionProcResponse kinetica.Kinetica.revokePermissionProc ( string  name,
string  permission,
string  proc_name,
IDictionary< string, string >  options = null 
)
inline

Revokes a proc-level permission from a user or role.

Parameters
nameName of the user or role from which the permission will be revoked. Must be an existing user or role.
permissionPermission to revoke from the user or role. Supported values:
proc_nameName of the proc on which the permission will be revoked. Must be an existing proc, or an empty string to revoke the permission on all procs.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 23652 of file KineticaFunctions.cs.

RevokePermissionSystemResponse kinetica.Kinetica.revokePermissionSystem ( RevokePermissionSystemRequest  request_)
inline

Revokes a system-level permission from a user or role.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 23673 of file KineticaFunctions.cs.

RevokePermissionSystemResponse kinetica.Kinetica.revokePermissionSystem ( string  name,
string  permission,
IDictionary< string, string >  options = null 
)
inline

Revokes a system-level permission from a user or role.

Parameters
nameName of the user or role from which the permission will be revoked. Must be an existing user or role.
permissionPermission to revoke from the user or role. Supported values:
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 23721 of file KineticaFunctions.cs.

RevokePermissionTableResponse kinetica.Kinetica.revokePermissionTable ( RevokePermissionTableRequest  request_)
inline

Revokes a table-level permission from a user or role.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 23740 of file KineticaFunctions.cs.

RevokePermissionTableResponse kinetica.Kinetica.revokePermissionTable ( string  name,
string  permission,
string  table_name,
IDictionary< string, string >  options = null 
)
inline

Revokes a table-level permission from a user or role.

Parameters
nameName of the user or role from which the permission will be revoked. Must be an existing user or role.
permissionPermission to revoke from the user or role. Supported values:
table_nameName of the table on which the permission will be revoked, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table, view or schema.
optionsOptional parameters.
  • COLUMNS: Revoke table security from these columns, comma-separated. The default value is ''.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 23804 of file KineticaFunctions.cs.

RevokeRoleResponse kinetica.Kinetica.revokeRole ( RevokeRoleRequest  request_)
inline

Revokes membership in a role from a user or role.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 23825 of file KineticaFunctions.cs.

RevokeRoleResponse kinetica.Kinetica.revokeRole ( string  role,
string  member,
IDictionary< string, string >  options = null 
)
inline

Revokes membership in a role from a user or role.

Parameters
roleName of the role in which membership will be revoked. Must be an existing role.
memberName of the user or role whose membership in role will be revoked. Must be an existing user or role.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 23847 of file KineticaFunctions.cs.
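
A sketch, assuming a connected ksvr; both names are hypothetical:

    // Remove user 'jsmith' from role 'analyst_role'.
    RevokeRoleResponse response = ksvr.revokeRole("analyst_role", "jsmith");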

void kinetica.Kinetica.SetKineticaSourceClassToTypeMapping ( Type  objectType,
KineticaType  kineticaType 
)
inline

Saves an object class type to a KineticaType association.

If the class type already exists in the map, replaces the old KineticaType value.

Parameters
objectTypeThe type of the object.
kineticaTypeThe associated KineticaType object.

Definition at line 184 of file Kinetica.cs.
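
A sketch of the association call; MyRecord is a hypothetical record class, and recordType is assumed to be a KineticaType built elsewhere (for example, from the table's type schema):

    // Hypothetical record class matching the table's columns.
    public class MyRecord { public double x; public double y; }

    // Register the class-to-type mapping; 'recordType' is assumed to be a
    // KineticaType previously created for MyRecord.
    ksvr.SetKineticaSourceClassToTypeMapping(typeof(MyRecord), recordType);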

ShowCredentialResponse kinetica.Kinetica.showCredential ( ShowCredentialRequest  request_)
inline

Shows information about a specified credential or all credentials.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 23897 of file KineticaFunctions.cs.

ShowCredentialResponse kinetica.Kinetica.showCredential ( string  credential_name,
IDictionary< string, string >  options = null 
)
inline

Shows information about a specified credential or all credentials.

Parameters
credential_nameName of the credential on which to retrieve information. The name must refer to a currently existing credential. If '*' is specified, information about all credentials will be returned.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 23919 of file KineticaFunctions.cs.

ShowDatasinkResponse kinetica.Kinetica.showDatasink ( ShowDatasinkRequest  request_)
inline

Shows information about a specified data sink or all data sinks.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 23936 of file KineticaFunctions.cs.

ShowDatasinkResponse kinetica.Kinetica.showDatasink ( string  name,
IDictionary< string, string >  options = null 
)
inline

Shows information about a specified data sink or all data sinks.

Parameters
nameName of the data sink for which to retrieve information. The name must refer to a currently existing data sink. If '*' is specified, information about all data sinks will be returned.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 23958 of file KineticaFunctions.cs.

ShowDatasourceResponse kinetica.Kinetica.showDatasource ( ShowDatasourceRequest  request_)
inline

Shows information about a specified data source or all data sources.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 23975 of file KineticaFunctions.cs.

ShowDatasourceResponse kinetica.Kinetica.showDatasource ( string  name,
IDictionary< string, string >  options = null 
)
inline

Shows information about a specified data source or all data sources.

Parameters
nameName of the data source for which to retrieve information. The name must refer to a currently existing data source. If '*' is specified, information about all data sources will be returned.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 23997 of file KineticaFunctions.cs.

ShowDirectoriesResponse kinetica.Kinetica.showDirectories ( ShowDirectoriesRequest  request_)
inline

Shows information about directories in KiFS.

Can be used to show a single directory, or all directories.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 24014 of file KineticaFunctions.cs.

ShowDirectoriesResponse kinetica.Kinetica.showDirectories ( string  directory_name = "",
IDictionary< string, string >  options = null 
)
inline

Shows information about directories in KiFS.

Can be used to show a single directory, or all directories.

Parameters
directory_nameThe KiFS directory name to show. If empty, shows all directories. The default value is ''.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 24034 of file KineticaFunctions.cs.

ShowEnvironmentResponse kinetica.Kinetica.showEnvironment ( ShowEnvironmentRequest  request_)
inline

Shows information about a specified user-defined function (UDF) environment or all environments.

Returns detailed information about existing environments.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 24052 of file KineticaFunctions.cs.

ShowEnvironmentResponse kinetica.Kinetica.showEnvironment ( string  environment_name = "",
IDictionary< string, string >  options = null 
)
inline

Shows information about a specified user-defined function (UDF) environment or all environments.

Returns detailed information about existing environments.

Parameters
environment_nameName of the environment on which to retrieve information. The name must refer to a currently existing environment. If '*' or an empty value is specified, information about all environments will be returned. The default value is ''.
optionsOptional parameters.
  • NO_ERROR_IF_NOT_EXISTS: If true and if the environment specified in environment_name does not exist, no error is returned. If false and if the environment specified in environment_name does not exist, then an error is returned. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 24102 of file KineticaFunctions.cs.

ShowFilesResponse kinetica.Kinetica.showFiles ( ShowFilesRequest  request_)
inline

Shows information about files in KiFS.

Can be used for individual files, or to show all files in a given directory.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 24120 of file KineticaFunctions.cs.

ShowFilesResponse kinetica.Kinetica.showFiles ( IList< string >  paths,
IDictionary< string, string >  options = null 
)
inline

Shows information about files in KiFS.

Can be used for individual files, or to show all files in a given directory.

Parameters
pathsFile paths to show. Each path can be a KiFS directory name, or a full path to a KiFS file. File paths may contain wildcard characters after the KiFS directory delimiter. Accepted wildcard characters are asterisk (*) to represent any string of zero or more characters, and question mark (?) to indicate a single character.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 24145 of file KineticaFunctions.cs.
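
A sketch using a wildcard path, assuming a connected ksvr; the KiFS directory name is hypothetical:

    using System.Collections.Generic;

    // List every CSV file in the 'data' KiFS directory.
    var paths = new List<string> { "data/*.csv" };
    ShowFilesResponse response = ksvr.showFiles(paths);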

ShowGraphResponse kinetica.Kinetica.showGraph ( ShowGraphRequest  request_)
inline

Shows information and characteristics of graphs that exist on the graph server.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 24290 of file KineticaFunctions.cs.

ShowGraphResponse kinetica.Kinetica.showGraph ( string  graph_name = "",
IDictionary< string, string >  options = null 
)
inline

Shows information and characteristics of graphs that exist on the graph server.

Parameters
graph_nameName of the graph on which to retrieve information. If left as the default value, information about all graphs is returned. The default value is ''.
optionsOptional parameters.
  • SHOW_ORIGINAL_REQUEST: If set to true, the request that was originally used to create the graph is also returned as JSON. Supported values: TRUE, FALSE. The default value is TRUE.
  • SERVER_ID: Indicates which graph server(s) to send the request to. The default is to send to all graph servers and get information about each of them.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 24338 of file KineticaFunctions.cs.
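
A sketch that lists all graphs without the original creation request, assuming a connected ksvr; the lowercase option key is assumed from the name above:

    using System.Collections.Generic;

    // Empty graph_name returns information about all graphs.
    var options = new Dictionary<string, string>
    {
        { "show_original_request", "false" }  // assumed lowercase key
    };
    ShowGraphResponse response = ksvr.showGraph("", options);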

ShowProcResponse kinetica.Kinetica.showProc ( ShowProcRequest  request_)
inline

Shows information about a proc.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 24413 of file KineticaFunctions.cs.

ShowProcResponse kinetica.Kinetica.showProc ( string  proc_name = "",
IDictionary< string, string >  options = null 
)
inline

Shows information about a proc.

Parameters
proc_nameName of the proc to show information about. If specified, must be the name of a currently existing proc. If not specified, information about all procs will be returned. The default value is ''.
optionsOptional parameters.
  • INCLUDE_FILES: If set to true, the files that make up the proc will be returned. If set to false, the files will not be returned. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 24455 of file KineticaFunctions.cs.

ShowProcStatusResponse kinetica.Kinetica.showProcStatus ( ShowProcStatusRequest  request_)
inline

Shows the statuses of running or completed proc instances.

Results are grouped by run ID (as returned from Kinetica.executeProc(string,IDictionary{string, string},IDictionary{string, byte[]},IList{string},IDictionary{string, IList{string}},IList{string},IDictionary{string, string})) and data segment ID (each invocation of the proc command on a data segment is assigned a data segment ID).

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 24474 of file KineticaFunctions.cs.

ShowProcStatusResponse kinetica.Kinetica.showProcStatus ( string  run_id = "",
IDictionary< string, string >  options = null 
)
inline

Shows the statuses of running or completed proc instances.

Results are grouped by run ID (as returned from Kinetica.executeProc(string,IDictionary{string, string},IDictionary{string, byte[]},IList{string},IDictionary{string, IList{string}},IList{string},IDictionary{string, string})) and data segment ID (each invocation of the proc command on a data segment is assigned a data segment ID).

Parameters
run_idThe run ID of a specific proc instance for which the status will be returned. If a proc with a matching run ID is not found, the response will be empty. If not specified, the statuses of all executed proc instances will be returned. The default value is ''.
optionsOptional parameters.
  • CLEAR_COMPLETE: If set to true, if a proc instance has completed (either successfully or unsuccessfully) then its status will be cleared and no longer returned in subsequent calls. Supported values: TRUE, FALSE. The default value is FALSE.
  • RUN_TAG: If run_id is specified, return the status for a proc instance that has a matching run ID and a matching run tag that was provided to /execute/proc. If run_id is not specified, return statuses for all proc instances where a matching run tag was provided to /execute/proc. The default value is ''.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 24532 of file KineticaFunctions.cs.
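
A sketch that fetches all proc statuses and clears completed ones, assuming a connected ksvr; the lowercase option key is assumed:

    using System.Collections.Generic;

    // Empty run_id returns statuses for all executed proc instances.
    var options = new Dictionary<string, string> { { "clear_complete", "true" } };
    ShowProcStatusResponse response = ksvr.showProcStatus("", options);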

ShowResourceGroupsResponse kinetica.Kinetica.showResourceGroups ( ShowResourceGroupsRequest  request_)
inline

Requests resource group properties.

Returns detailed information about the requested resource groups.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 24701 of file KineticaFunctions.cs.

ShowResourceGroupsResponse kinetica.Kinetica.showResourceGroups ( IList< string >  names,
IDictionary< string, string >  options = null 
)
inline

Requests resource group properties.

Returns detailed information about the requested resource groups.

Parameters
namesList of names of groups to be shown. A single entry with an empty string returns all groups.
optionsOptional parameters.
  • SHOW_DEFAULT_VALUES: If true, include values of fields that are based on the default resource group. Supported values: TRUE, FALSE. The default value is TRUE.
  • SHOW_DEFAULT_GROUP: If true, include the default and system resource groups in the response. This value defaults to false if an explicit list of group names is provided, and true otherwise. Supported values: TRUE, FALSE. The default value is TRUE.
  • SHOW_TIER_USAGE: If true, include the resource group usage on the worker ranks in the response. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 24781 of file KineticaFunctions.cs.

ShowResourceObjectsResponse kinetica.Kinetica.showResourceObjects ( ShowResourceObjectsRequest  request_)
inline

Returns information about the internal sub-components (tiered objects) which use resources of the system.

The request can either return results from actively used objects (default) or it can be used to query the status of the objects of a given list of tables. Returns detailed information about the requested resource objects.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 24555 of file KineticaFunctions.cs.

ShowResourceObjectsResponse kinetica.Kinetica.showResourceObjects ( IDictionary< string, string >  options = null)
inline

Returns information about the internal sub-components (tiered objects) which use resources of the system.

The request can either return results from actively used objects (default) or it can be used to query the status of the objects of a given list of tables. Returns detailed information about the requested resource objects.

Parameters
optionsOptional parameters.
  • TIERS: Comma-separated list of tiers to query, leave blank for all tiers.
  • EXPRESSION: An expression to filter the returned objects. Expression is limited to the following operators: =,!=,<,<=,>,>=,+,-,*,AND,OR,LIKE. For details see Expressions. To use a more complex expression, query the ki_catalog.ki_tiered_objects table directly.
  • ORDER_BY: Single column to be sorted by as well as the sort direction, e.g., 'size asc'. Supported values:
  • LIMIT: An integer indicating the maximum number of results to be returned, per rank, or (-1) to indicate that the maximum number of results allowed by the server should be returned. The number of records returned will never exceed the server's own limit, defined by the max_get_records_size parameter in the server configuration. The default value is '100'.
  • TABLE_NAMES: Comma-separated list of tables to restrict the results to. Use '*' to show all tables.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 24651 of file KineticaFunctions.cs.
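
A sketch restricting the results to one hypothetical table with a per-rank limit, assuming a connected ksvr; the lowercase option keys are assumed:

    using System.Collections.Generic;

    var options = new Dictionary<string, string>
    {
        { "table_names", "sales.orders" },  // hypothetical table
        { "limit", "10" }                   // at most 10 results per rank
    };
    ShowResourceObjectsResponse response = ksvr.showResourceObjects(options);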

ShowResourceStatisticsResponse kinetica.Kinetica.showResourceStatistics ( ShowResourceStatisticsRequest  request_)
inline

Requests various statistics for storage/memory tiers and resource groups.

Returns statistics on a per-rank basis.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 24667 of file KineticaFunctions.cs.

ShowResourceStatisticsResponse kinetica.Kinetica.showResourceStatistics ( IDictionary< string, string >  options = null)
inline

Requests various statistics for storage/memory tiers and resource groups.

Returns statistics on a per-rank basis.

Parameters
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 24685 of file KineticaFunctions.cs.

ShowSchemaResponse kinetica.Kinetica.showSchema ( ShowSchemaRequest  request_)
inline

Retrieves information about a schema (or all schemas), as specified in schema_name.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 24799 of file KineticaFunctions.cs.

ShowSchemaResponse kinetica.Kinetica.showSchema ( string  schema_name,
IDictionary< string, string >  options = null 
)
inline

Retrieves information about a schema (or all schemas), as specified in schema_name .

Parameters
schema_nameName of the schema for which to retrieve the information. If blank, then info for all schemas is returned.
optionsOptional parameters.
  • NO_ERROR_IF_NOT_EXISTS: If false, an error will be returned if the provided schema_name does not exist. If true, an empty result will be returned if the provided schema_name does not exist. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 24845 of file KineticaFunctions.cs.

ShowSecurityResponse kinetica.Kinetica.showSecurity ( ShowSecurityRequest  request_)
inline

Shows security information relating to users and/or roles.

If the caller is not a system administrator, only information relating to the caller and their roles is returned.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 24862 of file KineticaFunctions.cs.

ShowSecurityResponse kinetica.Kinetica.showSecurity ( IList< string >  names,
IDictionary< string, string >  options = null 
)
inline

Shows security information relating to users and/or roles.

If the caller is not a system administrator, only information relating to the caller and their roles is returned.

Parameters
namesA list of names of users and/or roles about which security information is requested. If none are provided, information about all users and roles will be returned.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 24904 of file KineticaFunctions.cs.

ShowSqlProcResponse kinetica.Kinetica.showSqlProc ( ShowSqlProcRequest  request_)
inline

Shows information about SQL procedures, including the full definition of each requested procedure.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 24920 of file KineticaFunctions.cs.

ShowSqlProcResponse kinetica.Kinetica.showSqlProc ( string  procedure_name = "",
IDictionary< string, string >  options = null 
)
inline

Shows information about SQL procedures, including the full definition of each requested procedure.

Parameters
procedure_nameName of the procedure for which to retrieve the information. If blank, then information about all procedures is returned. The default value is ''.
optionsOptional parameters.
  • NO_ERROR_IF_NOT_EXISTS: If true, no error will be returned if the requested procedure does not exist. If false, an error will be returned if the requested procedure does not exist. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 24962 of file KineticaFunctions.cs.

ShowStatisticsResponse kinetica.Kinetica.showStatistics ( ShowStatisticsRequest  request_)
inline

Retrieves the collected column statistics for the specified table(s).

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 24978 of file KineticaFunctions.cs.

ShowStatisticsResponse kinetica.Kinetica.showStatistics ( IList< string >  table_names,
IDictionary< string, string >  options = null 
)
inline

Retrieves the collected column statistics for the specified table(s).

Parameters
table_namesNames of tables whose metadata will be fetched, each in [schema_name.]table_name format, using standard name resolution rules. All provided tables must exist, or an error is returned.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 25000 of file KineticaFunctions.cs.

ShowSystemPropertiesResponse kinetica.Kinetica.showSystemProperties ( ShowSystemPropertiesRequest  request_)
inline

Returns server configuration and version related information to the caller.

The admin tool uses it to present server related information to the user.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 25017 of file KineticaFunctions.cs.

ShowSystemPropertiesResponse kinetica.Kinetica.showSystemProperties ( IDictionary< string, string >  options = null)
inline

Returns server configuration and version related information to the caller.

The admin tool uses it to present server related information to the user.

Parameters
optionsOptional parameters.
  • PROPERTIES: A list of comma separated names of properties requested. If not specified, all properties will be returned.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 25044 of file KineticaFunctions.cs.

ShowSystemStatusResponse kinetica.Kinetica.showSystemStatus ( ShowSystemStatusRequest  request_)
inline

Provides server configuration and health related status to the caller.

The admin tool uses it to present server related information to the user.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 25060 of file KineticaFunctions.cs.

ShowSystemStatusResponse kinetica.Kinetica.showSystemStatus ( IDictionary< string, string >  options = null)
inline

Provides server configuration and health related status to the caller.

The admin tool uses it to present server related information to the user.

Parameters
optionsOptional parameters, currently unused. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 25078 of file KineticaFunctions.cs.

ShowSystemTimingResponse kinetica.Kinetica.showSystemTiming ( ShowSystemTimingRequest  request_)
inline

Returns the last 100 database requests along with the request timing and internal job id.

The admin tool uses it to present request timing information to the user.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 25094 of file KineticaFunctions.cs.

ShowSystemTimingResponse kinetica.Kinetica.showSystemTiming ( IDictionary< string, string >  options = null)
inline

Returns the last 100 database requests along with the request timing and internal job id.

The admin tool uses it to present request timing information to the user.

Parameters
optionsOptional parameters, currently unused. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 25112 of file KineticaFunctions.cs.

ShowTableResponse kinetica.Kinetica.showTable ( ShowTableRequest  request_)
inline

Retrieves detailed information about a table, view, or schema, specified in table_name.

If the supplied table_name is a schema, the call can return information about either the schema itself or the tables and views it contains. If table_name is empty, information about all schemas will be returned.
If the option get_sizes is set to true, then the number of records in each table is returned, along with the total number of objects across all requested tables.
For a schema, setting the show_children option to false returns only information about the schema itself; setting show_children to true returns a list of tables and views contained in the schema, along with their corresponding detail.
To retrieve a list of every table, view, and schema in the database, set table_name to '*' and show_children to true. When doing this, the returned sizes will not include the sizes of non-base tables (e.g., filters, views, joins, etc.).

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 25160 of file KineticaFunctions.cs.

ShowTableResponse kinetica.Kinetica.showTable ( string  table_name,
IDictionary< string, string >  options = null 
)
inline

Retrieves detailed information about a table, view, or schema, specified in table_name .

If the supplied table_name is a schema the call can return information about either the schema itself or the tables and views it contains. If table_name is empty, information about all schemas will be returned.
If the option get_sizes is set to true, then the number of records in each table is returned, along with the total number of objects across all requested tables.
For a schema, setting the show_children option to false returns only information about the schema itself; setting show_children to true returns a list of tables and views contained in the schema, along with their corresponding detail.
To retrieve a list of every table, view, and schema in the database, set table_name to '*' and show_children to true. When doing this, the returned sizes will not include the sizes of non-base tables (e.g., filters, views, joins, etc.).

Parameters
table_nameName of the table for which to retrieve the information, in [schema_name.]table_name format, using standard name resolution rules. If blank, then returns information about all tables and views.
optionsOptional parameters.
  • FORCE_SYNCHRONOUS: If true, the table sizes will wait for a read lock before returning. Supported values: TRUE, FALSE. The default value is TRUE.
  • GET_SIZES: If true, the number of records in each table, along with a cumulative count, will be returned; blank, otherwise. Supported values: TRUE, FALSE. The default value is FALSE.
  • GET_CACHED_SIZES: If true, the number of records in each table, along with a cumulative count, will be returned; blank, otherwise. This version will return the sizes cached at rank 0, which may be stale if there is a multihead insert occurring. Supported values: TRUE, FALSE. The default value is FALSE.
  • SHOW_CHILDREN: If table_name is a schema, then true will return information about the tables and views in the schema, and false will return information about the schema itself. If table_name is a table or view, show_children must be false. If table_name is empty, then show_children must be true. Supported values: TRUE, FALSE. The default value is TRUE.
  • NO_ERROR_IF_NOT_EXISTS: If false, an error will be returned if the provided table_name does not exist. If true, an empty result will be returned instead. Supported values: TRUE, FALSE. The default value is FALSE.
  • GET_COLUMN_INFO: If true, column info (memory usage, etc.) will be returned. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 25349 of file KineticaFunctions.cs.
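
A sketch requesting record counts for a single hypothetical table, assuming a connected ksvr; the lowercase option key is assumed:

    using System.Collections.Generic;

    var options = new Dictionary<string, string> { { "get_sizes", "true" } };
    ShowTableResponse response = ksvr.showTable("sales.orders", options);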

ShowTableMetadataResponse kinetica.Kinetica.showTableMetadata ( ShowTableMetadataRequest  request_)
inline

Retrieves the user provided metadata for the specified tables.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 25365 of file KineticaFunctions.cs.

ShowTableMetadataResponse kinetica.Kinetica.showTableMetadata ( IList< string >  table_names,
IDictionary< string, string >  options = null 
)
inline

Retrieves the user provided metadata for the specified tables.

Parameters
table_namesNames of tables whose metadata will be fetched, in [schema_name.]table_name format, using standard name resolution rules. All provided tables must exist, or an error is returned.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 25387 of file KineticaFunctions.cs.

ShowTableMonitorsResponse kinetica.Kinetica.showTableMonitors ( ShowTableMonitorsRequest  request_)
inline

Show table monitors and their properties.

Table monitors are created using Kinetica.createTableMonitor(string,IDictionary{string, string}). Returns detailed information about existing table monitors.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 25407 of file KineticaFunctions.cs.

ShowTableMonitorsResponse kinetica.Kinetica.showTableMonitors ( IList< string >  monitor_ids,
IDictionary< string, string >  options = null 
)
inline

Show table monitors and their properties.

Table monitors are created using Kinetica.createTableMonitor(string,IDictionary{string, string}). Returns detailed information about existing table monitors.

Parameters
monitor_idsList of monitors to be shown. An empty list or a single entry with an empty string returns all table monitors.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 25431 of file KineticaFunctions.cs.

ShowTablesByTypeResponse kinetica.Kinetica.showTablesByType ( ShowTablesByTypeRequest  request_)
inline

Gets names of the tables whose type matches the given criteria.

Each table has a particular type. This type comprises the schema and properties of the table and sometimes a type label. This function allows a lookup of existing tables based on full or partial type information. The operation is synchronous.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 25450 of file KineticaFunctions.cs.

ShowTablesByTypeResponse kinetica.Kinetica.showTablesByType ( string  type_id,
string  label,
IDictionary< string, string >  options = null 
)
inline

Gets names of the tables whose type matches the given criteria.

Each table has a particular type. This type comprises the schema and properties of the table and sometimes a type label. This function allows a lookup of existing tables based on full or partial type information. The operation is synchronous.

Parameters
type_idType id returned by a call to /create/type.
labelOptional user supplied label which can be used instead of the type_id to retrieve all tables with the given label.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 25475 of file KineticaFunctions.cs.

ShowTriggersResponse kinetica.Kinetica.showTriggers ( ShowTriggersRequest  request_)
inline

Retrieves information regarding the specified triggers or all existing triggers currently active.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 25492 of file KineticaFunctions.cs.

ShowTriggersResponse kinetica.Kinetica.showTriggers ( IList< string >  trigger_ids,
IDictionary< string, string >  options = null 
)
inline

Retrieves information regarding the specified triggers or all existing triggers currently active.

Parameters
trigger_idsList of IDs of the triggers whose information is to be retrieved. An empty list means information will be retrieved on all active triggers.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 25512 of file KineticaFunctions.cs.

ShowTypesResponse kinetica.Kinetica.showTypes ( ShowTypesRequest  request_)
inline

Retrieves information for the specified data type ID or type label.

For all data types that match the input criteria, the database returns the type ID, the type schema, the label (if available), and the type's column properties.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 25530 of file KineticaFunctions.cs.

ShowTypesResponse kinetica.Kinetica.showTypes ( string  type_id,
string  label,
IDictionary< string, string >  options = null 
)
inline

Retrieves information for the specified data type ID or type label.

For all data types that match the input criteria, the database returns the type ID, the type schema, the label (if available), and the type's column properties.

Parameters
type_idType Id returned in response to a call to /create/type.
labelOptional string that was supplied by the user in a call to /create/type.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 25583 of file KineticaFunctions.cs.

ShowVideoResponse kinetica.Kinetica.showVideo ( ShowVideoRequest  request_)
inline

Retrieves information about rendered videos.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 25599 of file KineticaFunctions.cs.

ShowVideoResponse kinetica.Kinetica.showVideo ( IList< string >  paths,
IDictionary< string, string >  options = null 
)
inline

Retrieves information about rendered videos.

Parameters
pathsThe fully-qualified KiFS paths for the videos to show. If empty, shows all videos.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 25618 of file KineticaFunctions.cs.

SolveGraphResponse kinetica.Kinetica.solveGraph ( SolveGraphRequest  request_)
inline

Solves an existing graph for a type of problem (e.g., shortest path, page rank, travelling salesman, etc.) using source nodes, destination nodes, and additional, optional weights and restrictions.


IMPORTANT: It's highly recommended that you review the Network Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some /solve/graph examples before using this endpoint.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 25648 of file KineticaFunctions.cs.

SolveGraphResponse kinetica.Kinetica.solveGraph ( string  graph_name,
IList< string >  weights_on_edges = null,
IList< string >  restrictions = null,
string  solver_type = SolveGraphRequest.SolverType.SHORTEST_PATH,
IList< string >  source_nodes = null,
IList< string >  destination_nodes = null,
string  solution_table = "graph_solutions",
IDictionary< string, string >  options = null 
)
inline

Solves an existing graph for a type of problem (e.g., shortest path, page rank, travelling salesman, etc.) using source nodes, destination nodes, and additional, optional weights and restrictions.


IMPORTANT: It's highly recommended that you review the Network Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some /solve/graph examples before using this endpoint.

Parameters
graph_nameName of the graph resource to solve.
weights_on_edgesAdditional weights to apply to the edges of an existing graph. Weights must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS WEIGHTS_EDGE_ID', expressions, e.g., 'ST_LENGTH(wkt) AS WEIGHTS_VALUESPECIFIED', or constant values, e.g., '{4, 15, 2} AS WEIGHTS_VALUESPECIFIED'. Any provided weights will be added (in the case of 'WEIGHTS_VALUESPECIFIED') to or multiplied with (in the case of 'WEIGHTS_FACTORSPECIFIED') the existing weight(s). If using constant values in an identifier combination, the number of values specified must match across the combination. The default value is an empty List.
restrictionsAdditional restrictions to apply to the nodes/edges of an existing graph. Restrictions must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS RESTRICTIONS_EDGE_ID', expressions, e.g., 'column/2 AS RESTRICTIONS_VALUECOMPARED', or constant values, e.g., '{0, 0, 0, 1} AS RESTRICTIONS_ONOFFCOMPARED'. If using constant values in an identifier combination, the number of values specified must match across the combination. If remove_previous_restrictions option is set to true, any provided restrictions will replace the existing restrictions. Otherwise, any provided restrictions will be added (in the case of 'RESTRICTIONS_VALUECOMPARED') to or replaced (in the case of 'RESTRICTIONS_ONOFFCOMPARED'). The default value is an empty List.
solver_typeThe type of solver to use for the graph. Supported values:
  • SHORTEST_PATH: Solves for the optimal (shortest) path based on weights and restrictions from one source to destination nodes. Also known as the Dijkstra solver.
  • PAGE_RANK: Solves for the probability of each destination node being visited based on the links of the graph topology. Weights are not required to use this solver.
  • PROBABILITY_RANK: Solves for the transitional probability (Hidden Markov) for each node based on the weights (probability assigned over given edges).
  • CENTRALITY: Solves for the degree of a node to depict how many pairs of individuals would have to go through the node to reach one another in the minimum number of hops. Also known as betweenness.
  • MULTIPLE_ROUTING: Solves for finding the minimum cost cumulative path for a round-trip starting from the given source and visiting each given destination node once then returning to the source. Also known as the travelling salesman problem.
  • INVERSE_SHORTEST_PATH: Solves for finding the optimal path cost for each destination node to route to the source node. Also known as inverse Dijkstra or the service man routing problem.
  • BACKHAUL_ROUTING: Solves for optimal routes that connect remote asset nodes to the fixed (backbone) asset nodes.
  • ALLPATHS: Solves for paths that would give costs between the max and min solution radii. Make sure to limit the results using the 'max_solution_targets' option. The min cost should be >= the shortest_path cost.
  • STATS_ALL: Solves for graph statistics such as graph diameter, longest pairs, vertex valences, topology numbers, average and max cluster sizes, etc.
  • CLOSENESS: Solves for the centrality closeness score per node as the sum of the inverse shortest path costs to all nodes in the graph.
The default value is SHORTEST_PATH.
source_nodesIt can be one of the nodal identifiers - e.g: 'NODE_WKTPOINT' for source nodes. For BACKHAUL_ROUTING, this list depicts the fixed assets. The default value is an empty List.
destination_nodesIt can be one of the nodal identifiers - e.g: 'NODE_WKTPOINT' for destination (target) nodes. For BACKHAUL_ROUTING, this list depicts the remote assets. The default value is an empty List.
solution_tableName of the table to store the solution, in [schema_name.]table_name format, using standard name resolution rules. The default value is 'graph_solutions'.
optionsAdditional parameters.
  • MAX_SOLUTION_RADIUS: For ALLPATHS, SHORTEST_PATH and INVERSE_SHORTEST_PATH solvers only. Sets the maximum solution cost radius, which ignores the destination_nodes list and instead outputs the nodes within the radius sorted by ascending cost. If set to '0.0', the setting is ignored. The default value is '0.0'.
  • MIN_SOLUTION_RADIUS: For ALLPATHS, SHORTEST_PATH and INVERSE_SHORTEST_PATH solvers only. Applicable only when max_solution_radius is set. Sets the minimum solution cost radius, which ignores the destination_nodes list and instead outputs the nodes within the radius sorted by ascending cost. If set to '0.0', the setting is ignored. The default value is '0.0'.
  • MAX_SOLUTION_TARGETS: For ALLPATHS, SHORTEST_PATH and INVERSE_SHORTEST_PATH solvers only. Sets the maximum number of solution targets, which ignores the destination_nodes list and instead outputs no more than n nodes sorted by ascending cost, where n is equal to the setting value. If set to 0, the setting is ignored. The default value is '1000'.
  • UNIFORM_WEIGHTS: When specified, assigns the given value to all the edges in the graph. Note that weights provided in weights_on_edges will override this value.
  • LEFT_TURN_PENALTY: This will add an additional weight over the edges labelled as 'left turn' if the 'add_turn' option parameter of /create/graph was invoked at graph creation. The default value is '0.0'.
  • RIGHT_TURN_PENALTY: This will add an additional weight over the edges labelled as 'right turn' if the 'add_turn' option parameter of /create/graph was invoked at graph creation. The default value is '0.0'.
  • INTERSECTION_PENALTY: This will add an additional weight over the edges labelled as 'intersection' if the 'add_turn' option parameter of /create/graph was invoked at graph creation. The default value is '0.0'.
  • SHARP_TURN_PENALTY: This will add an additional weight over the edges labelled as 'sharp turn' or 'u-turn' if the 'add_turn' option parameter of /create/graph was invoked at graph creation. The default value is '0.0'.
  • NUM_BEST_PATHS: For MULTIPLE_ROUTING solvers only; sets the number of shortest paths computed from each node. This is the heuristic criterion; the user may want to override it to speed up the solver. The default value of '0' allows the number to be computed automatically by the solver.
  • MAX_NUM_COMBINATIONS: For MULTIPLE_ROUTING solvers only; sets the cap on the combinatorial sequences generated. Overriding the default of two million with a lesser value can potentially speed up the solver. The default value is '2000000'.
  • OUTPUT_EDGE_PATH: If true, concatenated edge IDs will be added as the EDGE path column of the solution table for each source and target pair in shortest path solves. Supported values: TRUE, FALSE. The default value is FALSE.
  • OUTPUT_WKT_PATH: If true, concatenated WKT line segments will be added as the Wktroute column of the solution table for each source and target pair in shortest path solves. Supported values: TRUE, FALSE. The default value is TRUE.
  • SERVER_ID: Indicates which graph server(s) to send the request to. The default is to send to the server, amongst those containing the corresponding graph, that has the most computational bandwidth. For the SHORTEST_PATH solver type, the input is split amongst the servers containing the corresponding graph.
  • CONVERGENCE_LIMIT: For PAGE_RANK solvers only; maximum percent relative threshold on the pagerank scores of each node between consecutive iterations to satisfy convergence. The default value is '1.0' (one percent).
  • MAX_ITERATIONS: For PAGE_RANK solvers only; maximum number of pagerank iterations for satisfying convergence. The default value is '100'.
  • MAX_RUNS: For all CENTRALITY solvers only; sets the maximum number of shortest path runs; the maximum possible value is the number of nodes in the graph. The default value of '0' enables this value to be auto-computed by the solver.
  • OUTPUT_CLUSTERS: For STATS_ALL solvers only; the cluster index for each node will be inserted as an additional column in the output. Supported values:
    • TRUE: An additional column 'CLUSTER' will be added for each node
    • FALSE: No extra cluster info per node will be available in the output
    The default value is FALSE.
  • SOLVE_HEURISTIC: Specify the heuristic search criterion, only for geo graphs and shortest path solves towards a single target. Supported values:
    • ASTAR: Employs A-STAR heuristics to speed up the shortest path traversal
    • NONE: No heuristics are applied
    The default value is NONE.
  • ASTAR_RADIUS: For path solvers only, when the 'solve_heuristic' option is 'astar'. The shortest path traversal front includes nodes only within this radius (in kilometers) as it moves towards the target location. The default value is '70'.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 26041 of file KineticaFunctions.cs.
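
A minimal shortest-path solve sketch for the graph solver endpoint documented above, assuming this is the solveGraph overload whose leading parameters are graph_name and weights_on_edges, and assuming a graph 'example.road_graph' already exists (graph, table, and point values are all hypothetical):

    // assumes: using kinetica; using System.Collections.Generic;
    Kinetica kdb = new Kinetica("http://localhost:9191");

    SolveGraphResponse solved = kdb.solveGraph(
        "example.road_graph",   // graph_name (assumed leading parameter)
        new List<string>(),     // weights_on_edges: use weights from graph creation
        new List<string>(),     // restrictions: none added
        "shortest_path",        // solver_type
        new List<string> { "{'POINT(-74.00 40.71)'} AS NODE_WKTPOINT" },  // source_nodes
        new List<string> { "{'POINT(-73.98 40.75)'} AS NODE_WKTPOINT" },  // destination_nodes
        "example.solution",     // solution_table
        new Dictionary<string, string> { { "max_solution_targets", "10" } });
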

UpdateRecordsResponse kinetica.Kinetica.updateRecords< T > ( UpdateRecordsRequest< T >  request_)
inline

Runs multiple predicate-based updates in a single call.

With the list of given expressions, any matching record's column values will be updated as provided in the request's new values maps. There is also an optional 'upsert' capability: if a particular predicate doesn't match any existing record, then a new record can be inserted.
Note that this operation can only be run on an original table and not on a result view.
This operation can update primary key values. By default only 'pure primary key' predicates are allowed when updating primary key values. If the primary key for a table is the column 'attr1', then the operation will only accept predicates of the form: "attr1 == 'foo'" if the attr1 column is being updated. For a composite primary key (e.g. columns 'attr1' and 'attr2'), this operation will only accept predicates of the form: "(attr1 == 'foo') and (attr2 == 'bar')". That is, all primary key columns must appear in an equality predicate in the expressions. Furthermore, each 'pure primary key' predicate must be unique within a given request. These restrictions can be removed by utilizing some available options through the request's options.
The update_on_existing_pk option specifies the record primary key collision policy for tables with a primary key, while ignore_existing_pk specifies the record primary key collision error-suppression policy when those collisions result in the update being rejected. Both are ignored on tables with no primary key.

Template Parameters
TThe type of object being added.
Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 26171 of file KineticaFunctions.cs.

UpdateRecordsResponse kinetica.Kinetica.updateRecords< T > ( string  table_name,
IList< string >  expressions,
IList< IDictionary< string, string >>  new_values_maps,
IList< T >  data = null,
IDictionary< string, string >  options = null 
)
inline

Runs multiple predicate-based updates in a single call.

With the list of given expressions, any matching record's column values will be updated as provided in new_values_maps. There is also an optional 'upsert' capability: if a particular predicate doesn't match any existing record, then a new record can be inserted.
Note that this operation can only be run on an original table and not on a result view.
This operation can update primary key values. By default only 'pure primary key' predicates are allowed when updating primary key values. If the primary key for a table is the column 'attr1', then the operation will only accept predicates of the form: "attr1 == 'foo'" if the attr1 column is being updated. For a composite primary key (e.g. columns 'attr1' and 'attr2'), this operation will only accept predicates of the form: "(attr1 == 'foo') and (attr2 == 'bar')". That is, all primary key columns must appear in an equality predicate in the expressions. Furthermore, each 'pure primary key' predicate must be unique within a given request. These restrictions can be removed by utilizing some available options through options.
The update_on_existing_pk option specifies the record primary key collision policy for tables with a primary key, while ignore_existing_pk specifies the record primary key collision error-suppression policy when those collisions result in the update being rejected. Both are ignored on tables with no primary key.

Template Parameters
TThe type of object being added.
Parameters
table_nameName of table to be updated, in [schema_name.]table_name format, using standard name resolution rules. Must be a currently existing table and not a view.
expressionsA list of the actual predicates, one for each update; format should follow the guidelines of /filter.
new_values_mapsList of new values for the matching records. Each element is a map with (key, value) pairs where the keys are the names of the columns whose values are to be updated; the values are the new values. The number of elements in the list should match the length of expressions.
dataAn optional list of new binary-avro encoded records to insert, one for each update. If one of expressions does not yield a matching record to be updated, then the corresponding element from this list will be added to the table. The default value is an empty List.
optionsOptional parameters.
  • GLOBAL_EXPRESSION: An optional global expression to reduce the search space of the predicates listed in expressions. The default value is ''.
  • BYPASS_SAFETY_CHECKS: When set to true, all predicates are available for primary key updates. Keep in mind that it is possible to destroy data in this case, since a single predicate may match multiple objects (potentially all records of a table); updating all of those records to have the same primary key will, due to the primary key uniqueness constraints, effectively delete all but one of those updated records. Supported values: TRUE, FALSE. The default value is FALSE.
  • UPDATE_ON_EXISTING_PK: Specifies the record collision policy for updating a table with a primary key. There are two ways that a record collision can occur. The first is an "update collision", which happens when the update changes the value of the updated record's primary key, and that new primary key already exists as the primary key of another record in the table. The second is an "insert collision", which occurs when a given filter in expressions finds no records to update, and the alternate insert record given in data contains a primary key matching that of an existing record in the table. If update_on_existing_pk is set to true, "update collisions" will result in the existing record collided into being removed and the record updated with values specified in new_values_maps taking its place; "insert collisions" will result in the collided-into record being updated with the values in new_values_maps / data (if given). If set to false, the existing collided-into record will remain unchanged, while the update will be rejected and the error handled as determined by ignore_existing_pk. If the specified table does not have a primary key, then this option has no effect. Supported values:
    • TRUE: Overwrite the collided-into record when updating a record's primary key or inserting an alternate record causes a primary key collision between the record being updated/inserted and another existing record in the table
    • FALSE: Reject updates which cause primary key collisions between the record being updated/inserted and an existing record in the table
    The default value is FALSE.
  • IGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for updating a table with a primary key, only used when primary key record collisions are rejected (update_on_existing_pk is false). If set to true, any record update that is rejected for resulting in a primary key collision with an existing table record will be ignored with no error generated. If false, the rejection of any update for resulting in a primary key collision will cause an error to be reported. If the specified table does not have a primary key or if update_on_existing_pk is true, then this option has no effect. Supported values:
    • TRUE: Ignore updates that result in primary key collisions with existing records
    • FALSE: Treat as errors any updates that result in primary key collisions with existing records
    The default value is FALSE.
  • UPDATE_PARTITION: Force qualifying records to be deleted and reinserted so their partition membership will be reevaluated. Supported values: TRUE, FALSE. The default value is FALSE.
  • TRUNCATE_STRINGS: If set to true, any strings which are too long for their charN string fields will be truncated to fit. Supported values: TRUE, FALSE. The default value is FALSE.
  • USE_EXPRESSIONS_IN_NEW_VALUES_MAPS: When set to true, all new values in new_values_maps are considered as expression values. When set to false, all new values in new_values_maps are considered as constants. NOTE: When true, string constants will need to be quoted to avoid being evaluated as expressions. Supported values: TRUE, FALSE. The default value is FALSE.
  • RECORD_ID: ID of a single record to be updated (returned in the call to /insert/records or /get/records/fromcollection).
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 26464 of file KineticaFunctions.cs.
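
A minimal usage sketch of this overload, assuming a table 'example.users' with a string primary-key column 'name' and an int column 'visits', mirrored by a hypothetical MyUser class (all names hypothetical; the count_updated response field is an assumption based on the /update/records response):

    // assumes: using kinetica; using System; using System.Collections.Generic;
    public class MyUser
    {
        public string name { get; set; }
        public int visits { get; set; }
    }

    Kinetica kdb = new Kinetica("http://localhost:9191");
    kdb.AddTableType("example.users", typeof(MyUser));

    // One predicate, one map of new values; update_on_existing_pk selects
    // the primary-key collision policy described above.
    UpdateRecordsResponse resp = kdb.updateRecords<MyUser>(
        "example.users",
        new List<string> { "name = 'alice'" },
        new List<IDictionary<string, string>>
        {
            new Dictionary<string, string> { { "visits", "42" } }
        },
        data: null,   // no upsert records supplied
        options: new Dictionary<string, string>
        {
            { "update_on_existing_pk", "true" }
        });
    Console.WriteLine("Records updated: " + resp.count_updated);
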

UpdateRecordsBySeriesResponse kinetica.Kinetica.updateRecordsBySeries ( UpdateRecordsBySeriesRequest  request_)
inline

Updates the view specified by the request's table name to include full series (track) information from the world table for the series (tracks) present in the view.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 26490 of file KineticaFunctions.cs.

UpdateRecordsBySeriesResponse kinetica.Kinetica.updateRecordsBySeries ( string  table_name,
string  world_table_name,
string  view_name = "",
IList< string >  reserved = null,
IDictionary< string, string >  options = null 
)
inline

Updates the view specified by table_name to include full series (track) information from the world_table_name for the series (tracks) present in the view_name.

Parameters
table_nameName of the view on which the update operation will be performed, in [schema_name.]view_name format, using standard name resolution rules. Must be an existing view.
world_table_nameName of the table containing the complete series (track) information, in [schema_name.]table_name format, using standard name resolution rules.
view_nameName of the view containing the series (tracks) which have to be updated, in [schema_name.]view_name format, using standard name resolution rules. The default value is ''.
reservedThe default value is an empty List.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 26529 of file KineticaFunctions.cs.
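
A minimal sketch, assuming 'example.track_view' is an existing view over a track table 'example.tracks' holding the complete series data (hypothetical names):

    // assumes: using kinetica; using System.Collections.Generic;
    Kinetica kdb = new Kinetica("http://localhost:9191");

    // Expand the partial tracks in the view to their full series from
    // the world table; reserved is unused and options are left empty.
    UpdateRecordsBySeriesResponse resp = kdb.updateRecordsBySeries(
        "example.track_view",   // view to update
        "example.tracks",       // world table with complete track data
        view_name: "",
        reserved: null,
        options: null);
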

UpdateRecordsResponse kinetica.Kinetica.updateRecordsRaw ( RawUpdateRecordsRequest  request_)
inline

Runs multiple predicate-based updates in a single call.

With the list of given expressions, any matching record's column values will be updated as provided in the request's new values maps. There is also an optional 'upsert' capability: if a particular predicate doesn't match any existing record, then a new record can be inserted.
Note that this operation can only be run on an original table and not on a result view.
This operation can update primary key values. By default only 'pure primary key' predicates are allowed when updating primary key values. If the primary key for a table is the column 'attr1', then the operation will only accept predicates of the form: "attr1 == 'foo'" if the attr1 column is being updated. For a composite primary key (e.g. columns 'attr1' and 'attr2'), this operation will only accept predicates of the form: "(attr1 == 'foo') and (attr2 == 'bar')". That is, all primary key columns must appear in an equality predicate in the expressions. Furthermore, each 'pure primary key' predicate must be unique within a given request. These restrictions can be removed by utilizing some available options through the request's options.
The update_on_existing_pk option specifies the record primary key collision policy for tables with a primary key, while ignore_existing_pk specifies the record primary key collision error-suppression policy when those collisions result in the update being rejected. Both are ignored on tables with no primary key.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 26109 of file KineticaFunctions.cs.

UploadFilesResponse kinetica.Kinetica.uploadFiles ( UploadFilesRequest  request_)
inline

Uploads one or more files to KiFS.

There are two methods for uploading files: upload files in their entirety, or upload files in parts. The latter is recommended for files of approximately 60 MB or larger.
To upload files in their entirety, populate the request's file names with the KiFS paths to upload to, and their respective byte content in the request's file data.
Multiple steps are involved when uploading in multiple parts. Only one file at a time can be uploaded in this manner. A user-provided UUID is utilized to tie all the upload steps together for a given file. To upload a file in multiple parts:

  1. Provide the file name in the request's file names, the UUID in the multipart_upload_uuid key of the options, and a multipart_operation value of init.
  2. Upload one or more parts by providing the file name, the part data in the request's file data, the UUID, a multipart_operation value of upload_part, and the part number in the multipart_upload_part_number option. The part numbers must start at 1 and increase incrementally. Parts may not be uploaded out of order.
  3. Complete the upload by providing the file name, the UUID, and a multipart_operation value of complete.
    Multipart uploads in progress may be canceled by providing the file name, the UUID, and a multipart_operation value of cancel. If a new upload is initialized with a different UUID for an existing upload in progress, the pre-existing upload is automatically canceled in favor of the new upload.
    The multipart upload must be completed for the file to be usable in KiFS. Information about multipart uploads in progress is available in Kinetica.showFiles(IList{string},IDictionary{string, string}).
    File data may be pre-encoded using base64 encoding. This should be indicated using the file_encoding option, and is recommended when using JSON serialization.
    Each file path must reside in a top-level KiFS directory, i.e. one of the directories listed in Kinetica.showDirectories(string,IDictionary{string, string}). The user must have write permission on the directory. Nested directories are permitted in file name paths. Directories are delineated with the directory separator of '/'. For example, given the file path '/a/b/c/d.txt', 'a' must be a KiFS directory.
    These characters are allowed in file name paths: letters, numbers, spaces, the path delimiter of '/', and the characters: '.' '-' ':' '[' ']' '(' ')' '#' '='.
Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 26624 of file KineticaFunctions.cs.

UploadFilesResponse kinetica.Kinetica.uploadFiles ( IList< string >  file_names,
IList< byte[]>  file_data,
IDictionary< string, string >  options = null 
)
inline

Uploads one or more files to KiFS.

There are two methods for uploading files: upload files in their entirety, or upload files in parts. The latter is recommended for files of approximately 60 MB or larger.
To upload files in their entirety, populate file_names with the KiFS paths to upload to, and their respective byte content in file_data.
Multiple steps are involved when uploading in multiple parts. Only one file at a time can be uploaded in this manner. A user-provided UUID is utilized to tie all the upload steps together for a given file. To upload a file in multiple parts:

  1. Provide the file name in file_names, the UUID in the multipart_upload_uuid key in options, and a multipart_operation value of init.
  2. Upload one or more parts by providing the file name, the part data in file_data, the UUID, a multipart_operation value of upload_part, and the part number in the multipart_upload_part_number option. The part numbers must start at 1 and increase incrementally. Parts may not be uploaded out of order.
  3. Complete the upload by providing the file name, the UUID, and a multipart_operation value of complete.
    Multipart uploads in progress may be canceled by providing the file name, the UUID, and a multipart_operation value of cancel. If a new upload is initialized with a different UUID for an existing upload in progress, the pre-existing upload is automatically canceled in favor of the new upload.
    The multipart upload must be completed for the file to be usable in KiFS. Information about multipart uploads in progress is available in Kinetica.showFiles(IList{string},IDictionary{string, string}).
    File data may be pre-encoded using base64 encoding. This should be indicated using the file_encoding option, and is recommended when using JSON serialization.
    Each file path must reside in a top-level KiFS directory, i.e. one of the directories listed in Kinetica.showDirectories(string,IDictionary{string, string}). The user must have write permission on the directory. Nested directories are permitted in file name paths. Directories are delineated with the directory separator of '/'. For example, given the file path '/a/b/c/d.txt', 'a' must be a KiFS directory.
    These characters are allowed in file name paths: letters, numbers, spaces, the path delimiter of '/', and the characters: '.' '-' ':' '[' ']' '(' ')' '#' '='.
Parameters
file_namesAn array of full file name paths to be used for the files uploaded to KiFS. File names may have any number of nested directories in their paths, but the top-level directory must be an existing KiFS directory. Each file must reside in or under a top-level directory. A full file name path cannot be larger than 1024 characters.
file_dataFile data for the files being uploaded, for the respective files in file_names.
optionsOptional parameters.
  • FILE_ENCODING: Encoding that has been applied to the uploaded file data. When using JSON serialization it is recommended to utilize base64. The caller is responsible for encoding the data provided in this payload. Supported values:
    • BASE64: Specifies that the file data being uploaded has been base64 encoded.
    • NONE: The uploaded file data has not been encoded.
    The default value is NONE.
  • MULTIPART_OPERATION: Multipart upload operation to perform. Supported values:
    • NONE: Default, indicates this is not a multipart upload
    • INIT: Initialize a multipart file upload
    • UPLOAD_PART: Uploads a part of the specified multipart file upload
    • COMPLETE: Complete the specified multipart file upload
    • CANCEL: Cancel the specified multipart file upload
    The default value is NONE.
  • MULTIPART_UPLOAD_UUID: UUID to uniquely identify a multipart upload
  • MULTIPART_UPLOAD_PART_NUMBER: Incremental part number for each part in a multipart upload. Part numbers start at 1, increment by 1, and must be uploaded sequentially
  • DELETE_IF_EXISTS: If true, any existing files specified in file_names will be deleted prior to the start of the upload. Otherwise the file is replaced once the upload completes. If the file was deleted beforehand, rollback of the original file is no longer possible if the upload is cancelled, aborted, or fails. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 26832 of file KineticaFunctions.cs.
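
A sketch of the multipart protocol described above, assuming a KiFS directory 'data' exists with write permission. The paths are hypothetical, the chunking helper is illustrative, and passing an empty byte array for the init and complete steps (which carry no part data) is an assumption:

    // assumes: using kinetica; using System; using System.Collections.Generic;
    //          using System.IO; using System.Linq;
    Kinetica kdb = new Kinetica("http://localhost:9191");
    string kifsPath = "data/big_file.bin";
    string uuid = Guid.NewGuid().ToString();   // ties all upload steps together

    // Step 1: initialize the multipart upload.
    kdb.uploadFiles(
        new List<string> { kifsPath },
        new List<byte[]> { new byte[0] },      // assumption: no payload on init
        new Dictionary<string, string>
        {
            { "multipart_operation", "init" },
            { "multipart_upload_uuid", uuid }
        });

    // Step 2: upload parts sequentially, starting at part number 1.
    int partNumber = 1;
    foreach (byte[] part in ReadChunks("big_file.bin", 60 * 1024 * 1024))
    {
        kdb.uploadFiles(
            new List<string> { kifsPath },
            new List<byte[]> { part },
            new Dictionary<string, string>
            {
                { "multipart_operation", "upload_part" },
                { "multipart_upload_uuid", uuid },
                { "multipart_upload_part_number", partNumber.ToString() }
            });
        partNumber++;
    }

    // Step 3: complete the upload; the file is now usable in KiFS.
    kdb.uploadFiles(
        new List<string> { kifsPath },
        new List<byte[]> { new byte[0] },
        new Dictionary<string, string>
        {
            { "multipart_operation", "complete" },
            { "multipart_upload_uuid", uuid }
        });

    // Illustrative helper: read a local file in fixed-size chunks.
    static IEnumerable<byte[]> ReadChunks(string path, int chunkSize)
    {
        using (FileStream fs = File.OpenRead(path))
        {
            byte[] buffer = new byte[chunkSize];
            int read;
            while ((read = fs.Read(buffer, 0, chunkSize)) > 0)
                yield return buffer.Take(read).ToArray();
        }
    }
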

UploadFilesFromurlResponse kinetica.Kinetica.uploadFilesFromurl ( UploadFilesFromurlRequest  request_)
inline

Uploads one or more files to KiFS.


Each file path must reside in a top-level KiFS directory, i.e. one of the directories listed in Kinetica.showDirectories(string,IDictionary{string, string}). The user must have write permission on the directory. Nested directories are permitted in file name paths. Directories are delineated with the directory separator of '/'. For example, given the file path '/a/b/c/d.txt', 'a' must be a KiFS directory.
These characters are allowed in file name paths: letters, numbers, spaces, the path delimiter of '/', and the characters: '.' '-' ':' '[' ']' '(' ')' '#' '='.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 26866 of file KineticaFunctions.cs.

UploadFilesFromurlResponse kinetica.Kinetica.uploadFilesFromurl ( IList< string >  file_names,
IList< string >  urls,
IDictionary< string, string >  options = null 
)
inline

Uploads one or more files to KiFS.


Each file path must reside in a top-level KiFS directory, i.e. one of the directories listed in Kinetica.showDirectories(string,IDictionary{string, string}). The user must have write permission on the directory. Nested directories are permitted in file name paths. Directories are delineated with the directory separator of '/'. For example, given the file path '/a/b/c/d.txt', 'a' must be a KiFS directory.
These characters are allowed in file name paths: letters, numbers, spaces, the path delimiter of '/', and the characters: '.' '-' ':' '[' ']' '(' ')' '#' '='.

Parameters
file_namesAn array of full file name paths to be used for the files uploaded to KiFS. File names may have any number of nested directories in their paths, but the top-level directory must be an existing KiFS directory. Each file must reside in or under a top-level directory. A full file name path cannot be larger than 1024 characters.
urlsList of URLs to upload, one for each respective file in file_names.
optionsOptional parameters. The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 26912 of file KineticaFunctions.cs.
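
A minimal sketch, assuming a KiFS directory 'data' exists and the source URL is reachable from the Kinetica server (all names hypothetical):

    // assumes: using kinetica; using System.Collections.Generic;
    Kinetica kdb = new Kinetica("http://localhost:9191");

    // file_names and urls are parallel lists: each URL is fetched and
    // stored at the corresponding KiFS path.
    kdb.uploadFilesFromurl(
        new List<string> { "data/remote_copy.csv" },
        new List<string> { "https://example.com/source.csv" });
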

VisualizeImageChartResponse kinetica.Kinetica.visualizeImageChart ( VisualizeImageChartRequest  request_)
inline

Scatter plot is the only plot type currently supported.

A non-numeric column can be specified as the x or y column, and jitter can be added to it to avoid excessive overlapping. All color values must be in the format RRGGBB or AARRGGBB (to specify the alpha value). The image is contained in the image data field of the response.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 27575 of file KineticaFunctions.cs.

VisualizeImageChartResponse kinetica.Kinetica.visualizeImageChart ( string  table_name,
IList< string >  x_column_names,
IList< string >  y_column_names,
double  min_x,
double  max_x,
double  min_y,
double  max_y,
int  width,
int  height,
string  bg_color,
IDictionary< string, IList< string >>  style_options,
IDictionary< string, string >  options = null 
)
inline

Scatter plot is the only plot type currently supported.

A non-numeric column can be specified as the x or y column, and jitter can be added to it to avoid excessive overlapping. All color values must be in the format RRGGBB or AARRGGBB (to specify the alpha value). The image is contained in the image data field of the response.

Parameters
table_nameName of the table containing the data to be drawn as a chart, in [schema_name.]table_name format, using standard name resolution rules.
x_column_namesNames of the columns containing the data mapped to the x axis of a chart.
y_column_namesNames of the columns containing the data mapped to the y axis of a chart.
min_xLower bound for the x column values. For non-numeric x column, each x column item is mapped to an integral value starting from 0.
max_xUpper bound for the x column values. For non-numeric x column, each x column item is mapped to an integral value starting from 0.
min_yLower bound for the y column values. For non-numeric y column, each y column item is mapped to an integral value starting from 0.
max_yUpper bound for the y column values. For non-numeric y column, each y column item is mapped to an integral value starting from 0.
widthWidth of the generated image in pixels.
heightHeight of the generated image in pixels.
bg_colorBackground color of the generated image.
style_optionsRendering style options for a chart.
  • POINTCOLOR: The color of points in the plot represented as a hexadecimal number. The default value is '0000FF'.
  • POINTSIZE: The size of points in the plot represented as number of pixels. The default value is '3'.
  • POINTSHAPE: The shape of points in the plot. Supported values: The default value is SQUARE.
  • CB_POINTCOLORS: Point color class break information consisting of three entries: class-break attribute, class-break values/ranges, and point color values. This option overrides the pointcolor option if both are provided. Class-break ranges are represented in the form of "min:max". Class-break values/ranges and point color values are separated by cb_delimiter, e.g. {"price", "20:30;30:40;40:50", "0xFF0000;0x00FF00;0x0000FF"}.
  • CB_POINTSIZES: Point size class break information consisting of three entries: class-break attribute, class-break values/ranges, and point size values. This option overrides the pointsize option if both are provided. Class-break ranges are represented in the form of "min:max". Class-break values/ranges and point size values are separated by cb_delimiter, e.g. {"states", "NY;TX;CA", "3;5;7"}.
  • CB_POINTSHAPES: Point shape class break information consisting of three entries: class-break attribute, class-break values/ranges, and point shape names. This option overrides the pointshape option if both are provided. Class-break ranges are represented in the form of "min:max". Class-break values/ranges and point shape names are separated by cb_delimiter, e.g. {"states", "NY;TX;CA", "circle;square;diamond"}.
  • CB_DELIMITER: A character or string which separates per-class values in a class-break style option string. The default value is ';'.
  • X_ORDER_BY: An expression or aggregate expression by which non-numeric x column values are sorted, e.g. "avg(price) descending".
  • Y_ORDER_BY: An expression or aggregate expression by which non-numeric y column values are sorted, e.g. "avg(price)", which defaults to "avg(price) ascending".
  • SCALE_TYPE_X: Type of x axis scale. Supported values:
    • NONE: No scale is applied to the x axis.
    • LOG: A base-10 log scale is applied to the x axis.
    The default value is NONE.
  • SCALE_TYPE_Y: Type of y axis scale. Supported values:
    • NONE: No scale is applied to the y axis.
    • LOG: A base-10 log scale is applied to the y axis.
    The default value is NONE.
  • MIN_MAX_SCALED: If this option is set to "false", the endpoint expects the request's min/max values to not yet be scaled; they will be scaled according to scale_type_x or scale_type_y for the response. If this option is set to "true", the endpoint expects the request's min/max values to already be scaled according to scale_type_x/scale_type_y; the response's min/max values will be equal to the request's min/max values. The default value is 'false'.
  • JITTER_X: Amplitude of horizontal jitter applied to non-numeric x column values. The default value is '0.0'.
  • JITTER_Y: Amplitude of vertical jitter applied to non-numeric y column values. The default value is '0.0'.
  • PLOT_ALL: If this option is set to "true", all non-numeric column values are plotted, ignoring the min_x, max_x, min_y and max_y parameters. The default value is 'false'.
optionsOptional parameters.
  • IMAGE_ENCODING: Encoding to be applied to the output image. When using JSON serialization it is recommended to specify this as base64. Supported values:
    • BASE64: Apply base64 encoding to the output image.
    • NONE: Do not apply any additional encoding to the output image.
    The default value is NONE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 27834 of file KineticaFunctions.cs.
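
A minimal scatter-plot sketch of this overload, assuming a table 'example.points' with numeric columns 'x' and 'y' (hypothetical names; exposing the image bytes as image_data on the response is an assumption):

    // assumes: using kinetica; using System.Collections.Generic; using System.IO;
    Kinetica kdb = new Kinetica("http://localhost:9191");

    var styleOptions = new Dictionary<string, IList<string>>
    {
        { "pointcolor", new List<string> { "FF0000" } },   // red points
        { "pointsize",  new List<string> { "5" } }         // 5-pixel points
    };

    VisualizeImageChartResponse chart = kdb.visualizeImageChart(
        "example.points",
        new List<string> { "x" },
        new List<string> { "y" },
        min_x: 0.0, max_x: 100.0,
        min_y: 0.0, max_y: 100.0,
        width: 800, height: 600,
        bg_color: "FFFFFF",
        style_options: styleOptions);

    // Assumption: the generated image bytes are exposed as image_data.
    File.WriteAllBytes("chart.png", chart.image_data);
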

VisualizeIsochroneResponse kinetica.Kinetica.visualizeIsochrone ( VisualizeIsochroneRequest  request_)
inline

Generate an image containing isolines for travel results using an existing graph.

Isolines represent curves of equal cost, with cost typically referring to the time or distance assigned as the weights of the underlying graph. See Network Graphs & Solvers for more information on graphs.

Parameters
request_Request object containing the parameters for the operation.
Returns
Response object containing the result of the operation.

Definition at line 29560 of file KineticaFunctions.cs.

VisualizeIsochroneResponse kinetica.Kinetica.visualizeIsochrone ( string  graph_name,
string  source_node,
double  max_solution_radius,
IList< string >  weights_on_edges,
IList< string >  restrictions,
int  num_levels,
bool  generate_image,
string  levels_table,
IDictionary< string, string >  style_options,
IDictionary< string, string >  solve_options = null,
IDictionary< string, string >  contour_options = null,
IDictionary< string, string >  options = null 
)
inline

Generate an image containing isolines for travel results using an existing graph.

Isolines represent curves of equal cost, with cost typically referring to the time or distance assigned as the weights of the underlying graph. See Network Graphs & Solvers for more information on graphs.

Parameters
graph_nameName of the graph on which the isochrone is to be computed.
source_nodeStarting vertex on the underlying graph from/to which the isochrones are created.
max_solution_radiusExtent of the search radius around source_node. Set to '-1.0' for unrestricted search radius. The default value is -1.0.
weights_on_edgesAdditional weights to apply to the edges of an existing graph. Weights must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS WEIGHTS_EDGE_ID', or expressions, e.g., 'ST_LENGTH(wkt) AS WEIGHTS_VALUESPECIFIED'. Any provided weights will be added (in the case of 'WEIGHTS_VALUESPECIFIED') to or multiplied with (in the case of 'WEIGHTS_FACTORSPECIFIED') the existing weight(s). The default value is an empty List.
restrictionsAdditional restrictions to apply to the nodes/edges of an existing graph. Restrictions must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS RESTRICTIONS_EDGE_ID', or expressions, e.g., 'column/2 AS RESTRICTIONS_VALUECOMPARED'. If remove_previous_restrictions is set to true, any provided restrictions will replace the existing restrictions. If remove_previous_restrictions is set to false, any provided restrictions will be added to (in the case of 'RESTRICTIONS_VALUECOMPARED') or will replace (in the case of 'RESTRICTIONS_ONOFFCOMPARED') the existing restrictions. The default value is an empty List.
num_levelsNumber of equally-separated isochrones to compute. The default value is 1.
generate_imageIf set to true, generates a PNG image of the isochrones in the response. Supported values: TRUE, FALSE. The default value is TRUE.
levels_tableName of the table to output the isochrones to, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. The table will contain levels and their corresponding WKT geometry. If no value is provided, the table is not generated. The default value is ''.
style_optionsVarious style related options of the isochrone image.
solve_optionsSolver specific parameters
  • REMOVE_PREVIOUS_RESTRICTIONS: If set to true, ignore the restrictions applied to the graph during the creation stage and only use the restrictions specified in this request. Supported values: TRUE, FALSE. The default value is FALSE.
  • RESTRICTION_THRESHOLD_VALUE: Value-based restriction comparison. Any node or edge with a 'RESTRICTIONS_VALUECOMPARED' value greater than the restriction_threshold_value will not be included in the solution.
  • UNIFORM_WEIGHTS: When specified, assigns the given value to all the edges in the graph. Note that weights provided in weights_on_edges will override this value.
The default value is an empty Dictionary.
contour_optionsSolver specific parameters
  • PROJECTION: Spatial Reference System (i.e. EPSG Code). Supported values: The default value is PLATE_CARREE.
  • WIDTH: When generate_image is set to true, width of the generated image. The default value is '512'.
  • HEIGHT: When generate_image is set to true, height of the generated image. If the default value is used, the height is set to the value resulting from multiplying the aspect ratio by the width. The default value is '-1'.
  • SEARCH_RADIUS: When interpolating the graph solution to generate the isochrone, neighborhood of influence of sample data (in percent of the image/grid). The default value is '20'.
  • GRID_SIZE: When interpolating the graph solution to generate the isochrone, number of subdivisions along the x axis when building the grid (the y is computed using the aspect ratio of the output image). The default value is '100'.
  • COLOR_ISOLINES: Color each isoline according to the colormap; otherwise, use the foreground color. Supported values: TRUE, FALSE. The default value is TRUE.
  • ADD_LABELS: If set to true, add labels to the isolines. Supported values: TRUE, FALSE. The default value is FALSE.
  • LABELS_FONT_SIZE: When add_labels is set to true, size of the font (in pixels) to use for labels. The default value is '12'.
  • LABELS_FONT_FAMILY: When add_labels is set to true, font name to be used when adding labels. The default value is 'arial'.
  • LABELS_SEARCH_WINDOW: When add_labels is set to true, a search window is used to rate the local quality of each isoline. Smooth, continuous, long stretches with relatively flat angles are favored. The provided value is multiplied by the labels_font_size to calculate the final window size. The default value is '4'.
  • LABELS_INTRALEVEL_SEPARATION: When add_labels is set to true, this value determines the distance (in multiples of the labels_font_size) to use when separating labels of different values. The default value is '4'.
  • LABELS_INTERLEVEL_SEPARATION: When add_labels is set to true, this value determines the distance (in percent of the total window size) to use when separating labels of the same value. The default value is '20'.
  • LABELS_MAX_ANGLE: When add_labels is set to true, maximum angle (in degrees) from the vertical to use when adding labels. The default value is '60'.
The default value is an empty Dictionary.
optionsAdditional parameters
  • SOLVE_TABLE: Name of the table to host intermediate solve results, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. This table will contain the position and cost for each vertex in the graph. If the default value is used, a temporary table is created and deleted once the solution is calculated. The default value is ''.
  • IS_REPLICATED: If set to true, replicate the solve_table. Supported values: TRUE, FALSE. The default value is TRUE.
  • DATA_MIN_X: Lower bound for the x values. If not provided, it will be computed from the bounds of the input data.
  • DATA_MAX_X: Upper bound for the x values. If not provided, it will be computed from the bounds of the input data.
  • DATA_MIN_Y: Lower bound for the y values. If not provided, it will be computed from the bounds of the input data.
  • DATA_MAX_Y: Upper bound for the y values. If not provided, it will be computed from the bounds of the input data.
  • CONCAVITY_LEVEL: Factor to qualify the concavity of the isochrone curves. The lower the value, the more convex (with '0' being completely convex and '1' being the most concave). The default value is '0.5'.
  • USE_PRIORITY_QUEUE_SOLVERS: Sets the solver methods explicitly if set to true. Supported values:
    • TRUE: Uses the solvers scheduled for 'shortest_path' and 'inverse_shortest_path' based on solve_direction
    • FALSE: Uses the solvers 'priority_queue' and 'inverse_priority_queue' based on solve_direction
    The default value is FALSE.
  • SOLVE_DIRECTION: Specify whether we are going to the source node, or starting from it. Supported values:
    • FROM_SOURCE: Shortest path to get to the source (inverse Dijkstra)
    • TO_SOURCE: Shortest path to source (Dijkstra)
    The default value is FROM_SOURCE.
The default value is an empty Dictionary.
Returns
Response object containing the result of the operation.

Definition at line 30321 of file KineticaFunctions.cs.
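
A minimal isochrone sketch of this overload, assuming a graph 'example.road_graph' was previously created with /create/graph (hypothetical names and coordinates):

    // assumes: using kinetica; using System.Collections.Generic;
    Kinetica kdb = new Kinetica("http://localhost:9191");

    VisualizeIsochroneResponse iso = kdb.visualizeIsochrone(
        graph_name: "example.road_graph",
        source_node: "POINT(-74.0060 40.7128)",
        max_solution_radius: -1.0,        // unrestricted search radius
        weights_on_edges: new List<string>(),
        restrictions: new List<string>(),
        num_levels: 5,                    // five equally-separated isochrones
        generate_image: true,
        levels_table: "",                 // do not persist levels to a table
        style_options: new Dictionary<string, string>(),
        contour_options: new Dictionary<string, string>
        {
            { "width", "1024" },          // image width; height follows aspect ratio
            { "add_labels", "true" }
        });
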

Member Data Documentation

const string kinetica.Kinetica.API_VERSION = "7.1.10.0"

Definition at line 19 of file KineticaFunctions.cs.

const int kinetica.Kinetica.END_OF_SET = -9999

No Limit

Definition at line 45 of file Kinetica.cs.

Property Documentation

int kinetica.Kinetica.ThreadCount
getset

Thread Count

Definition at line 112 of file Kinetica.cs.

string kinetica.Kinetica.Url
getset

URL for Kinetica Server (including "http:" and port) as a string

Definition at line 82 of file Kinetica.cs.

Uri kinetica.Kinetica.URL
getset

URL for Kinetica Server (including "http:" and port)

Definition at line 87 of file Kinetica.cs.

string kinetica.Kinetica.Username
getset

Optional: User Name for Kinetica security

Definition at line 92 of file Kinetica.cs.

bool kinetica.Kinetica.UseSnappy
getset

Use Snappy

Definition at line 107 of file Kinetica.cs.
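
A short configuration sketch using the members documented above (hypothetical server URL):

    // assumes: using kinetica; using System;
    Kinetica kdb = new Kinetica("http://localhost:9191");
    kdb.ThreadCount = 4;      // thread count used by the API (see Thread Count above)
    kdb.UseSnappy = false;    // toggle snappy compression (see Use Snappy above)
    Console.WriteLine("Connected to " + kdb.URL + " (API " + Kinetica.API_VERSION + ")");
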


The documentation for this class was generated from the following files: